
wide-int branch updated.

Message ID 521CE56E.5010605@naturalbridge.com

Commit Message

Kenneth Zadeck Aug. 27, 2013, 5:44 p.m. UTC
removed all knowledge of SHIFT_COUNT_TRUNCATED from wide-int

both Richard Biener and Richard Sandiford had commented negatively about 
this.

fixed bug with wide-int::fits_uhwi_p.

kenny
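
For illustration, the SHIFT_COUNT_TRUNCATED handling does not disappear; it
moves to the call sites (see the simplify-rtx.c hunk below), which now reduce
the shift amount themselves before calling the wide-int shift routines.  A
minimal standalone sketch of that caller-side pattern, in plain C++ with
shift_count_truncated and mode_precision standing in for the target macro and
GET_MODE_PRECISION (not GCC code):

#include <cstdint>

uint64_t
logical_shift_right (uint64_t value, unsigned int count,
                     bool shift_count_truncated, unsigned int mode_precision)
{
  if (shift_count_truncated)
    count %= mode_precision;   /* emulate targets that truncate shift counts */
  if (count >= mode_precision)
    return 0;                  /* out-of-range count: one reasonable fallback */
  return value >> count;
}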

Comments

Richard Biener Aug. 28, 2013, 7:45 a.m. UTC | #1
On Tue, 27 Aug 2013, Kenneth Zadeck wrote:

> removed all knowledge of SHIFT_COUNT_TRUNCATED from wide-int
> 
> both Richard Biener and Richard Sandiford had commented negatively about 
> this.
>
> fixed bug with wide-int::fits_uhwi_p.

 inline bool
 wide_int_ro::fits_uhwi_p () const
 {
-  return (len == 1 && val[0] >= 0) || (len == 2 && val[1] == 0);
+  return (precision <= HOST_BITS_PER_WIDE_INT)
+    || (len == 1 && val[0] >= 0)
+    || (len == 2 && (precision >= 2 * HOST_BITS_PER_WIDE_INT) && (val[1] == 0))
+    || (len == 2 && (sext_hwi (val[1], precision & (HOST_BITS_PER_WIDE_INT - 1)) == 0));
 }

it now gets scary ;)  Still wrong for precision == 0?

;)

I wonder what its semantics are ... in double_int we simply require
high == 0 (thus, negative numbers are not allowed).  With
precision <= HOST_BITS_PER_WIDE_INT you allow negative numbers.

Matching what double-int fits_uhwi does would be

(len == 1 && ((signed HOST_WIDE_INT)val[0]) >= 0)
|| (len == 2 && val[1] == 0)

(I don't remember off-hand the signedness of val[], but you may have
missed the conversion to signed.)

Now, what double-int does is supposed to match
host_integerp (..., 1) which I think it does.
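
To make the difference concrete, a minimal sketch (int64_t standing in for
HOST_WIDE_INT, hypothetical values, not GCC code): a canonically
sign-extended 32-bit -1 passes the new precision <= HOST_BITS_PER_WIDE_INT
clause, while the double-int-style test rejects it because val[0] is negative.

#include <cstdint>

typedef int64_t HOST_WIDE_INT;
const unsigned int HOST_BITS_PER_WIDE_INT = 64;

/* double-int-style test: low word non-negative, or high word zero.  */
static bool
fits_uhwi_double_int_style (const HOST_WIDE_INT *val, int len)
{
  return (len == 1 && val[0] >= 0) || (len == 2 && val[1] == 0);
}

int
main ()
{
  HOST_WIDE_INT val[1] = { -1 };          /* 32-bit -1, sign-extended */
  unsigned int precision = 32;

  bool new_first_clause = precision <= HOST_BITS_PER_WIDE_INT;   /* true  */
  bool double_int_style = fits_uhwi_double_int_style (val, 1);   /* false */
  return (new_first_clause && !double_int_style) ? 0 : 1;
}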

Richard.
Kenneth Zadeck Aug. 28, 2013, 12:04 p.m. UTC | #2
On 08/28/2013 03:45 AM, Richard Biener wrote:
> On Tue, 27 Aug 2013, Kenneth Zadeck wrote:
>
>> removed all knowledge of SHIFT_COUNT_TRUNCATED from wide-int
>>
>> both Richard Biener and Richard Sandiford had commented negatively about
>> this.
>>
>> fixed bug with wide-int::fits_uhwi_p.
>   inline bool
>   wide_int_ro::fits_uhwi_p () const
>   {
> -  return (len == 1 && val[0] >= 0) || (len == 2 && val[1] == 0);
> +  return (precision <= HOST_BITS_PER_WIDE_INT)
> +    || (len == 1 && val[0] >= 0)
> +    || (len == 2 && (precision >= 2 * HOST_BITS_PER_WIDE_INT) && (val[1] == 0))
> +    || (len == 2 && (sext_hwi (val[1], precision & (HOST_BITS_PER_WIDE_INT - 1)) == 0));
>   }
>
> it now get's scary ;)  Still wrong for precision == 0?
No, because anything that comes in at precision 0 is already a canonized,
sign-extended number.  A precision of 0 just means that it is safe to
treat the value as being of any precision.
> ;)
>
> I wonder what it's semantic is ... in double_int we simply require
> high == 0 (thus, negative numbers are not allowed).  with
> precision <= HOST_BITS_PER_WIDE_INT you allow negative numbers.
>
> Matching what double-int fits_uhwi does would be
>
> (len == 1 && ((signed HOST_WIDE_INT)val[0]) >= 0)
It is signed, so I am matching this part.
> || (len == 2 && val[1] == 0)
So this does not work.  Say I had a 70-bit-precision wide-int.  The bits
above the precision are undefined, so I have to clear them out.  That is
what the two len == 2 clauses are for.  However, if the precision is
greater than 2 HWIs, then we can do something this simple.
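
A minimal sketch of that 70-bit, len == 2 case (int64_t standing in for
HOST_WIDE_INT and a local re-implementation of sext_hwi, so not GCC code):
only the low 70 - 64 = 6 bits of val[1] are defined, and the new clause asks
whether that defined part sign-extends to zero.

#include <cstdint>

typedef int64_t HOST_WIDE_INT;
const unsigned int HOST_BITS_PER_WIDE_INT = 64;

/* Sign-extend the low PREC bits of X.  This local version assumes
   1 <= PREC <= 63, which covers the example (70 & 63 == 6).  */
static HOST_WIDE_INT
sext_hwi (HOST_WIDE_INT x, unsigned int prec)
{
  uint64_t sign_bit = (uint64_t) 1 << (prec - 1);
  uint64_t low_bits = (uint64_t) x & (((uint64_t) 1 << prec) - 1);
  return (HOST_WIDE_INT) ((low_bits ^ sign_bit) - sign_bit);
}

/* True when bits 64 .. PRECISION-1 of the value (the defined part of
   VAL1) are all zero, ignoring the undefined bits above the precision.  */
static bool
high_word_clear_p (HOST_WIDE_INT val1, unsigned int precision)
{
  return sext_hwi (val1, precision & (HOST_BITS_PER_WIDE_INT - 1)) == 0;
}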

kenny
> (I don't remember off-hand the signedness of val[], but eventually
> you missed the conversion to signed)
>
> Now, what double-int does is supposed to match
> host_integerp (..., 1) which I think it does.
>
> Richard.
Richard Biener Aug. 28, 2013, 12:32 p.m. UTC | #3
On Wed, 28 Aug 2013, Kenneth Zadeck wrote:

> On 08/28/2013 03:45 AM, Richard Biener wrote:
> > On Tue, 27 Aug 2013, Kenneth Zadeck wrote:
> > 
> > > removed all knowledge of SHIFT_COUNT_TRUNCATED from wide-int
> > > 
> > > both Richard Biener and Richard Sandiford had commented negatively about
> > > this.
> > > 
> > > fixed bug with wide-int::fits_uhwi_p.
> >   inline bool
> >   wide_int_ro::fits_uhwi_p () const
> >   {
> > -  return (len == 1 && val[0] >= 0) || (len == 2 && val[1] == 0);
> > +  return (precision <= HOST_BITS_PER_WIDE_INT)
> > +    || (len == 1 && val[0] >= 0)
> > +    || (len == 2 && (precision >= 2 * HOST_BITS_PER_WIDE_INT) && (val[1] == 0))
> > +    || (len == 2 && (sext_hwi (val[1], precision & (HOST_BITS_PER_WIDE_INT - 1)) == 0));
> >   }
> > 
> > it now get's scary ;)  Still wrong for precision == 0?
> no, because anything that comes in at precision 0 is a canonized sign extended
> number already.   the precision 0 just means that it is safe to be any
> precision.

Hmm, how can "any" precision be valid?  Only a precision that can
represent the value.  fits_uhwi_p asks whether truncation to
HWI precision is value-preserving.
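
Stated as code, a minimal reference version of that semantic (GCC's
__int128 extension standing in for an arbitrary-precision value; not the
wide-int API):

#include <cstdint>

/* Truncation of VALUE to an unsigned HOST_WIDE_INT is value-preserving
   exactly when the value already lies in [0, UINT64_MAX].  */
static bool
fits_uhwi_reference (__int128 value)
{
  return value >= 0 && value <= (__int128) UINT64_MAX;
}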

> > ;)
> > 
> > I wonder what it's semantic is ... in double_int we simply require
> > high == 0 (thus, negative numbers are not allowed).  with
> > precision <= HOST_BITS_PER_WIDE_INT you allow negative numbers.
> > 
> > Matching what double-int fits_uhwi does would be
> > 
> > (len == 1 && ((signed HOST_WIDE_INT)val[0]) >= 0)
> it is signed so i am matching this part.
> > || (len == 2 && val[1] == 0)
> so this does not work.   say i had a precision 70 bit wide-int. The bits above
> the precision are undefined, so i have to clear it out.   This is what the two
> lines at len 2 are for.   However if the precision is greater than 2 hwi's
> then we can do something this simple.

?  The bits in the encoding should not be undefined.  And why would
they magically be defined when the precision is greater than 2 HWIs?

Richard.

Patch

Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	(revision 201985)
+++ gcc/fold-const.c	(working copy)
@@ -992,9 +992,9 @@  int_const_binop_1 (enum tree_code code,
 	/* It's unclear from the C standard whether shifts can overflow.
 	   The following code ignores overflow; perhaps a C standard
 	   interpretation ruling is needed.  */
-	res = op1.rshift (arg2, sign, GET_MODE_BITSIZE (TYPE_MODE (type)), TRUNC);
+	res = op1.rshift (arg2, sign, GET_MODE_BITSIZE (TYPE_MODE (type)));
       else
-	res = op1.lshift (arg2, GET_MODE_BITSIZE (TYPE_MODE (type)), TRUNC);
+	res = op1.lshift (arg2, GET_MODE_BITSIZE (TYPE_MODE (type)));
       break;
       
     case RROTATE_EXPR:
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c	(revision 201985)
+++ gcc/simplify-rtx.c	(working copy)
@@ -3507,9 +3507,7 @@  rtx
 simplify_const_binary_operation (enum rtx_code code, enum machine_mode mode,
 				 rtx op0, rtx op1)
 {
-#if TARGET_SUPPORTS_WIDE_INT == 0
   unsigned int width = GET_MODE_PRECISION (mode);
-#endif
 
   if (VECTOR_MODE_P (mode)
       && code != VEC_CONCAT
@@ -3787,40 +3785,45 @@  simplify_const_binary_operation (enum rt
 	  break;
 
 	case LSHIFTRT:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
-	    return NULL_RTX;
-
-	  result = wop0.rshiftu (pop1, bitsize, TRUNC);
-	  break;
-	  
 	case ASHIFTRT:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
-	    return NULL_RTX;
-
-	  result = wop0.rshifts (pop1, bitsize, TRUNC);
-	  break;
-	  
 	case ASHIFT:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
-	    return NULL_RTX;
-
-	  result = wop0.lshift (pop1, bitsize, TRUNC);
-	  break;
-	  
 	case ROTATE:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
-	    return NULL_RTX;
-
-	  result = wop0.lrotate (pop1);
-	  break;
-	  
 	case ROTATERT:
-	  if (wide_int (std::make_pair (op1, mode)).neg_p ())
-	    return NULL_RTX;
+	  {
+	    wide_int wop1 = pop1;
+	    if (wop1.neg_p ())
+	      return NULL_RTX;
 
-	  result = wop0.rrotate (pop1);
-	  break;
+	    if (SHIFT_COUNT_TRUNCATED)
+	      wop1 = wop1.umod_trunc (width); 
 
+	    switch (code)
+	      {
+	      case LSHIFTRT:
+		result = wop0.rshiftu (wop1, bitsize);
+		break;
+		
+	      case ASHIFTRT:
+		result = wop0.rshifts (wop1, bitsize);
+		break;
+		
+	      case ASHIFT:
+		result = wop0.lshift (wop1, bitsize);
+		break;
+		
+	      case ROTATE:
+		result = wop0.lrotate (wop1);
+		break;
+		
+	      case ROTATERT:
+		result = wop0.rrotate (wop1);
+		break;
+
+	      default:
+		gcc_unreachable ();
+	      }
+	    break;
+	  }
 	default:
 	  return NULL_RTX;
 	}
Index: gcc/wide-int.cc
===================================================================
--- gcc/wide-int.cc	(revision 201985)
+++ gcc/wide-int.cc	(working copy)
@@ -2404,7 +2404,12 @@  wide_int_ro::divmod_internal_2 (unsigned
 
 
 /* Do a truncating divide DIVISOR into DIVIDEND.  The result is the
-   same size as the operands.  SIGN is either SIGNED or UNSIGNED.  */
+   same size as the operands.  SIGN is either SIGNED or UNSIGNED.  If
+   COMPUTE_QUOTIENT is set, the quotient is computed and returned.  If
+   it is not set, the result is undefined.  If COMPUTE_REMAINDER is
+   set, the remainder is returned in remainder.  If it is not set,
+   the remainder is undefined.  If OFLOW is not null, it is set to the
+   overflow value.  */
 wide_int_ro
 wide_int_ro::divmod_internal (bool compute_quotient,
 			      const HOST_WIDE_INT *dividend,
@@ -2470,6 +2475,10 @@  wide_int_ro::divmod_internal (bool compu
   quotient.precision = dividend_prec;
   remainder->precision = dividend_prec;
 
+  /* Initialize the incoming overflow if it has been provided.  */
+  if (oflow)
+    *oflow = false;
+
   /* If overflow is set, just get out.  There will only be grief by
      continuing.  */
   if (overflow)
Index: gcc/wide-int.h
===================================================================
--- gcc/wide-int.h	(revision 201995)
+++ gcc/wide-int.h	(working copy)
@@ -259,19 +259,6 @@  const int addr_max_bitsize = 64;
 const int addr_max_precision
   = ((addr_max_bitsize + 4 + HOST_BITS_PER_WIDE_INT - 1) & ~(HOST_BITS_PER_WIDE_INT - 1));
 
-enum ShiftOp {
-  NONE,
-  /* There are two uses for the wide-int shifting functions.  The
-     first use is as an emulation of the target hardware.  The
-     second use is as service routines for other optimizations.  The
-     first case needs to be identified by passing TRUNC as the value
-     of ShiftOp so that shift amount is properly handled according to the
-     SHIFT_COUNT_TRUNCATED flag.  For the second case, the shift
-     amount is always truncated by the bytesize of the mode of
-     THIS.  */
-  TRUNC
-};
-
 /* This is used to bundle an rtx and a mode together so that the pair
    can be used as the second operand of a wide int expression.  If we
    ever put modes into rtx integer constants, this should go away and
@@ -661,7 +648,7 @@  public:
   HOST_WIDE_INT extract_to_hwi (int, int) const;
 
   template <typename T>
-  wide_int_ro lshift (const T &, unsigned int = 0, ShiftOp = NONE) const;
+  wide_int_ro lshift (const T &, unsigned int = 0) const;
 
   template <typename T>
   wide_int_ro lshift_widen (const T &, unsigned int) const;
@@ -672,14 +659,13 @@  public:
   wide_int_ro lrotate (unsigned HOST_WIDE_INT, unsigned int = 0) const;
 
   template <typename T>
-  wide_int_ro rshift (const T &, signop, unsigned int = 0,
-		      ShiftOp = NONE) const;
+  wide_int_ro rshift (const T &, signop, unsigned int = 0) const;
 
   template <typename T>
-  wide_int_ro rshiftu (const T &, unsigned int = 0, ShiftOp = NONE) const;
+  wide_int_ro rshiftu (const T &, unsigned int = 0) const;
 
   template <typename T>
-  wide_int_ro rshifts (const T &, unsigned int = 0, ShiftOp = NONE) const;
+  wide_int_ro rshifts (const T &, unsigned int = 0) const;
 
   template <typename T>
   wide_int_ro rrotate (const T &, unsigned int = 0) const;
@@ -749,13 +735,11 @@  private:
 				      signop, wide_int_ro *, bool, bool *);
 
   /* Private utility routines.  */
+  int trunc_shift (const HOST_WIDE_INT *cnt, unsigned int bitsize) const;
   wide_int_ro decompress (unsigned int, unsigned int) const;
   void canonize ();
   static wide_int_ro from_rtx (const rtx_mode_t);
 
-  int trunc_shift (const HOST_WIDE_INT *, unsigned int, unsigned int,
-		   ShiftOp) const;
-
   template <typename T>
   static bool top_bit_set (T);
 
@@ -1650,7 +1634,10 @@  wide_int_ro::fits_shwi_p () const
 inline bool
 wide_int_ro::fits_uhwi_p () const
 {
-  return (len == 1 && val[0] >= 0) || (len == 2 && val[1] == 0);
+  return (precision <= HOST_BITS_PER_WIDE_INT)
+    || (len == 1 && val[0] >= 0) 
+    || (len == 2 && (precision >= 2 * HOST_BITS_PER_WIDE_INT) && (val[1] == 0))
+    || (len == 2 && (sext_hwi (val[1], precision & (HOST_BITS_PER_WIDE_INT - 1)) == 0));
 }
 
 /* Return the signed or unsigned min of THIS and C.  */
@@ -2358,8 +2345,6 @@  wide_int_ro::div_trunc (const T &c, sign
   unsigned int cl;
   unsigned int p1, p2;
 
-  if (overflow)
-    *overflow = false;
   p1 = precision;
   s = to_shwi1 (ws, &cl, &p2, c);
   check_precision (&p1, &p2, false, true);
@@ -2399,8 +2384,6 @@  wide_int_ro::div_floor (const T &c, sign
   unsigned int cl;
   unsigned int p1, p2;
 
-  if (overflow)
-    *overflow = false;
   p1 = precision;
   s = to_shwi1 (ws, &cl, &p2, c);
   check_precision (&p1, &p2, false, true);
@@ -2440,8 +2423,6 @@  wide_int_ro::div_ceil (const T &c, signo
   unsigned int cl;
   unsigned int p1, p2;
 
-  if (overflow)
-    *overflow = false;
   p1 = precision;
   s = to_shwi1 (ws, &cl, &p2, c);
   check_precision (&p1, &p2, false, true);
@@ -2469,8 +2450,6 @@  wide_int_ro::div_round (const T &c, sign
   unsigned int cl;
   unsigned int p1, p2;
 
-  if (overflow)
-    *overflow = false;
   p1 = precision;
   s = to_shwi1 (ws, &cl, &p2, c);
   check_precision (&p1, &p2, false, true);
@@ -2544,7 +2523,7 @@  wide_int_ro::udivmod_trunc (const T &c,
 
 /* Divide DIVISOR into THIS.  The remainder is also produced in
    REMAINDER.  The result is the same size as the operands.
-   The sign is specified in SGN.  The output is floor truncated.  */
+   The sign is specified in SGN.  The outputs is floor truncated.  */
 template <typename T>
 inline wide_int_ro
 wide_int_ro::divmod_floor (const T &c, wide_int_ro *remainder,
@@ -2579,9 +2558,10 @@  wide_int_ro::sdivmod_floor (const T &c,
 }
 
 /* Divide DIVISOR into THIS producing the remainder.  The result is
-   the same size as the operands.  The sign is specified in SGN.
-   The output is truncated.  If the pointer to OVERFLOW is not 0,
-   OVERFLOW is set to true if the result overflows, false otherwise.  */
+   the same size as the operands.  The sign is specified in SGN.  The
+   output is adjusted to be compatible with truncating divide.  If the
+   pointer to OVERFLOW is not 0, OVERFLOW is set to true if the result
+   overflows, false otherwise.  */
 template <typename T>
 inline wide_int_ro
 wide_int_ro::mod_trunc (const T &c, signop sgn, bool *overflow) const
@@ -2592,8 +2572,6 @@  wide_int_ro::mod_trunc (const T &c, sign
   unsigned int cl;
   unsigned int p1, p2;
 
-  if (overflow)
-    *overflow = false;
   p1 = precision;
   s = to_shwi1 (ws, &cl, &p2, c);
   check_precision (&p1, &p2, false, true);
@@ -2620,9 +2598,9 @@  wide_int_ro::umod_trunc (const T &c) con
 }
 
 /* Divide DIVISOR into THIS producing the remainder.  The result is
-   the same size as the operands.  The sign is specified in SGN.
-   The output is floor truncated.  OVERFLOW is set to true if the
-   result overflows, false otherwise.  */
+   the same size as the operands.  The sign is specified in SGN.  The
+   output is adjusted to be compatible with floor divide.  OVERFLOW is
+   set to true if the result overflows, false otherwise.  */
 template <typename T>
 inline wide_int_ro
 wide_int_ro::mod_floor (const T &c, signop sgn, bool *overflow) const
@@ -2634,8 +2612,6 @@  wide_int_ro::mod_floor (const T &c, sign
   unsigned int cl;
   unsigned int p1, p2;
 
-  if (overflow)
-    *overflow = false;
   p1 = precision;
   s = to_shwi1 (ws, &cl, &p2, c);
   check_precision (&p1, &p2, false, true);
@@ -2657,9 +2633,10 @@  wide_int_ro::umod_floor (const T &c) con
 }
 
 /* Divide DIVISOR into THIS producing the remainder.  The result is
-   the same size as the operands.  The sign is specified in SGN.
-   The output is ceil truncated.  If the pointer to OVERFLOW is not 0,
-   OVERFLOW is set to true if the result overflows, false otherwise.  */
+   the same size as the operands.  The sign is specified in SGN.  The
+   output is adjusted to be compatible with ceil divide.  If the
+   pointer to OVERFLOW is not 0, OVERFLOW is set to true if the result
+   overflows, false otherwise.  */
 template <typename T>
 inline wide_int_ro
 wide_int_ro::mod_ceil (const T &c, signop sgn, bool *overflow) const
@@ -2671,8 +2648,6 @@  wide_int_ro::mod_ceil (const T &c, signo
   unsigned int cl;
   unsigned int p1, p2;
 
-  if (overflow)
-    *overflow = false;
   p1 = precision;
   s = to_shwi1 (ws, &cl, &p2, c);
   check_precision (&p1, &p2, false, true);
@@ -2686,9 +2661,9 @@  wide_int_ro::mod_ceil (const T &c, signo
 }
 
 /* Divide DIVISOR into THIS producing the remainder.  The result is
-   the same size as the operands.  The sign is specified in SGN.
-   The output is round truncated.  OVERFLOW is set to true if the
-   result overflows, false otherwise.  */
+   the same size as the operands.  The sign is specified in SGN.  The
+   output is adjusted to be compatible with rounding divide.  OVERFLOW
+   is set to true if the result overflows, false otherwise.  */
 template <typename T>
 inline wide_int_ro
 wide_int_ro::mod_round (const T &c, signop sgn, bool *overflow) const
@@ -2700,8 +2675,6 @@  wide_int_ro::mod_round (const T &c, sign
   unsigned int cl;
   unsigned int p1, p2;
 
-  if (overflow)
-    *overflow = false;
   p1 = precision;
   s = to_shwi1 (ws, &cl, &p2, c);
   check_precision (&p1, &p2, false, true);
@@ -2738,11 +2711,10 @@  wide_int_ro::mod_round (const T &c, sign
 }
 
 /* Left shift THIS by C.  C must be non-negative.  BITSIZE is the
-   width of *THIS used for truncating the shift amount.  See the
-   definition of Op.TRUNC for how to set TRUNC_OP.  */
+   width of *THIS used for truncating the shift amount.   */
 template <typename T>
 inline wide_int_ro
-wide_int_ro::lshift (const T &c, unsigned int bitsize, ShiftOp trunc_op) const
+wide_int_ro::lshift (const T &c, unsigned int bitsize) const
 {
   wide_int_ro result;
   HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
@@ -2754,7 +2726,7 @@  wide_int_ro::lshift (const T &c, unsigne
 
   gcc_checking_assert (precision);
 
-  shift = trunc_shift (s, cl, bitsize, trunc_op);
+  shift = trunc_shift (s, bitsize);
   if (shift == -1)
     result = wide_int_ro::zero (precision);
   else if (shift == 0)
@@ -2857,25 +2829,23 @@  wide_int_ro::lrotate (unsigned HOST_WIDE
 }
 
 /* Right shift THIS by C.  BITSIZE is the width of *THIS used for
-   truncating the shift amount.  SGN indicates the sign.  TRUNC_OP
-   indicates the truncation option.  C must be non-negative.  */
+   truncating the shift amount.  SGN indicates the sign.  C must be
+   non-negative.  */
 template <typename T>
 inline wide_int_ro
-wide_int_ro::rshift (const T &c, signop sgn, unsigned int bitsize,
-		     ShiftOp trunc_op) const
+wide_int_ro::rshift (const T &c, signop sgn, unsigned int bitsize) const
 {
   if (sgn == UNSIGNED)
-    return rshiftu (c, bitsize, trunc_op);
+    return rshiftu (c, bitsize);
   else
-    return rshifts (c, bitsize, trunc_op);
+    return rshifts (c, bitsize);
 }
 
 /* Unsigned right shift THIS by C.  C must be non-negative.  BITSIZE
-   is width of *THIS used for truncating the shift amount.  See the
-   definition of Op.TRUNC for how to set TRUNC_OP.  */
+   is width of *THIS used for truncating the shift amount. */
 template <typename T>
 inline wide_int_ro
-wide_int_ro::rshiftu (const T &c, unsigned int bitsize, ShiftOp trunc_op) const
+wide_int_ro::rshiftu (const T &c, unsigned int bitsize) const
 {
   wide_int_ro result;
   HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
@@ -2885,7 +2855,7 @@  wide_int_ro::rshiftu (const T &c, unsign
 
   s = to_shwi2 (ws, &cl, c);
   gcc_checking_assert (precision);
-  shift = trunc_shift (s, cl, bitsize, trunc_op);
+  shift = trunc_shift (s, bitsize);
 
   if (shift == 0)
     result = *this;
@@ -2914,11 +2884,10 @@  wide_int_ro::rshiftu (const T &c, unsign
 }
 
 /* Signed right shift THIS by C.  C must be non-negative, BITSIZE is
-   the width of *THIS used for truncating the shift amount.  See the
-   definition of Op.TRUNC for how to set TRUNC_OP.  */
+   the width of *THIS used for truncating the shift amount.   */
 template <typename T>
 inline wide_int_ro
-wide_int_ro::rshifts (const T &c, unsigned int bitsize, ShiftOp trunc_op) const
+wide_int_ro::rshifts (const T &c, unsigned int bitsize) const
 {
   wide_int_ro result;
   HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
@@ -2928,7 +2897,7 @@  wide_int_ro::rshifts (const T &c, unsign
 
   s = to_shwi2 (ws, &cl, c);
   gcc_checking_assert (precision);
-  shift = trunc_shift (s, cl, bitsize, trunc_op);
+  shift = trunc_shift (s, bitsize);
 
   if (shift == 0)
     result = *this;
@@ -3001,62 +2970,14 @@  wide_int_ro::rrotate (unsigned HOST_WIDE
   return result;
 }
 
-/* If SHIFT_COUNT_TRUNCATED is defined, truncate CNT.
-
-   At first look, the shift truncation code does not look right.
-   Shifts (and rotates) are done according to the precision of the
-   mode but the shift count is truncated according to the bitsize of
-   the mode.  This is how real hardware works (Knuth's mix machine
-   is the only known exception to this rule, but it was never real).
-
-   On an ideal machine, like Knuth's mix machine, a shift count is a
-   word long and all of the bits of that word are examined to
-   compute the shift amount.  But on real hardware, especially on
-   machines with fast (single cycle shifts) that takes too long.
-   On these machines, the amount of time to perform a shift dictates
-   the cycle time of the machine so corners are cut to keep this
-   fast.  A comparison of an entire 64 bit word would take something
-   like 6 gate delays before the shifting can even start.
-
-   So real hardware only looks at a small part of the shift amount.
-   On IBM machines, this tends to be 1 more than what is necessary
-   to encode the shift amount.  The rest of the world looks at only
-   the minimum number of bits.  This means that only 3 gate delays
-   are necessary to set up the shifter.
-
-   On the other hand, right shifts and rotates must be according to
-   the precision or the operation does not make any sense.
-
-   This function is called in two contexts.  If TRUNC_OP == TRUNC,
-   this function provides a count that matches the semantics of the
-   target machine depending on the value of SHIFT_COUNT_TRUNCATED.
-   Note that if SHIFT_COUNT_TRUNCATED is not defined, this function
-   may produce -1 as a value if the shift amount is greater than the
-   bitsize of the mode.  -1 is a surrogate for a very large amount.
-
-   If TRUNC_OP == NONE, then this function always truncates the shift
-   value to the bitsize because this shifting operation is a
-   function that is internal to GCC.  */
+/* Truncate the value of the shift so that the value is within the
+   BITSIZE. */
 inline int
-wide_int_ro::trunc_shift (const HOST_WIDE_INT *cnt,
-			  unsigned int len ATTRIBUTE_UNUSED,
-			  unsigned int bitsize, ShiftOp trunc_op) const
+wide_int_ro::trunc_shift (const HOST_WIDE_INT *cnt, unsigned int bitsize) const
 {
   gcc_checking_assert (cnt[0] >= 0);
 
-  if (trunc_op == TRUNC)
-    {
-      gcc_checking_assert (bitsize != 0);
-#ifdef SHIFT_COUNT_TRUNCATED
-      return cnt[0] & (bitsize - 1);
-#else
-      if (cnt[0] < bitsize && cnt[0] >= 0 && len == 1)
-	return cnt[0];
-      else
-	return -1;
-#endif
-    }
-  else if (bitsize == 0)
+  if (bitsize == 0)
     return cnt[0];
   else
     return cnt[0] & (bitsize - 1);
@@ -3477,20 +3398,19 @@  public:
   fixed_wide_int lrotate (unsigned HOST_WIDE_INT, unsigned int) const;
 
   template <typename T>
-  fixed_wide_int lshift (const T &, unsigned int = 0, ShiftOp = NONE) const;
+  fixed_wide_int lshift (const T &, unsigned int = 0) const;
 
   template <typename T>
   fixed_wide_int lshift_widen (const T &, unsigned int) const;
 
   template <typename T>
-  fixed_wide_int rshift (const T &, signop, unsigned int = 0,
-			 ShiftOp = NONE) const;
+  fixed_wide_int rshift (const T &, signop, unsigned int = 0) const;
 
   template <typename T>
-  fixed_wide_int rshiftu (const T &, unsigned int = 0, ShiftOp = NONE) const;
+  fixed_wide_int rshiftu (const T &, unsigned int = 0) const;
 
   template <typename T>
-  fixed_wide_int rshifts (const T &, unsigned int = 0,  ShiftOp = NONE) const;
+  fixed_wide_int rshifts (const T &, unsigned int = 0) const;
 
   template <typename T>
   fixed_wide_int rrotate (const T &, unsigned int) const;
@@ -4026,10 +3946,9 @@  fixed_wide_int <bitsize>::lrotate (unsig
 template <int bitsize>
 template <typename T>
 inline fixed_wide_int <bitsize>
-fixed_wide_int <bitsize>::lshift (const T &c, unsigned int bit_size,
-				  ShiftOp z) const
+fixed_wide_int <bitsize>::lshift (const T &c, unsigned int bit_size) const
 {
-  return wide_int_ro::lshift (c, bit_size, z);
+  return wide_int_ro::lshift (c, bit_size);
 }
 
 template <int bitsize>
@@ -4044,28 +3963,28 @@  fixed_wide_int <bitsize>::lshift_widen (
 template <int bitsize>
 template <typename T>
 inline fixed_wide_int <bitsize>
-fixed_wide_int <bitsize>::rshift (const T &c, signop sgn,
-				  unsigned int bit_size, ShiftOp z) const
+fixed_wide_int <bitsize>::rshift (const T &c, signop sgn, 
+				  unsigned int bit_size) const
 {
-  return wide_int_ro::rshift (c, sgn, bit_size, z);
+  return wide_int_ro::rshift (c, sgn, bit_size);
 }
 
 template <int bitsize>
 template <typename T>
 inline fixed_wide_int <bitsize>
-fixed_wide_int <bitsize>::rshiftu (const T &c, unsigned int bit_size,
-				   ShiftOp z) const
+fixed_wide_int <bitsize>::rshiftu (const T &c, 
+				   unsigned int bit_size) const
 {
-  return wide_int_ro::rshiftu (c, bit_size, z);
+  return wide_int_ro::rshiftu (c, bit_size);
 }
 
 template <int bitsize>
 template <typename T>
 inline fixed_wide_int <bitsize>
-fixed_wide_int <bitsize>::rshifts (const T &c, unsigned int bit_size,
-				   ShiftOp z) const
+fixed_wide_int <bitsize>::rshifts (const T &c,
+				   unsigned int bit_size) const
 {
-  return wide_int_ro::rshifts (c, bit_size, z);
+  return wide_int_ro::rshifts (c, bit_size);
 }
 
 template <int bitsize>