
[wide-int] Handle zero-precision INTEGER_CSTs again

Message ID 871twcouo6.fsf@sandifor-thinkpad.stglab.manchester.uk.ibm.com
State New

Commit Message

Richard Sandiford May 2, 2014, 7:19 p.m. UTC
I'd hoped the days of zero-precision INTEGER_CSTs were behind us after
Richard's patch to remove min and max values from zero-width bitfields,
but a bootstrap-ubsan showed otherwise.  One source is in:

  null_pointer_node = build_int_cst (build_pointer_type (void_type_node), 0);

if no_target, since the precision of the type comes from ptr_mode,
which remains the default VOIDmode.  Maybe that's a bug, but setting
it to an arbitrary nonzero value would also be dangerous.

The other use is in the C/C++ void_zero_node, which is specifically
defined as a VOID_TYPE, zero-precision INTEGER_CST.  This is used by the
ubsan code in some ?: tests, as well as by other places in the frontend
proper.  At least the ubsan use persists until gimple.

This patch therefore restores the wide-int handling for zero precision,
for which the length must be 1 and the single HWI must be zero.
I've tried to wrap up most of the dependencies in two new functions,
wi::blocks_needed (a function version of the .cc BLOCKS_NEEDED, now also
used in the header) and wi::excess_bits, so that it'll be easier to
remove the handling again if zero precision ever goes away.
There are some remaining, easily-greppable cases that check directly
for a precision of 0 though.
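
For illustration, here is a standalone sketch of how the two helpers are
meant to behave (it mirrors the definitions in the patch below rather than
copying them, and assumes a 64-bit HOST_WIDE_INT):

  #include <assert.h>

  #define HOST_BITS_PER_WIDE_INT 64

  /* Zero precision still occupies a single block.  */
  static unsigned int
  blocks_needed (unsigned int precision)
  {
    return precision == 0 ? 1 : ((precision + HOST_BITS_PER_WIDE_INT - 1)
                                 / HOST_BITS_PER_WIDE_INT);
  }

  /* Number of undefined bits in the top block, clamped so that the result
     is always a valid shift amount (0 for zero precision).  */
  static unsigned int
  excess_bits (unsigned int len, unsigned int precision)
  {
    unsigned int excess = len * HOST_BITS_PER_WIDE_INT - precision;
    return excess < HOST_BITS_PER_WIDE_INT ? excess : 0;
  }

  int
  main (void)
  {
    assert (blocks_needed (0) == 1 && excess_bits (1, 0) == 0);
    assert (blocks_needed (8) == 1 && excess_bits (1, 8) == 56);
    assert (blocks_needed (64) == 1 && excess_bits (1, 64) == 0);
    assert (blocks_needed (65) == 2 && excess_bits (2, 65) == 63);
    return 0;
  }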

The combination of this and the other patches allows bootstrap-ubsan to
complete.  There are a lot of extra testsuite failures compared to a
normal bootstrap, but they don't look related to wide-int.

Tested on x86_64-linux-gnu and powerpc64-linux-gnu.  OK to install?

Thanks,
Richard

Comments

Kenneth Zadeck May 3, 2014, 4:42 p.m. UTC | #1
The doc at wide-int.h:136 needs work.  It currently says that the
precision and length are always greater than 0, which is no longer
correct.  It also says that the bits above the precision are defined
to be the sign extension if the precision does not cover that block.

I do not know if I believe this completely.

I noticed that in the eq_p_large code, you removed a block of code left
over from the days when this was not true, but there is still some of
this code remaining that does not assume this.

I worry about this because we have moved back and forth on this issue
many times, and I still see code at rtl.h:1440 which assumes that BImode
constants are stored differently on some platforms.  It is possible that
I am reading that code incorrectly, but at first reading it looks
questionable, so code comparing a BImode 1 with a 1 of a different
precision would not work.

Aside from that, the patch is fine.

kenny

On 05/02/2014 03:19 PM, Richard Sandiford wrote:
> I'd hoped the days of zero-precision INTEGER_CSTs were behind us after
> Richard's patch to remove min and max values from zero-width bitfields,
> but a bootstrap-ubsan showed otherwise.  One source is in:
>
>    null_pointer_node = build_int_cst (build_pointer_type (void_type_node), 0);
>
> if no_target, since the precision of the type comes from ptr_mode,
> which remains the default VOIDmode.  Maybe that's a bug, but setting
> it to an arbitrary nonzero value would also be dangerous.
>
> The other use is in the C/C++ void_zero_node, which is specifically
> defined as a VOID_TYPE, zero-precision INTEGER_CST.  This is used by the
> ubsan code in some ?: tests, as well as by other places in the frontend
> proper.  At least the ubsan use persists until gimple.
>
> This patch therefore restores the wide-int handling for zero precision,
> for which the length must be 1 and the single HWI must be zero.
> I've tried to wrap up most of the dependencies in two new functions,
> wi::blocks_needed (a function version of the .cc BLOCKS_NEEDED, now also
> used in the header) and wi::excess_bits, so that it'll be easier to
> remove the handling again if zero precision ever goes away.
> There are some remaining, easily-greppable cases that check directly
> for a precision of 0 though.
>
> The combination of this and the other patches allows bootstrap-ubsan to
> complete.  There are a lot of extra testsuite failures compared to a
> normal bootstrap, but they don't look related to wide-int.
>
> Tested on x86_64-linux-gnu and powerpc64-linux-gnu.  OK to install?
>
> Thanks,
> Richard
>
>
> Index: gcc/wide-int.cc
> ===================================================================
> --- gcc/wide-int.cc	2014-05-02 16:28:07.657847935 +0100
> +++ gcc/wide-int.cc	2014-05-02 16:28:09.560842845 +0100
> @@ -48,8 +48,6 @@ static const HOST_WIDE_INT zeros[WIDE_IN
>   #define HALF_INT_MASK (((HOST_WIDE_INT) 1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
>   
>   #define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
> -#define BLOCKS_NEEDED(PREC) \
> -  (PREC ? (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) : 1)
>   #define SIGN_MASK(X) ((HOST_WIDE_INT) (X) < 0 ? -1 : 0)
>   
>   /* Return the value a VAL[I] if I < LEN, otherwise, return 0 or -1
> @@ -69,20 +67,20 @@ safe_uhwi (const HOST_WIDE_INT *val, uns
>   static unsigned int
>   canonize (HOST_WIDE_INT *val, unsigned int len, unsigned int precision)
>   {
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>     HOST_WIDE_INT top;
>     int i;
>   
>     if (len > blocks_needed)
>       len = blocks_needed;
>   
> +  if (wi::excess_bits (len, precision) > 0)
> +    val[len - 1] = sext_hwi (val[len - 1], precision % HOST_BITS_PER_WIDE_INT);
>     if (len == 1)
>       return len;
>   
>     top = val[len - 1];
> -  if (len * HOST_BITS_PER_WIDE_INT > precision)
> -    val[len - 1] = top = sext_hwi (top, precision % HOST_BITS_PER_WIDE_INT);
> -  if (top != 0 && top != (HOST_WIDE_INT)-1)
> +  if (top != 0 && top != (HOST_WIDE_INT) -1)
>       return len;
>   
>     /* At this point we know that the top is either 0 or -1.  Find the
> @@ -134,7 +132,7 @@ wi::from_buffer (const unsigned char *bu
>   
>     /* We have to clear all the bits ourself, as we merely or in values
>        below.  */
> -  unsigned int len = BLOCKS_NEEDED (precision);
> +  unsigned int len = wi::blocks_needed (precision);
>     HOST_WIDE_INT *val = result.write_val ();
>     for (unsigned int i = 0; i < len; ++i)
>       val[i] = 0;
> @@ -180,7 +178,7 @@ wi::to_mpz (const wide_int_ref &x, mpz_t
>   {
>     int len = x.get_len ();
>     const HOST_WIDE_INT *v = x.get_val ();
> -  int excess = len * HOST_BITS_PER_WIDE_INT - x.get_precision ();
> +  unsigned int excess = wi::excess_bits (len, x.get_precision ());
>   
>     if (wi::neg_p (x, sgn))
>       {
> @@ -306,7 +304,8 @@ wi::force_to_size (HOST_WIDE_INT *val, c
>   		   unsigned int xlen, unsigned int xprecision,
>   		   unsigned int precision, signop sgn)
>   {
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
> +  unsigned int xblocks_needed = wi::blocks_needed (xprecision);
>     unsigned int len = blocks_needed < xlen ? blocks_needed : xlen;
>     for (unsigned i = 0; i < len; i++)
>       val[i] = xval[i];
> @@ -318,11 +317,11 @@ wi::force_to_size (HOST_WIDE_INT *val, c
>         /* Expanding.  */
>         if (sgn == UNSIGNED)
>   	{
> -	  if (small_xprecision && len == BLOCKS_NEEDED (xprecision))
> +	  if (small_xprecision && len == xblocks_needed)
>   	    val[len - 1] = zext_hwi (val[len - 1], small_xprecision);
>   	  else if (val[len - 1] < 0)
>   	    {
> -	      while (len < BLOCKS_NEEDED (xprecision))
> +	      while (len < xblocks_needed)
>   		val[len++] = -1;
>   	      if (small_xprecision)
>   		val[len - 1] = zext_hwi (val[len - 1], small_xprecision);
> @@ -332,7 +331,7 @@ wi::force_to_size (HOST_WIDE_INT *val, c
>   	}
>         else
>   	{
> -	  if (small_xprecision && len == BLOCKS_NEEDED (xprecision))
> +	  if (small_xprecision && len == xblocks_needed)
>   	    val[len - 1] = sext_hwi (val[len - 1], small_xprecision);
>   	}
>       }
> @@ -372,10 +371,8 @@ selt (const HOST_WIDE_INT *a, unsigned i
>   static inline HOST_WIDE_INT
>   top_bit_of (const HOST_WIDE_INT *a, unsigned int len, unsigned int prec)
>   {
> -  int excess = len * HOST_BITS_PER_WIDE_INT - prec;
>     unsigned HOST_WIDE_INT val = a[len - 1];
> -  if (excess > 0)
> -    val <<= excess;
> +  val <<= wi::excess_bits (len, prec);
>     return val >> (HOST_BITS_PER_WIDE_INT - 1);
>   }
>   
> @@ -391,28 +388,16 @@ wi::eq_p_large (const HOST_WIDE_INT *op0
>   		const HOST_WIDE_INT *op1, unsigned int op1len,
>   		unsigned int prec)
>   {
> -  int l0 = op0len - 1;
> -  unsigned int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
> -
>     if (op0len != op1len)
>       return false;
>   
> -  if (op0len == BLOCKS_NEEDED (prec) && small_prec)
> -    {
> -      /* It does not matter if we zext or sext here, we just have to
> -	 do both the same way.  */
> -      if (zext_hwi (op0 [l0], small_prec) != zext_hwi (op1 [l0], small_prec))
> -	return false;
> -      l0--;
> -    }
> -
> -  while (l0 >= 0)
> -    if (op0[l0] != op1[l0])
> +  for (unsigned int i = 0; i < op0len - 1; i++)
> +    if (op0[i] != op1[i])
>         return false;
> -    else
> -      l0--;
>   
> -  return true;
> +  unsigned HOST_WIDE_INT top0 = op0[op0len - 1];
> +  unsigned HOST_WIDE_INT top1 = op1[op1len - 1];
> +  return ((top0 ^ top1) << wi::excess_bits (op0len, prec)) == 0;
>   }
>   
>   /* Return true if OP0 < OP1 using signed comparisons.  */
> @@ -423,7 +408,7 @@ wi::lts_p_large (const HOST_WIDE_INT *op
>   {
>     HOST_WIDE_INT s0, s1;
>     unsigned HOST_WIDE_INT u0, u1;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>     unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
>     int l = MAX (op0len - 1, op1len - 1);
>   
> @@ -461,7 +446,7 @@ wi::cmps_large (const HOST_WIDE_INT *op0
>   {
>     HOST_WIDE_INT s0, s1;
>     unsigned HOST_WIDE_INT u0, u1;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>     unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
>     int l = MAX (op0len - 1, op1len - 1);
>   
> @@ -498,7 +483,7 @@ wi::ltu_p_large (const HOST_WIDE_INT *op
>   {
>     unsigned HOST_WIDE_INT x0;
>     unsigned HOST_WIDE_INT x1;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>     unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
>     int l = MAX (op0len - 1, op1len - 1);
>   
> @@ -525,7 +510,7 @@ wi::cmpu_large (const HOST_WIDE_INT *op0
>   {
>     unsigned HOST_WIDE_INT x0;
>     unsigned HOST_WIDE_INT x1;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>     unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
>     int l = MAX (op0len - 1, op1len - 1);
>   
> @@ -673,7 +658,7 @@ wide_int_storage::bswap () const
>   {
>     wide_int result = wide_int::create (precision);
>     unsigned int i, s;
> -  unsigned int len = BLOCKS_NEEDED (precision);
> +  unsigned int len = wi::blocks_needed (precision);
>     unsigned int xlen = get_len ();
>     const HOST_WIDE_INT *xval = get_val ();
>     HOST_WIDE_INT *val = result.write_val ();
> @@ -1149,7 +1134,7 @@ wi_unpack (unsigned HOST_HALF_WIDE_INT *
>     unsigned int i;
>     unsigned int j = 0;
>     unsigned int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
> -  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
> +  unsigned int blocks_needed = wi::blocks_needed (prec);
>     HOST_WIDE_INT mask;
>   
>     if (sgn == SIGNED)
> @@ -1222,7 +1207,7 @@ wi::mul_internal (HOST_WIDE_INT *val, co
>     unsigned HOST_WIDE_INT o0, o1, k, t;
>     unsigned int i;
>     unsigned int j;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
> +  unsigned int blocks_needed = wi::blocks_needed (prec);
>     unsigned int half_blocks_needed = blocks_needed * 2;
>     /* The sizes here are scaled to support a 2x largest mode by 2x
>        largest mode yielding a 4x largest mode result.  This is what is
> @@ -1426,6 +1411,9 @@ wi::popcount (const wide_int_ref &x)
>     unsigned int i;
>     int count;
>   
> +  if (x.precision == 0)
> +    return 0;
> +
>     /* The high order block is special if it is the last block and the
>        precision is not an even multiple of HOST_BITS_PER_WIDE_INT.  We
>        have to clear out any ones above the precision before doing
> @@ -1645,8 +1633,8 @@ wi::divmod_internal (HOST_WIDE_INT *quot
>   		     unsigned int divisor_prec, signop sgn,
>   		     bool *oflow)
>   {
> -  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
> -  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
> +  unsigned int dividend_blocks_needed = 2 * wi::blocks_needed (dividend_prec);
> +  unsigned int divisor_blocks_needed = 2 * wi::blocks_needed (divisor_prec);
>     unsigned HOST_HALF_WIDE_INT
>       b_quotient[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
>     unsigned HOST_HALF_WIDE_INT
> @@ -1671,7 +1659,7 @@ wi::divmod_internal (HOST_WIDE_INT *quot
>     /* The smallest signed number / -1 causes overflow.  The dividend_len
>        check is for speed rather than correctness.  */
>     if (sgn == SIGNED
> -      && dividend_len == BLOCKS_NEEDED (dividend_prec)
> +      && dividend_len == wi::blocks_needed (dividend_prec)
>         && divisor == -1
>         && wi::only_sign_bit_p (dividend))
>       overflow = true;
> @@ -1838,7 +1826,7 @@ wi::lshift_large (HOST_WIDE_INT *val, co
>     unsigned int small_shift = shift % HOST_BITS_PER_WIDE_INT;
>   
>     /* The whole-block shift fills with zeros.  */
> -  unsigned int len = BLOCKS_NEEDED (precision);
> +  unsigned int len = wi::blocks_needed (precision);
>     for (unsigned int i = 0; i < skip; ++i)
>       val[i] = 0;
>   
> @@ -1876,7 +1864,7 @@ rshift_large_common (HOST_WIDE_INT *val,
>   
>     /* Work out how many blocks are needed to store the significant bits
>        (excluding the upper zeros or signs).  */
> -  unsigned int len = BLOCKS_NEEDED (xprecision - shift);
> +  unsigned int len = wi::blocks_needed (xprecision - shift);
>   
>     /* It's easier to handle the simple block case specially.  */
>     if (small_shift == 0)
> @@ -1949,6 +1937,9 @@ wi::arshift_large (HOST_WIDE_INT *val, c
>   int
>   wi::clz (const wide_int_ref &x)
>   {
> +  if (x.precision == 0)
> +    return 0;
> +
>     /* Calculate how many bits there above the highest represented block.  */
>     int count = x.precision - x.len * HOST_BITS_PER_WIDE_INT;
>   
> @@ -1973,6 +1964,9 @@ wi::clz (const wide_int_ref &x)
>   int
>   wi::clrsb (const wide_int_ref &x)
>   {
> +  if (x.precision == 0)
> +    return 0;
> +
>     /* Calculate how many bits there above the highest represented block.  */
>     int count = x.precision - x.len * HOST_BITS_PER_WIDE_INT;
>   
> @@ -2017,6 +2011,9 @@ wi::ctz (const wide_int_ref &x)
>   int
>   wi::exact_log2 (const wide_int_ref &x)
>   {
> +  if (x.precision == 0)
> +    return -1;
> +
>     /* Reject cases where there are implicit -1 blocks above HIGH.  */
>     if (x.len * HOST_BITS_PER_WIDE_INT < x.precision && x.sign_mask () < 0)
>       return -1;
> Index: gcc/wide-int.h
> ===================================================================
> --- gcc/wide-int.h	2014-05-02 16:28:07.657847935 +0100
> +++ gcc/wide-int.h	2014-05-02 16:28:09.561842842 +0100
> @@ -430,6 +430,9 @@ #define WIDE_INT_REF_FOR(T) \
>   /* Public functions for querying and operating on integers.  */
>   namespace wi
>   {
> +  unsigned int excess_bits (unsigned int, unsigned int);
> +  unsigned int blocks_needed (unsigned int);
> +
>     template <typename T>
>     unsigned int get_precision (const T &);
>   
> @@ -740,7 +743,7 @@ inline generic_wide_int <storage>::gener
>   inline HOST_WIDE_INT
>   generic_wide_int <storage>::to_shwi (unsigned int precision) const
>   {
> -  if (precision < HOST_BITS_PER_WIDE_INT)
> +  if (precision > 0 && precision < HOST_BITS_PER_WIDE_INT)
>       return sext_hwi (this->get_val ()[0], precision);
>     else
>       return this->get_val ()[0];
> @@ -764,7 +767,7 @@ generic_wide_int <storage>::to_shwi () c
>   inline unsigned HOST_WIDE_INT
>   generic_wide_int <storage>::to_uhwi (unsigned int precision) const
>   {
> -  if (precision < HOST_BITS_PER_WIDE_INT)
> +  if (precision > 0 && precision < HOST_BITS_PER_WIDE_INT)
>       return zext_hwi (this->get_val ()[0], precision);
>     else
>       return this->get_val ()[0];
> @@ -797,12 +800,7 @@ generic_wide_int <storage>::sign_mask ()
>     unsigned int len = this->get_len ();
>     unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
>     if (!is_sign_extended)
> -    {
> -      unsigned int precision = this->get_precision ();
> -      int excess = len * HOST_BITS_PER_WIDE_INT - precision;
> -      if (excess > 0)
> -	high <<= excess;
> -    }
> +    high <<= wi::excess_bits (len, this->get_precision ());
>     return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
>   }
>   
> @@ -1068,7 +1066,7 @@ wide_int_storage::write_val ()
>   wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
>   {
>     len = l;
> -  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
> +  if (!is_sign_extended && wi::excess_bits (len, precision) > 0)
>       val[len - 1] = sext_hwi (val[len - 1],
>   			     precision % HOST_BITS_PER_WIDE_INT);
>   }
> @@ -1347,7 +1345,7 @@ trailing_wide_int_storage::write_val ()
>   trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
>   {
>     *m_len = len;
> -  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
> +  if (!is_sign_extended && wi::excess_bits (len, m_precision) > 0)
>       m_val[len - 1] = sext_hwi (m_val[len - 1],
>   			       m_precision % HOST_BITS_PER_WIDE_INT);
>   }
> @@ -1368,8 +1366,7 @@ trailing_wide_int_storage::operator = (c
>   trailing_wide_ints <N>::set_precision (unsigned int precision)
>   {
>     m_precision = precision;
> -  m_max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
> -	       / HOST_BITS_PER_WIDE_INT);
> +  m_max_len = wi::blocks_needed (precision);
>   }
>   
>   /* Return a reference to element INDEX.  */
> @@ -1387,9 +1384,7 @@ trailing_wide_ints <N>::operator [] (uns
>   inline size_t
>   trailing_wide_ints <N>::extra_size (unsigned int precision)
>   {
> -  unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
> -			  / HOST_BITS_PER_WIDE_INT);
> -  return (N * max_len - 1) * sizeof (HOST_WIDE_INT);
> +  return (N * wi::blocks_needed (precision) - 1) * sizeof (HOST_WIDE_INT);
>   }
>   
>   /* This macro is used in structures that end with a trailing_wide_ints field
> @@ -1621,6 +1616,26 @@ decompose (HOST_WIDE_INT *scratch, unsig
>   				signop, bool *);
>   }
>   
> +/* If a value of length LEN blocks has more than PRECISION bits, return
> +   the number of excess bits, otherwise return 0.  For the special case
> +   of PRECISION being zero, the single HWI must have the value zero and
> +   there are no excess bits.  Handling zero precision this way means
> +   that the result is always a valid shift amount.  */
> +inline unsigned int
> +wi::excess_bits (unsigned int len, unsigned int precision)
> +{
> +  unsigned int excess = len * HOST_BITS_PER_WIDE_INT - precision;
> +  return excess < HOST_BITS_PER_WIDE_INT ? excess : 0;
> +}
> +
> +/* Return the number of blocks needed for precision PRECISION.  */
> +inline unsigned int
> +wi::blocks_needed (unsigned int precision)
> +{
> +  return precision == 0 ? 1 : ((precision + HOST_BITS_PER_WIDE_INT - 1)
> +			       / HOST_BITS_PER_WIDE_INT);
> +}
> +
>   /* Return the number of bits that integer X can hold.  */
>   template <typename T>
>   inline unsigned int
> @@ -1729,9 +1744,7 @@ wi::eq_p (const T1 &x, const T2 &y)
>   	return xi.val[0] == 0;
>         /* Otherwise flush out any excess bits first.  */
>         unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
> -      int excess = HOST_BITS_PER_WIDE_INT - precision;
> -      if (excess > 0)
> -	diff <<= excess;
> +      diff <<= wi::excess_bits (1, precision);
>         return diff == 0;
>       }
>     return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
> @@ -2323,7 +2336,9 @@ wi::add (const T1 &x, const T2 &y, signo
>         unsigned HOST_WIDE_INT xl = xi.ulow ();
>         unsigned HOST_WIDE_INT yl = yi.ulow ();
>         unsigned HOST_WIDE_INT resultl = xl + yl;
> -      if (sgn == SIGNED)
> +      if (precision == 0)
> +	*overflow = false;
> +      else if (sgn == SIGNED)
>   	*overflow = (((resultl ^ xl) & (resultl ^ yl))
>   		     >> (precision - 1)) & 1;
>         else
> @@ -2396,7 +2411,9 @@ wi::sub (const T1 &x, const T2 &y, signo
>         unsigned HOST_WIDE_INT xl = xi.ulow ();
>         unsigned HOST_WIDE_INT yl = yi.ulow ();
>         unsigned HOST_WIDE_INT resultl = xl - yl;
> -      if (sgn == SIGNED)
> +      if (precision == 0)
> +	*overflow = false;
> +      else if (sgn == SIGNED)
>   	*overflow = (((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1;
>         else
>   	*overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
Richard Sandiford May 3, 2014, 6:59 p.m. UTC | #2
Kenneth Zadeck <zadeck@naturalbridge.com> writes:
> The doc at wide-int.h:136 needs work.    The doc currently says that the 
> precision and length are always greater than 0.   This is now not 
> correct.     It also says that the bits above the precision are defined 
> to be the sign extension if the precision does not cover that block.

I agree that the comment needs work.

Reordering slightly:

> I do not know if I believe this completely.

This is now covered by the compile-time is_sign_extended property.

Trees aren't extended in that way because we want to be able to access
them quickly in wider precisions, with the extension following the
signedness of the underlying type.  So an 8-bit all-ones INTEGER_CST
will be stored as -1 if the type is signed and as 0xff otherwise.
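
For example (a minimal sketch with simplified stand-ins for sext_hwi and
zext_hwi, assuming a 64-bit HOST_WIDE_INT and a precision below 64):

  #include <stdint.h>
  #include <assert.h>

  /* Simplified stand-in: sign-extend the low PREC bits of X.  */
  static int64_t
  sext (uint64_t x, unsigned int prec)
  {
    uint64_t sign = (uint64_t) 1 << (prec - 1);
    return (int64_t) (((x & ((sign << 1) - 1)) ^ sign) - sign);
  }

  /* Simplified stand-in: zero-extend the low PREC bits of X.  */
  static uint64_t
  zext (uint64_t x, unsigned int prec)
  {
    return x & (((uint64_t) 1 << prec) - 1);
  }

  int
  main (void)
  {
    /* An 8-bit all-ones INTEGER_CST: stored as -1 for a signed type
       and as 0xff for an unsigned one.  */
    assert (sext (0xff, 8) == -1);
    assert (zext (0xff, 8) == 0xff);
    return 0;
  }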

RTL constants aren't sign-extended because of the case you note:

> I worry about this because we have moved back and forth on this issue 
> many times and I still see code at rtl.h:1440 which assumes that BImode
> constants are stored differently on some platforms. It is possible that
> I am reading that code incorrectly but at first reading it looks
> questionable, so code comparing a BImode 1 with a 1 of a different
> precision would not work.

I hope to get rid of the special BImode case after the merge and set
rtl's is_sign_extended back to true.

wide_int itself is always sign-extended.  The situation never occurs
for offset_int and widest_int.

> I noticed that in the eq_p_large code, you removed a block of code left 
> over from the days when this was not true, but there is still some of this
> code remaining that does not assume this.

The *_large functions have to be conservative and assume that the inputs
are not sign-extended.  My eq_p_large change doesn't change that,
it just represents it differently.  The idea is that rather than
extending each HWI individually from small_prec and then comparing them,
we can shift the XOR of the two values left by the number of excess bits,
so that all undefined bits disappear from the value.  The shifted-in bits
are all zeros so the shifted XOR value will be zero iff the meaningful
bits are the same and nonzero iff the meaningful bits are different,
regardless of whether the input is sign-extended or not.  The same
technique is already used in the inline version.
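
A minimal standalone sketch of the trick (not the GCC code itself, and
assuming a 64-bit HOST_WIDE_INT):

  #include <stdint.h>
  #include <assert.h>

  int
  main (void)
  {
    /* The 8-bit value 0xff stored two ways: sign-extended (-1) and
       zero-extended (0xff).  The bits above bit 7 are undefined and differ.  */
    uint64_t top0 = (uint64_t) (int64_t) -1;
    uint64_t top1 = 0xff;

    /* Shifting the XOR left by the number of excess bits (64 - 8 == 56)
       pushes the undefined bits out, so the two values compare equal.  */
    unsigned int excess = 64 - 8;
    assert (((top0 ^ top1) << excess) == 0);

    /* Values that differ in the meaningful low 8 bits stay different.  */
    assert (((top0 ^ (uint64_t) 0xfe) << excess) != 0);
    return 0;
  }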

Thanks,
Richard
Kenneth Zadeck May 3, 2014, 7:53 p.m. UTC | #3
Then with a fixed comment, this patch is fine.

kenny

On 05/03/2014 02:59 PM, Richard Sandiford wrote:
> Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>> The doc at wide-int.h:136 needs work.    The doc currently says that the
>> precision and length are always greater than 0.   This is now not
>> correct.     It also says that the bits above the precision are defined
>> to be the sign extension if the precision does not cover that block.
> I agree that the comment needs work.
>
> Reordering slightly:
>
>> I do not know if I believe this completely.
> This is now covered by the compile-time is_sign_extended property.
>
> Trees aren't extended in that way because we want to be able to access
> them quickly in wider precisions, with the extension following the
> signedness of the underlying type.  So an 8-bit all-ones INTEGER_CST
> will be stored as -1 if the type is signed and as 0xff otherwise.
>
> RTL constants aren't sign-extended because of the case you note:
>
>> I worry about this because we have moved back and forth on this issue
>> many times and I still see code at rtl.h:1440 which assumes that BImode
>> constants are stored differently on some platforms. It is possible that
>> I am reading that code incorrectly but at first reading it looks
>> questionable, so code comparing a BImode 1 with a 1 of a different
>> precision would not work.
> I hope to get rid of the special BImode case after the merge and set
> rtl's is_sign_extended back to true.
>
> wide_int itself is always sign-extended.  The situation never occurs
> for offset_int and widest_int.
>
>> I noticed that in the eq_p_large code, you removed a block of code left
>> over from the days when this was not true, but there is still some of this
>> code remaining that does not assume this.
> The *_large functions have to be conservative and assume that the inputs
> are not sign-extended.  My eq_p_large change doesn't change that,
> it just represents it differently.  The idea is that rather than
> extending each HWI individually from small_prec and then comparing them,
> we can shift the XOR of the two values left by the number of excess bits,
> so that all undefined bits disappear from the value.  The shifted-in bits
> are all zeros so the shifted XOR value will be zero iff the meaningful
> bits are the same and nonzero iff the meaningful bits are different,
> regardless of whether the input is sign-extended or not.  The same
> technique is already used in the inline version.
>
> Thanks,
> Richard
Richard Biener May 5, 2014, 8:31 a.m. UTC | #4
On Fri, May 2, 2014 at 9:19 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> I'd hoped the days of zero-precision INTEGER_CSTs were behind us after
> Richard's patch to remove min and max values from zero-width bitfields,
> but a bootstrap-ubsan showed otherwise.  One source is in:
>
>   null_pointer_node = build_int_cst (build_pointer_type (void_type_node), 0);
>
> if no_target, since the precision of the type comes from ptr_mode,
> which remains the default VOIDmode.  Maybe that's a bug, but setting
> it to an arbitrary nonzero value would also be dangerous.

Can you explain what 'no_target' should be?  ptr_mode should never be
VOIDmode.  So I don't think we want this patch.  Instead stor-layout should
ICE on zero-precision integer/pointer types.

Richard.

>
> The other use is in the C/C++ void_zero_node, which is specifically
> defined as a VOID_TYPE, zero-precision INTEGER_CST.  This is used by the
> ubsan code in some ?: tests, as well as by other places in the frontend
> proper.  At least the ubsan use persists until gimple.
>
> This patch therefore restores the wide-int handling for zero precision,
> for which the length must be 1 and the single HWI must be zero.
> I've tried to wrap up most of the dependencies in two new functions,
> wi::blocks_needed (a function version of the .cc BLOCKS_NEEDED, now also
> used in the header) and wi::excess_bits, so that it'll be easier to
> remove the handling again if zero precision ever goes away.
> There are some remaining, easily-greppable cases that check directly
> for a precision of 0 though.
>
> The combination of this and the other patches allows bootstrap-ubsan to
> complete.  There are a lot of extra testsuite failures compared to a
> normal bootstrap, but they don't look related to wide-int.
>
> Tested on x86_64-linux-gnu and powerpc64-linux-gnu.  OK to install?
>
> Thanks,
> Richard
>
>
> Index: gcc/wide-int.cc
> ===================================================================
> --- gcc/wide-int.cc     2014-05-02 16:28:07.657847935 +0100
> +++ gcc/wide-int.cc     2014-05-02 16:28:09.560842845 +0100
> @@ -48,8 +48,6 @@ static const HOST_WIDE_INT zeros[WIDE_IN
>  #define HALF_INT_MASK (((HOST_WIDE_INT) 1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
>
>  #define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
> -#define BLOCKS_NEEDED(PREC) \
> -  (PREC ? (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) : 1)
>  #define SIGN_MASK(X) ((HOST_WIDE_INT) (X) < 0 ? -1 : 0)
>
>  /* Return the value a VAL[I] if I < LEN, otherwise, return 0 or -1
> @@ -69,20 +67,20 @@ safe_uhwi (const HOST_WIDE_INT *val, uns
>  static unsigned int
>  canonize (HOST_WIDE_INT *val, unsigned int len, unsigned int precision)
>  {
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>    HOST_WIDE_INT top;
>    int i;
>
>    if (len > blocks_needed)
>      len = blocks_needed;
>
> +  if (wi::excess_bits (len, precision) > 0)
> +    val[len - 1] = sext_hwi (val[len - 1], precision % HOST_BITS_PER_WIDE_INT);
>    if (len == 1)
>      return len;
>
>    top = val[len - 1];
> -  if (len * HOST_BITS_PER_WIDE_INT > precision)
> -    val[len - 1] = top = sext_hwi (top, precision % HOST_BITS_PER_WIDE_INT);
> -  if (top != 0 && top != (HOST_WIDE_INT)-1)
> +  if (top != 0 && top != (HOST_WIDE_INT) -1)
>      return len;
>
>    /* At this point we know that the top is either 0 or -1.  Find the
> @@ -134,7 +132,7 @@ wi::from_buffer (const unsigned char *bu
>
>    /* We have to clear all the bits ourself, as we merely or in values
>       below.  */
> -  unsigned int len = BLOCKS_NEEDED (precision);
> +  unsigned int len = wi::blocks_needed (precision);
>    HOST_WIDE_INT *val = result.write_val ();
>    for (unsigned int i = 0; i < len; ++i)
>      val[i] = 0;
> @@ -180,7 +178,7 @@ wi::to_mpz (const wide_int_ref &x, mpz_t
>  {
>    int len = x.get_len ();
>    const HOST_WIDE_INT *v = x.get_val ();
> -  int excess = len * HOST_BITS_PER_WIDE_INT - x.get_precision ();
> +  unsigned int excess = wi::excess_bits (len, x.get_precision ());
>
>    if (wi::neg_p (x, sgn))
>      {
> @@ -306,7 +304,8 @@ wi::force_to_size (HOST_WIDE_INT *val, c
>                    unsigned int xlen, unsigned int xprecision,
>                    unsigned int precision, signop sgn)
>  {
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
> +  unsigned int xblocks_needed = wi::blocks_needed (xprecision);
>    unsigned int len = blocks_needed < xlen ? blocks_needed : xlen;
>    for (unsigned i = 0; i < len; i++)
>      val[i] = xval[i];
> @@ -318,11 +317,11 @@ wi::force_to_size (HOST_WIDE_INT *val, c
>        /* Expanding.  */
>        if (sgn == UNSIGNED)
>         {
> -         if (small_xprecision && len == BLOCKS_NEEDED (xprecision))
> +         if (small_xprecision && len == xblocks_needed)
>             val[len - 1] = zext_hwi (val[len - 1], small_xprecision);
>           else if (val[len - 1] < 0)
>             {
> -             while (len < BLOCKS_NEEDED (xprecision))
> +             while (len < xblocks_needed)
>                 val[len++] = -1;
>               if (small_xprecision)
>                 val[len - 1] = zext_hwi (val[len - 1], small_xprecision);
> @@ -332,7 +331,7 @@ wi::force_to_size (HOST_WIDE_INT *val, c
>         }
>        else
>         {
> -         if (small_xprecision && len == BLOCKS_NEEDED (xprecision))
> +         if (small_xprecision && len == xblocks_needed)
>             val[len - 1] = sext_hwi (val[len - 1], small_xprecision);
>         }
>      }
> @@ -372,10 +371,8 @@ selt (const HOST_WIDE_INT *a, unsigned i
>  static inline HOST_WIDE_INT
>  top_bit_of (const HOST_WIDE_INT *a, unsigned int len, unsigned int prec)
>  {
> -  int excess = len * HOST_BITS_PER_WIDE_INT - prec;
>    unsigned HOST_WIDE_INT val = a[len - 1];
> -  if (excess > 0)
> -    val <<= excess;
> +  val <<= wi::excess_bits (len, prec);
>    return val >> (HOST_BITS_PER_WIDE_INT - 1);
>  }
>
> @@ -391,28 +388,16 @@ wi::eq_p_large (const HOST_WIDE_INT *op0
>                 const HOST_WIDE_INT *op1, unsigned int op1len,
>                 unsigned int prec)
>  {
> -  int l0 = op0len - 1;
> -  unsigned int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
> -
>    if (op0len != op1len)
>      return false;
>
> -  if (op0len == BLOCKS_NEEDED (prec) && small_prec)
> -    {
> -      /* It does not matter if we zext or sext here, we just have to
> -        do both the same way.  */
> -      if (zext_hwi (op0 [l0], small_prec) != zext_hwi (op1 [l0], small_prec))
> -       return false;
> -      l0--;
> -    }
> -
> -  while (l0 >= 0)
> -    if (op0[l0] != op1[l0])
> +  for (unsigned int i = 0; i < op0len - 1; i++)
> +    if (op0[i] != op1[i])
>        return false;
> -    else
> -      l0--;
>
> -  return true;
> +  unsigned HOST_WIDE_INT top0 = op0[op0len - 1];
> +  unsigned HOST_WIDE_INT top1 = op1[op1len - 1];
> +  return ((top0 ^ top1) << wi::excess_bits (op0len, prec)) == 0;
>  }
>
>  /* Return true if OP0 < OP1 using signed comparisons.  */
> @@ -423,7 +408,7 @@ wi::lts_p_large (const HOST_WIDE_INT *op
>  {
>    HOST_WIDE_INT s0, s1;
>    unsigned HOST_WIDE_INT u0, u1;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>    unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
>    int l = MAX (op0len - 1, op1len - 1);
>
> @@ -461,7 +446,7 @@ wi::cmps_large (const HOST_WIDE_INT *op0
>  {
>    HOST_WIDE_INT s0, s1;
>    unsigned HOST_WIDE_INT u0, u1;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>    unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
>    int l = MAX (op0len - 1, op1len - 1);
>
> @@ -498,7 +483,7 @@ wi::ltu_p_large (const HOST_WIDE_INT *op
>  {
>    unsigned HOST_WIDE_INT x0;
>    unsigned HOST_WIDE_INT x1;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>    unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
>    int l = MAX (op0len - 1, op1len - 1);
>
> @@ -525,7 +510,7 @@ wi::cmpu_large (const HOST_WIDE_INT *op0
>  {
>    unsigned HOST_WIDE_INT x0;
>    unsigned HOST_WIDE_INT x1;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
> +  unsigned int blocks_needed = wi::blocks_needed (precision);
>    unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
>    int l = MAX (op0len - 1, op1len - 1);
>
> @@ -673,7 +658,7 @@ wide_int_storage::bswap () const
>  {
>    wide_int result = wide_int::create (precision);
>    unsigned int i, s;
> -  unsigned int len = BLOCKS_NEEDED (precision);
> +  unsigned int len = wi::blocks_needed (precision);
>    unsigned int xlen = get_len ();
>    const HOST_WIDE_INT *xval = get_val ();
>    HOST_WIDE_INT *val = result.write_val ();
> @@ -1149,7 +1134,7 @@ wi_unpack (unsigned HOST_HALF_WIDE_INT *
>    unsigned int i;
>    unsigned int j = 0;
>    unsigned int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
> -  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
> +  unsigned int blocks_needed = wi::blocks_needed (prec);
>    HOST_WIDE_INT mask;
>
>    if (sgn == SIGNED)
> @@ -1222,7 +1207,7 @@ wi::mul_internal (HOST_WIDE_INT *val, co
>    unsigned HOST_WIDE_INT o0, o1, k, t;
>    unsigned int i;
>    unsigned int j;
> -  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
> +  unsigned int blocks_needed = wi::blocks_needed (prec);
>    unsigned int half_blocks_needed = blocks_needed * 2;
>    /* The sizes here are scaled to support a 2x largest mode by 2x
>       largest mode yielding a 4x largest mode result.  This is what is
> @@ -1426,6 +1411,9 @@ wi::popcount (const wide_int_ref &x)
>    unsigned int i;
>    int count;
>
> +  if (x.precision == 0)
> +    return 0;
> +
>    /* The high order block is special if it is the last block and the
>       precision is not an even multiple of HOST_BITS_PER_WIDE_INT.  We
>       have to clear out any ones above the precision before doing
> @@ -1645,8 +1633,8 @@ wi::divmod_internal (HOST_WIDE_INT *quot
>                      unsigned int divisor_prec, signop sgn,
>                      bool *oflow)
>  {
> -  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
> -  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
> +  unsigned int dividend_blocks_needed = 2 * wi::blocks_needed (dividend_prec);
> +  unsigned int divisor_blocks_needed = 2 * wi::blocks_needed (divisor_prec);
>    unsigned HOST_HALF_WIDE_INT
>      b_quotient[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
>    unsigned HOST_HALF_WIDE_INT
> @@ -1671,7 +1659,7 @@ wi::divmod_internal (HOST_WIDE_INT *quot
>    /* The smallest signed number / -1 causes overflow.  The dividend_len
>       check is for speed rather than correctness.  */
>    if (sgn == SIGNED
> -      && dividend_len == BLOCKS_NEEDED (dividend_prec)
> +      && dividend_len == wi::blocks_needed (dividend_prec)
>        && divisor == -1
>        && wi::only_sign_bit_p (dividend))
>      overflow = true;
> @@ -1838,7 +1826,7 @@ wi::lshift_large (HOST_WIDE_INT *val, co
>    unsigned int small_shift = shift % HOST_BITS_PER_WIDE_INT;
>
>    /* The whole-block shift fills with zeros.  */
> -  unsigned int len = BLOCKS_NEEDED (precision);
> +  unsigned int len = wi::blocks_needed (precision);
>    for (unsigned int i = 0; i < skip; ++i)
>      val[i] = 0;
>
> @@ -1876,7 +1864,7 @@ rshift_large_common (HOST_WIDE_INT *val,
>
>    /* Work out how many blocks are needed to store the significant bits
>       (excluding the upper zeros or signs).  */
> -  unsigned int len = BLOCKS_NEEDED (xprecision - shift);
> +  unsigned int len = wi::blocks_needed (xprecision - shift);
>
>    /* It's easier to handle the simple block case specially.  */
>    if (small_shift == 0)
> @@ -1949,6 +1937,9 @@ wi::arshift_large (HOST_WIDE_INT *val, c
>  int
>  wi::clz (const wide_int_ref &x)
>  {
> +  if (x.precision == 0)
> +    return 0;
> +
>    /* Calculate how many bits there above the highest represented block.  */
>    int count = x.precision - x.len * HOST_BITS_PER_WIDE_INT;
>
> @@ -1973,6 +1964,9 @@ wi::clz (const wide_int_ref &x)
>  int
>  wi::clrsb (const wide_int_ref &x)
>  {
> +  if (x.precision == 0)
> +    return 0;
> +
>    /* Calculate how many bits there above the highest represented block.  */
>    int count = x.precision - x.len * HOST_BITS_PER_WIDE_INT;
>
> @@ -2017,6 +2011,9 @@ wi::ctz (const wide_int_ref &x)
>  int
>  wi::exact_log2 (const wide_int_ref &x)
>  {
> +  if (x.precision == 0)
> +    return -1;
> +
>    /* Reject cases where there are implicit -1 blocks above HIGH.  */
>    if (x.len * HOST_BITS_PER_WIDE_INT < x.precision && x.sign_mask () < 0)
>      return -1;
> Index: gcc/wide-int.h
> ===================================================================
> --- gcc/wide-int.h      2014-05-02 16:28:07.657847935 +0100
> +++ gcc/wide-int.h      2014-05-02 16:28:09.561842842 +0100
> @@ -430,6 +430,9 @@ #define WIDE_INT_REF_FOR(T) \
>  /* Public functions for querying and operating on integers.  */
>  namespace wi
>  {
> +  unsigned int excess_bits (unsigned int, unsigned int);
> +  unsigned int blocks_needed (unsigned int);
> +
>    template <typename T>
>    unsigned int get_precision (const T &);
>
> @@ -740,7 +743,7 @@ inline generic_wide_int <storage>::gener
>  inline HOST_WIDE_INT
>  generic_wide_int <storage>::to_shwi (unsigned int precision) const
>  {
> -  if (precision < HOST_BITS_PER_WIDE_INT)
> +  if (precision > 0 && precision < HOST_BITS_PER_WIDE_INT)
>      return sext_hwi (this->get_val ()[0], precision);
>    else
>      return this->get_val ()[0];
> @@ -764,7 +767,7 @@ generic_wide_int <storage>::to_shwi () c
>  inline unsigned HOST_WIDE_INT
>  generic_wide_int <storage>::to_uhwi (unsigned int precision) const
>  {
> -  if (precision < HOST_BITS_PER_WIDE_INT)
> +  if (precision > 0 && precision < HOST_BITS_PER_WIDE_INT)
>      return zext_hwi (this->get_val ()[0], precision);
>    else
>      return this->get_val ()[0];
> @@ -797,12 +800,7 @@ generic_wide_int <storage>::sign_mask ()
>    unsigned int len = this->get_len ();
>    unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
>    if (!is_sign_extended)
> -    {
> -      unsigned int precision = this->get_precision ();
> -      int excess = len * HOST_BITS_PER_WIDE_INT - precision;
> -      if (excess > 0)
> -       high <<= excess;
> -    }
> +    high <<= wi::excess_bits (len, this->get_precision ());
>    return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
>  }
>
> @@ -1068,7 +1066,7 @@ wide_int_storage::write_val ()
>  wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
>  {
>    len = l;
> -  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
> +  if (!is_sign_extended && wi::excess_bits (len, precision) > 0)
>      val[len - 1] = sext_hwi (val[len - 1],
>                              precision % HOST_BITS_PER_WIDE_INT);
>  }
> @@ -1347,7 +1345,7 @@ trailing_wide_int_storage::write_val ()
>  trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
>  {
>    *m_len = len;
> -  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
> +  if (!is_sign_extended && wi::excess_bits (len, m_precision) > 0)
>      m_val[len - 1] = sext_hwi (m_val[len - 1],
>                                m_precision % HOST_BITS_PER_WIDE_INT);
>  }
> @@ -1368,8 +1366,7 @@ trailing_wide_int_storage::operator = (c
>  trailing_wide_ints <N>::set_precision (unsigned int precision)
>  {
>    m_precision = precision;
> -  m_max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
> -              / HOST_BITS_PER_WIDE_INT);
> +  m_max_len = wi::blocks_needed (precision);
>  }
>
>  /* Return a reference to element INDEX.  */
> @@ -1387,9 +1384,7 @@ trailing_wide_ints <N>::operator [] (uns
>  inline size_t
>  trailing_wide_ints <N>::extra_size (unsigned int precision)
>  {
> -  unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
> -                         / HOST_BITS_PER_WIDE_INT);
> -  return (N * max_len - 1) * sizeof (HOST_WIDE_INT);
> +  return (N * wi::blocks_needed (precision) - 1) * sizeof (HOST_WIDE_INT);
>  }
>
>  /* This macro is used in structures that end with a trailing_wide_ints field
> @@ -1621,6 +1616,26 @@ decompose (HOST_WIDE_INT *scratch, unsig
>                                 signop, bool *);
>  }
>
> +/* If a value of length LEN blocks has more than PRECISION bits, return
> +   the number of excess bits, otherwise return 0.  For the special case
> +   of PRECISION being zero, the single HWI must have the value zero and
> +   there are no excess bits.  Handling zero precision this way means
> +   that the result is always a valid shift amount.  */
> +inline unsigned int
> +wi::excess_bits (unsigned int len, unsigned int precision)
> +{
> +  unsigned int excess = len * HOST_BITS_PER_WIDE_INT - precision;
> +  return excess < HOST_BITS_PER_WIDE_INT ? excess : 0;
> +}
> +
> +/* Return the number of blocks needed for precision PRECISION.  */
> +inline unsigned int
> +wi::blocks_needed (unsigned int precision)
> +{
> +  return precision == 0 ? 1 : ((precision + HOST_BITS_PER_WIDE_INT - 1)
> +                              / HOST_BITS_PER_WIDE_INT);
> +}
> +
>  /* Return the number of bits that integer X can hold.  */
>  template <typename T>
>  inline unsigned int
> @@ -1729,9 +1744,7 @@ wi::eq_p (const T1 &x, const T2 &y)
>         return xi.val[0] == 0;
>        /* Otherwise flush out any excess bits first.  */
>        unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
> -      int excess = HOST_BITS_PER_WIDE_INT - precision;
> -      if (excess > 0)
> -       diff <<= excess;
> +      diff <<= wi::excess_bits (1, precision);
>        return diff == 0;
>      }
>    return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
> @@ -2323,7 +2336,9 @@ wi::add (const T1 &x, const T2 &y, signo
>        unsigned HOST_WIDE_INT xl = xi.ulow ();
>        unsigned HOST_WIDE_INT yl = yi.ulow ();
>        unsigned HOST_WIDE_INT resultl = xl + yl;
> -      if (sgn == SIGNED)
> +      if (precision == 0)
> +       *overflow = false;
> +      else if (sgn == SIGNED)
>         *overflow = (((resultl ^ xl) & (resultl ^ yl))
>                      >> (precision - 1)) & 1;
>        else
> @@ -2396,7 +2411,9 @@ wi::sub (const T1 &x, const T2 &y, signo
>        unsigned HOST_WIDE_INT xl = xi.ulow ();
>        unsigned HOST_WIDE_INT yl = yi.ulow ();
>        unsigned HOST_WIDE_INT resultl = xl - yl;
> -      if (sgn == SIGNED)
> +      if (precision == 0)
> +       *overflow = false;
> +      else if (sgn == SIGNED)
>         *overflow = (((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1;
>        else
>         *overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))
Richard Sandiford May 5, 2014, 10:54 a.m. UTC | #5
Richard Biener <richard.guenther@gmail.com> writes:
> On Fri, May 2, 2014 at 9:19 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> I'd hoped the days of zero-precision INTEGER_CSTs were behind us after
>> Richard's patch to remove min and max values from zero-width bitfields,
>> but a bootstrap-ubsan showed otherwise.  One source is in:
>>
>>   null_pointer_node = build_int_cst (build_pointer_type (void_type_node), 0);
>>
>> if no_target, since the precision of the type comes from ptr_mode,
>> which remains the default VOIDmode.  Maybe that's a bug, but setting
>> it to an arbitrary nonzero value would also be dangerous.
>
> Can you explain what 'no_target' should be?  ptr_mode should never be
> VOIDmode.

Sorry, meant "no_backend" rather than "no_target".  See do_compile.

> So I don't think we want this patch.  Instead stor-layout should
> ICE on zero-precision integer/pointer types.

What should happen for void_zero_node?

Thanks,
Richard
Richard Biener May 5, 2014, 12:15 p.m. UTC | #6
On Mon, May 5, 2014 at 12:54 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Fri, May 2, 2014 at 9:19 PM, Richard Sandiford
>> <rdsandiford@googlemail.com> wrote:
>>> I'd hoped the days of zero-precision INTEGER_CSTs were behind us after
>>> Richard's patch to remove min and max values from zero-width bitfields,
>>> but a bootstrap-ubsan showed otherwise.  One source is in:
>>>
>>>   null_pointer_node = build_int_cst (build_pointer_type (void_type_node), 0);
>>>
>>> if no_target, since the precision of the type comes from ptr_mode,
>>> which remains the default VOIDmode.  Maybe that's a bug, but setting
>>> it to an arbitrary nonzero value would also be dangerous.
>>
>> Can you explain what 'no_target' should be?  ptr_mode should never be
>> VOIDmode.
>
> Sorry, meant "no_backend" rather than "no_target".  See do_compile.

Ok.  So we do

      /* This must be run always, because it is needed to compute the FP
         predefined macros, such as __LDBL_MAX__, for targets using non
         default FP formats.  */
      init_adjust_machine_modes ();

      /* Set up the back-end if requested.  */
      if (!no_backend)
        backend_init ();

where I think that init_adjust_machine_modes should initialize the
{byte,word,ptr}_mode globals.  Move from init_emit_once.

But why do we even run into uses of null_pointer_node for no_backend?
For sure for -fsyntax-only we can't really omit backend init as IMHO
no parser piece is decoupled enough to never call target hooks or so.

Thus, omit less of the initialization for no_backend.

>> So I don't think we want this patch.  Instead stor-layout should
>> ICE on zero-precision integer/pointer types.
>
> What should happen for void_zero_node?

Not sure what that beast is supposed to be or why it should be
of INTEGER_CST kind (it's not even initialized in any meaningful
way).

That said, the wide-int code shouldn't be slowed down by
precision == 0 checks.  We should never ever reach wide-int
with such a constant.

Richard.

> Thanks,
> Richard
Richard Sandiford May 5, 2014, 2:51 p.m. UTC | #7
Richard Biener <richard.guenther@gmail.com> writes:
> On Mon, May 5, 2014 at 12:54 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>> On Fri, May 2, 2014 at 9:19 PM, Richard Sandiford
>>> <rdsandiford@googlemail.com> wrote:
>>>> I'd hoped the days of zero-precision INTEGER_CSTs were behind us after
>>>> Richard's patch to remove min and max values from zero-width bitfields,
>>>> but a bootstrap-ubsan showed otherwise.  One source is in:
>>>>
>>>>   null_pointer_node = build_int_cst (build_pointer_type
>>>> (void_type_node), 0);
>>>>
>>>> if no_target, since the precision of the type comes from ptr_mode,
>>>> which remains the default VOIDmode.  Maybe that's a bug, but setting
>>>> it to an arbitrary nonzero value would also be dangerous.
>>>
>>> Can you explain what 'no_target' should be?  ptr_mode should never be
>>> VOIDmode.
>>
>> Sorry, meant "no_backend" rather than "no_target".  See do_compile.
>
> Ok.  So we do
>
>       /* This must be run always, because it is needed to compute the FP
>          predefined macros, such as __LDBL_MAX__, for targets using non
>          default FP formats.  */
>       init_adjust_machine_modes ();
>
>       /* Set up the back-end if requested.  */
>       if (!no_backend)
>         backend_init ();
>
> where I think that init_adjust_machine_modes should initialize the
> {byte,word,ptr}_mode globals.  Move from init_emit_once.
>
> But why do we even run into uses of null_pointer_node for no_backend?
> For sure for -fsyntax-only we can't really omit backend init as IMHO
> no parser piece is decoupled enough to never call target hooks or so.
>
> Thus, omit less of the initialization for no_backend.

OK, but I don't think the wide-int merge should be held up for clean-ups
like that.  At the moment the tree & gimple levels do use 0 precision, and
although I think we both agree it'd be better if they didn't, let's do
one thing at a time.

>>> So I don't think we want this patch.  Instead stor-layout should
>>> ICE on zero-precision integer/pointer types.
>>
>> What should happen for void_zero_node?
>
> Not sure what that beast is supposed to be or why it should be
> of INTEGER_CST kind (it's not even initialized in any meaningful
> way).
>
> That said, the wide-int code shouldn't be slowed down by
> precision == 0 checks.  We should never ever reach wide-int
> with such a constant.

void_zero_node is used for ubsan too, and survives into gimple.
I did hit this in real tests, it wasn't just theoretical.

Thanks,
Richard
Richard Biener May 5, 2014, 5:32 p.m. UTC | #8
On Mon, May 5, 2014 at 4:51 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Mon, May 5, 2014 at 12:54 PM, Richard Sandiford
>> <rdsandiford@googlemail.com> wrote:
>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>> On Fri, May 2, 2014 at 9:19 PM, Richard Sandiford
>>>> <rdsandiford@googlemail.com> wrote:
>>>>> I'd hoped the days of zero-precision INTEGER_CSTs were behind us after
>>>>> Richard's patch to remove min and max values from zero-width bitfields,
>>>>> but a bootstrap-ubsan showed otherwise.  One source is in:
>>>>>
>>>>>   null_pointer_node = build_int_cst (build_pointer_type
>>>>> (void_type_node), 0);
>>>>>
>>>>> if no_target, since the precision of the type comes from ptr_mode,
>>>>> which remains the default VOIDmode.  Maybe that's a bug, but setting
>>>>> it to an arbitrary nonzero value would also be dangerous.
>>>>
>>>> Can you explain what 'no_target' should be?  ptr_mode should never be
>>>> VOIDmode.
>>>
>>> Sorry, meant "no_backend" rather than "no_target".  See do_compile.
>>
>> Ok.  So we do
>>
>>       /* This must be run always, because it is needed to compute the FP
>>          predefined macros, such as __LDBL_MAX__, for targets using non
>>          default FP formats.  */
>>       init_adjust_machine_modes ();
>>
>>       /* Set up the back-end if requested.  */
>>       if (!no_backend)
>>         backend_init ();
>>
>> where I think that init_adjust_machine_modes should initialize the
>> {byte,word,ptr}_mode globals.  Move from init_emit_once.
>>
>> But why do we even run into uses of null_pointer_node for no_backend?
>> For sure for -fsyntax-only we can't really omit backend init as IMHO
>> no parser piece is decoupled enough to never call target hooks or so.
>>
>> Thus, omit less of the initialization for no_backend.
>
> OK, but I don't think the wide-int merge should be held up for clean-ups
> like that.  At the moment the tree & gimple levels do use 0 precision, and
> although I think we both agree it'd be better if they didn't, let's do
> one thing at a time.

Fair enough.

>>>> So I don't think we want this patch.  Instead stor-layout should
>>>> ICE on zero-precision integer/pointer types.
>>>
>>> What should happen for void_zero_node?
>>
>> Not sure what that beast is supposed to be or why it should be
>> of INTEGER_CST kind (it's not even initialized in any meaningful
>> way).
>>
>> That said, the wide-int code shouldn't be slowed down by
>> precision == 0 checks.  We should never ever reach wide-int
>> with such a constant.
>
> void_zero_node is used for ubsan too, and survives into gimple.
> I did hit this in real tests, it wasn't just theoretical.

Ugh - for what does it use that ... :/

Please remember how to trigger those issues and I'll happily have
a look after the merge.

Thanks,
Richard.

> Thanks,
> Richard
Marek Polacek May 5, 2014, 5:40 p.m. UTC | #9
On Mon, May 05, 2014 at 07:32:31PM +0200, Richard Biener wrote:
> > void_zero_node is used for ubsan too, and survives into gimple.
> > I did hit this in real tests, it wasn't just theoretical.
> 
> Ugh - for what does it use that ... :/

It's used like this:
t = fold_build3 (COND_EXPR, void_type_node, t, tt, void_zero_node);
so if condition T isn't true, nothing should happen.  I remember that
initially I used something similar to void_zero_node, but that
resulted in ICEs.

	Marek

Patch

Index: gcc/wide-int.cc
===================================================================
--- gcc/wide-int.cc	2014-05-02 16:28:07.657847935 +0100
+++ gcc/wide-int.cc	2014-05-02 16:28:09.560842845 +0100
@@ -48,8 +48,6 @@  static const HOST_WIDE_INT zeros[WIDE_IN
 #define HALF_INT_MASK (((HOST_WIDE_INT) 1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
 
 #define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
-#define BLOCKS_NEEDED(PREC) \
-  (PREC ? (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) : 1)
 #define SIGN_MASK(X) ((HOST_WIDE_INT) (X) < 0 ? -1 : 0)
 
 /* Return the value a VAL[I] if I < LEN, otherwise, return 0 or -1
@@ -69,20 +67,20 @@  safe_uhwi (const HOST_WIDE_INT *val, uns
 static unsigned int
 canonize (HOST_WIDE_INT *val, unsigned int len, unsigned int precision)
 {
-  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  unsigned int blocks_needed = wi::blocks_needed (precision);
   HOST_WIDE_INT top;
   int i;
 
   if (len > blocks_needed)
     len = blocks_needed;
 
+  if (wi::excess_bits (len, precision) > 0)
+    val[len - 1] = sext_hwi (val[len - 1], precision % HOST_BITS_PER_WIDE_INT);
   if (len == 1)
     return len;
 
   top = val[len - 1];
-  if (len * HOST_BITS_PER_WIDE_INT > precision)
-    val[len - 1] = top = sext_hwi (top, precision % HOST_BITS_PER_WIDE_INT);
-  if (top != 0 && top != (HOST_WIDE_INT)-1)
+  if (top != 0 && top != (HOST_WIDE_INT) -1)
     return len;
 
   /* At this point we know that the top is either 0 or -1.  Find the
@@ -134,7 +132,7 @@  wi::from_buffer (const unsigned char *bu
 
   /* We have to clear all the bits ourself, as we merely or in values
      below.  */
-  unsigned int len = BLOCKS_NEEDED (precision);
+  unsigned int len = wi::blocks_needed (precision);
   HOST_WIDE_INT *val = result.write_val ();
   for (unsigned int i = 0; i < len; ++i)
     val[i] = 0;
@@ -180,7 +178,7 @@  wi::to_mpz (const wide_int_ref &x, mpz_t
 {
   int len = x.get_len ();
   const HOST_WIDE_INT *v = x.get_val ();
-  int excess = len * HOST_BITS_PER_WIDE_INT - x.get_precision ();
+  unsigned int excess = wi::excess_bits (len, x.get_precision ());
 
   if (wi::neg_p (x, sgn))
     {
@@ -306,7 +304,8 @@  wi::force_to_size (HOST_WIDE_INT *val, c
 		   unsigned int xlen, unsigned int xprecision,
 		   unsigned int precision, signop sgn)
 {
-  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  unsigned int blocks_needed = wi::blocks_needed (precision);
+  unsigned int xblocks_needed = wi::blocks_needed (xprecision);
   unsigned int len = blocks_needed < xlen ? blocks_needed : xlen;
   for (unsigned i = 0; i < len; i++)
     val[i] = xval[i];
@@ -318,11 +317,11 @@  wi::force_to_size (HOST_WIDE_INT *val, c
       /* Expanding.  */
       if (sgn == UNSIGNED)
 	{
-	  if (small_xprecision && len == BLOCKS_NEEDED (xprecision))
+	  if (small_xprecision && len == xblocks_needed)
 	    val[len - 1] = zext_hwi (val[len - 1], small_xprecision);
 	  else if (val[len - 1] < 0)
 	    {
-	      while (len < BLOCKS_NEEDED (xprecision))
+	      while (len < xblocks_needed)
 		val[len++] = -1;
 	      if (small_xprecision)
 		val[len - 1] = zext_hwi (val[len - 1], small_xprecision);
@@ -332,7 +331,7 @@  wi::force_to_size (HOST_WIDE_INT *val, c
 	}
       else
 	{
-	  if (small_xprecision && len == BLOCKS_NEEDED (xprecision))
+	  if (small_xprecision && len == xblocks_needed)
 	    val[len - 1] = sext_hwi (val[len - 1], small_xprecision);
 	}
     }
@@ -372,10 +371,8 @@  selt (const HOST_WIDE_INT *a, unsigned i
 static inline HOST_WIDE_INT
 top_bit_of (const HOST_WIDE_INT *a, unsigned int len, unsigned int prec)
 {
-  int excess = len * HOST_BITS_PER_WIDE_INT - prec;
   unsigned HOST_WIDE_INT val = a[len - 1];
-  if (excess > 0)
-    val <<= excess;
+  val <<= wi::excess_bits (len, prec);
   return val >> (HOST_BITS_PER_WIDE_INT - 1);
 }
 
@@ -391,28 +388,16 @@  wi::eq_p_large (const HOST_WIDE_INT *op0
 		const HOST_WIDE_INT *op1, unsigned int op1len,
 		unsigned int prec)
 {
-  int l0 = op0len - 1;
-  unsigned int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
-
   if (op0len != op1len)
     return false;
 
-  if (op0len == BLOCKS_NEEDED (prec) && small_prec)
-    {
-      /* It does not matter if we zext or sext here, we just have to
-	 do both the same way.  */
-      if (zext_hwi (op0 [l0], small_prec) != zext_hwi (op1 [l0], small_prec))
-	return false;
-      l0--;
-    }
-
-  while (l0 >= 0)
-    if (op0[l0] != op1[l0])
+  for (unsigned int i = 0; i < op0len - 1; i++)
+    if (op0[i] != op1[i])
       return false;
-    else
-      l0--;
 
-  return true;
+  unsigned HOST_WIDE_INT top0 = op0[op0len - 1];
+  unsigned HOST_WIDE_INT top1 = op1[op1len - 1];
+  return ((top0 ^ top1) << wi::excess_bits (op0len, prec)) == 0;
 }
 
 /* Return true if OP0 < OP1 using signed comparisons.  */
@@ -423,7 +408,7 @@  wi::lts_p_large (const HOST_WIDE_INT *op
 {
   HOST_WIDE_INT s0, s1;
   unsigned HOST_WIDE_INT u0, u1;
-  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  unsigned int blocks_needed = wi::blocks_needed (precision);
   unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
   int l = MAX (op0len - 1, op1len - 1);
 
@@ -461,7 +446,7 @@  wi::cmps_large (const HOST_WIDE_INT *op0
 {
   HOST_WIDE_INT s0, s1;
   unsigned HOST_WIDE_INT u0, u1;
-  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  unsigned int blocks_needed = wi::blocks_needed (precision);
   unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
   int l = MAX (op0len - 1, op1len - 1);
 
@@ -498,7 +483,7 @@  wi::ltu_p_large (const HOST_WIDE_INT *op
 {
   unsigned HOST_WIDE_INT x0;
   unsigned HOST_WIDE_INT x1;
-  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  unsigned int blocks_needed = wi::blocks_needed (precision);
   unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
   int l = MAX (op0len - 1, op1len - 1);
 
@@ -525,7 +510,7 @@  wi::cmpu_large (const HOST_WIDE_INT *op0
 {
   unsigned HOST_WIDE_INT x0;
   unsigned HOST_WIDE_INT x1;
-  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  unsigned int blocks_needed = wi::blocks_needed (precision);
   unsigned int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
   int l = MAX (op0len - 1, op1len - 1);
 
@@ -673,7 +658,7 @@  wide_int_storage::bswap () const
 {
   wide_int result = wide_int::create (precision);
   unsigned int i, s;
-  unsigned int len = BLOCKS_NEEDED (precision);
+  unsigned int len = wi::blocks_needed (precision);
   unsigned int xlen = get_len ();
   const HOST_WIDE_INT *xval = get_val ();
   HOST_WIDE_INT *val = result.write_val ();
@@ -1149,7 +1134,7 @@  wi_unpack (unsigned HOST_HALF_WIDE_INT *
   unsigned int i;
   unsigned int j = 0;
   unsigned int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
-  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
+  unsigned int blocks_needed = wi::blocks_needed (prec);
   HOST_WIDE_INT mask;
 
   if (sgn == SIGNED)
@@ -1222,7 +1207,7 @@  wi::mul_internal (HOST_WIDE_INT *val, co
   unsigned HOST_WIDE_INT o0, o1, k, t;
   unsigned int i;
   unsigned int j;
-  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
+  unsigned int blocks_needed = wi::blocks_needed (prec);
   unsigned int half_blocks_needed = blocks_needed * 2;
   /* The sizes here are scaled to support a 2x largest mode by 2x
      largest mode yielding a 4x largest mode result.  This is what is
@@ -1426,6 +1411,9 @@  wi::popcount (const wide_int_ref &x)
   unsigned int i;
   int count;
 
+  if (x.precision == 0)
+    return 0;
+
   /* The high order block is special if it is the last block and the
      precision is not an even multiple of HOST_BITS_PER_WIDE_INT.  We
      have to clear out any ones above the precision before doing
@@ -1645,8 +1633,8 @@  wi::divmod_internal (HOST_WIDE_INT *quot
 		     unsigned int divisor_prec, signop sgn,
 		     bool *oflow)
 {
-  unsigned int dividend_blocks_needed = 2 * BLOCKS_NEEDED (dividend_prec);
-  unsigned int divisor_blocks_needed = 2 * BLOCKS_NEEDED (divisor_prec);
+  unsigned int dividend_blocks_needed = 2 * wi::blocks_needed (dividend_prec);
+  unsigned int divisor_blocks_needed = 2 * wi::blocks_needed (divisor_prec);
   unsigned HOST_HALF_WIDE_INT
     b_quotient[4 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
   unsigned HOST_HALF_WIDE_INT
@@ -1671,7 +1659,7 @@  wi::divmod_internal (HOST_WIDE_INT *quot
   /* The smallest signed number / -1 causes overflow.  The dividend_len
      check is for speed rather than correctness.  */
   if (sgn == SIGNED
-      && dividend_len == BLOCKS_NEEDED (dividend_prec)
+      && dividend_len == wi::blocks_needed (dividend_prec)
       && divisor == -1
       && wi::only_sign_bit_p (dividend))
     overflow = true;
@@ -1838,7 +1826,7 @@  wi::lshift_large (HOST_WIDE_INT *val, co
   unsigned int small_shift = shift % HOST_BITS_PER_WIDE_INT;
 
   /* The whole-block shift fills with zeros.  */
-  unsigned int len = BLOCKS_NEEDED (precision);
+  unsigned int len = wi::blocks_needed (precision);
   for (unsigned int i = 0; i < skip; ++i)
     val[i] = 0;
 
@@ -1876,7 +1864,7 @@  rshift_large_common (HOST_WIDE_INT *val,
 
   /* Work out how many blocks are needed to store the significant bits
      (excluding the upper zeros or signs).  */
-  unsigned int len = BLOCKS_NEEDED (xprecision - shift);
+  unsigned int len = wi::blocks_needed (xprecision - shift);
 
   /* It's easier to handle the simple block case specially.  */
   if (small_shift == 0)
@@ -1949,6 +1937,9 @@  wi::arshift_large (HOST_WIDE_INT *val, c
 int
 wi::clz (const wide_int_ref &x)
 {
+  if (x.precision == 0)
+    return 0;
+
   /* Calculate how many bits there above the highest represented block.  */
   int count = x.precision - x.len * HOST_BITS_PER_WIDE_INT;
 
@@ -1973,6 +1964,9 @@  wi::clz (const wide_int_ref &x)
 int
 wi::clrsb (const wide_int_ref &x)
 {
+  if (x.precision == 0)
+    return 0;
+
   /* Calculate how many bits there above the highest represented block.  */
   int count = x.precision - x.len * HOST_BITS_PER_WIDE_INT;
 
@@ -2017,6 +2011,9 @@  wi::ctz (const wide_int_ref &x)
 int
 wi::exact_log2 (const wide_int_ref &x)
 {
+  if (x.precision == 0)
+    return -1;
+
   /* Reject cases where there are implicit -1 blocks above HIGH.  */
   if (x.len * HOST_BITS_PER_WIDE_INT < x.precision && x.sign_mask () < 0)
     return -1;
Index: gcc/wide-int.h
===================================================================
--- gcc/wide-int.h	2014-05-02 16:28:07.657847935 +0100
+++ gcc/wide-int.h	2014-05-02 16:28:09.561842842 +0100
@@ -430,6 +430,9 @@  #define WIDE_INT_REF_FOR(T) \
 /* Public functions for querying and operating on integers.  */
 namespace wi
 {
+  unsigned int excess_bits (unsigned int, unsigned int);
+  unsigned int blocks_needed (unsigned int);
+
   template <typename T>
   unsigned int get_precision (const T &);
 
@@ -740,7 +743,7 @@  inline generic_wide_int <storage>::gener
 inline HOST_WIDE_INT
 generic_wide_int <storage>::to_shwi (unsigned int precision) const
 {
-  if (precision < HOST_BITS_PER_WIDE_INT)
+  if (precision > 0 && precision < HOST_BITS_PER_WIDE_INT)
     return sext_hwi (this->get_val ()[0], precision);
   else
     return this->get_val ()[0];
@@ -764,7 +767,7 @@  generic_wide_int <storage>::to_shwi () c
 inline unsigned HOST_WIDE_INT
 generic_wide_int <storage>::to_uhwi (unsigned int precision) const
 {
-  if (precision < HOST_BITS_PER_WIDE_INT)
+  if (precision > 0 && precision < HOST_BITS_PER_WIDE_INT)
     return zext_hwi (this->get_val ()[0], precision);
   else
     return this->get_val ()[0];
@@ -797,12 +800,7 @@  generic_wide_int <storage>::sign_mask ()
   unsigned int len = this->get_len ();
   unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
   if (!is_sign_extended)
-    {
-      unsigned int precision = this->get_precision ();
-      int excess = len * HOST_BITS_PER_WIDE_INT - precision;
-      if (excess > 0)
-	high <<= excess;
-    }
+    high <<= wi::excess_bits (len, this->get_precision ());
   return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
 }
 
@@ -1068,7 +1066,7 @@  wide_int_storage::write_val ()
 wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
 {
   len = l;
-  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
+  if (!is_sign_extended && wi::excess_bits (len, precision) > 0)
     val[len - 1] = sext_hwi (val[len - 1],
 			     precision % HOST_BITS_PER_WIDE_INT);
 }
@@ -1347,7 +1345,7 @@  trailing_wide_int_storage::write_val ()
 trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
 {
   *m_len = len;
-  if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
+  if (!is_sign_extended && wi::excess_bits (len, m_precision) > 0)
     m_val[len - 1] = sext_hwi (m_val[len - 1],
 			       m_precision % HOST_BITS_PER_WIDE_INT);
 }
@@ -1368,8 +1366,7 @@  trailing_wide_int_storage::operator = (c
 trailing_wide_ints <N>::set_precision (unsigned int precision)
 {
   m_precision = precision;
-  m_max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
-	       / HOST_BITS_PER_WIDE_INT);
+  m_max_len = wi::blocks_needed (precision);
 }
 
 /* Return a reference to element INDEX.  */
@@ -1387,9 +1384,7 @@  trailing_wide_ints <N>::operator [] (uns
 inline size_t
 trailing_wide_ints <N>::extra_size (unsigned int precision)
 {
-  unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
-			  / HOST_BITS_PER_WIDE_INT);
-  return (N * max_len - 1) * sizeof (HOST_WIDE_INT);
+  return (N * wi::blocks_needed (precision) - 1) * sizeof (HOST_WIDE_INT);
 }
 
 /* This macro is used in structures that end with a trailing_wide_ints field
@@ -1621,6 +1616,26 @@  decompose (HOST_WIDE_INT *scratch, unsig
 				signop, bool *);
 }
 
+/* If a value of length LEN blocks has more than PRECISION bits, return
+   the number of excess bits, otherwise return 0.  For the special case
+   of PRECISION being zero, the single HWI must have the value zero and
+   there are no excess bits.  Handling zero precision this way means
+   that the result is always a valid shift amount.  */
+inline unsigned int
+wi::excess_bits (unsigned int len, unsigned int precision)
+{
+  unsigned int excess = len * HOST_BITS_PER_WIDE_INT - precision;
+  return excess < HOST_BITS_PER_WIDE_INT ? excess : 0;
+}
+
+/* Return the number of blocks needed for precision PRECISION.  */
+inline unsigned int
+wi::blocks_needed (unsigned int precision)
+{
+  return precision == 0 ? 1 : ((precision + HOST_BITS_PER_WIDE_INT - 1)
+			       / HOST_BITS_PER_WIDE_INT);
+}
+
 /* Return the number of bits that integer X can hold.  */
 template <typename T>
 inline unsigned int
@@ -1729,9 +1744,7 @@  wi::eq_p (const T1 &x, const T2 &y)
 	return xi.val[0] == 0;
       /* Otherwise flush out any excess bits first.  */
       unsigned HOST_WIDE_INT diff = xi.val[0] ^ yi.val[0];
-      int excess = HOST_BITS_PER_WIDE_INT - precision;
-      if (excess > 0)
-	diff <<= excess;
+      diff <<= wi::excess_bits (1, precision);
       return diff == 0;
     }
   return eq_p_large (xi.val, xi.len, yi.val, yi.len, precision);
@@ -2323,7 +2336,9 @@  wi::add (const T1 &x, const T2 &y, signo
       unsigned HOST_WIDE_INT xl = xi.ulow ();
       unsigned HOST_WIDE_INT yl = yi.ulow ();
       unsigned HOST_WIDE_INT resultl = xl + yl;
-      if (sgn == SIGNED)
+      if (precision == 0)
+	*overflow = false;
+      else if (sgn == SIGNED)
 	*overflow = (((resultl ^ xl) & (resultl ^ yl))
 		     >> (precision - 1)) & 1;
       else
@@ -2396,7 +2411,9 @@  wi::sub (const T1 &x, const T2 &y, signo
       unsigned HOST_WIDE_INT xl = xi.ulow ();
       unsigned HOST_WIDE_INT yl = yi.ulow ();
       unsigned HOST_WIDE_INT resultl = xl - yl;
-      if (sgn == SIGNED)
+      if (precision == 0)
+	*overflow = false;
+      else if (sgn == SIGNED)
 	*overflow = (((xl ^ yl) & (resultl ^ xl)) >> (precision - 1)) & 1;
       else
 	*overflow = ((resultl << (HOST_BITS_PER_WIDE_INT - precision))