Patchwork wide-int branch now up for public comment and review

Submitter Richard Sandiford
Date Aug. 24, 2013, 12:05 p.m.
Message ID <874nafe1l6.fsf@talisman.default>
Permalink /patch/269635/
State New

Comments

Richard Sandiford - Aug. 24, 2013, 12:05 p.m.
Richard Sandiford <rdsandiford@googlemail.com> writes:
> I wonder how easy it would be to restrict this use of "zero precision"
> (i.e. flexible precision) to those where primitive types like "int" are
> used as template arguments to operators, and require a precision when
> constructing a wide_int.  I wouldn't have expected "real" precision 0
> (from zero-width bitfields or whatever) to need any special handling
> compared to precision 1 or 2.

I tried the last bit -- requiring a precision when constructing a
wide_int -- and it seemed surprisingly easy.  What do you think of
the attached?  Most of the forced knock-on changes seem like improvements,
but the java part is a bit ugly.  I also went with "wide_int (0, prec).cmp"
for now, although I'd like to add static cmp, cmps and cmpu alongside
leu_p, etc., if that's OK.  It would then be possible to write
"wide_int::cmp (0, ...)" and avoid the wide_int construction altogether.
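For illustration, the two call shapes can be sketched with a toy class (hypothetical names echoing the discussion, not the real wide_int API):

```cpp
#include <cassert>
#include <cstdint>

// Toy stand-in for wide_int: a value plus an explicit precision.
struct toy_wide_int
{
  int64_t val;
  unsigned precision;
  toy_wide_int (int64_t v, unsigned prec) : val (v), precision (prec) {}

  // Member comparison: the caller must construct an object first,
  // as in "toy_wide_int (0, prec).cmp (x)".
  int cmp (const toy_wide_int &other) const
  { return val < other.val ? -1 : val > other.val ? 1 : 0; }

  // Static comparison: takes the plain integer directly, so no
  // temporary object is built, as in "toy_wide_int::cmp (0, x)".
  static int cmp (int64_t a, const toy_wide_int &b)
  { return a < b.val ? -1 : a > b.val ? 1 : 0; }
};
```

Overloading a static and a non-static member function this way is valid C++ as long as the parameter lists differ, which is why static cmp, cmps and cmpu could sit alongside the existing member versions.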

I wondered whether you might also want to get rid of the build_int_cst*
functions, but that still looks a long way off, so I hope using them in
these two places doesn't seem too bad.

This is just an incremental step.  I've also only run it through a
subset of the testsuite so far, but full tests are in progress...

Thanks,
Richard
Kenneth Zadeck - Aug. 24, 2013, 4:08 p.m.
On 08/24/2013 08:05 AM, Richard Sandiford wrote:
> Richard Sandiford <rdsandiford@googlemail.com> writes:
>> I wonder how easy it would be to restrict this use of "zero precision"
>> (i.e. flexible precision) to those where primitive types like "int" are
>> used as template arguments to operators, and require a precision when
>> constructing a wide_int.  I wouldn't have expected "real" precision 0
>> (from zero-width bitfields or whatever) to need any special handling
>> compared to precision 1 or 2.
> I tried the last bit -- requiring a precision when constructing a
> wide_int -- and it seemed surprising easy.  What do you think of
> the attached?  Most of the forced knock-on changes seem like improvements,
> but the java part is a bit ugly.  I also went with "wide_int (0, prec).cmp"
> for now, although I'd like to add static cmp, cmps and cmpu alongside
> leu_p, etc., if that's OK.  It would then be possible to write
> "wide_int::cmp (0, ...)" and avoid the wide_int construction altogether.
>
> I wondered whether you might also want to get rid of the build_int_cst*
> functions, but that still looks a long way off, so I hope using them in
> these two places doesn't seem too bad.
>
> This is just an incremental step.  I've also only run it through a
> subset of the testsuite so far, but full tests are in progress...
So I am going to make two high-level comments here and then I am going
to leave the ultimate decision to the community.  (1) I am mildly in
favor of leaving the prec 0 stuff the way that it is.  (2) My guess is
that richi will also favor this.  My justification for (2) is that he
had a lot of comments about this before he went on leave, and this is
substantially the way that it was when he left.  Also, remember that one
of his biggest dislikes was having to specify precisions.

However, this question is really bigger than this branch, which is why
I hope others will join in, because it really comes down to how we want
the compiler to look when it is fully converted to C++.  It has taken
me a while to get used to writing and reading code like this, where
there is a lot of C++ magic going on behind the scenes.  And with that
magic comes the responsibility of the programmer to get it right.  There
were/are a lot of people in the gcc community who did not want to go
down the C++ pathway for exactly this reason.  However, I am being converted.

The rest of my comments are small comments about the patch, because
some of them apply no matter which way the decision goes.
=====
It is perfectly fine to add the static versions of the cmp functions and 
the usage of those functions in this patch looks perfectly reasonable.

>
> Thanks,
> Richard
>
>
> Index: gcc/fold-const.c
> ===================================================================
> --- gcc/fold-const.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/fold-const.c	2013-08-24 01:00:00.000000000 +0100
> @@ -8865,15 +8865,16 @@ pointer_may_wrap_p (tree base, tree offs
>     if (bitpos < 0)
>       return true;
>   
> +  int precision = TYPE_PRECISION (TREE_TYPE (base));
>     if (offset == NULL_TREE)
> -    wi_offset = wide_int::zero (TYPE_PRECISION (TREE_TYPE (base)));
> +    wi_offset = wide_int::zero (precision);
>     else if (TREE_CODE (offset) != INTEGER_CST || TREE_OVERFLOW (offset))
>       return true;
>     else
>       wi_offset = offset;
>   
>     bool overflow;
> -  wide_int units = wide_int::from_shwi (bitpos / BITS_PER_UNIT);
> +  wide_int units = wide_int::from_shwi (bitpos / BITS_PER_UNIT, precision);
>     total = wi_offset.add (units, UNSIGNED, &overflow);
>     if (overflow)
>       return true;
So this is a part of the code that really should have been using
addr_wide_int rather than wide_int.  It is doing address arithmetic
with bit positions.  Because of this, the calculations should have
been done with a precision of 3 + what comes out of the type.  The
addr_wide_int has a fixed precision that is guaranteed to be large
enough for any address math on the machine.
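The 3 here is log2 (BITS_PER_UNIT): converting a byte offset to a bit offset multiplies by 8, which needs three extra bits of headroom.  A standalone sketch of that overflow argument (toy code, not GCC's):

```cpp
#include <cstdint>

// Hedged illustration: multiplying a maximal byte offset by
// BITS_PER_UNIT overflows the original precision, but always fits
// in precision + log2 (BITS_PER_UNIT) = precision + 3 bits.
const unsigned TOY_BITS_PER_UNIT = 8;

// Truncate V to its low PREC bits.
uint64_t
truncate (uint64_t v, unsigned prec)
{
  return prec >= 64 ? v : v & ((uint64_t (1) << prec) - 1);
}

// Does BYTES * TOY_BITS_PER_UNIT fit in PREC bits?
bool
bit_offset_fits (uint64_t bytes, unsigned prec)
{
  uint64_t bits = bytes * TOY_BITS_PER_UNIT;
  return truncate (bits, prec) == bits;
}
```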

> Index: gcc/gimple-ssa-strength-reduction.c
> ===================================================================
> --- gcc/gimple-ssa-strength-reduction.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/gimple-ssa-strength-reduction.c	2013-08-24 01:00:00.000000000 +0100
> @@ -777,7 +777,6 @@ restructure_reference (tree *pbase, tree
>   {
>     tree base = *pbase, offset = *poffset;
>     max_wide_int index = *pindex;
> -  wide_int bpu = BITS_PER_UNIT;
>     tree mult_op0, t1, t2, type;
>     max_wide_int c1, c2, c3, c4;
>   
> @@ -786,7 +785,7 @@ restructure_reference (tree *pbase, tree
>         || TREE_CODE (base) != MEM_REF
>         || TREE_CODE (offset) != MULT_EXPR
>         || TREE_CODE (TREE_OPERAND (offset, 1)) != INTEGER_CST
> -      || !index.umod_floor (bpu).zero_p ())
> +      || !index.umod_floor (BITS_PER_UNIT).zero_p ())
>       return false;
>   
>     t1 = TREE_OPERAND (base, 0);
> @@ -822,7 +821,7 @@ restructure_reference (tree *pbase, tree
>         c2 = 0;
>       }
>   
> -  c4 = index.udiv_floor (bpu);
> +  c4 = index.udiv_floor (BITS_PER_UNIT);
>   
This is just coding style and I like your way better.  However, richi
asked me to go easy on the cleanups because it makes it more difficult
to review a patch this big.  Use your judgment here.
>     *pbase = t1;
>     *poffset = fold_build2 (MULT_EXPR, sizetype, t2,
> Index: gcc/java/jcf-parse.c
> ===================================================================
> --- gcc/java/jcf-parse.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/java/jcf-parse.c	2013-08-24 01:00:00.000000000 +0100
> @@ -1043,9 +1043,10 @@ get_constant (JCF *jcf, int index)
>   	wide_int val;
>   
>   	num = JPOOL_UINT (jcf, index);
> -	val = wide_int (num).sforce_to_size (32).lshift_widen (32, 64);
> +	val = wide_int::from_hwi (num, long_type_node)
> +	  .sforce_to_size (32).lshift_widen (32, 64);
>   	num = JPOOL_UINT (jcf, index + 1);
> -	val |= wide_int (num);
> +	val |= wide_int::from_hwi (num, long_type_node);
>   
>   	value = wide_int_to_tree (long_type_node, val);
>   	break;
This is a somewhat older patch, from before we got aggressive about
this.  If I were coding this now, it would be

val |= num;

> Index: gcc/loop-unroll.c
> ===================================================================
> --- gcc/loop-unroll.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/loop-unroll.c	2013-08-24 01:00:00.000000000 +0100
> @@ -816,8 +816,7 @@ unroll_loop_constant_iterations (struct
>   	  desc->niter -= exit_mod;
>   	  loop->nb_iterations_upper_bound -= exit_mod;
>   	  if (loop->any_estimate
> -	      && wide_int (exit_mod).leu_p
> -	           (loop->nb_iterations_estimate))
> +	      && wide_int::leu_p (exit_mod, loop->nb_iterations_estimate))
>   	    loop->nb_iterations_estimate -= exit_mod;
>   	  else
>   	    loop->any_estimate = false;
> @@ -860,8 +859,7 @@ unroll_loop_constant_iterations (struct
>   	  desc->niter -= exit_mod + 1;
>   	  loop->nb_iterations_upper_bound -= exit_mod + 1;
>   	  if (loop->any_estimate
> -	      && wide_int (exit_mod + 1).leu_p
> -	           (loop->nb_iterations_estimate))
> +	      && wide_int::leu_p (exit_mod + 1, loop->nb_iterations_estimate))
>   	    loop->nb_iterations_estimate -= exit_mod + 1;
>   	  else
>   	    loop->any_estimate = false;
> @@ -1381,7 +1379,7 @@ decide_peel_simple (struct loop *loop, i
>     if (estimated_loop_iterations (loop, &iterations))
>       {
>         /* TODO: unsigned/signed confusion */
> -      if (wide_int::from_shwi (npeel).leu_p (iterations))
> +      if (wide_int::leu_p (npeel, iterations))
>   	{
>   	  if (dump_file)
>   	    {

All of this is perfectly fine.  This reflects us not going back after
the static functions were put in and using them to the full extent.

> Index: gcc/real.c
> ===================================================================
> --- gcc/real.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/real.c	2013-08-24 01:00:00.000000000 +0100
> @@ -2401,7 +2401,7 @@ real_digit (int n)
>     gcc_assert (n <= 9);
>   
>     if (n > 0 && num[n].cl == rvc_zero)
> -    real_from_integer (&num[n], VOIDmode, wide_int (n), UNSIGNED);
> +    real_from_integer (&num[n], VOIDmode, n, UNSIGNED);
>   
>     return &num[n];
>   }
> Index: gcc/tree-predcom.c
> ===================================================================
> --- gcc/tree-predcom.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/tree-predcom.c	2013-08-24 01:00:00.000000000 +0100
> @@ -923,7 +923,7 @@ add_ref_to_chain (chain_p chain, dref re
>   
>     gcc_assert (root->offset.les_p (ref->offset));
>     dist = ref->offset - root->offset;
> -  if (max_wide_int::from_uhwi (MAX_DISTANCE).leu_p (dist))
> +  if (wide_int::leu_p (MAX_DISTANCE, dist))
>       {
>         free (ref);
>         return;
> Index: gcc/tree-pretty-print.c
> ===================================================================
> --- gcc/tree-pretty-print.c	2013-08-24 12:48:00.091379339 +0100
> +++ gcc/tree-pretty-print.c	2013-08-24 01:00:00.000000000 +0100
> @@ -1295,7 +1295,7 @@ dump_generic_node (pretty_printer *buffe
>   	tree field, val;
>   	bool is_struct_init = false;
>   	bool is_array_init = false;
> -	wide_int curidx = 0;
> +	wide_int curidx;
>   	pp_left_brace (buffer);
>   	if (TREE_CLOBBER_P (node))
>   	  pp_string (buffer, "CLOBBER");
> Index: gcc/tree-ssa-ccp.c
> ===================================================================
> --- gcc/tree-ssa-ccp.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/tree-ssa-ccp.c	2013-08-24 01:00:00.000000000 +0100
> @@ -526,7 +526,7 @@ get_value_from_alignment (tree expr)
>   	      : -1).and_not (align / BITS_PER_UNIT - 1);
>     val.lattice_val = val.mask.minus_one_p () ? VARYING : CONSTANT;
>     if (val.lattice_val == CONSTANT)
> -    val.value = wide_int_to_tree (type, bitpos / BITS_PER_UNIT);
> +    val.value = build_int_cstu (type, bitpos / BITS_PER_UNIT);
>     else
>       val.value = NULL_TREE;
>   
> Index: gcc/tree-vrp.c
> ===================================================================
> --- gcc/tree-vrp.c	2013-08-24 12:48:00.093379358 +0100
> +++ gcc/tree-vrp.c	2013-08-24 01:00:00.000000000 +0100
> @@ -2420,9 +2420,9 @@ extract_range_from_binary_expr_1 (value_
>   	      wmin = min0 - max1;
>   	      wmax = max0 - min1;
>   
> -	      if (wide_int (0).cmp (max1, sgn) != wmin.cmp (min0, sgn))
> +	      if (wide_int (0, prec).cmp (max1, sgn) != wmin.cmp (min0, sgn))
>   		min_ovf = min0.cmp (max1, sgn);
> -	      if (wide_int (0).cmp (min1, sgn) != wmax.cmp (max0, sgn))
> +	      if (wide_int (0, prec).cmp (min1, sgn) != wmax.cmp (max0, sgn))
>   		max_ovf = max0.cmp (min1, sgn);
>   	    }
>   
Of course, this is really a place for the static functions.
> @@ -4911,8 +4911,8 @@ register_edge_assert_for_2 (tree name, e
>         gimple def_stmt = SSA_NAME_DEF_STMT (name);
>         tree name2 = NULL_TREE, names[2], cst2 = NULL_TREE;
>         tree val2 = NULL_TREE;
> -      wide_int mask = 0;
>         unsigned int prec = TYPE_PRECISION (TREE_TYPE (val));
> +      wide_int mask (0, prec);
>         unsigned int nprec = prec;
>         enum tree_code rhs_code = ERROR_MARK;
>   
> @@ -5101,7 +5101,7 @@ register_edge_assert_for_2 (tree name, e
>   	}
>         if (names[0] || names[1])
>   	{
> -	  wide_int minv, maxv = 0, valv, cst2v;
> +	  wide_int minv, maxv, valv, cst2v;
>   	  wide_int tem, sgnbit;
>   	  bool valid_p = false, valn = false, cst2n = false;
>   	  enum tree_code ccode = comp_code;
> @@ -5170,7 +5170,7 @@ register_edge_assert_for_2 (tree name, e
>   		      goto lt_expr;
>   		    }
>   		  if (!cst2n)
> -		    sgnbit = 0;
> +		    sgnbit = wide_int::zero (nprec);
>   		}
>   	      break;
>   
> Index: gcc/tree.c
> ===================================================================
> --- gcc/tree.c	2013-08-24 12:11:08.085684013 +0100
> +++ gcc/tree.c	2013-08-24 01:00:00.000000000 +0100
> @@ -1048,13 +1048,13 @@ build_int_cst (tree type, HOST_WIDE_INT
>     if (!type)
>       type = integer_type_node;
>   
> -  return wide_int_to_tree (type, low);
> +  return wide_int_to_tree (type, wide_int::from_hwi (low, type));
>   }
>   
>   /* static inline */ tree
>   build_int_cstu (tree type, unsigned HOST_WIDE_INT cst)
>   {
> -  return wide_int_to_tree (type, cst);
> +  return wide_int_to_tree (type, wide_int::from_hwi (cst, type));
>   }
>   
>   /* Create an INT_CST node with a LOW value sign extended to TYPE.  */
> @@ -1064,7 +1064,7 @@ build_int_cst_type (tree type, HOST_WIDE
>   {
>     gcc_assert (type);
>   
> -  return wide_int_to_tree (type, low);
> +  return wide_int_to_tree (type, wide_int::from_hwi (low, type));
>   }
>   
>   /* Constructs tree in type TYPE from with value given by CST.  Signedness
> @@ -10688,7 +10688,7 @@ lower_bound_in_type (tree outer, tree in
>   	 contains all values of INNER type.  In particular, both INNER
>   	 and OUTER types have zero in common.  */
>         || (oprec > iprec && TYPE_UNSIGNED (inner)))
> -    return wide_int_to_tree (outer, 0);
> +    return build_int_cst (outer, 0);
>     else
>       {
>         /* If we are widening a signed type to another signed type, we
> Index: gcc/wide-int.cc
> ===================================================================
> --- gcc/wide-int.cc	2013-08-24 12:48:00.096379386 +0100
> +++ gcc/wide-int.cc	2013-08-24 01:00:00.000000000 +0100
> @@ -32,6 +32,8 @@ along with GCC; see the file COPYING3.
>   const int MAX_SIZE = 4 * (MAX_BITSIZE_MODE_ANY_INT / 4
>   		     + MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT + 32);
>   
> +static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {};
> +
>   /*
>    * Internal utilities.
>    */
> @@ -2517,7 +2519,7 @@ wide_int_ro::divmod_internal (bool compu
>       {
>         if (top_bit_of (dividend, dividend_len, dividend_prec))
>   	{
> -	  u0 = sub_large (wide_int (0).val, 1,
> +	  u0 = sub_large (zeros, 1,
>   			  dividend_prec, dividend, dividend_len, UNSIGNED);
>   	  dividend = u0.val;
>   	  dividend_len = u0.len;
> @@ -2525,7 +2527,7 @@ wide_int_ro::divmod_internal (bool compu
>   	}
>         if (top_bit_of (divisor, divisor_len, divisor_prec))
>   	{
> -	  u1 = sub_large (wide_int (0).val, 1,
> +	  u1 = sub_large (zeros, 1,
>   			  divisor_prec, divisor, divisor_len, UNSIGNED);
>   	  divisor = u1.val;
>   	  divisor_len = u1.len;
> Index: gcc/wide-int.h
> ===================================================================
> --- gcc/wide-int.h	2013-08-24 12:14:20.979479335 +0100
> +++ gcc/wide-int.h	2013-08-24 01:00:00.000000000 +0100
> @@ -230,6 +230,11 @@ #define WIDE_INT_H
>   #define DEBUG_WIDE_INT
>   #endif
>   
> +/* Used for overloaded functions in which the only other acceptable
> +   scalar type is const_tree.  It stops a plain 0 from being treated
> +   as a null tree.  */
> +struct never_used {};
> +
>   /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
>      early examination of the target's mode file.  Thus it is safe that
>      some small multiple of this number is easily larger than any number
> @@ -324,15 +329,16 @@ class GTY(()) wide_int_ro
>   public:
>     wide_int_ro ();
>     wide_int_ro (const_tree);
> -  wide_int_ro (HOST_WIDE_INT);
> -  wide_int_ro (int);
> -  wide_int_ro (unsigned HOST_WIDE_INT);
> -  wide_int_ro (unsigned int);
> +  wide_int_ro (never_used *);
> +  wide_int_ro (HOST_WIDE_INT, unsigned int);
> +  wide_int_ro (int, unsigned int);
> +  wide_int_ro (unsigned HOST_WIDE_INT, unsigned int);
> +  wide_int_ro (unsigned int, unsigned int);
>     wide_int_ro (const rtx_mode_t &);
>   
>     /* Conversions.  */
> -  static wide_int_ro from_shwi (HOST_WIDE_INT, unsigned int = 0);
> -  static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, unsigned int = 0);
> +  static wide_int_ro from_shwi (HOST_WIDE_INT, unsigned int);
> +  static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, unsigned int);
>     static wide_int_ro from_hwi (HOST_WIDE_INT, const_tree);
>     static wide_int_ro from_shwi (HOST_WIDE_INT, enum machine_mode);
>     static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, enum machine_mode);
> @@ -349,9 +355,11 @@ class GTY(()) wide_int_ro
>   
>     static wide_int_ro max_value (unsigned int, signop, unsigned int = 0);
>     static wide_int_ro max_value (const_tree);
> +  static wide_int_ro max_value (never_used *);
>     static wide_int_ro max_value (enum machine_mode, signop);
>     static wide_int_ro min_value (unsigned int, signop, unsigned int = 0);
>     static wide_int_ro min_value (const_tree);
> +  static wide_int_ro min_value (never_used *);
>     static wide_int_ro min_value (enum machine_mode, signop);
>   
>     /* Small constants.  These are generally only needed in the places
> @@ -842,18 +850,16 @@ class GTY(()) wide_int : public wide_int
>     wide_int ();
>     wide_int (const wide_int_ro &);
>     wide_int (const_tree);
> -  wide_int (HOST_WIDE_INT);
> -  wide_int (int);
> -  wide_int (unsigned HOST_WIDE_INT);
> -  wide_int (unsigned int);
> +  wide_int (never_used *);
> +  wide_int (HOST_WIDE_INT, unsigned int);
> +  wide_int (int, unsigned int);
> +  wide_int (unsigned HOST_WIDE_INT, unsigned int);
> +  wide_int (unsigned int, unsigned int);
>     wide_int (const rtx_mode_t &);
>   
>     wide_int &operator = (const wide_int_ro &);
>     wide_int &operator = (const_tree);
> -  wide_int &operator = (HOST_WIDE_INT);
> -  wide_int &operator = (int);
> -  wide_int &operator = (unsigned HOST_WIDE_INT);
> -  wide_int &operator = (unsigned int);
> +  wide_int &operator = (never_used *);
>     wide_int &operator = (const rtx_mode_t &);
>   
>     wide_int &operator ++ ();
> @@ -904,28 +910,28 @@ inline wide_int_ro::wide_int_ro (const_t
>   		      TYPE_PRECISION (TREE_TYPE (tcst)), false);
>   }
>   
> -inline wide_int_ro::wide_int_ro (HOST_WIDE_INT op0)
> +inline wide_int_ro::wide_int_ro (HOST_WIDE_INT op0, unsigned int prec)
>   {
> -  precision = 0;
> +  precision = prec;
>     val[0] = op0;
>     len = 1;
>   }
>   
> -inline wide_int_ro::wide_int_ro (int op0)
> +inline wide_int_ro::wide_int_ro (int op0, unsigned int prec)
>   {
> -  precision = 0;
> +  precision = prec;
>     val[0] = op0;
>     len = 1;
>   }
>   
> -inline wide_int_ro::wide_int_ro (unsigned HOST_WIDE_INT op0)
> +inline wide_int_ro::wide_int_ro (unsigned HOST_WIDE_INT op0, unsigned int prec)
>   {
> -  *this = from_uhwi (op0);
> +  *this = from_uhwi (op0, prec);
>   }
>   
> -inline wide_int_ro::wide_int_ro (unsigned int op0)
> +inline wide_int_ro::wide_int_ro (unsigned int op0, unsigned int prec)
>   {
> -  *this = from_uhwi (op0);
> +  *this = from_uhwi (op0, prec);
>   }
>   
>   inline wide_int_ro::wide_int_ro (const rtx_mode_t &op0)
> @@ -2264,7 +2270,7 @@ wide_int_ro::mul_high (const T &c, signo
>   wide_int_ro::operator - () const
>   {
>     wide_int_ro r;
> -  r = wide_int_ro (0) - *this;
> +  r = zero (precision) - *this;
>     return r;
>   }
>   
> @@ -2277,7 +2283,7 @@ wide_int_ro::neg (bool *overflow) const
>   
>     *overflow = only_sign_bit_p ();
>   
> -  return wide_int_ro (0) - *this;
> +  return zero (precision) - *this;
>   }
>   
>   /* Return THIS - C.  */
> @@ -3147,28 +3153,28 @@ inline wide_int::wide_int (const_tree tc
>   		      TYPE_PRECISION (TREE_TYPE (tcst)), false);
>   }
>   
> -inline wide_int::wide_int (HOST_WIDE_INT op0)
> +inline wide_int::wide_int (HOST_WIDE_INT op0, unsigned int prec)
>   {
> -  precision = 0;
> +  precision = prec;
>     val[0] = op0;
>     len = 1;
>   }
>   
> -inline wide_int::wide_int (int op0)
> +inline wide_int::wide_int (int op0, unsigned int prec)
>   {
> -  precision = 0;
> +  precision = prec;
>     val[0] = op0;
>     len = 1;
>   }
>   
> -inline wide_int::wide_int (unsigned HOST_WIDE_INT op0)
> +inline wide_int::wide_int (unsigned HOST_WIDE_INT op0, unsigned int prec)
>   {
> -  *this = wide_int_ro::from_uhwi (op0);
> +  *this = wide_int_ro::from_uhwi (op0, prec);
>   }
>   
> -inline wide_int::wide_int (unsigned int op0)
> +inline wide_int::wide_int (unsigned int op0, unsigned int prec)
>   {
> -  *this = wide_int_ro::from_uhwi (op0);
> +  *this = wide_int_ro::from_uhwi (op0, prec);
>   }
>   
>   inline wide_int::wide_int (const rtx_mode_t &op0)
> @@ -3567,31 +3573,28 @@ inline fixed_wide_int <bitsize>::fixed_w
>   
>   template <int bitsize>
>   inline fixed_wide_int <bitsize>::fixed_wide_int (HOST_WIDE_INT op0)
> -  : wide_int_ro (op0)
> +  : wide_int_ro (op0, bitsize)
>   {
> -  precision = bitsize;
>   }
>   
>   template <int bitsize>
> -inline fixed_wide_int <bitsize>::fixed_wide_int (int op0) : wide_int_ro (op0)
> +inline fixed_wide_int <bitsize>::fixed_wide_int (int op0)
> +  : wide_int_ro (op0, bitsize)
>   {
> -  precision = bitsize;
>   }
>   
>   template <int bitsize>
>   inline fixed_wide_int <bitsize>::fixed_wide_int (unsigned HOST_WIDE_INT op0)
> -  : wide_int_ro (op0)
> +  : wide_int_ro (op0, bitsize)
>   {
> -  precision = bitsize;
>     if (neg_p (SIGNED))
>       static_cast <wide_int_ro &> (*this) = zext (HOST_BITS_PER_WIDE_INT);
>   }
>   
>   template <int bitsize>
>   inline fixed_wide_int <bitsize>::fixed_wide_int (unsigned int op0)
> -  : wide_int_ro (op0)
> +  : wide_int_ro (op0, bitsize)
>   {
> -  precision = bitsize;
>     if (sizeof (int) == sizeof (HOST_WIDE_INT)
>         && neg_p (SIGNED))
>       *this = zext (HOST_BITS_PER_WIDE_INT);
> @@ -3661,9 +3664,7 @@ fixed_wide_int <bitsize>::operator = (co
>   inline fixed_wide_int <bitsize> &
>   fixed_wide_int <bitsize>::operator = (HOST_WIDE_INT op0)
>   {
> -  static_cast <wide_int_ro &> (*this) = op0;
> -  precision = bitsize;
> -
> +  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
>     return *this;
>   }
>   
> @@ -3671,9 +3672,7 @@ fixed_wide_int <bitsize>::operator = (HO
>   inline fixed_wide_int <bitsize> &
>   fixed_wide_int <bitsize>::operator = (int op0)
>   {
> -  static_cast <wide_int_ro &> (*this) = op0;
> -  precision = bitsize;
> -
> +  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
>     return *this;
>   }
>   
> @@ -3681,8 +3680,7 @@ fixed_wide_int <bitsize>::operator = (in
>   inline fixed_wide_int <bitsize> &
>   fixed_wide_int <bitsize>::operator = (unsigned HOST_WIDE_INT op0)
>   {
> -  static_cast <wide_int_ro &> (*this) = op0;
> -  precision = bitsize;
> +  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
>   
>     /* This is logically top_bit_set_p.  */
>     if (neg_p (SIGNED))
> @@ -3695,8 +3693,7 @@ fixed_wide_int <bitsize>::operator = (un
>   inline fixed_wide_int <bitsize> &
>   fixed_wide_int <bitsize>::operator = (unsigned int op0)
>   {
> -  static_cast <wide_int_ro &> (*this) = op0;
> -  precision = bitsize;
> +  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
>   
>     if (sizeof (int) == sizeof (HOST_WIDE_INT)
>         && neg_p (SIGNED))
Richard Sandiford - Aug. 25, 2013, 6:42 a.m.
Kenneth Zadeck <zadeck@naturalbridge.com> writes:
> On 08/24/2013 08:05 AM, Richard Sandiford wrote:
>> Richard Sandiford <rdsandiford@googlemail.com> writes:
>>> I wonder how easy it would be to restrict this use of "zero precision"
>>> (i.e. flexible precision) to those where primitive types like "int" are
>>> used as template arguments to operators, and require a precision when
>>> constructing a wide_int.  I wouldn't have expected "real" precision 0
>>> (from zero-width bitfields or whatever) to need any special handling
>>> compared to precision 1 or 2.
>> I tried the last bit -- requiring a precision when constructing a
>> wide_int -- and it seemed surprising easy.  What do you think of
>> the attached?  Most of the forced knock-on changes seem like improvements,
>> but the java part is a bit ugly.  I also went with "wide_int (0, prec).cmp"
>> for now, although I'd like to add static cmp, cmps and cmpu alongside
>> leu_p, etc., if that's OK.  It would then be possible to write
>> "wide_int::cmp (0, ...)" and avoid the wide_int construction altogether.
>>
>> I wondered whether you might also want to get rid of the build_int_cst*
>> functions, but that still looks a long way off, so I hope using them in
>> these two places doesn't seem too bad.
>>
>> This is just an incremental step.  I've also only run it through a
>> subset of the testsuite so far, but full tests are in progress...
> So i am going to make two high level comments here and then i am going 
> to leave the ultimate decision to the community.   (1) I am mildly in 
> favor of leaving prec 0 stuff the way that it is (2) my guess is that 
> richi also will favor this.   My justification for (2) is because he had 
> a lot of comments about this before he went on leave and this is 
> substantially the way that it was when he left. Also, remember that one 
> of his biggest dislikes was having to specify precisions.

Hmm, but you seem to be talking about zero precision in general.
(I'm going to call it "flexible precision" to avoid confusion with
the zero-width bitfield stuff.)  Whereas this patch is specifically
about constructing flexible-precision _wide_int_ objects.  I think
wide_int objects should always have a known, fixed precision.

Note that fixed_wide_ints can still use primitive types in the
same way as before, since there the precision is inherent to the
fixed_wide_int.  The templated operators also work in the same
way as before.  Only the construction of wide_int proper is affected.
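The distinction can be sketched with two toy types (not the branch's actual classes): a template whose width is part of the type needs no precision argument, while the variable-width class has to be told:

```cpp
// Toy illustration, not the real wide-int classes: a fixed-width
// template bakes the precision into the type, so a bare integer
// suffices to construct it; the variable-width type must be given
// a precision explicitly.
template <int bitsize>
struct toy_fixed_int
{
  long val;
  static const int precision = bitsize;
  toy_fixed_int (long v) : val (v) {}     // precision is inherent
};

struct toy_var_int
{
  long val;
  unsigned precision;
  toy_var_int (long v, unsigned prec)     // precision must be supplied
    : val (v), precision (prec) {}
};
```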

As it stands you have various wide_int operators that cannot handle two
flexible-precision inputs.  This means that innocent-looking code like:

  extern wide_int foo (wide_int);
  wide_int bar () { return foo (0); }

ICEs when combined with equally innocent-looking code like:

  wide_int foo (wide_int x) { return x + 1; }

So in practice you have to know when calling a function whether any
paths in that function will try applying an operator with a primitive type.
If so, you need to specify a precision when constructing the wide_int
argument.  If not you can leave it out.  That seems really unclean.
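A sketch of why that combination has nowhere to get a precision from (assumed semantics, modelling the failure rather than quoting the branch):

```cpp
// Toy model of the failure mode: a flexible-precision value carries
// precision 0, and a binary operator takes its result precision from
// whichever operand has one.  If both operands are flexible, there is
// nothing to take -- the case that ICEs in the real code.
struct flex_int
{
  long val;
  unsigned precision;                     // 0 means "flexible"
  flex_int (long v, unsigned prec = 0) : val (v), precision (prec) {}
};

// Result precision for a binary operation; 0 means no precision
// could be chosen.
unsigned
result_precision (const flex_int &a, const flex_int &b)
{
  if (a.precision)
    return a.precision;
  return b.precision;                     // 0 if both are flexible
}
```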

The point of this template stuff is to avoid constructing wide_int objects
from primitive integers wherever possible.  And I think the fairly
small size of the patch shows that you've succeeded in doing that.
But I think we really should specify a precision in the handful of cases
where a wide_int does still need to be constructed directly from
a primitive type.

Thanks,
Richard
Kenneth Zadeck - Aug. 25, 2013, 1:14 p.m.
On 08/25/2013 02:42 AM, Richard Sandiford wrote:
> Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>> On 08/24/2013 08:05 AM, Richard Sandiford wrote:
>>> Richard Sandiford <rdsandiford@googlemail.com> writes:
>>>> I wonder how easy it would be to restrict this use of "zero precision"
>>>> (i.e. flexible precision) to those where primitive types like "int" are
>>>> used as template arguments to operators, and require a precision when
>>>> constructing a wide_int.  I wouldn't have expected "real" precision 0
>>>> (from zero-width bitfields or whatever) to need any special handling
>>>> compared to precision 1 or 2.
>>> I tried the last bit -- requiring a precision when constructing a
>>> wide_int -- and it seemed surprising easy.  What do you think of
>>> the attached?  Most of the forced knock-on changes seem like improvements,
>>> but the java part is a bit ugly.  I also went with "wide_int (0, prec).cmp"
>>> for now, although I'd like to add static cmp, cmps and cmpu alongside
>>> leu_p, etc., if that's OK.  It would then be possible to write
>>> "wide_int::cmp (0, ...)" and avoid the wide_int construction altogether.
>>>
>>> I wondered whether you might also want to get rid of the build_int_cst*
>>> functions, but that still looks a long way off, so I hope using them in
>>> these two places doesn't seem too bad.
>>>
>>> This is just an incremental step.  I've also only run it through a
>>> subset of the testsuite so far, but full tests are in progress...
>> So i am going to make two high level comments here and then i am going
>> to leave the ultimate decision to the community.   (1) I am mildly in
>> favor of leaving prec 0 stuff the way that it is (2) my guess is that
>> richi also will favor this.   My justification for (2) is because he had
>> a lot of comments about this before he went on leave and this is
>> substantially the way that it was when he left. Also, remember that one
>> of his biggest dislikes was having to specify precisions.
> Hmm, but you seem to be talking about zero precision in general.
> (I'm going to call it "flexible precision" to avoid confusion with
> the zero-width bitfield stuff.)
I have tried to purge the zero-width bitfield case from my mind.  It
was an ugly incident in the conversion.


> Whereas this patch is specifically
> about constructing flexible-precision _wide_int_ objects.  I think
> wide_int objects should always have a known, fixed precision.
This is where we differ.  I do not.  The top-level idea is really
motivated by richi, but I have come to appreciate his criticism.  Much
of the time, the specification of the precision is simply redundant and
it glops up the code.

> Note that fixed_wide_ints can still use primitive types in the
> same way as before, since there the precision is inherent to the
> fixed_wide_int.  The templated operators also work in the same
> way as before.  Only the construction of wide_int proper is affected.
>
> As it stands you have various wide_int operators that cannot handle two
> flexible-precision inputs.  This means that innocent-looking code like:
>
>    extern wide_int foo (wide_int);
>    wide_int bar () { return foo (0); }
>
> ICEs when combined with equally innocent-looking code like:
>
>    wide_int foo (wide_int x) { return x + 1; }
>
> So in practice you have to know when calling a function whether any
> paths in that function will try applying an operator with a primitive type.
> If so, you need to specify a precison when constructing the wide_int
> argument.  If not you can leave it out.  That seems really unclean.
My wife, who is a lawyer, likes to quote an old British chancellor:
"hard cases make bad law".
The fact that you occasionally have to specify one should not be
justification for throwing out the entire thing.

>
> The point of this template stuff is to avoid constructing wide_int objects
> from primitive integers whereever possible.  And I think the fairly
> small size of the patch shows that you've succeeded in doing that.
> But I think we really should specify a precision in the handful of cases
> where a wide_int does still need to be constructed directly from
> a primitive type.
>
> Thanks,
> Richard
As I said earlier, let's see how others in the community feel about this.

Patch

Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/fold-const.c	2013-08-24 01:00:00.000000000 +0100
@@ -8865,15 +8865,16 @@  pointer_may_wrap_p (tree base, tree offs
   if (bitpos < 0)
     return true;
 
+  int precision = TYPE_PRECISION (TREE_TYPE (base));
   if (offset == NULL_TREE)
-    wi_offset = wide_int::zero (TYPE_PRECISION (TREE_TYPE (base)));
+    wi_offset = wide_int::zero (precision);
   else if (TREE_CODE (offset) != INTEGER_CST || TREE_OVERFLOW (offset))
     return true;
   else
     wi_offset = offset;
 
   bool overflow;
-  wide_int units = wide_int::from_shwi (bitpos / BITS_PER_UNIT);
+  wide_int units = wide_int::from_shwi (bitpos / BITS_PER_UNIT, precision);
   total = wi_offset.add (units, UNSIGNED, &overflow);
   if (overflow)
     return true;
Index: gcc/gimple-ssa-strength-reduction.c
===================================================================
--- gcc/gimple-ssa-strength-reduction.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/gimple-ssa-strength-reduction.c	2013-08-24 01:00:00.000000000 +0100
@@ -777,7 +777,6 @@  restructure_reference (tree *pbase, tree
 {
   tree base = *pbase, offset = *poffset;
   max_wide_int index = *pindex;
-  wide_int bpu = BITS_PER_UNIT;
   tree mult_op0, t1, t2, type;
   max_wide_int c1, c2, c3, c4;
 
@@ -786,7 +785,7 @@  restructure_reference (tree *pbase, tree
       || TREE_CODE (base) != MEM_REF
       || TREE_CODE (offset) != MULT_EXPR
       || TREE_CODE (TREE_OPERAND (offset, 1)) != INTEGER_CST
-      || !index.umod_floor (bpu).zero_p ())
+      || !index.umod_floor (BITS_PER_UNIT).zero_p ())
     return false;
 
   t1 = TREE_OPERAND (base, 0);
@@ -822,7 +821,7 @@  restructure_reference (tree *pbase, tree
       c2 = 0;
     }
 
-  c4 = index.udiv_floor (bpu);
+  c4 = index.udiv_floor (BITS_PER_UNIT);
 
   *pbase = t1;
   *poffset = fold_build2 (MULT_EXPR, sizetype, t2,
Index: gcc/java/jcf-parse.c
===================================================================
--- gcc/java/jcf-parse.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/java/jcf-parse.c	2013-08-24 01:00:00.000000000 +0100
@@ -1043,9 +1043,10 @@  get_constant (JCF *jcf, int index)
 	wide_int val;
 
 	num = JPOOL_UINT (jcf, index);
-	val = wide_int (num).sforce_to_size (32).lshift_widen (32, 64);
+	val = wide_int::from_hwi (num, long_type_node)
+	  .sforce_to_size (32).lshift_widen (32, 64);
 	num = JPOOL_UINT (jcf, index + 1);
-	val |= wide_int (num);
+	val |= wide_int::from_hwi (num, long_type_node);
 
 	value = wide_int_to_tree (long_type_node, val);
 	break;
Index: gcc/loop-unroll.c
===================================================================
--- gcc/loop-unroll.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/loop-unroll.c	2013-08-24 01:00:00.000000000 +0100
@@ -816,8 +816,7 @@  unroll_loop_constant_iterations (struct
 	  desc->niter -= exit_mod;
 	  loop->nb_iterations_upper_bound -= exit_mod;
 	  if (loop->any_estimate
-	      && wide_int (exit_mod).leu_p
-	           (loop->nb_iterations_estimate))
+	      && wide_int::leu_p (exit_mod, loop->nb_iterations_estimate))
 	    loop->nb_iterations_estimate -= exit_mod;
 	  else
 	    loop->any_estimate = false;
@@ -860,8 +859,7 @@  unroll_loop_constant_iterations (struct
 	  desc->niter -= exit_mod + 1;
 	  loop->nb_iterations_upper_bound -= exit_mod + 1;
 	  if (loop->any_estimate
-	      && wide_int (exit_mod + 1).leu_p
-	           (loop->nb_iterations_estimate))
+	      && wide_int::leu_p (exit_mod + 1, loop->nb_iterations_estimate))
 	    loop->nb_iterations_estimate -= exit_mod + 1;
 	  else
 	    loop->any_estimate = false;
@@ -1381,7 +1379,7 @@  decide_peel_simple (struct loop *loop, i
   if (estimated_loop_iterations (loop, &iterations))
     {
       /* TODO: unsigned/signed confusion */
-      if (wide_int::from_shwi (npeel).leu_p (iterations))
+      if (wide_int::leu_p (npeel, iterations))
 	{
 	  if (dump_file)
 	    {
Index: gcc/real.c
===================================================================
--- gcc/real.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/real.c	2013-08-24 01:00:00.000000000 +0100
@@ -2401,7 +2401,7 @@  real_digit (int n)
   gcc_assert (n <= 9);
 
   if (n > 0 && num[n].cl == rvc_zero)
-    real_from_integer (&num[n], VOIDmode, wide_int (n), UNSIGNED);
+    real_from_integer (&num[n], VOIDmode, n, UNSIGNED);
 
   return &num[n];
 }
Index: gcc/tree-predcom.c
===================================================================
--- gcc/tree-predcom.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/tree-predcom.c	2013-08-24 01:00:00.000000000 +0100
@@ -923,7 +923,7 @@  add_ref_to_chain (chain_p chain, dref re
 
   gcc_assert (root->offset.les_p (ref->offset));
   dist = ref->offset - root->offset;
-  if (max_wide_int::from_uhwi (MAX_DISTANCE).leu_p (dist))
+  if (wide_int::leu_p (MAX_DISTANCE, dist))
     {
       free (ref);
       return;
Index: gcc/tree-pretty-print.c
===================================================================
--- gcc/tree-pretty-print.c	2013-08-24 12:48:00.091379339 +0100
+++ gcc/tree-pretty-print.c	2013-08-24 01:00:00.000000000 +0100
@@ -1295,7 +1295,7 @@  dump_generic_node (pretty_printer *buffe
 	tree field, val;
 	bool is_struct_init = false;
 	bool is_array_init = false;
-	wide_int curidx = 0;
+	wide_int curidx;
 	pp_left_brace (buffer);
 	if (TREE_CLOBBER_P (node))
 	  pp_string (buffer, "CLOBBER");
Index: gcc/tree-ssa-ccp.c
===================================================================
--- gcc/tree-ssa-ccp.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/tree-ssa-ccp.c	2013-08-24 01:00:00.000000000 +0100
@@ -526,7 +526,7 @@  get_value_from_alignment (tree expr)
 	      : -1).and_not (align / BITS_PER_UNIT - 1);
   val.lattice_val = val.mask.minus_one_p () ? VARYING : CONSTANT;
   if (val.lattice_val == CONSTANT)
-    val.value = wide_int_to_tree (type, bitpos / BITS_PER_UNIT);
+    val.value = build_int_cstu (type, bitpos / BITS_PER_UNIT);
   else
     val.value = NULL_TREE;
 
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	2013-08-24 12:48:00.093379358 +0100
+++ gcc/tree-vrp.c	2013-08-24 01:00:00.000000000 +0100
@@ -2420,9 +2420,9 @@  extract_range_from_binary_expr_1 (value_
 	      wmin = min0 - max1;
 	      wmax = max0 - min1;
 
-	      if (wide_int (0).cmp (max1, sgn) != wmin.cmp (min0, sgn))
+	      if (wide_int (0, prec).cmp (max1, sgn) != wmin.cmp (min0, sgn))
 		min_ovf = min0.cmp (max1, sgn);
-	      if (wide_int (0).cmp (min1, sgn) != wmax.cmp (max0, sgn))
+	      if (wide_int (0, prec).cmp (min1, sgn) != wmax.cmp (max0, sgn))
 		max_ovf = max0.cmp (min1, sgn);
 	    }
 
@@ -4911,8 +4911,8 @@  register_edge_assert_for_2 (tree name, e
       gimple def_stmt = SSA_NAME_DEF_STMT (name);
       tree name2 = NULL_TREE, names[2], cst2 = NULL_TREE;
       tree val2 = NULL_TREE;
-      wide_int mask = 0;
       unsigned int prec = TYPE_PRECISION (TREE_TYPE (val));
+      wide_int mask (0, prec);
       unsigned int nprec = prec;
       enum tree_code rhs_code = ERROR_MARK;
 
@@ -5101,7 +5101,7 @@  register_edge_assert_for_2 (tree name, e
 	}
       if (names[0] || names[1])
 	{
-	  wide_int minv, maxv = 0, valv, cst2v;
+	  wide_int minv, maxv, valv, cst2v;
 	  wide_int tem, sgnbit;
 	  bool valid_p = false, valn = false, cst2n = false;
 	  enum tree_code ccode = comp_code;
@@ -5170,7 +5170,7 @@  register_edge_assert_for_2 (tree name, e
 		      goto lt_expr;
 		    }
 		  if (!cst2n)
-		    sgnbit = 0;
+		    sgnbit = wide_int::zero (nprec);
 		}
 	      break;
 
Index: gcc/tree.c
===================================================================
--- gcc/tree.c	2013-08-24 12:11:08.085684013 +0100
+++ gcc/tree.c	2013-08-24 01:00:00.000000000 +0100
@@ -1048,13 +1048,13 @@  build_int_cst (tree type, HOST_WIDE_INT
   if (!type)
     type = integer_type_node;
 
-  return wide_int_to_tree (type, low);
+  return wide_int_to_tree (type, wide_int::from_hwi (low, type));
 }
 
 /* static inline */ tree
 build_int_cstu (tree type, unsigned HOST_WIDE_INT cst)
 {
-  return wide_int_to_tree (type, cst);
+  return wide_int_to_tree (type, wide_int::from_hwi (cst, type));
 }
 
 /* Create an INT_CST node with a LOW value sign extended to TYPE.  */
@@ -1064,7 +1064,7 @@  build_int_cst_type (tree type, HOST_WIDE
 {
   gcc_assert (type);
 
-  return wide_int_to_tree (type, low);
+  return wide_int_to_tree (type, wide_int::from_hwi (low, type));
 }
 
 /* Constructs tree in type TYPE from with value given by CST.  Signedness
@@ -10688,7 +10688,7 @@  lower_bound_in_type (tree outer, tree in
 	 contains all values of INNER type.  In particular, both INNER
 	 and OUTER types have zero in common.  */
       || (oprec > iprec && TYPE_UNSIGNED (inner)))
-    return wide_int_to_tree (outer, 0);
+    return build_int_cst (outer, 0);
   else
     {
       /* If we are widening a signed type to another signed type, we
Index: gcc/wide-int.cc
===================================================================
--- gcc/wide-int.cc	2013-08-24 12:48:00.096379386 +0100
+++ gcc/wide-int.cc	2013-08-24 01:00:00.000000000 +0100
@@ -32,6 +32,8 @@  along with GCC; see the file COPYING3.
 const int MAX_SIZE = 4 * (MAX_BITSIZE_MODE_ANY_INT / 4
 		     + MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT + 32);
 
+static const HOST_WIDE_INT zeros[WIDE_INT_MAX_ELTS] = {};
+
 /*
  * Internal utilities.
  */
@@ -2517,7 +2519,7 @@  wide_int_ro::divmod_internal (bool compu
     {
       if (top_bit_of (dividend, dividend_len, dividend_prec))
 	{
-	  u0 = sub_large (wide_int (0).val, 1,
+	  u0 = sub_large (zeros, 1,
 			  dividend_prec, dividend, dividend_len, UNSIGNED);
 	  dividend = u0.val;
 	  dividend_len = u0.len;
@@ -2525,7 +2527,7 @@  wide_int_ro::divmod_internal (bool compu
 	}
       if (top_bit_of (divisor, divisor_len, divisor_prec))
 	{
-	  u1 = sub_large (wide_int (0).val, 1,
+	  u1 = sub_large (zeros, 1,
 			  divisor_prec, divisor, divisor_len, UNSIGNED);
 	  divisor = u1.val;
 	  divisor_len = u1.len;
Index: gcc/wide-int.h
===================================================================
--- gcc/wide-int.h	2013-08-24 12:14:20.979479335 +0100
+++ gcc/wide-int.h	2013-08-24 01:00:00.000000000 +0100
@@ -230,6 +230,11 @@  #define WIDE_INT_H
 #define DEBUG_WIDE_INT
 #endif
 
+/* Used for overloaded functions in which the only other acceptable
+   scalar type is const_tree.  It stops a plain 0 from being treated
+   as a null tree.  */
+struct never_used {};
+
 /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
    early examination of the target's mode file.  Thus it is safe that
    some small multiple of this number is easily larger than any number
@@ -324,15 +329,16 @@  class GTY(()) wide_int_ro
 public:
   wide_int_ro ();
   wide_int_ro (const_tree);
-  wide_int_ro (HOST_WIDE_INT);
-  wide_int_ro (int);
-  wide_int_ro (unsigned HOST_WIDE_INT);
-  wide_int_ro (unsigned int);
+  wide_int_ro (never_used *);
+  wide_int_ro (HOST_WIDE_INT, unsigned int);
+  wide_int_ro (int, unsigned int);
+  wide_int_ro (unsigned HOST_WIDE_INT, unsigned int);
+  wide_int_ro (unsigned int, unsigned int);
   wide_int_ro (const rtx_mode_t &);
 
   /* Conversions.  */
-  static wide_int_ro from_shwi (HOST_WIDE_INT, unsigned int = 0);
-  static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, unsigned int = 0);
+  static wide_int_ro from_shwi (HOST_WIDE_INT, unsigned int);
+  static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, unsigned int);
   static wide_int_ro from_hwi (HOST_WIDE_INT, const_tree);
   static wide_int_ro from_shwi (HOST_WIDE_INT, enum machine_mode);
   static wide_int_ro from_uhwi (unsigned HOST_WIDE_INT, enum machine_mode);
@@ -349,9 +355,11 @@  class GTY(()) wide_int_ro
 
   static wide_int_ro max_value (unsigned int, signop, unsigned int = 0);
   static wide_int_ro max_value (const_tree);
+  static wide_int_ro max_value (never_used *);
   static wide_int_ro max_value (enum machine_mode, signop);
   static wide_int_ro min_value (unsigned int, signop, unsigned int = 0);
   static wide_int_ro min_value (const_tree);
+  static wide_int_ro min_value (never_used *);
   static wide_int_ro min_value (enum machine_mode, signop);
 
   /* Small constants.  These are generally only needed in the places
@@ -842,18 +850,16 @@  class GTY(()) wide_int : public wide_int
   wide_int ();
   wide_int (const wide_int_ro &);
   wide_int (const_tree);
-  wide_int (HOST_WIDE_INT);
-  wide_int (int);
-  wide_int (unsigned HOST_WIDE_INT);
-  wide_int (unsigned int);
+  wide_int (never_used *);
+  wide_int (HOST_WIDE_INT, unsigned int);
+  wide_int (int, unsigned int);
+  wide_int (unsigned HOST_WIDE_INT, unsigned int);
+  wide_int (unsigned int, unsigned int);
   wide_int (const rtx_mode_t &);
 
   wide_int &operator = (const wide_int_ro &);
   wide_int &operator = (const_tree);
-  wide_int &operator = (HOST_WIDE_INT);
-  wide_int &operator = (int);
-  wide_int &operator = (unsigned HOST_WIDE_INT);
-  wide_int &operator = (unsigned int);
+  wide_int &operator = (never_used *);
   wide_int &operator = (const rtx_mode_t &);
 
   wide_int &operator ++ ();
@@ -904,28 +910,28 @@  inline wide_int_ro::wide_int_ro (const_t
 		      TYPE_PRECISION (TREE_TYPE (tcst)), false);
 }
 
-inline wide_int_ro::wide_int_ro (HOST_WIDE_INT op0)
+inline wide_int_ro::wide_int_ro (HOST_WIDE_INT op0, unsigned int prec)
 {
-  precision = 0;
+  precision = prec;
   val[0] = op0;
   len = 1;
 }
 
-inline wide_int_ro::wide_int_ro (int op0)
+inline wide_int_ro::wide_int_ro (int op0, unsigned int prec)
 {
-  precision = 0;
+  precision = prec;
   val[0] = op0;
   len = 1;
 }
 
-inline wide_int_ro::wide_int_ro (unsigned HOST_WIDE_INT op0)
+inline wide_int_ro::wide_int_ro (unsigned HOST_WIDE_INT op0, unsigned int prec)
 {
-  *this = from_uhwi (op0);
+  *this = from_uhwi (op0, prec);
 }
 
-inline wide_int_ro::wide_int_ro (unsigned int op0)
+inline wide_int_ro::wide_int_ro (unsigned int op0, unsigned int prec)
 {
-  *this = from_uhwi (op0);
+  *this = from_uhwi (op0, prec);
 }
 
 inline wide_int_ro::wide_int_ro (const rtx_mode_t &op0)
@@ -2264,7 +2270,7 @@  wide_int_ro::mul_high (const T &c, signo
 wide_int_ro::operator - () const
 {
   wide_int_ro r;
-  r = wide_int_ro (0) - *this;
+  r = zero (precision) - *this;
   return r;
 }
 
@@ -2277,7 +2283,7 @@  wide_int_ro::neg (bool *overflow) const
 
   *overflow = only_sign_bit_p ();
 
-  return wide_int_ro (0) - *this;
+  return zero (precision) - *this;
 }
 
 /* Return THIS - C.  */
@@ -3147,28 +3153,28 @@  inline wide_int::wide_int (const_tree tc
 		      TYPE_PRECISION (TREE_TYPE (tcst)), false);
 }
 
-inline wide_int::wide_int (HOST_WIDE_INT op0)
+inline wide_int::wide_int (HOST_WIDE_INT op0, unsigned int prec)
 {
-  precision = 0;
+  precision = prec;
   val[0] = op0;
   len = 1;
 }
 
-inline wide_int::wide_int (int op0)
+inline wide_int::wide_int (int op0, unsigned int prec)
 {
-  precision = 0;
+  precision = prec;
   val[0] = op0;
   len = 1;
 }
 
-inline wide_int::wide_int (unsigned HOST_WIDE_INT op0)
+inline wide_int::wide_int (unsigned HOST_WIDE_INT op0, unsigned int prec)
 {
-  *this = wide_int_ro::from_uhwi (op0);
+  *this = wide_int_ro::from_uhwi (op0, prec);
 }
 
-inline wide_int::wide_int (unsigned int op0)
+inline wide_int::wide_int (unsigned int op0, unsigned int prec)
 {
-  *this = wide_int_ro::from_uhwi (op0);
+  *this = wide_int_ro::from_uhwi (op0, prec);
 }
 
 inline wide_int::wide_int (const rtx_mode_t &op0)
@@ -3567,31 +3573,28 @@  inline fixed_wide_int <bitsize>::fixed_w
 
 template <int bitsize>
 inline fixed_wide_int <bitsize>::fixed_wide_int (HOST_WIDE_INT op0)
-  : wide_int_ro (op0)
+  : wide_int_ro (op0, bitsize)
 {
-  precision = bitsize;
 }
 
 template <int bitsize>
-inline fixed_wide_int <bitsize>::fixed_wide_int (int op0) : wide_int_ro (op0)
+inline fixed_wide_int <bitsize>::fixed_wide_int (int op0)
+  : wide_int_ro (op0, bitsize)
 {
-  precision = bitsize;
 }
 
 template <int bitsize>
 inline fixed_wide_int <bitsize>::fixed_wide_int (unsigned HOST_WIDE_INT op0)
-  : wide_int_ro (op0)
+  : wide_int_ro (op0, bitsize)
 {
-  precision = bitsize;
   if (neg_p (SIGNED))
     static_cast <wide_int_ro &> (*this) = zext (HOST_BITS_PER_WIDE_INT);
 }
 
 template <int bitsize>
 inline fixed_wide_int <bitsize>::fixed_wide_int (unsigned int op0)
-  : wide_int_ro (op0)
+  : wide_int_ro (op0, bitsize)
 {
-  precision = bitsize;
   if (sizeof (int) == sizeof (HOST_WIDE_INT)
       && neg_p (SIGNED))
     *this = zext (HOST_BITS_PER_WIDE_INT);
@@ -3661,9 +3664,7 @@  fixed_wide_int <bitsize>::operator = (co
 inline fixed_wide_int <bitsize> &
 fixed_wide_int <bitsize>::operator = (HOST_WIDE_INT op0)
 {
-  static_cast <wide_int_ro &> (*this) = op0;
-  precision = bitsize;
-
+  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
   return *this;
 }
 
@@ -3671,9 +3672,7 @@  fixed_wide_int <bitsize>::operator = (HO
 inline fixed_wide_int <bitsize> &
 fixed_wide_int <bitsize>::operator = (int op0)
 {
-  static_cast <wide_int_ro &> (*this) = op0;
-  precision = bitsize;
-
+  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
   return *this;
 }
 
@@ -3681,8 +3680,7 @@  fixed_wide_int <bitsize>::operator = (in
 inline fixed_wide_int <bitsize> &
 fixed_wide_int <bitsize>::operator = (unsigned HOST_WIDE_INT op0)
 {
-  static_cast <wide_int_ro &> (*this) = op0;
-  precision = bitsize;
+  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
 
   /* This is logically top_bit_set_p.  */
   if (neg_p (SIGNED))
@@ -3695,8 +3693,7 @@  fixed_wide_int <bitsize>::operator = (un
 inline fixed_wide_int <bitsize> &
 fixed_wide_int <bitsize>::operator = (unsigned int op0)
 {
-  static_cast <wide_int_ro &> (*this) = op0;
-  precision = bitsize;
+  static_cast <wide_int_ro &> (*this) = wide_int_ro (op0, bitsize);
 
   if (sizeof (int) == sizeof (HOST_WIDE_INT)
       && neg_p (SIGNED))