[wide-int] Make trees more like rtxes

Submitter: Richard Sandiford
Date: Oct. 22, 2013, 12:48 p.m.
Message ID: <87txg9cvzc.fsf@sandifor-thinkpad.stglab.manchester.uk.ibm.com>
Permalink /patch/285418/
State New

Comments

Richard Sandiford - Oct. 22, 2013, 12:48 p.m.
Hi Richard,

As discussed on IRC yesterday, I'd misunderstood what you were asking for,
and thought that we still wanted "maximum precision" to be the default for
trees.  This patch makes them behave like rtxes by default.  E.g.:

  wi::add (t1, t2)

requires trees t1 and t2 to have the same precision and produces a
wide_int result.

  wi::add (t1, 1)
  wi::add (1, t1)

still work but now produce a wide_int result that has the same precision
as t1.  On IRC you said:

<richi> IMHO only addr_wide_int + addr_wide_int should give addr_wide_int
...
<richi> addr_wide_int + tree would be disallowed or compute in 'tree' precision

<richi> addr_wide_int + rtx is disallowed, right?
<richi> so it should be disallowed for tree, too

The patch does that by adding:

  wi::address (t)

for when we want to extend tree t to addr_wide_int precision and:

  wi::extend (t)

for when we want to extend it to max_wide_int precision.  (Better names
welcome.)  These act just like addr_wide_int (t) and max_wide_int (t)
would on current sources, except that they use the tree representation
directly, so there's no copying.
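As a rough illustration of the intended semantics, here is a toy model — not GCC's real wide-int API; the `val` struct and its members are invented for this example:

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the proposed semantics; not GCC's real wide-int API.
// A "tree-like" value carries its own precision.  wi::add requires both
// operands to share a precision and keeps it; wi::extend widens a value
// to the maximum precision (modelled here as 64 bits).
struct val
{
  int64_t v;       // numeric value, stored sign-extended in an int64_t
  unsigned prec;   // precision in bits
};

namespace wi
{
  val add (const val &a, const val &b)
  {
    assert (a.prec == b.prec);   // mixing precisions is a checking failure
    return { a.v + b.v, a.prec };
  }

  val extend (const val &t)
  {
    return { t.v, 64 };          // t.v is already sign-extended
  }
}
```

So in this model wi::add (t1, t2) asserts equal precisions and keeps them, while wi::add (wi::extend (t1), wi::extend (t2)) computes at the maximum precision.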

Making them like rtxes means that it's no longer possible to have:

  addr_wide_int foo = t;

and:

  addr_wide_int (t)

since the idea is that a bare "t" always means a number in t's precision,
just like for rtxes.  So one downside is that you need to write:

  addr_wide_int foo = wi::address (t);

There aren't many places where we initialise a variable directly though.
In some ways it might even be an advantage, because it makes the site of
the extension obvious for:

   void foo (addr_wide_int);
   ...
   foo (t); // fails to compile
   foo (wi::address (t)); // OK

E.g. there are some places where we always want to sign-extend instead
of using TYPE_UNSIGNED (TREE_TYPE (t)).
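To see why making the extension site explicit matters, compare sign- and zero-extension of the same bit pattern (the helper names here are invented for illustration, not part of the patch):

```cpp
#include <cassert>
#include <cstdint>

// The same 8-bit pattern 0xff becomes -1 when sign-extended but 255 when
// zero-extended, so the choice of extension at the call site changes the
// value.  Assumes 0 < prec < 64.  Illustrative helpers only.
int64_t sext_hwi (uint64_t v, unsigned prec)
{
  uint64_t sign = uint64_t (1) << (prec - 1);
  return int64_t (((v & ((sign << 1) - 1)) ^ sign) - sign);
}

uint64_t zext_hwi (uint64_t v, unsigned prec)
{
  return v & ((uint64_t (1) << prec) - 1);
}
```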

Most of the patch is mechanical and many of the "wi::address (...)"s
and "wi::extend (...)"s reinstate "addr_wide_int (...)"s and
"max_wide_int (...)"s from the initial implementation.  Sorry for the
run-around on this.

One change I'd like to point out though is:

@@ -7287,7 +7287,9 @@ native_encode_int (const_tree expr, unsi
   for (byte = 0; byte < total_bytes; byte++)
     {
       int bitpos = byte * BITS_PER_UNIT;
-      value = wi::extract_uhwi (expr, bitpos, BITS_PER_UNIT);
+      /* Extend EXPR according to TYPE_SIGN if the precision isn't a whole
+	 number of bytes.  */
+      value = wi::extract_uhwi (wi::extend (expr), bitpos, BITS_PER_UNIT);
 
       if (total_bytes > UNITS_PER_WORD)
 	{

I think this preserves the existing trunk behaviour but I wasn't sure
whether it was supposed to work like that or whether upper bits should
be zero.
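For a concrete picture of what extending first does to the encoded bytes, here is a simplified single-HWI model of the extraction (not the real wi::extract_uhwi, which handles multi-HWI values):

```cpp
#include <cassert>
#include <cstdint>

// Simplified single-HWI model of wi::extract_uhwi: take BITS bits of VAL
// starting at BITPOS.  If a negative narrow constant is sign-extended
// first (as wi::extend does), the bytes above its precision come out as
// 0xff rather than zero - the behaviour question raised above.
uint64_t extract_uhwi (int64_t val, unsigned bitpos, unsigned bits)
{
  return (uint64_t (val) >> bitpos) & ((uint64_t (1) << bits) - 1);
}
```

E.g. for an 8-bit -2 extended to 64 bits, byte 0 is 0xfe and byte 1 is 0xff: the upper bytes follow the sign rather than being zero.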

This patch is purely about getting the semantics right.  I'll try
out some optimisation ideas if this is OK.

The patch applies on top of the patch to make the upper bits undefined
on read by default:

  http://gcc.gnu.org/ml/gcc-patches/2013-10/msg01610.html

although I've since found one bug with that; the decompose routine should be:

  /* If an unsigned constant occupies a whole number of HWIs and has the
     upper bit set, its representation includes an extra zero HWI,
     so that the representation can be used for wider precisions.
     Trim the length if we're accessing the tree in its own precision.  */
  if (__builtin_expect (len > max_len, 0))
    do
      len--;
    while (len > 1 && val[len - 1] == -1 && val[len - 2] < 0);
                                        ^^^^^^^^^^^^^^^^^^^^
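The invariant that comment describes can be sketched like this — a much-simplified model of the encoding, not the actual decompose routine:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified model of the representation invariant: blocks are
// HOST_WIDE_INTs in little-endian order, read back as a sign-extended
// value.  An unsigned constant that fills a whole HWI and has its upper
// bit set gets an extra zero block, so the same blocks remain correct at
// wider precisions; a reader working in the constant's own precision may
// therefore see more blocks than it needs and has to trim the length.
std::vector<int64_t> encode_uhwi (uint64_t v)
{
  std::vector<int64_t> blocks;
  blocks.push_back (int64_t (v));
  if (int64_t (v) < 0)    // upper bit set: append the extra zero HWI
    blocks.push_back (0);
  return blocks;
}
```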

Tested on x86_64-linux-gnu and powerpc64-linux-gnu.  OK for wide-int?

Thanks,
Richard
Richard Guenther - Oct. 23, 2013, 9:09 a.m.
On Tue, 22 Oct 2013, Richard Sandiford wrote:

> Hi Richard,
> 
> As discussed on IRC yesterday, I'd misunderstood what you were asking for,
> and thought that we still wanted "maximum precision" to be the default for
> trees.  This patch makes them behave like rtxes by default.  E.g.:
> 
>   wi::add (t1, t2)
> 
> requires trees t1 and t2 to have the same precision and produces a
> wide_int result.
> 
>   wi::add (t1, 1)
>   wi::add (1, t1)
> 
> still work but now produce a wide_int result that has the same precision
> as t1.  On IRC you said:
> 
> <richi> IMHO only addr_wide_int + addr_wide_int should give addr_wide_int
> ...
> <richi> addr_wide_int + tree would be disallowed or compute in 'tree' precision
> 
> <richi> addr_wide_int + rtx is disallowed, right?
> <richi> so it should be disallowed for tree, too
> 
> The patch does that by adding:
> 
>   wi::address (t)
> 
> for when we want to extend tree t to addr_wide_int precision and:
> 
>   wi::extend (t)
> 
> for when we want to extend it to max_wide_int precision.  (Better names
> welcome.)  These act just like addr_wide_int (t) and max_wide_int (t)
> would on current sources, except that they use the tree representation
> directly, so there's no copying.

Good.  Better names - ah well, wi::to_max_wide_int (t) and
wi::to_addr_wide_int (t)?  Btw, "addr_wide_int" is an odd name as it
has at least the precision of the maximum _bit_ offset possible, right?
So more like [bit_]offset_wide_int?  Or just [bit_]offset_int?
And then wi::to_offset (t) and wi::to_max (t)?

> Making them like rtxes means that it's no longer possible to have:
> 
>   addr_wide_int foo = t;
> 
> and:
> 
>   addr_wide_int (t)
> 
> since the idea is that a bare "t" always means a number in t's precision,
> just like for rtxes.  So one downside is that you need to write:
> 
>   addr_wide_int foo = wi::address (t);
> 
> There aren't many places where we initialise a variable directly though.
> In some ways it might even be an advantage, because it makes the site of
> the extension obvious for:
> 
>    void foo (addr_wide_int);
>    ...
>    foo (t); // fails to compile
>    foo (wi::address (t)); // OK
> 
> E.g. there are some places where we always want to sign-extend instead
> of using TYPE_UNSIGNED (TREE_TYPE (t)).

Yes, having it obvious in the source where this happens is good.

> Most of the patch is mechanical and many of the "wi::address (...)"s
> and "wi::extend (...)"s reinstate "addr_wide_int (...)"s and
> "max_wide_int (...)"s from the initial implementation.  Sorry for the
> run-around on this.
> 
> One change I'd like to point out though is:
> 
> @@ -7287,7 +7287,9 @@ native_encode_int (const_tree expr, unsi
>    for (byte = 0; byte < total_bytes; byte++)
>      {
>        int bitpos = byte * BITS_PER_UNIT;
> -      value = wi::extract_uhwi (expr, bitpos, BITS_PER_UNIT);
> +      /* Extend EXPR according to TYPE_SIGN if the precision isn't a whole
> +	 number of bytes.  */
> +      value = wi::extract_uhwi (wi::extend (expr), bitpos, BITS_PER_UNIT);
>  
>        if (total_bytes > UNITS_PER_WORD)
>  	{
> 
> I think this preserves the existing trunk behaviour but I wasn't sure
> whether it was supposed to work like that or whether upper bits should
> be zero.

I think the upper bits are undefined, the trunk native_interpret_int
does

  result = double_int::from_buffer (ptr, total_bytes);

  return double_int_to_tree (type, result);

where the call to double_int_to_tree re-extends according to the types
precision and sign.  wide_int_to_tree doesn't though?

> This patch is purely about getting the semantics right.  I'll try
> out some optimisation ideas if this is OK.
> 
> The patch applies on top of the patch to make the upper bits undefined
> on read by default:
> 
>   http://gcc.gnu.org/ml/gcc-patches/2013-10/msg01610.html
> 
> although I've since found one bug with that; the decompose routine should be:
> 
>   /* If an unsigned constant occupies a whole number of HWIs and has the
>      upper bit set, its representation includes an extra zero HWI,
>      so that the representation can be used for wider precisions.
>      Trim the length if we're accessing the tree in its own precision.  */
>   if (__builtin_expect (len > max_len, 0))
>     do
>       len--;
>     while (len > 1 && val[len - 1] == -1 && val[len - 2] < 0);
>                                         ^^^^^^^^^^^^^^^^^^^^
>
> Tested on x86_64-linux-gnu and powerpc64-linux-gnu.  OK for wide-int?

Ok.

Thanks,
Richard.

> Thanks,
> Richard
> 
> 
> Index: gcc/alias.c
> ===================================================================
> --- gcc/alias.c	2013-10-22 10:18:04.416965373 +0100
> +++ gcc/alias.c	2013-10-22 10:18:20.376102089 +0100
> @@ -2352,9 +2352,9 @@ adjust_offset_for_component_ref (tree x,
>  	  return;
>  	}
>  
> -      woffset = xoffset;
> -      woffset += wi::udiv_trunc (addr_wide_int (DECL_FIELD_BIT_OFFSET (field)),
> -				 BITS_PER_UNIT);
> +      woffset = (wi::address (xoffset)
> +		 + wi::udiv_trunc (wi::address (DECL_FIELD_BIT_OFFSET (field)),
> +				   BITS_PER_UNIT));
>  
>        if (!wi::fits_uhwi_p (woffset))
>  	{
> Index: gcc/cp/init.c
> ===================================================================
> --- gcc/cp/init.c	2013-10-22 10:18:04.417965382 +0100
> +++ gcc/cp/init.c	2013-10-22 10:18:20.376102089 +0100
> @@ -2300,7 +2300,7 @@ build_new_1 (vec<tree, va_gc> **placemen
>        if (TREE_CODE (inner_nelts_cst) == INTEGER_CST)
>  	{
>  	  bool overflow;
> -	  addr_wide_int result = wi::mul (addr_wide_int (inner_nelts_cst),
> +	  addr_wide_int result = wi::mul (wi::address (inner_nelts_cst),
>  					  inner_nelts_count, SIGNED,
>  					  &overflow);
>  	  if (overflow)
> @@ -2417,9 +2417,9 @@ build_new_1 (vec<tree, va_gc> **placemen
>        /* Unconditionally subtract the cookie size.  This decreases the
>  	 maximum object size and is safe even if we choose not to use
>  	 a cookie after all.  */
> -      max_size -= cookie_size;
> +      max_size -= wi::address (cookie_size);
>        bool overflow;
> -      inner_size = wi::mul (addr_wide_int (size), inner_nelts_count, SIGNED,
> +      inner_size = wi::mul (wi::address (size), inner_nelts_count, SIGNED,
>  			    &overflow);
>        if (overflow || wi::gtu_p (inner_size, max_size))
>  	{
> Index: gcc/cp/mangle.c
> ===================================================================
> --- gcc/cp/mangle.c	2013-10-22 10:18:04.418965390 +0100
> +++ gcc/cp/mangle.c	2013-10-22 10:18:20.377102098 +0100
> @@ -3223,7 +3223,7 @@ write_array_type (const tree type)
>  	{
>  	  /* The ABI specifies that we should mangle the number of
>  	     elements in the array, not the largest allowed index.  */
> -	  addr_wide_int wmax = addr_wide_int (max) + 1;
> +	  addr_wide_int wmax = wi::address (max) + 1;
>  	  /* Truncate the result - this will mangle [0, SIZE_INT_MAX]
>  	     number of elements as zero.  */
>  	  wmax = wi::zext (wmax, TYPE_PRECISION (TREE_TYPE (max)));
> Index: gcc/cp/tree.c
> ===================================================================
> --- gcc/cp/tree.c	2013-10-22 10:18:04.419965399 +0100
> +++ gcc/cp/tree.c	2013-10-22 10:18:20.378102106 +0100
> @@ -2603,7 +2603,7 @@ cp_tree_equal (tree t1, tree t2)
>    switch (code1)
>      {
>      case INTEGER_CST:
> -      return max_wide_int (t1) == max_wide_int (t2);
> +      return wi::extend (t1) == wi::extend (t2);
>  
>      case REAL_CST:
>        return REAL_VALUES_EQUAL (TREE_REAL_CST (t1), TREE_REAL_CST (t2));
> Index: gcc/cp/typeck2.c
> ===================================================================
> --- gcc/cp/typeck2.c	2013-10-22 10:18:04.420965408 +0100
> +++ gcc/cp/typeck2.c	2013-10-22 10:18:20.378102106 +0100
> @@ -1119,8 +1119,8 @@ process_init_constructor_array (tree typ
>      {
>        tree domain = TYPE_DOMAIN (type);
>        if (domain && TREE_CONSTANT (TYPE_MAX_VALUE (domain)))
> -	len = wi::ext (addr_wide_int (TYPE_MAX_VALUE (domain))
> -		       - TYPE_MIN_VALUE (domain) + 1,
> +	len = wi::ext (wi::address (TYPE_MAX_VALUE (domain))
> +		       - wi::address (TYPE_MIN_VALUE (domain)) + 1,
>  		       TYPE_PRECISION (TREE_TYPE (domain)),
>  		       TYPE_SIGN (TREE_TYPE (domain))).to_uhwi ();
>        else
> Index: gcc/dwarf2out.c
> ===================================================================
> --- gcc/dwarf2out.c	2013-10-22 10:18:04.426965459 +0100
> +++ gcc/dwarf2out.c	2013-10-22 10:18:20.381102132 +0100
> @@ -10312,7 +10312,7 @@ wide_int_type_size_in_bits (const_tree t
>    else if (TYPE_SIZE (type) == NULL_TREE)
>      return 0;
>    else if (TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST)
> -    return TYPE_SIZE (type);
> +    return wi::address (TYPE_SIZE (type));
>    else
>      return TYPE_ALIGN (type);
>  }
> @@ -14721,7 +14721,7 @@ field_byte_offset (const_tree decl)
>    if (TREE_CODE (bit_position (decl)) != INTEGER_CST)
>      return 0;
>  
> -  bitpos_int = bit_position (decl);
> +  bitpos_int = wi::address (bit_position (decl));
>  
>  #ifdef PCC_BITFIELD_TYPE_MATTERS
>    if (PCC_BITFIELD_TYPE_MATTERS)
> @@ -14747,7 +14747,7 @@ field_byte_offset (const_tree decl)
>  
>        /* If the size of the field is not constant, use the type size.  */
>        if (TREE_CODE (field_size_tree) == INTEGER_CST)
> -	field_size_in_bits = field_size_tree;
> +	field_size_in_bits = wi::address (field_size_tree);
>        else
>  	field_size_in_bits = type_size_in_bits;
>  
> Index: gcc/expmed.c
> ===================================================================
> --- gcc/expmed.c	2013-10-22 10:18:04.428965476 +0100
> +++ gcc/expmed.c	2013-10-22 10:18:20.382102140 +0100
> @@ -1821,9 +1821,7 @@ extract_fixed_bit_field (enum machine_mo
>  lshift_value (enum machine_mode mode, unsigned HOST_WIDE_INT value,
>  	      int bitpos)
>  {
> -  return 
> -    immed_wide_int_const (wi::lshift (max_wide_int (value),
> -				      bitpos), mode);
> +  return immed_wide_int_const (wi::lshift (value, bitpos), mode);
>  }
>  
>  /* Extract a bit field that is split across two words
> Index: gcc/expr.c
> ===================================================================
> --- gcc/expr.c	2013-10-22 10:18:04.431965502 +0100
> +++ gcc/expr.c	2013-10-22 10:18:20.384102157 +0100
> @@ -6581,7 +6581,7 @@ get_inner_reference (tree exp, HOST_WIDE
>        switch (TREE_CODE (exp))
>  	{
>  	case BIT_FIELD_REF:
> -	  bit_offset += TREE_OPERAND (exp, 2);
> +	  bit_offset += wi::address (TREE_OPERAND (exp, 2));
>  	  break;
>  
>  	case COMPONENT_REF:
> @@ -6596,7 +6596,7 @@ get_inner_reference (tree exp, HOST_WIDE
>  	      break;
>  
>  	    offset = size_binop (PLUS_EXPR, offset, this_offset);
> -	    bit_offset += DECL_FIELD_BIT_OFFSET (field);
> +	    bit_offset += wi::address (DECL_FIELD_BIT_OFFSET (field));
>  
>  	    /* ??? Right now we don't do anything with DECL_OFFSET_ALIGN.  */
>  	  }
> @@ -6675,7 +6675,7 @@ get_inner_reference (tree exp, HOST_WIDE
>       this conversion.  */
>    if (TREE_CODE (offset) == INTEGER_CST)
>      {
> -      addr_wide_int tem = wi::sext (addr_wide_int (offset),
> +      addr_wide_int tem = wi::sext (wi::address (offset),
>  				    TYPE_PRECISION (sizetype));
>        tem = wi::lshift (tem, (BITS_PER_UNIT == 8
>  			      ? 3 : exact_log2 (BITS_PER_UNIT)));
> Index: gcc/fold-const.c
> ===================================================================
> --- gcc/fold-const.c	2013-10-22 10:18:04.437965553 +0100
> +++ gcc/fold-const.c	2013-10-22 10:18:20.386102175 +0100
> @@ -1581,7 +1581,7 @@ fold_convert_const_int_from_int (tree ty
>    /* Given an integer constant, make new constant with new type,
>       appropriately sign-extended or truncated.  Use max_wide_int
>       so that any extension is done according ARG1's type.  */
> -  return force_fit_type (type, max_wide_int (arg1),
> +  return force_fit_type (type, wi::extend (arg1),
>  			 !POINTER_TYPE_P (TREE_TYPE (arg1)),
>  			 TREE_OVERFLOW (arg1));
>  }
> @@ -1622,7 +1622,7 @@ fold_convert_const_int_from_real (enum t
>    if (REAL_VALUE_ISNAN (r))
>      {
>        overflow = true;
> -      val = max_wide_int (0);
> +      val = wi::zero (TYPE_PRECISION (type));
>      }
>  
>    /* See if R is less than the lower bound or greater than the
> @@ -1635,7 +1635,7 @@ fold_convert_const_int_from_real (enum t
>        if (REAL_VALUES_LESS (r, l))
>  	{
>  	  overflow = true;
> -	  val = max_wide_int (lt);
> +	  val = lt;
>  	}
>      }
>  
> @@ -1648,7 +1648,7 @@ fold_convert_const_int_from_real (enum t
>  	  if (REAL_VALUES_LESS (u, r))
>  	    {
>  	      overflow = true;
> -	      val = max_wide_int (ut);
> +	      val = ut;
>  	    }
>  	}
>      }
> @@ -6611,7 +6611,7 @@ fold_single_bit_test (location_t loc, en
>  	 not overflow, adjust BITNUM and INNER.  */
>        if (TREE_CODE (inner) == RSHIFT_EXPR
>  	  && TREE_CODE (TREE_OPERAND (inner, 1)) == INTEGER_CST
> -	  && wi::ltu_p (wi::add (TREE_OPERAND (inner, 1), bitnum),
> +	  && wi::ltu_p (wi::extend (TREE_OPERAND (inner, 1)) + bitnum,
>  			TYPE_PRECISION (type)))
>  	{
>  	  bitnum += tree_to_hwi (TREE_OPERAND (inner, 1));
> @@ -7287,7 +7287,9 @@ native_encode_int (const_tree expr, unsi
>    for (byte = 0; byte < total_bytes; byte++)
>      {
>        int bitpos = byte * BITS_PER_UNIT;
> -      value = wi::extract_uhwi (expr, bitpos, BITS_PER_UNIT);
> +      /* Extend EXPR according to TYPE_SIGN if the precision isn't a whole
> +	 number of bytes.  */
> +      value = wi::extract_uhwi (wi::extend (expr), bitpos, BITS_PER_UNIT);
>  
>        if (total_bytes > UNITS_PER_WORD)
>  	{
> @@ -10450,7 +10452,7 @@ fold_binary_loc (location_t loc,
>  	    code11 = TREE_CODE (tree11);
>  	    if (code01 == INTEGER_CST
>  		&& code11 == INTEGER_CST
> -		&& (wi::add (tree01, tree11)
> +		&& (wi::extend (tree01) + wi::extend (tree11)
>  		    == element_precision (TREE_TYPE (TREE_OPERAND (arg0, 0)))))
>  	      {
>  		tem = build2_loc (loc, LROTATE_EXPR,
> Index: gcc/fortran/trans-array.c
> ===================================================================
> --- gcc/fortran/trans-array.c	2013-10-22 10:18:04.440965579 +0100
> +++ gcc/fortran/trans-array.c	2013-10-22 10:18:20.387102183 +0100
> @@ -5385,7 +5385,7 @@ gfc_conv_array_initializer (tree type, g
>        else
>  	gfc_conv_structure (&se, expr, 1);
>  
> -      wtmp = addr_wide_int (TYPE_MAX_VALUE (TYPE_DOMAIN (type))) + 1;
> +      wtmp = wi::address (TYPE_MAX_VALUE (TYPE_DOMAIN (type))) + 1;
>        gcc_assert (wtmp != 0);
>        /* This will probably eat buckets of memory for large arrays.  */
>        while (wtmp != 0)
> Index: gcc/gimple-fold.c
> ===================================================================
> --- gcc/gimple-fold.c	2013-10-22 10:18:04.441965588 +0100
> +++ gcc/gimple-fold.c	2013-10-22 10:19:05.060484982 +0100
> @@ -2821,14 +2821,14 @@ fold_array_ctor_reference (tree type, tr
>        /* Static constructors for variably sized objects makes no sense.  */
>        gcc_assert (TREE_CODE (TYPE_MIN_VALUE (domain_type)) == INTEGER_CST);
>        index_type = TREE_TYPE (TYPE_MIN_VALUE (domain_type));
> -      low_bound = TYPE_MIN_VALUE (domain_type);
> +      low_bound = wi::address (TYPE_MIN_VALUE (domain_type));
>      }
>    else
>      low_bound = 0;
>    /* Static constructors for variably sized objects makes no sense.  */
>    gcc_assert (TREE_CODE (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (ctor))))
>  	      == INTEGER_CST);
> -  elt_size = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (ctor)));
> +  elt_size = wi::address (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (ctor))));
>  
>    /* We can handle only constantly sized accesses that are known to not
>       be larger than size of array element.  */
> @@ -2866,12 +2866,12 @@ fold_array_ctor_reference (tree type, tr
>        if (cfield)
>  	{
>  	  if (TREE_CODE (cfield) == INTEGER_CST)
> -	    max_index = index = cfield;
> +	    max_index = index = wi::address (cfield);
>  	  else
>  	    {
>  	      gcc_assert (TREE_CODE (cfield) == RANGE_EXPR);
> -	      index = TREE_OPERAND (cfield, 0);
> -	      max_index = TREE_OPERAND (cfield, 1);
> +	      index = wi::address (TREE_OPERAND (cfield, 0));
> +	      max_index = wi::address (TREE_OPERAND (cfield, 1));
>  	    }
>  	}
>        else
> @@ -2913,7 +2913,7 @@ fold_nonarray_ctor_reference (tree type,
>        tree field_offset = DECL_FIELD_BIT_OFFSET (cfield);
>        tree field_size = DECL_SIZE (cfield);
>        addr_wide_int bitoffset;
> -      addr_wide_int byte_offset_cst = byte_offset;
> +      addr_wide_int byte_offset_cst = wi::address (byte_offset);
>        addr_wide_int bitoffset_end, access_end;
>  
>        /* Variable sized objects in static constructors makes no sense,
> @@ -2925,10 +2925,11 @@ fold_nonarray_ctor_reference (tree type,
>  		      : TREE_CODE (TREE_TYPE (cfield)) == ARRAY_TYPE));
>  
>        /* Compute bit offset of the field.  */
> -      bitoffset = wi::add (field_offset, byte_offset_cst * BITS_PER_UNIT);
> +      bitoffset = (wi::address (field_offset)
> +		   + byte_offset_cst * BITS_PER_UNIT);
>        /* Compute bit offset where the field ends.  */
>        if (field_size != NULL_TREE)
> -	bitoffset_end = bitoffset + field_size;
> +	bitoffset_end = bitoffset + wi::address (field_size);
>        else
>  	bitoffset_end = 0;
>  
> @@ -3043,8 +3044,8 @@ fold_const_aggregate_ref_1 (tree t, tree
>  	  if ((TREE_CODE (low_bound) == INTEGER_CST)
>  	      && (tree_fits_uhwi_p (unit_size)))
>  	    {
> -	      addr_wide_int woffset 
> -		= wi::sext (addr_wide_int (idx) - low_bound,
> +	      addr_wide_int woffset
> +		= wi::sext (wi::address (idx) - wi::address (low_bound),
>  			    TYPE_PRECISION (TREE_TYPE (idx)));
>  	      
>  	      if (wi::fits_shwi_p (woffset))
> Index: gcc/gimple-ssa-strength-reduction.c
> ===================================================================
> --- gcc/gimple-ssa-strength-reduction.c	2013-10-22 10:18:04.444965613 +0100
> +++ gcc/gimple-ssa-strength-reduction.c	2013-10-22 10:18:20.389102200 +0100
> @@ -792,7 +792,7 @@ backtrace_base_for_ref (tree *pbase)
>  	{
>  	  /* X = B + (1 * S), S is integer constant.  */
>  	  *pbase = base_cand->base_expr;
> -	  return base_cand->stride;
> +	  return wi::extend (base_cand->stride);
>  	}
>        else if (base_cand->kind == CAND_ADD
>  	       && TREE_CODE (base_cand->stride) == INTEGER_CST
> @@ -860,14 +860,14 @@ restructure_reference (tree *pbase, tree
>    type = TREE_TYPE (TREE_OPERAND (base, 1));
>  
>    mult_op0 = TREE_OPERAND (offset, 0);
> -  c3 = TREE_OPERAND (offset, 1);
> +  c3 = wi::extend (TREE_OPERAND (offset, 1));
>  
>    if (TREE_CODE (mult_op0) == PLUS_EXPR)
>  
>      if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST)
>        {
>  	t2 = TREE_OPERAND (mult_op0, 0);
> -	c2 = TREE_OPERAND (mult_op0, 1);
> +	c2 = wi::extend (TREE_OPERAND (mult_op0, 1));
>        }
>      else
>        return false;
> @@ -877,7 +877,7 @@ restructure_reference (tree *pbase, tree
>      if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST)
>        {
>  	t2 = TREE_OPERAND (mult_op0, 0);
> -	c2 = -(max_wide_int)TREE_OPERAND (mult_op0, 1);
> +	c2 = -wi::extend (TREE_OPERAND (mult_op0, 1));
>        }
>      else
>        return false;
> @@ -979,7 +979,7 @@ create_mul_ssa_cand (gimple gs, tree bas
>  	     ============================
>  	     X = B + ((i' * S) * Z)  */
>  	  base = base_cand->base_expr;
> -	  index = base_cand->index * base_cand->stride;
> +	  index = base_cand->index * wi::extend (base_cand->stride);
>  	  stride = stride_in;
>  	  ctype = base_cand->cand_type;
>  	  if (has_single_use (base_in))
> @@ -1035,7 +1035,7 @@ create_mul_imm_cand (gimple gs, tree bas
>  	     X = (B + i') * (S * c)  */
>  	  base = base_cand->base_expr;
>  	  index = base_cand->index;
> -	  temp = wi::mul (base_cand->stride, stride_in);
> +	  temp = wi::extend (base_cand->stride) * wi::extend (stride_in);
>  	  stride = wide_int_to_tree (TREE_TYPE (stride_in), temp);
>  	  ctype = base_cand->cand_type;
>  	  if (has_single_use (base_in))
> @@ -1065,7 +1065,7 @@ create_mul_imm_cand (gimple gs, tree bas
>  	     ===========================
>  	     X = (B + S) * c  */
>  	  base = base_cand->base_expr;
> -	  index = base_cand->stride;
> +	  index = wi::extend (base_cand->stride);
>  	  stride = stride_in;
>  	  ctype = base_cand->cand_type;
>  	  if (has_single_use (base_in))
> @@ -1166,7 +1166,7 @@ create_add_ssa_cand (gimple gs, tree bas
>  	     ===========================
>  	     X = Y + ((+/-1 * S) * B)  */
>  	  base = base_in;
> -	  index = addend_cand->stride;
> +	  index = wi::extend (addend_cand->stride);
>  	  if (subtract_p)
>  	    index = -index;
>  	  stride = addend_cand->base_expr;
> @@ -1216,7 +1216,7 @@ create_add_ssa_cand (gimple gs, tree bas
>  		     ===========================
>  		     Value:  X = Y + ((-1 * S) * B)  */
>  		  base = base_in;
> -		  index = subtrahend_cand->stride;
> +		  index = wi::extend (subtrahend_cand->stride);
>  		  index = -index;
>  		  stride = subtrahend_cand->base_expr;
>  		  ctype = TREE_TYPE (base_in);
> @@ -1272,7 +1272,8 @@ create_add_imm_cand (gimple gs, tree bas
>        signop sign = TYPE_SIGN (TREE_TYPE (base_cand->stride));
>  
>        if (TREE_CODE (base_cand->stride) == INTEGER_CST
> -	  && wi::multiple_of_p (index_in, base_cand->stride, sign, &multiple))
> +	  && wi::multiple_of_p (index_in, wi::extend (base_cand->stride),
> +				sign, &multiple))
>  	{
>  	  /* Y = (B + i') * S, S constant, c = kS for some integer k
>  	     X = Y + c
> @@ -1360,7 +1361,7 @@ slsr_process_add (gimple gs, tree rhs1,
>        max_wide_int index;
>  
>        /* Record an interpretation for the add-immediate.  */
> -      index = rhs2;
> +      index = wi::extend (rhs2);
>        if (subtract_p)
>  	index = -index;
>  
> @@ -2027,7 +2028,7 @@ replace_unconditional_candidate (slsr_ca
>      return;
>  
>    basis = lookup_cand (c->basis);
> -  bump = cand_increment (c) * c->stride;
> +  bump = cand_increment (c) * wi::extend (c->stride);
>  
>    replace_mult_candidate (c, gimple_assign_lhs (basis->cand_stmt), bump);
>  }
> @@ -2078,7 +2079,7 @@ create_add_on_incoming_edge (slsr_cand_t
>      {
>        tree bump_tree;
>        enum tree_code code = PLUS_EXPR;
> -      max_wide_int bump = increment * c->stride;
> +      max_wide_int bump = increment * wi::extend (c->stride);
>        if (wi::neg_p (bump))
>  	{
>  	  code = MINUS_EXPR;
> @@ -2246,7 +2247,7 @@ replace_conditional_candidate (slsr_cand
>    name = create_phi_basis (c, lookup_cand (c->def_phi)->cand_stmt,
>  			   basis_name, loc, KNOWN_STRIDE);
>    /* Replace C with an add of the new basis phi and a constant.  */
> -  bump = c->index * c->stride;
> +  bump = c->index * wi::extend (c->stride);
>  
>    replace_mult_candidate (c, name, bump);
>  }
> Index: gcc/ipa-prop.c
> ===================================================================
> --- gcc/ipa-prop.c	2013-10-22 10:18:04.445965622 +0100
> +++ gcc/ipa-prop.c	2013-10-22 10:18:20.390102209 +0100
> @@ -3640,8 +3640,7 @@ ipa_modify_call_arguments (struct cgraph
>  		  if (TYPE_ALIGN (type) > align)
>  		    align = TYPE_ALIGN (type);
>  		}
> -	      misalign += (wi::sext (addr_wide_int (off),
> -				     TYPE_PRECISION (TREE_TYPE (off)))
> +	      misalign += (addr_wide_int::from (off, SIGNED)
>  			   * BITS_PER_UNIT).to_short_addr ();
>  	      misalign = misalign & (align - 1);
>  	      if (misalign != 0)
> Index: gcc/predict.c
> ===================================================================
> --- gcc/predict.c	2013-10-22 10:18:04.445965622 +0100
> +++ gcc/predict.c	2013-10-22 10:18:20.390102209 +0100
> @@ -1300,10 +1300,10 @@ predict_iv_comparison (struct loop *loop
>        bool overflow, overall_overflow = false;
>        max_wide_int compare_count, tem, loop_count;
>  
> -      max_wide_int loop_bound = loop_bound_var;
> -      max_wide_int compare_bound = compare_var;
> -      max_wide_int base = compare_base;
> -      max_wide_int compare_step = compare_step_var;
> +      max_wide_int loop_bound = wi::extend (loop_bound_var);
> +      max_wide_int compare_bound = wi::extend (compare_var);
> +      max_wide_int base = wi::extend (compare_base);
> +      max_wide_int compare_step = wi::extend (compare_step_var);
>  
>        /* (loop_bound - base) / compare_step */
>        tem = wi::sub (loop_bound, base, SIGNED, &overflow);
> Index: gcc/stor-layout.c
> ===================================================================
> --- gcc/stor-layout.c	2013-10-22 10:18:04.446965630 +0100
> +++ gcc/stor-layout.c	2013-10-22 10:18:20.391102217 +0100
> @@ -2198,11 +2198,10 @@ layout_type (tree type)
>  		    && TYPE_UNSIGNED (TREE_TYPE (lb))
>  		    && tree_int_cst_lt (ub, lb))
>  		  {
> -		    unsigned prec = TYPE_PRECISION (TREE_TYPE (lb));
>  		    lb = wide_int_to_tree (ssizetype,
> -					   wi::sext (addr_wide_int (lb), prec));
> +					   addr_wide_int::from (lb, SIGNED));
>  		    ub = wide_int_to_tree (ssizetype,
> -					   wi::sext (addr_wide_int (ub), prec));
> +					   addr_wide_int::from (ub, SIGNED));
>  		  }
>  		length
>  		  = fold_convert (sizetype,
> Index: gcc/tree-affine.c
> ===================================================================
> --- gcc/tree-affine.c	2013-10-22 10:18:04.446965630 +0100
> +++ gcc/tree-affine.c	2013-10-22 10:18:20.391102217 +0100
> @@ -269,7 +269,7 @@ tree_to_aff_combination (tree expr, tree
>    switch (code)
>      {
>      case INTEGER_CST:
> -      aff_combination_const (comb, type, expr);
> +      aff_combination_const (comb, type, wi::extend (expr));
>        return;
>  
>      case POINTER_PLUS_EXPR:
> @@ -292,7 +292,7 @@ tree_to_aff_combination (tree expr, tree
>        if (TREE_CODE (cst) != INTEGER_CST)
>  	break;
>        tree_to_aff_combination (TREE_OPERAND (expr, 0), type, comb);
> -      aff_combination_scale (comb, cst);
> +      aff_combination_scale (comb, wi::extend (cst));
>        return;
>  
>      case NEGATE_EXPR:
> @@ -383,7 +383,7 @@ add_elt_to_tree (tree expr, tree type, t
>      {
>        elt = convert_to_ptrofftype (elt);
>        elt = fold_build1 (NEGATE_EXPR, TREE_TYPE (elt), elt);
> -      scale = max_wide_int (1);
> +      scale = 1;
>      }
>  
>    if (scale == 1)
> Index: gcc/tree-dfa.c
> ===================================================================
> --- gcc/tree-dfa.c	2013-10-22 10:18:04.472965853 +0100
> +++ gcc/tree-dfa.c	2013-10-22 10:18:20.391102217 +0100
> @@ -422,7 +422,7 @@ get_ref_base_and_extent (tree exp, HOST_
>        switch (TREE_CODE (exp))
>  	{
>  	case BIT_FIELD_REF:
> -	  bit_offset += TREE_OPERAND (exp, 2);
> +	  bit_offset += wi::address (TREE_OPERAND (exp, 2));
>  	  break;
>  
>  	case COMPONENT_REF:
> @@ -432,11 +432,11 @@ get_ref_base_and_extent (tree exp, HOST_
>  
>  	    if (this_offset && TREE_CODE (this_offset) == INTEGER_CST)
>  	      {
> -		addr_wide_int woffset = this_offset;
> +		addr_wide_int woffset = wi::address (this_offset);
>  		woffset = wi::lshift (woffset,
>  				      (BITS_PER_UNIT == 8
>  				       ? 3 : exact_log2 (BITS_PER_UNIT)));
> -		woffset += DECL_FIELD_BIT_OFFSET (field);
> +		woffset += wi::address (DECL_FIELD_BIT_OFFSET (field));
>  		bit_offset += woffset;
>  
>  		/* If we had seen a variable array ref already and we just
> @@ -497,10 +497,10 @@ get_ref_base_and_extent (tree exp, HOST_
>  		&& (unit_size = array_ref_element_size (exp),
>  		    TREE_CODE (unit_size) == INTEGER_CST))
>  	      {
> -		addr_wide_int woffset 
> -		  = wi::sext (addr_wide_int (index) - low_bound,
> +		addr_wide_int woffset
> +		  = wi::sext (wi::address (index) - wi::address (low_bound),
>  			      TYPE_PRECISION (TREE_TYPE (index)));
> -		woffset *= addr_wide_int (unit_size);
> +		woffset *= wi::address (unit_size);
>  		woffset = wi::lshift (woffset,
>  				      (BITS_PER_UNIT == 8
>  				       ? 3 : exact_log2 (BITS_PER_UNIT)));
> Index: gcc/tree-predcom.c
> ===================================================================
> --- gcc/tree-predcom.c	2013-10-22 10:18:04.473965862 +0100
> +++ gcc/tree-predcom.c	2013-10-22 10:18:20.392102226 +0100
> @@ -618,7 +618,7 @@ aff_combination_dr_offset (struct data_r
>  
>    tree_to_aff_combination_expand (DR_OFFSET (dr), type, offset,
>  				  &name_expansions);
> -  aff_combination_const (&delta, type, DR_INIT (dr));
> +  aff_combination_const (&delta, type, wi::extend (DR_INIT (dr)));
>    aff_combination_add (offset, &delta);
>  }
>  
> Index: gcc/tree-pretty-print.c
> ===================================================================
> --- gcc/tree-pretty-print.c	2013-10-22 10:18:04.474965870 +0100
> +++ gcc/tree-pretty-print.c	2013-10-22 10:18:20.392102226 +0100
> @@ -1509,7 +1509,7 @@ dump_generic_node (pretty_printer *buffe
>  	  {
>  	    tree minv = TYPE_MIN_VALUE (TYPE_DOMAIN (TREE_TYPE (node)));
>  	    is_array_init = true;
> -	    curidx = minv;
> +	    curidx = wi::extend (minv);
>  	  }
>  	FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (node), ix, field, val)
>  	  {
> @@ -1523,7 +1523,7 @@ dump_generic_node (pretty_printer *buffe
>  		  }
>  		else if (is_array_init
>  			 && (TREE_CODE (field) != INTEGER_CST
> -			     || curidx != field))
> +			     || curidx != wi::extend (field)))
>  		  {
>  		    pp_left_bracket (buffer);
>  		    if (TREE_CODE (field) == RANGE_EXPR)
> @@ -1534,12 +1534,12 @@ dump_generic_node (pretty_printer *buffe
>  			dump_generic_node (buffer, TREE_OPERAND (field, 1), spc,
>  					   flags, false);
>  			if (TREE_CODE (TREE_OPERAND (field, 1)) == INTEGER_CST)
> -			  curidx = TREE_OPERAND (field, 1);
> +			  curidx = wi::extend (TREE_OPERAND (field, 1));
>  		      }
>  		    else
>  		      dump_generic_node (buffer, field, spc, flags, false);
>  		    if (TREE_CODE (field) == INTEGER_CST)
> -		      curidx = field;
> +		      curidx = wi::extend (field);
>  		    pp_string (buffer, "]=");
>  		  }
>  	      }
> Index: gcc/tree-ssa-address.c
> ===================================================================
> --- gcc/tree-ssa-address.c	2013-10-22 10:18:04.474965870 +0100
> +++ gcc/tree-ssa-address.c	2013-10-22 10:18:20.392102226 +0100
> @@ -203,8 +203,7 @@ addr_for_mem_ref (struct mem_address *ad
>  
>    if (addr->offset && !integer_zerop (addr->offset))
>      {
> -      addr_wide_int dc = wi::sext (addr_wide_int (addr->offset),
> -				   TYPE_PRECISION (TREE_TYPE (addr->offset)));
> +      addr_wide_int dc = addr_wide_int::from (addr->offset, SIGNED);
>        off = immed_wide_int_const (dc, pointer_mode);
>      }
>    else
> Index: gcc/tree-ssa-ccp.c
> ===================================================================
> --- gcc/tree-ssa-ccp.c	2013-10-22 10:18:04.475965878 +0100
> +++ gcc/tree-ssa-ccp.c	2013-10-22 10:18:20.393102234 +0100
> @@ -202,7 +202,7 @@ dump_lattice_value (FILE *outf, const ch
>  	}
>        else
>  	{
> -	  wide_int cval = wi::bit_and_not (val.value, val.mask);
> +	  wide_int cval = wi::bit_and_not (wi::extend (val.value), val.mask);
>  	  fprintf (outf, "%sCONSTANT ", prefix);
>  	  print_hex (cval, outf);
>  	  fprintf (outf, " (");
> @@ -432,8 +432,8 @@ valid_lattice_transition (prop_value_t o
>    /* Bit-lattices have to agree in the still valid bits.  */
>    if (TREE_CODE (old_val.value) == INTEGER_CST
>        && TREE_CODE (new_val.value) == INTEGER_CST)
> -    return (wi::bit_and_not (old_val.value, new_val.mask)
> -	    == wi::bit_and_not (new_val.value, new_val.mask));
> +    return (wi::bit_and_not (wi::extend (old_val.value), new_val.mask)
> +	    == wi::bit_and_not (wi::extend (new_val.value), new_val.mask));
>  
>    /* Otherwise constant values have to agree.  */
>    return operand_equal_p (old_val.value, new_val.value, 0);
> @@ -458,7 +458,8 @@ set_lattice_value (tree var, prop_value_
>        && TREE_CODE (new_val.value) == INTEGER_CST
>        && TREE_CODE (old_val->value) == INTEGER_CST)
>      {
> -      max_wide_int diff = wi::bit_xor (new_val.value, old_val->value);
> +      max_wide_int diff = (wi::extend (new_val.value)
> +			   ^ wi::extend (old_val->value));
>        new_val.mask = new_val.mask | old_val->mask | diff;
>      }
>  
> @@ -505,7 +506,7 @@ value_to_wide_int (prop_value_t val)
>  {
>    if (val.value
>        && TREE_CODE (val.value) == INTEGER_CST)
> -    return val.value;
> +    return wi::extend (val.value);
>  
>    return 0;
>  }
> @@ -908,7 +909,7 @@ ccp_lattice_meet (prop_value_t *val1, pr
>           For INTEGER_CSTs mask unequal bits.  If no equal bits remain,
>  	 drop to varying.  */
>        val1->mask = (val1->mask | val2->mask
> -		    | (wi::bit_xor (val1->value, val2->value)));
> +		    | (wi::extend (val1->value) ^ wi::extend (val2->value)));
>        if (val1->mask == -1)
>  	{
>  	  val1->lattice_val = VARYING;
> Index: gcc/tree-ssa-loop-ivcanon.c
> ===================================================================
> --- gcc/tree-ssa-loop-ivcanon.c	2013-10-22 10:18:04.476965887 +0100
> +++ gcc/tree-ssa-loop-ivcanon.c	2013-10-22 10:18:20.393102234 +0100
> @@ -927,7 +927,7 @@ canonicalize_loop_induction_variables (s
>       by find_loop_niter_by_eval.  Be sure to keep it for future.  */
>    if (niter && TREE_CODE (niter) == INTEGER_CST)
>      {
> -      record_niter_bound (loop, niter,
> +      record_niter_bound (loop, wi::extend (niter),
>  			  exit == single_likely_exit (loop), true);
>      }
>  
> Index: gcc/tree-ssa-loop-ivopts.c
> ===================================================================
> --- gcc/tree-ssa-loop-ivopts.c	2013-10-22 10:18:04.478965904 +0100
> +++ gcc/tree-ssa-loop-ivopts.c	2013-10-22 10:18:20.394102243 +0100
> @@ -1590,7 +1590,7 @@ constant_multiple_of (tree top, tree bot
>        if (!constant_multiple_of (TREE_OPERAND (top, 0), bot, &res))
>  	return false;
>  
> -      *mul = wi::sext (res * mby, precision);
> +      *mul = wi::sext (res * wi::extend (mby), precision);
>        return true;
>  
>      case PLUS_EXPR:
> @@ -1608,8 +1608,8 @@ constant_multiple_of (tree top, tree bot
>        if (TREE_CODE (bot) != INTEGER_CST)
>  	return false;
>  
> -      p0 = wi::sext (top, precision);
> -      p1 = wi::sext (bot, precision);
> +      p0 = max_wide_int::from (top, SIGNED);
> +      p1 = max_wide_int::from (bot, SIGNED);
>        if (p1 == 0)
>  	return false;
>        *mul = wi::sext (wi::divmod_trunc (p0, p1, SIGNED, &res), precision);
> @@ -4632,7 +4632,7 @@ may_eliminate_iv (struct ivopts_data *da
>        max_niter = desc->max;
>        if (stmt_after_increment (loop, cand, use->stmt))
>          max_niter += 1;
> -      period_value = period;
> +      period_value = wi::extend (period);
>        if (wi::gtu_p (max_niter, period_value))
>          {
>            /* See if we can take advantage of inferred loop bound information.  */
> Index: gcc/tree-ssa-loop-niter.c
> ===================================================================
> --- gcc/tree-ssa-loop-niter.c	2013-10-22 10:18:04.480965921 +0100
> +++ gcc/tree-ssa-loop-niter.c	2013-10-22 10:18:20.394102243 +0100
> @@ -69,7 +69,6 @@ split_to_var_and_offset (tree expr, tree
>  {
>    tree type = TREE_TYPE (expr);
>    tree op0, op1;
> -  max_wide_int off;
>    bool negate = false;
>  
>    *var = expr;
> @@ -90,18 +89,15 @@ split_to_var_and_offset (tree expr, tree
>  	break;
>  
>        *var = op0;
> -      off = op1;
>        /* Always sign extend the offset.  */
> -      off = wi::sext (off, TYPE_PRECISION (type));
> -      wi::to_mpz (off, offset, SIGNED);
> +      wi::to_mpz (op1, offset, SIGNED);
>        if (negate)
>  	mpz_neg (offset, offset);
>        break;
>  
>      case INTEGER_CST:
>        *var = build_int_cst_type (type, 0);
> -      off = expr;
> -      wi::to_mpz (off, offset, TYPE_SIGN (type));
> +      wi::to_mpz (expr, offset, TYPE_SIGN (type));
>        break;
>  
>      default:
> @@ -810,7 +806,7 @@ number_of_iterations_lt_to_ne (tree type
>      niter->may_be_zero = fold_build2 (TRUTH_OR_EXPR, boolean_type_node,
>  				      niter->may_be_zero,
>  				      noloop);
> -  bounds_add (bnds, mod, type);
> +  bounds_add (bnds, wi::extend (mod), type);
>    *delta = fold_build2 (PLUS_EXPR, niter_type, *delta, mod);
>  
>    ret = true;
> @@ -926,10 +922,10 @@ assert_loop_rolls_lt (tree type, affine_
>    /* First check whether the answer does not follow from the bounds we gathered
>       before.  */
>    if (integer_nonzerop (iv0->step))
> -    dstep = iv0->step;
> +    dstep = wi::extend (iv0->step);
>    else
>      {
> -      dstep = wi::sext (iv1->step, TYPE_PRECISION (type));
> +      dstep = wi::sext (wi::extend (iv1->step), TYPE_PRECISION (type));
>        dstep = -dstep;
>      }
>  
> @@ -1913,7 +1909,7 @@ number_of_iterations_exit (struct loop *
>  
>    /* If NITER has simplified into a constant, update MAX.  */
>    if (TREE_CODE (niter->niter) == INTEGER_CST)
> -    niter->max = niter->niter;
> +    niter->max = wi::extend (niter->niter);
>  
>    if (integer_onep (niter->assumptions))
>      return true;
> @@ -2387,12 +2383,12 @@ derive_constant_upper_bound_ops (tree ty
>    else
>      maxt = upper_bound_in_type (type, type);
>  
> -  max = maxt;
> +  max = wi::extend (maxt);
>  
>    switch (code)
>      {
>      case INTEGER_CST:
> -      return op0;
> +      return wi::extend (op0);
>  
>      CASE_CONVERT:
>        subtype = TREE_TYPE (op0);
> @@ -2429,8 +2425,7 @@ derive_constant_upper_bound_ops (tree ty
>        /* Canonicalize to OP0 - CST.  Consider CST to be signed, in order to
>  	 choose the most logical way how to treat this constant regardless
>  	 of the signedness of the type.  */
> -      cst = op1;
> -      cst = wi::sext (cst, TYPE_PRECISION (type));
> +      cst = wi::sext (wi::extend (op1), TYPE_PRECISION (type));
>        if (code != MINUS_EXPR)
>  	cst = -cst;
>  
> @@ -2490,13 +2485,13 @@ derive_constant_upper_bound_ops (tree ty
>  	return max;
>  
>        bnd = derive_constant_upper_bound (op0);
> -      return wi::udiv_floor (bnd, op1);
> +      return wi::udiv_floor (bnd, wi::extend (op1));
>  
>      case BIT_AND_EXPR:
>        if (TREE_CODE (op1) != INTEGER_CST
>  	  || tree_int_cst_sign_bit (op1))
>  	return max;
> -      return op1;
> +      return wi::extend (op1);
>  
>      case SSA_NAME:
>        stmt = SSA_NAME_DEF_STMT (op0);
> @@ -2575,7 +2570,7 @@ record_estimate (struct loop *loop, tree
>    if (TREE_CODE (bound) != INTEGER_CST)
>      realistic = false;
>    else
> -    gcc_checking_assert (i_bound == bound);
> +    gcc_checking_assert (i_bound == wi::extend (bound));
>    if (!upper && !realistic)
>      return;
>  
> @@ -3363,7 +3358,7 @@ estimate_numbers_of_iterations_loop (str
>        && TREE_CODE (loop->nb_iterations) == INTEGER_CST)
>      {
>        loop->any_upper_bound = true;
> -      loop->nb_iterations_upper_bound = loop->nb_iterations;
> +      loop->nb_iterations_upper_bound = wi::extend (loop->nb_iterations);
>      }
>  }
>  
> Index: gcc/tree-ssa-pre.c
> ===================================================================
> --- gcc/tree-ssa-pre.c	2013-10-22 10:18:04.481965930 +0100
> +++ gcc/tree-ssa-pre.c	2013-10-22 10:18:20.395102252 +0100
> @@ -1581,9 +1581,9 @@ phi_translate_1 (pre_expr expr, bitmap_s
>  		&& TREE_CODE (op[1]) == INTEGER_CST
>  		&& TREE_CODE (op[2]) == INTEGER_CST)
>  	      {
> -		addr_wide_int off = op[0];
> -		off += -addr_wide_int (op[1]);
> -		off *= addr_wide_int (op[2]);
> +		addr_wide_int off = ((wi::address (op[0])
> +				      - wi::address (op[1]))
> +				     * wi::address (op[2]));
>  		if (wi::fits_shwi_p (off))
>  		  newop.off = off.to_shwi ();
>  	      }
> Index: gcc/tree-ssa-sccvn.c
> ===================================================================
> --- gcc/tree-ssa-sccvn.c	2013-10-22 10:18:04.483965947 +0100
> +++ gcc/tree-ssa-sccvn.c	2013-10-22 10:18:20.396102260 +0100
> @@ -801,8 +801,8 @@ copy_reference_ops_from_ref (tree ref, v
>  		if (tree_to_hwi (bit_offset) % BITS_PER_UNIT == 0)
>  		  {
>  		    addr_wide_int off
> -		      = (addr_wide_int (this_offset)
> -			 + wi::lrshift (addr_wide_int (bit_offset),
> +		      = (wi::address (this_offset)
> +			 + wi::lrshift (wi::address (bit_offset),
>  					BITS_PER_UNIT == 8
>  					? 3 : exact_log2 (BITS_PER_UNIT)));
>  		    if (wi::fits_shwi_p (off))
> @@ -822,9 +822,9 @@ copy_reference_ops_from_ref (tree ref, v
>  	      && TREE_CODE (temp.op1) == INTEGER_CST
>  	      && TREE_CODE (temp.op2) == INTEGER_CST)
>  	    {
> -	      addr_wide_int off = temp.op0;
> -	      off += -addr_wide_int (temp.op1);
> -	      off *= addr_wide_int (temp.op2);
> +	      addr_wide_int off = ((wi::address (temp.op0)
> +				    - wi::address (temp.op1))
> +				   * wi::address (temp.op2));
>  	      if (wi::fits_shwi_p (off))
>  		temp.off = off.to_shwi();
>  	    }
> @@ -1146,8 +1146,7 @@ vn_reference_fold_indirect (vec<vn_refer
>    gcc_checking_assert (addr_base && TREE_CODE (addr_base) != MEM_REF);
>    if (addr_base != TREE_OPERAND (op->op0, 0))
>      {
> -      addr_wide_int off = wi::sext (addr_wide_int (mem_op->op0),
> -				    TYPE_PRECISION (TREE_TYPE (mem_op->op0)));
> +      addr_wide_int off = addr_wide_int::from (mem_op->op0, SIGNED);
>        off += addr_offset;
>        mem_op->op0 = wide_int_to_tree (TREE_TYPE (mem_op->op0), off);
>        op->op0 = build_fold_addr_expr (addr_base);
> @@ -1180,8 +1179,7 @@ vn_reference_maybe_forwprop_address (vec
>        && code != POINTER_PLUS_EXPR)
>      return;
>  
> -  off = wi::sext (addr_wide_int (mem_op->op0),
> -		  TYPE_PRECISION (TREE_TYPE (mem_op->op0)));
> +  off = addr_wide_int::from (mem_op->op0, SIGNED);
>  
>    /* The only thing we have to do is from &OBJ.foo.bar add the offset
>       from .foo.bar to the preceding MEM_REF offset and replace the
> @@ -1211,7 +1209,7 @@ vn_reference_maybe_forwprop_address (vec
>  	  || TREE_CODE (ptroff) != INTEGER_CST)
>  	return;
>  
> -      off += ptroff;
> +      off += wi::address (ptroff);
>        op->op0 = ptr;
>      }
>  
> @@ -1369,9 +1367,9 @@ valueize_refs_1 (vec<vn_reference_op_s>
>  	       && TREE_CODE (vro->op1) == INTEGER_CST
>  	       && TREE_CODE (vro->op2) == INTEGER_CST)
>  	{
> -	  addr_wide_int off = vro->op0;
> -	  off += -addr_wide_int (vro->op1);
> -	  off *= addr_wide_int (vro->op2);
> +	  addr_wide_int off = ((wi::address (vro->op0)
> +				- wi::address (vro->op1))
> +			       * wi::address (vro->op2));
>  	  if (wi::fits_shwi_p (off))
>  	    vro->off = off.to_shwi ();
>  	}
> Index: gcc/tree-ssa-structalias.c
> ===================================================================
> --- gcc/tree-ssa-structalias.c	2013-10-22 10:18:04.485965964 +0100
> +++ gcc/tree-ssa-structalias.c	2013-10-22 10:18:20.397102269 +0100
> @@ -3012,8 +3012,7 @@ get_constraint_for_ptr_offset (tree ptr,
>    else
>      {
>        /* Sign-extend the offset.  */
> -      addr_wide_int soffset = wi::sext (addr_wide_int (offset),
> -					TYPE_PRECISION (TREE_TYPE (offset)));
> +      addr_wide_int soffset = addr_wide_int::from (offset, SIGNED);
>        if (!wi::fits_shwi_p (soffset))
>  	rhsoffset = UNKNOWN_OFFSET;
>        else
> Index: gcc/tree-vrp.c
> ===================================================================
> --- gcc/tree-vrp.c	2013-10-22 10:18:04.488965990 +0100
> +++ gcc/tree-vrp.c	2013-10-22 10:30:40.213441286 +0100
> @@ -3849,7 +3849,7 @@ adjust_range_with_scev (value_range_t *v
>  	  signop sgn = TYPE_SIGN (TREE_TYPE (step));
>  	  bool overflow;
>  	  
> -	  wtmp = wi::mul (step, nit, sgn, &overflow);
> +	  wtmp = wi::mul (wi::extend (step), nit, sgn, &overflow);
>  	  /* If the multiplication overflowed we can't do a meaningful
>  	     adjustment.  Likewise if the result doesn't fit in the type
>  	     of the induction variable.  For a signed type we have to
> @@ -6292,7 +6292,7 @@ search_for_addr_array (tree t, location_
>  	return;
>  
>        idx = mem_ref_offset (t);
> -      idx = wi::sdiv_trunc (idx, el_sz);
> +      idx = wi::sdiv_trunc (idx, wi::address (el_sz));
>        if (wi::lts_p (idx, 0))
>  	{
>  	  if (dump_file && (dump_flags & TDF_DETAILS))
> @@ -6305,7 +6305,8 @@ search_for_addr_array (tree t, location_
>  		      "array subscript is below array bounds");
>  	  TREE_NO_WARNING (t) = 1;
>  	}
> -      else if (wi::gts_p (idx, addr_wide_int (up_bound) - low_bound + 1))
> +      else if (wi::gts_p (idx, (wi::address (up_bound)
> +				- wi::address (low_bound) + 1)))
>  	{
>  	  if (dump_file && (dump_flags & TDF_DETAILS))
>  	    {
> @@ -8731,11 +8732,11 @@ range_fits_type_p (value_range_t *vr, un
>  
>    /* Then we can perform the conversion on both ends and compare
>       the result for equality.  */
> -  tem = wi::ext (vr->min, dest_precision, dest_sgn);
> -  if (tem != vr->min)
> +  tem = wi::ext (wi::extend (vr->min), dest_precision, dest_sgn);
> +  if (tem != wi::extend (vr->min))
>      return false;
> -  tem = wi::ext (vr->max, dest_precision, dest_sgn);
> -  if (tem != vr->max)
> +  tem = wi::ext (wi::extend (vr->max), dest_precision, dest_sgn);
> +  if (tem != wi::extend (vr->max))
>      return false;
>  
>    return true;
> @@ -9021,8 +9022,8 @@ simplify_conversion_using_ranges (gimple
>  
>    /* Simulate the conversion chain to check if the result is equal if
>       the middle conversion is removed.  */
> -  innermin = innervr->min;
> -  innermax = innervr->max;
> +  innermin = wi::extend (innervr->min);
> +  innermax = wi::extend (innervr->max);
>  
>    inner_prec = TYPE_PRECISION (TREE_TYPE (innerop));
>    middle_prec = TYPE_PRECISION (TREE_TYPE (middleop));
> @@ -9482,7 +9483,8 @@ vrp_finalize (void)
>  	    && (TREE_CODE (vr_value[i]->max) == INTEGER_CST))
>  	  {
>  	    if (vr_value[i]->type == VR_RANGE)
> -	      set_range_info (name, vr_value[i]->min, vr_value[i]->max);
> +	      set_range_info (name, wi::extend (vr_value[i]->min),
> +			      wi::extend (vr_value[i]->max));
>  	    else if (vr_value[i]->type == VR_ANTI_RANGE)
>  	      {
>  		/* VR_ANTI_RANGE ~[min, max] is encoded compactly as
> @@ -9496,16 +9498,14 @@ vrp_finalize (void)
>  		    && integer_zerop (vr_value[i]->min)
>  		    && integer_zerop (vr_value[i]->max))
>  		  {
> -		    max_wide_int tmmwi
> -		      = max_wide_int::from (wi::max_value (TYPE_PRECISION (TREE_TYPE (name)),
> -							   UNSIGNED),
> -					    UNSIGNED);
> -		    set_range_info (name, 1, tmmwi);
> +		    unsigned prec = TYPE_PRECISION (TREE_TYPE (name));
> +		    set_range_info (name, 1,
> +				    wi::mask <max_wide_int> (prec, false));
>  		  }
>  		else
>  		  set_range_info (name,
> -				  max_wide_int (vr_value[i]->max) + 1,
> -				  max_wide_int (vr_value[i]->min) - 1);
> +				  wi::extend (vr_value[i]->max) + 1,
> +				  wi::extend (vr_value[i]->min) - 1);
>  	      }
>  	  }
>        }
> Index: gcc/tree.c
> ===================================================================
> --- gcc/tree.c	2013-10-22 10:18:04.492966024 +0100
> +++ gcc/tree.c	2013-10-22 10:18:20.400102295 +0100
> @@ -4317,8 +4317,7 @@ build_simple_mem_ref_loc (location_t loc
>  addr_wide_int
>  mem_ref_offset (const_tree t)
>  {
> -  tree toff = TREE_OPERAND (t, 1);
> -  return wi::sext (addr_wide_int (toff), TYPE_PRECISION (TREE_TYPE (toff)));
> +  return addr_wide_int::from (TREE_OPERAND (t, 1), SIGNED);
>  }
>  
>  /* Return an invariant ADDR_EXPR of type TYPE taking the address of BASE
> @@ -6891,26 +6890,17 @@ type_num_arguments (const_tree type)
>  int
>  tree_int_cst_equal (const_tree t1, const_tree t2)
>  {
> -  unsigned int prec1, prec2;
>    if (t1 == t2)
>      return 1;
>  
>    if (t1 == 0 || t2 == 0)
>      return 0;
>  
> -  if (TREE_CODE (t1) != INTEGER_CST
> -      || TREE_CODE (t2) != INTEGER_CST)
> -    return 0;
> -
> -  prec1 = TYPE_PRECISION (TREE_TYPE (t1));
> -  prec2 = TYPE_PRECISION (TREE_TYPE (t2));
> +  if (TREE_CODE (t1) == INTEGER_CST
> +      && TREE_CODE (t2) == INTEGER_CST
> +      && wi::extend (t1) == wi::extend (t2))
> +    return 1;
>  
> -  if (prec1 == prec2)
> -    return wi::eq_p (t1, t2);
> -  else if (prec1 < prec2)
> -    return wide_int::from (t1, prec2, TYPE_SIGN (TREE_TYPE (t1))) == t2;
> -  else
> -    return wide_int::from (t2, prec1, TYPE_SIGN (TREE_TYPE (t2))) == t1;
>    return 0;
>  }
>  
> @@ -7080,7 +7070,7 @@ simple_cst_equal (const_tree t1, const_t
>    switch (code1)
>      {
>      case INTEGER_CST:
> -      return wi::eq_p (t1, t2);
> +      return wi::extend (t1) == wi::extend (t2);
>  
>      case REAL_CST:
>        return REAL_VALUES_IDENTICAL (TREE_REAL_CST (t1), TREE_REAL_CST (t2));
> Index: gcc/tree.h
> ===================================================================
> --- gcc/tree.h	2013-10-22 10:18:04.494966041 +0100
> +++ gcc/tree.h	2013-10-22 10:18:20.400102295 +0100
> @@ -5239,7 +5239,7 @@ #define ANON_AGGRNAME_FORMAT "__anon_%d"
>    template <>
>    struct int_traits <const_tree>
>    {
> -    static const enum precision_type precision_type = FLEXIBLE_PRECISION;
> +    static const enum precision_type precision_type = VAR_PRECISION;
>      static const bool host_dependent_precision = false;
>      static unsigned int get_precision (const_tree);
>      static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
> @@ -5248,6 +5248,34 @@ #define ANON_AGGRNAME_FORMAT "__anon_%d"
>  
>    template <>
>    struct int_traits <tree> : public int_traits <const_tree> {};
> +
> +  template <int N>
> +  class extended_tree
> +  {
> +  private:
> +    const_tree m_t;
> +
> +  public:
> +    extended_tree (const_tree);
> +
> +    unsigned int get_precision () const;
> +    const HOST_WIDE_INT *get_val () const;
> +    unsigned int get_len () const;
> +  };
> +
> +  template <>
> +  template <int N>
> +  struct int_traits <extended_tree <N> >
> +  {
> +    static const enum precision_type precision_type = CONST_PRECISION;
> +    static const bool host_dependent_precision = false;
> +    static const unsigned int precision = N;
> +  };
> +
> +  generic_wide_int <extended_tree <MAX_BITSIZE_MODE_ANY_INT> >
> +  extend (const_tree);
> +
> +  generic_wide_int <extended_tree <ADDR_MAX_PRECISION> > address (const_tree);
>  }
>  
>  inline unsigned int
> @@ -5265,9 +5293,8 @@ wi::int_traits <const_tree>::decompose (
>    const HOST_WIDE_INT *val = (const HOST_WIDE_INT *) &TREE_INT_CST_ELT (x, 0);
>    unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
>  			  / HOST_BITS_PER_WIDE_INT);
> -  unsigned int xprecision = get_precision (x);
>  
> -  gcc_assert (precision >= xprecision);
> +  gcc_checking_assert (precision == get_precision (x));
>  
>    /* If an unsigned constant occupies a whole number of HWIs and has the
>       upper bit set, its representation includes an extra zero HWI,
> @@ -5282,6 +5309,46 @@ wi::int_traits <const_tree>::decompose (
>    return wi::storage_ref (val, len, precision);
>  }
>  
> +inline generic_wide_int <wi::extended_tree <MAX_BITSIZE_MODE_ANY_INT> >
> +wi::extend (const_tree t)
> +{
> +  return t;
> +}
> +
> +inline generic_wide_int <wi::extended_tree <ADDR_MAX_PRECISION> >
> +wi::address (const_tree t)
> +{
> +  return t;
> +}
> +
> +template <int N>
> +inline wi::extended_tree <N>::extended_tree (const_tree t)
> +  : m_t (t)
> +{
> +  gcc_checking_assert (TYPE_PRECISION (TREE_TYPE (t)) <= N);
> +}
> +
> +template <int N>
> +inline unsigned int
> +wi::extended_tree <N>::get_precision () const
> +{
> +  return N;
> +}
> +
> +template <int N>
> +inline const HOST_WIDE_INT *
> +wi::extended_tree <N>::get_val () const
> +{
> +  return &TREE_INT_CST_ELT (m_t, 0);
> +}
> +
> +template <int N>
> +inline unsigned int
> +wi::extended_tree <N>::get_len () const
> +{
> +  return TREE_INT_CST_NUNITS (m_t);
> +}
> +
>  namespace wi
>  {
>    template <typename T>
> Index: gcc/varasm.c
> ===================================================================
> --- gcc/varasm.c	2013-10-22 10:18:04.495966050 +0100
> +++ gcc/varasm.c	2013-10-22 10:18:20.401102303 +0100
> @@ -4812,10 +4812,10 @@ array_size_for_constructor (tree val)
>  
>    /* Compute the total number of array elements.  */
>    tmp = TYPE_MIN_VALUE (TYPE_DOMAIN (TREE_TYPE (val)));
> -  i = addr_wide_int (max_index) - tmp + 1;
> +  i = wi::address (max_index) - wi::address (tmp) + 1;
>  
>    /* Multiply by the array element unit size to find number of bytes.  */
> -  i *= addr_wide_int (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (val))));
> +  i *= wi::address (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (val))));
>  
>    gcc_assert (wi::fits_uhwi_p (i));
>    return i.to_uhwi ();
> @@ -4899,8 +4899,10 @@ output_constructor_regular_field (oc_loc
>  	 but we are using an unsigned sizetype.  */
>        unsigned prec = TYPE_PRECISION (sizetype);
>        addr_wide_int idx 
> -	= wi::sext (addr_wide_int (local->index) - local->min_index, prec);
> -      fieldpos = (idx * TYPE_SIZE_UNIT (TREE_TYPE (local->val))).to_shwi ();
> +	= wi::sext (wi::address (local->index)
> +		    - wi::address (local->min_index), prec);
> +      fieldpos = (idx * wi::address (TYPE_SIZE_UNIT (TREE_TYPE (local->val))))
> +	.to_shwi ();
>      }
>    else if (local->field != NULL_TREE)
>      fieldpos = int_byte_position (local->field);
> Index: gcc/wide-int.h
> ===================================================================
> --- gcc/wide-int.h	2013-10-22 10:17:29.995670564 +0100
> +++ gcc/wide-int.h	2013-10-22 10:18:20.402102312 +0100
> @@ -251,6 +251,46 @@ #define ADDR_MAX_BITSIZE 64
>  #define ADDR_MAX_PRECISION \
>    ((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) & ~(HOST_BITS_PER_WIDE_INT - 1))
>  
> +/* The type of result produced by a binary operation on types T1 and T2.
> +   Defined purely for brevity.  */
> +#define WI_BINARY_RESULT(T1, T2) \
> +  typename wi::binary_traits <T1, T2>::result_type
> +
> +/* The type of result produced by a unary operation on type T.  */
> +#define WI_UNARY_RESULT(T) \
> +  typename wi::unary_traits <T>::result_type
> +
> +/* Define a variable RESULT to hold the result of a binary operation on
> +   X and Y, which have types T1 and T2 respectively.  Define VAR to
> +   point to the blocks of RESULT.  Once the user of the macro has
> +   filled in VAR, it should call RESULT.set_len to set the number
> +   of initialized blocks.  */
> +#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
> +  WI_BINARY_RESULT (T1, T2) RESULT = \
> +    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
> +  HOST_WIDE_INT *VAL = RESULT.write_val ()
> +
> +/* Similar for the result of a unary operation on X, which has type T.  */
> +#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
> +  WI_UNARY_RESULT (T) RESULT = \
> +    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
> +  HOST_WIDE_INT *VAL = RESULT.write_val ()
> +
> +template <typename T> struct generic_wide_int;
> +template <int N> struct fixed_wide_int_storage;
> +struct wide_int_storage;
> +
> +/* An N-bit integer.  Until we can use typedef templates, use this instead.  */
> +#define FIXED_WIDE_INT(N) \
> +  generic_wide_int < fixed_wide_int_storage <N> >
> +
> +typedef generic_wide_int <wide_int_storage> wide_int;
> +typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) addr_wide_int;
> +typedef FIXED_WIDE_INT (MAX_BITSIZE_MODE_ANY_INT) max_wide_int;
> +
> +struct wide_int_ref_storage;
> +typedef generic_wide_int <wide_int_ref_storage> wide_int_ref;
> +
>  namespace wi
>  {
>    /* Classifies an integer based on its precision.  */
> @@ -303,40 +343,70 @@ #define ADDR_MAX_PRECISION \
>       a binary operation on two values of type T.  */
>    template <typename T>
>    struct unary_traits : public binary_traits <T, T> {};
> -}
>  
> -/* The type of result produced by a binary operation on types T1 and T2.
> -   Defined purely for brevity.  */
> -#define WI_BINARY_RESULT(T1, T2) \
> -  typename wi::binary_traits <T1, T2>::result_type
> +  /* Specify the result type for each supported combination of binary
> +     inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
> +     mixed, in order to give stronger type checking.  When both inputs
> +     are CONST_PRECISION, they must have the same precision.  */
> +  template <>
> +  template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
> +  {
> +    typedef max_wide_int result_type;
> +  };
>  
> -/* The type of result produced by a unary operation on type T.  */
> -#define WI_UNARY_RESULT(T) \
> -  typename wi::unary_traits <T>::result_type
> +  template <>
> +  template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
> +  {
> +    typedef wide_int result_type;
> +  };
>  
> -/* Define a variable RESULT to hold the result of a binary operation on
> -   X and Y, which have types T1 and T2 respectively.  Define VAR to
> -   point to the blocks of RESULT.  Once the user of the macro has
> -   filled in VAR, it should call RESULT.set_len to set the number
> -   of initialized blocks.  */
> -#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
> -  WI_BINARY_RESULT (T1, T2) RESULT = \
> -    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
> -  HOST_WIDE_INT *VAL = RESULT.write_val ()
> +  template <>
> +  template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
> +  {
> +    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
> +       so as not to confuse gengtype.  */
> +    typedef generic_wide_int < fixed_wide_int_storage
> +			       <int_traits <T2>::precision> > result_type;
> +  };
>  
> -/* Similar for the result of a unary operation on X, which has type T.  */
> -#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
> -  WI_UNARY_RESULT (T) RESULT = \
> -    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
> -  HOST_WIDE_INT *VAL = RESULT.write_val ()
> +  template <>
> +  template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
> +  {
> +    typedef wide_int result_type;
> +  };
>  
> -template <typename T> struct generic_wide_int;
> +  template <>
> +  template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
> +  {
> +    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
> +       so as not to confuse gengtype.  */
> +    typedef generic_wide_int < fixed_wide_int_storage
> +			       <int_traits <T1>::precision> > result_type;
> +  };
>  
> -struct wide_int_storage;
> -typedef generic_wide_int <wide_int_storage> wide_int;
> +  template <>
> +  template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
> +  {
> +    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
> +       so as not to confuse gengtype.  */
> +    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
> +    typedef generic_wide_int < fixed_wide_int_storage
> +			       <int_traits <T1>::precision> > result_type;
> +  };
>  
> -struct wide_int_ref_storage;
> -typedef generic_wide_int <wide_int_ref_storage> wide_int_ref;
> +  template <>
> +  template <typename T1, typename T2>
> +  struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
> +  {
> +    typedef wide_int result_type;
> +  };
> +}
>  
>  /* Public functions for querying and operating on integers.  */
>  namespace wi
> @@ -572,38 +642,39 @@ #define BINARY_PREDICATE(OP, F) \
>    bool OP (const T &c) const { return wi::F (*this, c); }
>  
>  #define UNARY_OPERATOR(OP, F) \
> -  generic_wide_int OP () const { return wi::F (*this); }
> +  WI_UNARY_RESULT (generic_wide_int) OP () const { return wi::F (*this); }
>  
>  #define BINARY_OPERATOR(OP, F) \
>    template <typename T> \
> -  generic_wide_int OP (const T &c) const { return wi::F (*this, c); }
> +    WI_BINARY_RESULT (generic_wide_int, T) \
> +    OP (const T &c) const { return wi::F (*this, c); }
>  
>  #define ASSIGNMENT_OPERATOR(OP, F) \
>    template <typename T> \
> -  generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }
> +    generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }
>  
>  #define INCDEC_OPERATOR(OP, DELTA) \
>    generic_wide_int &OP () { *this += DELTA; return *this; }
>  
> -  UNARY_OPERATOR (operator ~, bit_not) \
> -  UNARY_OPERATOR (operator -, neg) \
> -  BINARY_PREDICATE (operator ==, eq_p) \
> -  BINARY_PREDICATE (operator !=, ne_p) \
> -  BINARY_OPERATOR (operator &, bit_and) \
> -  BINARY_OPERATOR (and_not, bit_and_not) \
> -  BINARY_OPERATOR (operator |, bit_or) \
> -  BINARY_OPERATOR (or_not, bit_or_not) \
> -  BINARY_OPERATOR (operator ^, bit_xor) \
> -  BINARY_OPERATOR (operator +, add) \
> -  BINARY_OPERATOR (operator -, sub) \
> -  BINARY_OPERATOR (operator *, mul) \
> -  ASSIGNMENT_OPERATOR (operator &=, bit_and) \
> -  ASSIGNMENT_OPERATOR (operator |=, bit_or) \
> -  ASSIGNMENT_OPERATOR (operator ^=, bit_xor) \
> -  ASSIGNMENT_OPERATOR (operator +=, add) \
> -  ASSIGNMENT_OPERATOR (operator -=, sub) \
> -  ASSIGNMENT_OPERATOR (operator *=, mul) \
> -  INCDEC_OPERATOR (operator ++, 1) \
> +  UNARY_OPERATOR (operator ~, bit_not)
> +  UNARY_OPERATOR (operator -, neg)
> +  BINARY_PREDICATE (operator ==, eq_p)
> +  BINARY_PREDICATE (operator !=, ne_p)
> +  BINARY_OPERATOR (operator &, bit_and)
> +  BINARY_OPERATOR (and_not, bit_and_not)
> +  BINARY_OPERATOR (operator |, bit_or)
> +  BINARY_OPERATOR (or_not, bit_or_not)
> +  BINARY_OPERATOR (operator ^, bit_xor)
> +  BINARY_OPERATOR (operator +, add)
> +  BINARY_OPERATOR (operator -, sub)
> +  BINARY_OPERATOR (operator *, mul)
> +  ASSIGNMENT_OPERATOR (operator &=, bit_and)
> +  ASSIGNMENT_OPERATOR (operator |=, bit_or)
> +  ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
> +  ASSIGNMENT_OPERATOR (operator +=, add)
> +  ASSIGNMENT_OPERATOR (operator -=, sub)
> +  ASSIGNMENT_OPERATOR (operator *=, mul)
> +  INCDEC_OPERATOR (operator ++, 1)
>    INCDEC_OPERATOR (operator --, -1)
>  
>  #undef BINARY_PREDICATE
> @@ -848,6 +919,19 @@ class GTY(()) wide_int_storage
>    wide_int bswap () const;
>  };
>  
> +namespace wi
> +{
> +  template <>
> +  struct int_traits <wide_int_storage>
> +  {
> +    static const enum precision_type precision_type = VAR_PRECISION;
> +    /* Guaranteed by a static assert in the wide_int_storage constructor.  */
> +    static const bool host_dependent_precision = false;
> +    template <typename T1, typename T2>
> +    static wide_int get_binary_result (const T1 &, const T2 &);
> +  };
> +}
> +
>  inline wide_int_storage::wide_int_storage () {}
>  
>  /* Initialize the storage from integer X, in its natural precision.
> @@ -933,19 +1017,6 @@ wide_int_storage::create (unsigned int p
>    return x;
>  }
>  
> -namespace wi
> -{
> -  template <>
> -  struct int_traits <wide_int_storage>
> -  {
> -    static const enum precision_type precision_type = VAR_PRECISION;
> -    /* Guaranteed by a static assert in the wide_int_storage constructor.  */
> -    static const bool host_dependent_precision = false;
> -    template <typename T1, typename T2>
> -    static wide_int get_binary_result (const T1 &, const T2 &);
> -  };
> -}
> -
>  template <typename T1, typename T2>
>  inline wide_int
>  wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
> @@ -959,10 +1030,6 @@ wi::int_traits <wide_int_storage>::get_b
>      return wide_int::create (wi::get_precision (x));
>  }
>  
> -/* An N-bit integer.  Until we can use typedef templates, use this instead.  */
> -#define FIXED_WIDE_INT(N) \
> -  generic_wide_int < fixed_wide_int_storage <N> >
> -
>  /* The storage used by FIXED_WIDE_INT (N).  */
>  template <int N>
>  class GTY(()) fixed_wide_int_storage
> @@ -988,8 +1055,19 @@ class GTY(()) fixed_wide_int_storage
>  					bool = true);
>  };
>  
> -typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) addr_wide_int;
> -typedef FIXED_WIDE_INT (MAX_BITSIZE_MODE_ANY_INT) max_wide_int;
> +namespace wi
> +{
> +  template <>
> +  template <int N>
> +  struct int_traits < fixed_wide_int_storage <N> >
> +  {
> +    static const enum precision_type precision_type = CONST_PRECISION;
> +    static const bool host_dependent_precision = false;
> +    static const unsigned int precision = N;
> +    template <typename T1, typename T2>
> +    static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
> +  };
> +}
>  
>  template <int N>
>  inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
> @@ -1071,20 +1149,6 @@ fixed_wide_int_storage <N>::from_array (
>    return result;
>  }
>  
> -namespace wi
> -{
> -  template <>
> -  template <int N>
> -  struct int_traits < fixed_wide_int_storage <N> >
> -  {
> -    static const enum precision_type precision_type = CONST_PRECISION;
> -    static const bool host_dependent_precision = false;
> -    static const unsigned int precision = N;
> -    template <typename T1, typename T2>
> -    static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
> -  };
> -}
> -
>  template <int N>
>  template <typename T1, typename T2>
>  inline FIXED_WIDE_INT (N)
> @@ -1094,72 +1158,6 @@ get_binary_result (const T1 &, const T2
>    return FIXED_WIDE_INT (N) ();
>  }
>  
> -/* Specify the result type for each supported combination of binary
> -   inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
> -   mixed, in order to give stronger type checking.  When both inputs
> -   are CONST_PRECISION, they must have the same precision.  */
> -namespace wi
> -{
> -  template <>
> -  template <typename T1, typename T2>
> -  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
> -  {
> -    typedef max_wide_int result_type;
> -  };
> -
> -  template <>
> -  template <typename T1, typename T2>
> -  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
> -  {
> -    typedef wide_int result_type;
> -  };
> -
> -  template <>
> -  template <typename T1, typename T2>
> -  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
> -  {
> -    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
> -       so as not to confuse gengtype.  */
> -    typedef generic_wide_int < fixed_wide_int_storage
> -			       <int_traits <T2>::precision> > result_type;
> -  };
> -
> -  template <>
> -  template <typename T1, typename T2>
> -  struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
> -  {
> -    typedef wide_int result_type;
> -  };
> -
> -  template <>
> -  template <typename T1, typename T2>
> -  struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
> -  {
> -    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
> -       so as not to confuse gengtype.  */
> -    typedef generic_wide_int < fixed_wide_int_storage
> -			       <int_traits <T1>::precision> > result_type;
> -  };
> -
> -  template <>
> -  template <typename T1, typename T2>
> -  struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
> -  {
> -    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
> -       so as not to confuse gengtype.  */
> -    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
> -    typedef generic_wide_int < fixed_wide_int_storage
> -			       <int_traits <T1>::precision> > result_type;
> -  };
> -
> -  template <>
> -  template <typename T1, typename T2>
> -  struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
> -  {
> -    typedef wide_int result_type;
> -  };
> -}
> -
>  namespace wi
>  {
>    /* Implementation of int_traits for primitive integer types like "int".  */
> @@ -1288,9 +1286,7 @@ wi::two (unsigned int precision)
>    template <>
>    struct int_traits <wi::hwi_with_prec>
>    {
> -    /* Since we have a sign, we can extend or truncate the integer to
> -       other precisions where necessary.  */
> -    static const enum precision_type precision_type = FLEXIBLE_PRECISION;
> +    static const enum precision_type precision_type = VAR_PRECISION;
>      /* hwi_with_prec has an explicitly-given precision, rather than the
>         precision of HOST_WIDE_INT.  */
>      static const bool host_dependent_precision = false;
> 
>
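The binary_traits specializations being shuffled around in the hunks above boil down to a small amount of template dispatch. Below is a self-contained sketch of that idea — the type and trait names mirror the patch, but everything here is a hypothetical stand-in, not the real wide-int headers. It shows how the precision categories of the two operands select the result type, and why mixing CONST_PRECISION with VAR_PRECISION fails to compile (no specialization exists for that pair):

```cpp
#include <cassert>

// Hypothetical stand-ins for the wide-int precision categories.
enum precision_type { FLEXIBLE_PRECISION, VAR_PRECISION, CONST_PRECISION };

struct wide_int {};                         // precision known at run time
template <int N> struct fixed_wide_int {};  // precision fixed at compile time

// Each operand type advertises its precision category (and, for the
// fixed case, its precision) through int_traits.
template <typename T> struct int_traits;
template <> struct int_traits<int>
{ static const precision_type ptype = FLEXIBLE_PRECISION; };
template <> struct int_traits<wide_int>
{ static const precision_type ptype = VAR_PRECISION; };
template <int N> struct int_traits< fixed_wide_int<N> >
{ static const precision_type ptype = CONST_PRECISION;
  static const unsigned precision = N; };

// binary_traits maps the two categories to a result type.  Leaving the
// CONST+VAR combination without a specialization is what makes mixing
// them a compile-time error, as the patch intends.
template <typename T1, typename T2,
          precision_type P1 = int_traits<T1>::ptype,
          precision_type P2 = int_traits<T2>::ptype>
struct binary_traits;

template <typename T1, typename T2>
struct binary_traits<T1, T2, VAR_PRECISION, VAR_PRECISION>
{ typedef wide_int result_type; };

template <typename T1, typename T2>
struct binary_traits<T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
{ typedef wide_int result_type; };   // wide_int + 1 keeps wide_int precision

template <typename T1, typename T2>
struct binary_traits<T1, T2, CONST_PRECISION, CONST_PRECISION>
{ typedef fixed_wide_int<int_traits<T1>::precision> result_type; };

// Tiny helper so the mapping can be checked with plain asserts.
template <typename A, typename B> struct same_type
{ static const bool value = false; };
template <typename A> struct same_type<A, A>
{ static const bool value = true; };
```

In this model, asking for `binary_traits<fixed_wide_int<64>, wide_int>::result_type` has no matching specialization and is rejected at compile time, which mirrors the "addr_wide_int + rtx is disallowed" rule quoted above.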
Richard Sandiford - Oct. 23, 2013, noon
Richard Biener <rguenther@suse.de> writes:
>> The patch does that by adding:
>> 
>>   wi::address (t)
>> 
>> for when we want to extend tree t to addr_wide_int precision and:
>> 
>>   wi::extend (t)
>> 
>> for when we want to extend it to max_wide_int precision.  (Better names
>> welcome.)  These act just like addr_wide_int (t) and max_wide_int (t)
>> would on current sources, except that they use the tree representation
>> directly, so there's no copying.
>
> Good.  Better names - ah well, wi::to_max_wide_int (t) and
> wi::to_addr_wide_int (t)?  Btw, "addr_wide_int" is an odd name as it
> has at least the precision of the maximum _bit_ offset possible, right?
> So more like [bit_]offset_wide_int?  Or just [bit_]offset_int?
> And then wi::to_offset (t) and wi::to_max (t)?

offset_int, max_int, wi::to_offset and wi::to_max sound OK to me.
Kenny?  Mike?

>> Most of the patch is mechanical and many of the "wi::address (...)"s
>> and "wi::extend (...)"s reinstate "addr_wide_int (...)"s and
>> "max_wide_int (...)"s from the initial implementation.  Sorry for the
>> run-around on this.
>> 
>> One change I'd like to point out though is:
>> 
>> @@ -7287,7 +7287,9 @@ native_encode_int (const_tree expr, unsi
>>    for (byte = 0; byte < total_bytes; byte++)
>>      {
>>        int bitpos = byte * BITS_PER_UNIT;
>> -      value = wi::extract_uhwi (expr, bitpos, BITS_PER_UNIT);
>> +      /* Extend EXPR according to TYPE_SIGN if the precision isn't a whole
>> +	 number of bytes.  */
>> +      value = wi::extract_uhwi (wi::extend (expr), bitpos, BITS_PER_UNIT);
>>  
>>        if (total_bytes > UNITS_PER_WORD)
>>  	{
>> 
>> I think this preserves the existing trunk behaviour but I wasn't sure
>> whether it was supposed to work like that or whether upper bits should
>> be zero.
>
> I think the upper bits are undefined, the trunk native_interpret_int
> does
>
>   result = double_int::from_buffer (ptr, total_bytes);
>
>   return double_int_to_tree (type, result);
>
> where the call to double_int_to_tree re-extends according to the types
> precision and sign.  wide_int_to_tree doesn't though?

This is native_encode_int rather than native_interpret_int though.
AIUI it's used for VIEW_CONVERT_EXPRs, so I thought the upper bits
might get used.

Thanks,
Richard
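The native_encode_int behaviour under discussion can be sketched with plain integers. Here uint64_t stands in for wide_int, and extend_to_64 / extract_byte are illustrative names rather than the real wi:: routines; the point is that extending according to TYPE_SIGN before peeling off bytes leaves the upper bytes filled with copies of the sign bit:

```cpp
#include <cassert>
#include <cstdint>

// Sign- or zero-extend a PRECISION-bit value into the full container,
// the way wi::extend would widen a tree constant before extraction.
static uint64_t extend_to_64 (uint64_t val, unsigned precision, bool sgn)
{
  if (precision >= 64)
    return val;
  uint64_t mask = (uint64_t (1) << precision) - 1;
  val &= mask;
  if (sgn && (val >> (precision - 1)) & 1)
    val |= ~mask;               // upper bits copy the sign bit
  return val;
}

// Peel off the byte at BITPOS, as the wi::extract_uhwi call does.
static unsigned extract_byte (uint64_t val, unsigned bitpos)
{
  return (unsigned) ((val >> bitpos) & 0xff);
}
```

For a 12-bit signed all-ones value the byte at bit 8 comes back as 0xff (sign-filled), whereas the unsigned reading gives 0x0f — exactly the distinction the comment added in the hunk is pointing at.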
Richard Guenther - Oct. 23, 2013, 12:13 p.m.
On Wed, 23 Oct 2013, Richard Sandiford wrote:

> Richard Biener <rguenther@suse.de> writes:
> >> The patch does that by adding:
> >> 
> >>   wi::address (t)
> >> 
> >> for when we want to extend tree t to addr_wide_int precision and:
> >> 
> >>   wi::extend (t)
> >> 
> >> for when we want to extend it to max_wide_int precision.  (Better names
> >> welcome.)  These act just like addr_wide_int (t) and max_wide_int (t)
> >> would on current sources, except that they use the tree representation
> >> directly, so there's no copying.
> >
> > Good.  Better names - ah well, wi::to_max_wide_int (t) and
> > wi::to_addr_wide_int (t)?  Btw, "addr_wide_int" is an odd name as it
> > has at least the precision of the maximum _bit_ offset possible, right?
> > So more like [bit_]offset_wide_int?  Or just [bit_]offset_int?
> > And then wi::to_offset (t) and wi::to_max (t)?
> 
> offset_int, max_int, wi::to_offset and wi::to_max sound OK to me.
> Kenny?  Mike?
> 
> >> Most of the patch is mechanical and many of the "wi::address (...)"s
> >> and "wi::extend (...)"s reinstate "addr_wide_int (...)"s and
> >> "max_wide_int (...)"s from the initial implementation.  Sorry for the
> >> run-around on this.
> >> 
> >> One change I'd like to point out though is:
> >> 
> >> @@ -7287,7 +7287,9 @@ native_encode_int (const_tree expr, unsi
> >>    for (byte = 0; byte < total_bytes; byte++)
> >>      {
> >>        int bitpos = byte * BITS_PER_UNIT;
> >> -      value = wi::extract_uhwi (expr, bitpos, BITS_PER_UNIT);
> >> +      /* Extend EXPR according to TYPE_SIGN if the precision isn't a whole
> >> +	 number of bytes.  */
> >> +      value = wi::extract_uhwi (wi::extend (expr), bitpos, BITS_PER_UNIT);
> >>  
> >>        if (total_bytes > UNITS_PER_WORD)
> >>  	{
> >> 
> >> I think this preserves the existing trunk behaviour but I wasn't sure
> >> whether it was supposed to work like that or whether upper bits should
> >> be zero.
> >
> > I think the upper bits are undefined, the trunk native_interpret_int
> > does
> >
> >   result = double_int::from_buffer (ptr, total_bytes);
> >
> >   return double_int_to_tree (type, result);
> >
> > where the call to double_int_to_tree re-extends according to the types
> > precision and sign.  wide_int_to_tree doesn't though?
> 
> This is native_encode_int rather than native_interpret_int though.

Yes, I was looking at the matched interpret variant though to see
what we do.

> AIUI it's used for VIEW_CONVERT_EXPRs, so I thought the upper bits
> might get used.

Yeah, that might happen, but still relying on the upper bits in any
way would be brittle here.

Richard.
Kenneth Zadeck - Oct. 23, 2013, 12:21 p.m.
On 10/23/2013 08:13 AM, Richard Biener wrote:
> On Wed, 23 Oct 2013, Richard Sandiford wrote:
>
>> Richard Biener <rguenther@suse.de> writes:
>>>> The patch does that by adding:
>>>>
>>>>    wi::address (t)
>>>>
>>>> for when we want to extend tree t to addr_wide_int precision and:
>>>>
>>>>    wi::extend (t)
>>>>
>>>> for when we want to extend it to max_wide_int precision.  (Better names
>>>> welcome.)  These act just like addr_wide_int (t) and max_wide_int (t)
>>>> would on current sources, except that they use the tree representation
>>>> directly, so there's no copying.
>>> Good.  Better names - ah well, wi::to_max_wide_int (t) and
>>> wi::to_addr_wide_int (t)?  Btw, "addr_wide_int" is an odd name as it
>>> has at least the precision of the maximum _bit_ offset possible, right?
>>> So more like [bit_]offset_wide_int?  Or just [bit_]offset_int?
>>> And then wi::to_offset (t) and wi::to_max (t)?
>> offset_int, max_int, wi::to_offset and wi::to_max sound OK to me.
>> Kenny?  Mike?
>>
>>>> Most of the patch is mechanical and many of the "wi::address (...)"s
>>>> and "wi::extend (...)"s reinstate "addr_wide_int (...)"s and
>>>> "max_wide_int (...)"s from the initial implementation.  Sorry for the
>>>> run-around on this.
>>>>
>>>> One change I'd like to point out though is:
>>>>
>>>> @@ -7287,7 +7287,9 @@ native_encode_int (const_tree expr, unsi
>>>>     for (byte = 0; byte < total_bytes; byte++)
>>>>       {
>>>>         int bitpos = byte * BITS_PER_UNIT;
>>>> -      value = wi::extract_uhwi (expr, bitpos, BITS_PER_UNIT);
>>>> +      /* Extend EXPR according to TYPE_SIGN if the precision isn't a whole
>>>> +	 number of bytes.  */
>>>> +      value = wi::extract_uhwi (wi::extend (expr), bitpos, BITS_PER_UNIT);
>>>>   
>>>>         if (total_bytes > UNITS_PER_WORD)
>>>>   	{
>>>>
>>>> I think this preserves the existing trunk behaviour but I wasn't sure
>>>> whether it was supposed to work like that or whether upper bits should
>>>> be zero.
>>> I think the upper bits are undefined, the trunk native_interpret_int
>>> does
>>>
>>>    result = double_int::from_buffer (ptr, total_bytes);
>>>
>>>    return double_int_to_tree (type, result);
>>>
>>> where the call to double_int_to_tree re-extends according to the types
>>> precision and sign.  wide_int_to_tree doesn't though?
>> This is native_encode_int rather than native_interpret_int though.
> Yes, I was looking at the matched interpret variant though to see
> what we do.
>
wide_int_to_tree really needs to canonicalize the value before
making it into a tree.  The calls to tree_fits_*_p (the successor to
host_integerp) depend on this being clean.  Otherwise these functions
will have to clean the short integers themselves, and they get called
all over the place.



>> AIUI it's used for VIEW_CONVERT_EXPRs, so I thought the upper bits
>> might get used.
> Yeah, that might happen, but still relying on the upper bits in any
> way would be brittle here.
>
> Richard.
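The canonicalization Kenny is asking for can be illustrated with a toy model. Assuming values live in a 64-bit container (canonicalize below is a hypothetical helper, not the real wide_int_to_tree), cleaning the bits above the type's precision and re-extending by its sign means that two different encodings of the same short integer compare equal — the property the tree_fits_*_p-style checks rely on:

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the canonicalization step: mask off the bits above the
// type's precision, then re-extend according to the type's sign.  After
// this, equality and fits-in-a-HWI tests can trust every bit of the
// container.  int64_t/uint64_t stand in for the tree/wide_int types.
static int64_t canonicalize (uint64_t raw, unsigned precision, bool is_signed)
{
  if (precision < 64)
    {
      uint64_t mask = (uint64_t (1) << precision) - 1;
      raw &= mask;                          // drop stray upper bits
      if (is_signed && (raw >> (precision - 1)) & 1)
        raw |= ~mask;                       // canonical sign extension
    }
  return (int64_t) raw;
}
```

With this in place, a raw 0x1ff and a raw 0xff both canonicalize to -1 for a signed 8-bit type, so downstream consumers never have to re-clean short integers themselves.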
Mike Stump - Oct. 23, 2013, 7:29 p.m.
On Oct 23, 2013, at 2:09 AM, Richard Biener <rguenther@suse.de> wrote:
> Good.  Better names - ah well, wi::to_max_wide_int (t) and
> wi::to_addr_wide_int (t)?  Btw, "addr_wide_int" is an odd name

The idea was to have one type to rule them all…  everything address related: bit offsets, byte offsets…  "Bit offset" is the wrong name, as that would be the wrong type to use for a byte offset.  Someone who has a bit-offset type would naturally want to /8 to get a byte offset.  If one really wanted a bit offset, the type should be split into two: bit offset and byte offset.
Mike Stump - Oct. 23, 2013, 7:34 p.m.
On Oct 23, 2013, at 5:00 AM, Richard Sandiford <rsandifo@linux.vnet.ibm.com> wrote:
> offset_int, max_int, wi::to_offset and wi::to_max sound OK to me.
> Kenny?  Mike?

Those two names seem reasonable.  to_offset should document that these are for address offsets (and address constants) exclusively.
Richard Sandiford - Oct. 23, 2013, 9:18 p.m.
Mike Stump <mikestump@comcast.net> writes:
> On Oct 23, 2013, at 5:00 AM, Richard Sandiford
> <rsandifo@linux.vnet.ibm.com> wrote:
>> offset_int, max_int, wi::to_offset and wi::to_max sound OK to me.
>> Kenny?  Mike?
>
> Those two names seem reasonable.  to_offset should document that these
> are for address offsets (and address constants) exclusively.

Reading this back, I realise "max_int" might sound too similar to INT_MAX.
Maybe we could follow the current HOST_* stuff and use: offset_int, widest_int,
wi::to_offset and wi::to_widest.

Bah.  I'm no good at naming stuff...

Richard
Richard Guenther - Oct. 24, 2013, 8:44 a.m.
On Wed, 23 Oct 2013, Richard Sandiford wrote:

> Mike Stump <mikestump@comcast.net> writes:
> > On Oct 23, 2013, at 5:00 AM, Richard Sandiford
> > <rsandifo@linux.vnet.ibm.com> wrote:
> >> offset_int, max_int, wi::to_offset and wi::to_max sound OK to me.
> >> Kenny?  Mike?
> >
> > Those two names seem reasonable.  to_offset should document that these
> > are for address offsets (and address constants) exclusively.
> 
> Reading this back, I realise "max_int" might sound too similar to INT_MAX.
> Maybe we could follow the current HOST_* stuff and use: offset_int, widest_int,
> wi::to_offset and wi::to_widest.
> 
> Bah.  I'm no good at naming stuff...

Nobody is ... but yes, offset_int and widest_int and wi::to_offset
and wi::to_widest sound good to me - both short and descriptive.

Richard.

Patch

Index: gcc/alias.c
===================================================================
--- gcc/alias.c	2013-10-22 10:18:04.416965373 +0100
+++ gcc/alias.c	2013-10-22 10:18:20.376102089 +0100
@@ -2352,9 +2352,9 @@  adjust_offset_for_component_ref (tree x,
 	  return;
 	}
 
-      woffset = xoffset;
-      woffset += wi::udiv_trunc (addr_wide_int (DECL_FIELD_BIT_OFFSET (field)),
-				 BITS_PER_UNIT);
+      woffset = (wi::address (xoffset)
+		 + wi::udiv_trunc (wi::address (DECL_FIELD_BIT_OFFSET (field)),
+				   BITS_PER_UNIT));
 
       if (!wi::fits_uhwi_p (woffset))
 	{
Index: gcc/cp/init.c
===================================================================
--- gcc/cp/init.c	2013-10-22 10:18:04.417965382 +0100
+++ gcc/cp/init.c	2013-10-22 10:18:20.376102089 +0100
@@ -2300,7 +2300,7 @@  build_new_1 (vec<tree, va_gc> **placemen
       if (TREE_CODE (inner_nelts_cst) == INTEGER_CST)
 	{
 	  bool overflow;
-	  addr_wide_int result = wi::mul (addr_wide_int (inner_nelts_cst),
+	  addr_wide_int result = wi::mul (wi::address (inner_nelts_cst),
 					  inner_nelts_count, SIGNED,
 					  &overflow);
 	  if (overflow)
@@ -2417,9 +2417,9 @@  build_new_1 (vec<tree, va_gc> **placemen
       /* Unconditionally subtract the cookie size.  This decreases the
 	 maximum object size and is safe even if we choose not to use
 	 a cookie after all.  */
-      max_size -= cookie_size;
+      max_size -= wi::address (cookie_size);
       bool overflow;
-      inner_size = wi::mul (addr_wide_int (size), inner_nelts_count, SIGNED,
+      inner_size = wi::mul (wi::address (size), inner_nelts_count, SIGNED,
 			    &overflow);
       if (overflow || wi::gtu_p (inner_size, max_size))
 	{
Index: gcc/cp/mangle.c
===================================================================
--- gcc/cp/mangle.c	2013-10-22 10:18:04.418965390 +0100
+++ gcc/cp/mangle.c	2013-10-22 10:18:20.377102098 +0100
@@ -3223,7 +3223,7 @@  write_array_type (const tree type)
 	{
 	  /* The ABI specifies that we should mangle the number of
 	     elements in the array, not the largest allowed index.  */
-	  addr_wide_int wmax = addr_wide_int (max) + 1;
+	  addr_wide_int wmax = wi::address (max) + 1;
 	  /* Truncate the result - this will mangle [0, SIZE_INT_MAX]
 	     number of elements as zero.  */
 	  wmax = wi::zext (wmax, TYPE_PRECISION (TREE_TYPE (max)));
Index: gcc/cp/tree.c
===================================================================
--- gcc/cp/tree.c	2013-10-22 10:18:04.419965399 +0100
+++ gcc/cp/tree.c	2013-10-22 10:18:20.378102106 +0100
@@ -2603,7 +2603,7 @@  cp_tree_equal (tree t1, tree t2)
   switch (code1)
     {
     case INTEGER_CST:
-      return max_wide_int (t1) == max_wide_int (t2);
+      return wi::extend (t1) == wi::extend (t2);
 
     case REAL_CST:
       return REAL_VALUES_EQUAL (TREE_REAL_CST (t1), TREE_REAL_CST (t2));
Index: gcc/cp/typeck2.c
===================================================================
--- gcc/cp/typeck2.c	2013-10-22 10:18:04.420965408 +0100
+++ gcc/cp/typeck2.c	2013-10-22 10:18:20.378102106 +0100
@@ -1119,8 +1119,8 @@  process_init_constructor_array (tree typ
     {
       tree domain = TYPE_DOMAIN (type);
       if (domain && TREE_CONSTANT (TYPE_MAX_VALUE (domain)))
-	len = wi::ext (addr_wide_int (TYPE_MAX_VALUE (domain))
-		       - TYPE_MIN_VALUE (domain) + 1,
+	len = wi::ext (wi::address (TYPE_MAX_VALUE (domain))
+		       - wi::address (TYPE_MIN_VALUE (domain)) + 1,
 		       TYPE_PRECISION (TREE_TYPE (domain)),
 		       TYPE_SIGN (TREE_TYPE (domain))).to_uhwi ();
       else
Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c	2013-10-22 10:18:04.426965459 +0100
+++ gcc/dwarf2out.c	2013-10-22 10:18:20.381102132 +0100
@@ -10312,7 +10312,7 @@  wide_int_type_size_in_bits (const_tree t
   else if (TYPE_SIZE (type) == NULL_TREE)
     return 0;
   else if (TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST)
-    return TYPE_SIZE (type);
+    return wi::address (TYPE_SIZE (type));
   else
     return TYPE_ALIGN (type);
 }
@@ -14721,7 +14721,7 @@  field_byte_offset (const_tree decl)
   if (TREE_CODE (bit_position (decl)) != INTEGER_CST)
     return 0;
 
-  bitpos_int = bit_position (decl);
+  bitpos_int = wi::address (bit_position (decl));
 
 #ifdef PCC_BITFIELD_TYPE_MATTERS
   if (PCC_BITFIELD_TYPE_MATTERS)
@@ -14747,7 +14747,7 @@  field_byte_offset (const_tree decl)
 
       /* If the size of the field is not constant, use the type size.  */
       if (TREE_CODE (field_size_tree) == INTEGER_CST)
-	field_size_in_bits = field_size_tree;
+	field_size_in_bits = wi::address (field_size_tree);
       else
 	field_size_in_bits = type_size_in_bits;
 
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	2013-10-22 10:18:04.428965476 +0100
+++ gcc/expmed.c	2013-10-22 10:18:20.382102140 +0100
@@ -1821,9 +1821,7 @@  extract_fixed_bit_field (enum machine_mo
 lshift_value (enum machine_mode mode, unsigned HOST_WIDE_INT value,
 	      int bitpos)
 {
-  return 
-    immed_wide_int_const (wi::lshift (max_wide_int (value),
-				      bitpos), mode);
+  return immed_wide_int_const (wi::lshift (value, bitpos), mode);
 }
 
 /* Extract a bit field that is split across two words
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2013-10-22 10:18:04.431965502 +0100
+++ gcc/expr.c	2013-10-22 10:18:20.384102157 +0100
@@ -6581,7 +6581,7 @@  get_inner_reference (tree exp, HOST_WIDE
       switch (TREE_CODE (exp))
 	{
 	case BIT_FIELD_REF:
-	  bit_offset += TREE_OPERAND (exp, 2);
+	  bit_offset += wi::address (TREE_OPERAND (exp, 2));
 	  break;
 
 	case COMPONENT_REF:
@@ -6596,7 +6596,7 @@  get_inner_reference (tree exp, HOST_WIDE
 	      break;
 
 	    offset = size_binop (PLUS_EXPR, offset, this_offset);
-	    bit_offset += DECL_FIELD_BIT_OFFSET (field);
+	    bit_offset += wi::address (DECL_FIELD_BIT_OFFSET (field));
 
 	    /* ??? Right now we don't do anything with DECL_OFFSET_ALIGN.  */
 	  }
@@ -6675,7 +6675,7 @@  get_inner_reference (tree exp, HOST_WIDE
      this conversion.  */
   if (TREE_CODE (offset) == INTEGER_CST)
     {
-      addr_wide_int tem = wi::sext (addr_wide_int (offset),
+      addr_wide_int tem = wi::sext (wi::address (offset),
 				    TYPE_PRECISION (sizetype));
       tem = wi::lshift (tem, (BITS_PER_UNIT == 8
 			      ? 3 : exact_log2 (BITS_PER_UNIT)));
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	2013-10-22 10:18:04.437965553 +0100
+++ gcc/fold-const.c	2013-10-22 10:18:20.386102175 +0100
@@ -1581,7 +1581,7 @@  fold_convert_const_int_from_int (tree ty
   /* Given an integer constant, make new constant with new type,
      appropriately sign-extended or truncated.  Use max_wide_int
      so that any extension is done according ARG1's type.  */
-  return force_fit_type (type, max_wide_int (arg1),
+  return force_fit_type (type, wi::extend (arg1),
 			 !POINTER_TYPE_P (TREE_TYPE (arg1)),
 			 TREE_OVERFLOW (arg1));
 }
@@ -1622,7 +1622,7 @@  fold_convert_const_int_from_real (enum t
   if (REAL_VALUE_ISNAN (r))
     {
       overflow = true;
-      val = max_wide_int (0);
+      val = wi::zero (TYPE_PRECISION (type));
     }
 
   /* See if R is less than the lower bound or greater than the
@@ -1635,7 +1635,7 @@  fold_convert_const_int_from_real (enum t
       if (REAL_VALUES_LESS (r, l))
 	{
 	  overflow = true;
-	  val = max_wide_int (lt);
+	  val = lt;
 	}
     }
 
@@ -1648,7 +1648,7 @@  fold_convert_const_int_from_real (enum t
 	  if (REAL_VALUES_LESS (u, r))
 	    {
 	      overflow = true;
-	      val = max_wide_int (ut);
+	      val = ut;
 	    }
 	}
     }
@@ -6611,7 +6611,7 @@  fold_single_bit_test (location_t loc, en
 	 not overflow, adjust BITNUM and INNER.  */
       if (TREE_CODE (inner) == RSHIFT_EXPR
 	  && TREE_CODE (TREE_OPERAND (inner, 1)) == INTEGER_CST
-	  && wi::ltu_p (wi::add (TREE_OPERAND (inner, 1), bitnum),
+	  && wi::ltu_p (wi::extend (TREE_OPERAND (inner, 1)) + bitnum,
 			TYPE_PRECISION (type)))
 	{
 	  bitnum += tree_to_hwi (TREE_OPERAND (inner, 1));
@@ -7287,7 +7287,9 @@  native_encode_int (const_tree expr, unsi
   for (byte = 0; byte < total_bytes; byte++)
     {
       int bitpos = byte * BITS_PER_UNIT;
-      value = wi::extract_uhwi (expr, bitpos, BITS_PER_UNIT);
+      /* Extend EXPR according to TYPE_SIGN if the precision isn't a whole
+	 number of bytes.  */
+      value = wi::extract_uhwi (wi::extend (expr), bitpos, BITS_PER_UNIT);
 
       if (total_bytes > UNITS_PER_WORD)
 	{
@@ -10450,7 +10452,7 @@  fold_binary_loc (location_t loc,
 	    code11 = TREE_CODE (tree11);
 	    if (code01 == INTEGER_CST
 		&& code11 == INTEGER_CST
-		&& (wi::add (tree01, tree11)
+		&& (wi::extend (tree01) + wi::extend (tree11)
 		    == element_precision (TREE_TYPE (TREE_OPERAND (arg0, 0)))))
 	      {
 		tem = build2_loc (loc, LROTATE_EXPR,
Index: gcc/fortran/trans-array.c
===================================================================
--- gcc/fortran/trans-array.c	2013-10-22 10:18:04.440965579 +0100
+++ gcc/fortran/trans-array.c	2013-10-22 10:18:20.387102183 +0100
@@ -5385,7 +5385,7 @@  gfc_conv_array_initializer (tree type, g
       else
 	gfc_conv_structure (&se, expr, 1);
 
-      wtmp = addr_wide_int (TYPE_MAX_VALUE (TYPE_DOMAIN (type))) + 1;
+      wtmp = wi::address (TYPE_MAX_VALUE (TYPE_DOMAIN (type))) + 1;
       gcc_assert (wtmp != 0);
       /* This will probably eat buckets of memory for large arrays.  */
       while (wtmp != 0)
Index: gcc/gimple-fold.c
===================================================================
--- gcc/gimple-fold.c	2013-10-22 10:18:04.441965588 +0100
+++ gcc/gimple-fold.c	2013-10-22 10:19:05.060484982 +0100
@@ -2821,14 +2821,14 @@  fold_array_ctor_reference (tree type, tr
       /* Static constructors for variably sized objects makes no sense.  */
       gcc_assert (TREE_CODE (TYPE_MIN_VALUE (domain_type)) == INTEGER_CST);
       index_type = TREE_TYPE (TYPE_MIN_VALUE (domain_type));
-      low_bound = TYPE_MIN_VALUE (domain_type);
+      low_bound = wi::address (TYPE_MIN_VALUE (domain_type));
     }
   else
     low_bound = 0;
   /* Static constructors for variably sized objects makes no sense.  */
   gcc_assert (TREE_CODE (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (ctor))))
 	      == INTEGER_CST);
-  elt_size = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (ctor)));
+  elt_size = wi::address (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (ctor))));
 
   /* We can handle only constantly sized accesses that are known to not
      be larger than size of array element.  */
@@ -2866,12 +2866,12 @@  fold_array_ctor_reference (tree type, tr
       if (cfield)
 	{
 	  if (TREE_CODE (cfield) == INTEGER_CST)
-	    max_index = index = cfield;
+	    max_index = index = wi::address (cfield);
 	  else
 	    {
 	      gcc_assert (TREE_CODE (cfield) == RANGE_EXPR);
-	      index = TREE_OPERAND (cfield, 0);
-	      max_index = TREE_OPERAND (cfield, 1);
+	      index = wi::address (TREE_OPERAND (cfield, 0));
+	      max_index = wi::address (TREE_OPERAND (cfield, 1));
 	    }
 	}
       else
@@ -2913,7 +2913,7 @@  fold_nonarray_ctor_reference (tree type,
       tree field_offset = DECL_FIELD_BIT_OFFSET (cfield);
       tree field_size = DECL_SIZE (cfield);
       addr_wide_int bitoffset;
-      addr_wide_int byte_offset_cst = byte_offset;
+      addr_wide_int byte_offset_cst = wi::address (byte_offset);
       addr_wide_int bitoffset_end, access_end;
 
       /* Variable sized objects in static constructors makes no sense,
@@ -2925,10 +2925,11 @@  fold_nonarray_ctor_reference (tree type,
 		      : TREE_CODE (TREE_TYPE (cfield)) == ARRAY_TYPE));
 
       /* Compute bit offset of the field.  */
-      bitoffset = wi::add (field_offset, byte_offset_cst * BITS_PER_UNIT);
+      bitoffset = (wi::address (field_offset)
+		   + byte_offset_cst * BITS_PER_UNIT);
       /* Compute bit offset where the field ends.  */
       if (field_size != NULL_TREE)
-	bitoffset_end = bitoffset + field_size;
+	bitoffset_end = bitoffset + wi::address (field_size);
       else
 	bitoffset_end = 0;
 
@@ -3043,8 +3044,8 @@  fold_const_aggregate_ref_1 (tree t, tree
 	  if ((TREE_CODE (low_bound) == INTEGER_CST)
 	      && (tree_fits_uhwi_p (unit_size)))
 	    {
-	      addr_wide_int woffset 
-		= wi::sext (addr_wide_int (idx) - low_bound,
+	      addr_wide_int woffset
+		= wi::sext (wi::address (idx) - wi::address (low_bound),
 			    TYPE_PRECISION (TREE_TYPE (idx)));
 	      
 	      if (wi::fits_shwi_p (woffset))
Index: gcc/gimple-ssa-strength-reduction.c
===================================================================
--- gcc/gimple-ssa-strength-reduction.c	2013-10-22 10:18:04.444965613 +0100
+++ gcc/gimple-ssa-strength-reduction.c	2013-10-22 10:18:20.389102200 +0100
@@ -792,7 +792,7 @@  backtrace_base_for_ref (tree *pbase)
 	{
 	  /* X = B + (1 * S), S is integer constant.  */
 	  *pbase = base_cand->base_expr;
-	  return base_cand->stride;
+	  return wi::extend (base_cand->stride);
 	}
       else if (base_cand->kind == CAND_ADD
 	       && TREE_CODE (base_cand->stride) == INTEGER_CST
@@ -860,14 +860,14 @@  restructure_reference (tree *pbase, tree
   type = TREE_TYPE (TREE_OPERAND (base, 1));
 
   mult_op0 = TREE_OPERAND (offset, 0);
-  c3 = TREE_OPERAND (offset, 1);
+  c3 = wi::extend (TREE_OPERAND (offset, 1));
 
   if (TREE_CODE (mult_op0) == PLUS_EXPR)
 
     if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST)
       {
 	t2 = TREE_OPERAND (mult_op0, 0);
-	c2 = TREE_OPERAND (mult_op0, 1);
+	c2 = wi::extend (TREE_OPERAND (mult_op0, 1));
       }
     else
       return false;
@@ -877,7 +877,7 @@  restructure_reference (tree *pbase, tree
     if (TREE_CODE (TREE_OPERAND (mult_op0, 1)) == INTEGER_CST)
       {
 	t2 = TREE_OPERAND (mult_op0, 0);
-	c2 = -(max_wide_int)TREE_OPERAND (mult_op0, 1);
+	c2 = -wi::extend (TREE_OPERAND (mult_op0, 1));
       }
     else
       return false;
@@ -979,7 +979,7 @@  create_mul_ssa_cand (gimple gs, tree bas
 	     ============================
 	     X = B + ((i' * S) * Z)  */
 	  base = base_cand->base_expr;
-	  index = base_cand->index * base_cand->stride;
+	  index = base_cand->index * wi::extend (base_cand->stride);
 	  stride = stride_in;
 	  ctype = base_cand->cand_type;
 	  if (has_single_use (base_in))
@@ -1035,7 +1035,7 @@  create_mul_imm_cand (gimple gs, tree bas
 	     X = (B + i') * (S * c)  */
 	  base = base_cand->base_expr;
 	  index = base_cand->index;
-	  temp = wi::mul (base_cand->stride, stride_in);
+	  temp = wi::extend (base_cand->stride) * wi::extend (stride_in);
 	  stride = wide_int_to_tree (TREE_TYPE (stride_in), temp);
 	  ctype = base_cand->cand_type;
 	  if (has_single_use (base_in))
@@ -1065,7 +1065,7 @@  create_mul_imm_cand (gimple gs, tree bas
 	     ===========================
 	     X = (B + S) * c  */
 	  base = base_cand->base_expr;
-	  index = base_cand->stride;
+	  index = wi::extend (base_cand->stride);
 	  stride = stride_in;
 	  ctype = base_cand->cand_type;
 	  if (has_single_use (base_in))
@@ -1166,7 +1166,7 @@  create_add_ssa_cand (gimple gs, tree bas
 	     ===========================
 	     X = Y + ((+/-1 * S) * B)  */
 	  base = base_in;
-	  index = addend_cand->stride;
+	  index = wi::extend (addend_cand->stride);
 	  if (subtract_p)
 	    index = -index;
 	  stride = addend_cand->base_expr;
@@ -1216,7 +1216,7 @@  create_add_ssa_cand (gimple gs, tree bas
 		     ===========================
 		     Value:  X = Y + ((-1 * S) * B)  */
 		  base = base_in;
-		  index = subtrahend_cand->stride;
+		  index = wi::extend (subtrahend_cand->stride);
 		  index = -index;
 		  stride = subtrahend_cand->base_expr;
 		  ctype = TREE_TYPE (base_in);
@@ -1272,7 +1272,8 @@  create_add_imm_cand (gimple gs, tree bas
       signop sign = TYPE_SIGN (TREE_TYPE (base_cand->stride));
 
       if (TREE_CODE (base_cand->stride) == INTEGER_CST
-	  && wi::multiple_of_p (index_in, base_cand->stride, sign, &multiple))
+	  && wi::multiple_of_p (index_in, wi::extend (base_cand->stride),
+				sign, &multiple))
 	{
 	  /* Y = (B + i') * S, S constant, c = kS for some integer k
 	     X = Y + c
@@ -1360,7 +1361,7 @@  slsr_process_add (gimple gs, tree rhs1,
       max_wide_int index;
 
       /* Record an interpretation for the add-immediate.  */
-      index = rhs2;
+      index = wi::extend (rhs2);
       if (subtract_p)
 	index = -index;
 
@@ -2027,7 +2028,7 @@  replace_unconditional_candidate (slsr_ca
     return;
 
   basis = lookup_cand (c->basis);
-  bump = cand_increment (c) * c->stride;
+  bump = cand_increment (c) * wi::extend (c->stride);
 
   replace_mult_candidate (c, gimple_assign_lhs (basis->cand_stmt), bump);
 }
@@ -2078,7 +2079,7 @@  create_add_on_incoming_edge (slsr_cand_t
     {
       tree bump_tree;
       enum tree_code code = PLUS_EXPR;
-      max_wide_int bump = increment * c->stride;
+      max_wide_int bump = increment * wi::extend (c->stride);
       if (wi::neg_p (bump))
 	{
 	  code = MINUS_EXPR;
@@ -2246,7 +2247,7 @@  replace_conditional_candidate (slsr_cand
   name = create_phi_basis (c, lookup_cand (c->def_phi)->cand_stmt,
 			   basis_name, loc, KNOWN_STRIDE);
   /* Replace C with an add of the new basis phi and a constant.  */
-  bump = c->index * c->stride;
+  bump = c->index * wi::extend (c->stride);
 
   replace_mult_candidate (c, name, bump);
 }
Index: gcc/ipa-prop.c
===================================================================
--- gcc/ipa-prop.c	2013-10-22 10:18:04.445965622 +0100
+++ gcc/ipa-prop.c	2013-10-22 10:18:20.390102209 +0100
@@ -3640,8 +3640,7 @@  ipa_modify_call_arguments (struct cgraph
 		  if (TYPE_ALIGN (type) > align)
 		    align = TYPE_ALIGN (type);
 		}
-	      misalign += (wi::sext (addr_wide_int (off),
-				     TYPE_PRECISION (TREE_TYPE (off)))
+	      misalign += (addr_wide_int::from (off, SIGNED)
 			   * BITS_PER_UNIT).to_short_addr ();
 	      misalign = misalign & (align - 1);
 	      if (misalign != 0)
Index: gcc/predict.c
===================================================================
--- gcc/predict.c	2013-10-22 10:18:04.445965622 +0100
+++ gcc/predict.c	2013-10-22 10:18:20.390102209 +0100
@@ -1300,10 +1300,10 @@  predict_iv_comparison (struct loop *loop
       bool overflow, overall_overflow = false;
       max_wide_int compare_count, tem, loop_count;
 
-      max_wide_int loop_bound = loop_bound_var;
-      max_wide_int compare_bound = compare_var;
-      max_wide_int base = compare_base;
-      max_wide_int compare_step = compare_step_var;
+      max_wide_int loop_bound = wi::extend (loop_bound_var);
+      max_wide_int compare_bound = wi::extend (compare_var);
+      max_wide_int base = wi::extend (compare_base);
+      max_wide_int compare_step = wi::extend (compare_step_var);
 
       /* (loop_bound - base) / compare_step */
       tem = wi::sub (loop_bound, base, SIGNED, &overflow);
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c	2013-10-22 10:18:04.446965630 +0100
+++ gcc/stor-layout.c	2013-10-22 10:18:20.391102217 +0100
@@ -2198,11 +2198,10 @@  layout_type (tree type)
 		    && TYPE_UNSIGNED (TREE_TYPE (lb))
 		    && tree_int_cst_lt (ub, lb))
 		  {
-		    unsigned prec = TYPE_PRECISION (TREE_TYPE (lb));
 		    lb = wide_int_to_tree (ssizetype,
-					   wi::sext (addr_wide_int (lb), prec));
+					   addr_wide_int::from (lb, SIGNED));
 		    ub = wide_int_to_tree (ssizetype,
-					   wi::sext (addr_wide_int (ub), prec));
+					   addr_wide_int::from (ub, SIGNED));
 		  }
 		length
 		  = fold_convert (sizetype,
Index: gcc/tree-affine.c
===================================================================
--- gcc/tree-affine.c	2013-10-22 10:18:04.446965630 +0100
+++ gcc/tree-affine.c	2013-10-22 10:18:20.391102217 +0100
@@ -269,7 +269,7 @@  tree_to_aff_combination (tree expr, tree
   switch (code)
     {
     case INTEGER_CST:
-      aff_combination_const (comb, type, expr);
+      aff_combination_const (comb, type, wi::extend (expr));
       return;
 
     case POINTER_PLUS_EXPR:
@@ -292,7 +292,7 @@  tree_to_aff_combination (tree expr, tree
       if (TREE_CODE (cst) != INTEGER_CST)
 	break;
       tree_to_aff_combination (TREE_OPERAND (expr, 0), type, comb);
-      aff_combination_scale (comb, cst);
+      aff_combination_scale (comb, wi::extend (cst));
       return;
 
     case NEGATE_EXPR:
@@ -383,7 +383,7 @@  add_elt_to_tree (tree expr, tree type, t
     {
       elt = convert_to_ptrofftype (elt);
       elt = fold_build1 (NEGATE_EXPR, TREE_TYPE (elt), elt);
-      scale = max_wide_int (1);
+      scale = 1;
     }
 
   if (scale == 1)
Index: gcc/tree-dfa.c
===================================================================
--- gcc/tree-dfa.c	2013-10-22 10:18:04.472965853 +0100
+++ gcc/tree-dfa.c	2013-10-22 10:18:20.391102217 +0100
@@ -422,7 +422,7 @@  get_ref_base_and_extent (tree exp, HOST_
       switch (TREE_CODE (exp))
 	{
 	case BIT_FIELD_REF:
-	  bit_offset += TREE_OPERAND (exp, 2);
+	  bit_offset += wi::address (TREE_OPERAND (exp, 2));
 	  break;
 
 	case COMPONENT_REF:
@@ -432,11 +432,11 @@  get_ref_base_and_extent (tree exp, HOST_
 
 	    if (this_offset && TREE_CODE (this_offset) == INTEGER_CST)
 	      {
-		addr_wide_int woffset = this_offset;
+		addr_wide_int woffset = wi::address (this_offset);
 		woffset = wi::lshift (woffset,
 				      (BITS_PER_UNIT == 8
 				       ? 3 : exact_log2 (BITS_PER_UNIT)));
-		woffset += DECL_FIELD_BIT_OFFSET (field);
+		woffset += wi::address (DECL_FIELD_BIT_OFFSET (field));
 		bit_offset += woffset;
 
 		/* If we had seen a variable array ref already and we just
@@ -497,10 +497,10 @@  get_ref_base_and_extent (tree exp, HOST_
 		&& (unit_size = array_ref_element_size (exp),
 		    TREE_CODE (unit_size) == INTEGER_CST))
 	      {
-		addr_wide_int woffset 
-		  = wi::sext (addr_wide_int (index) - low_bound,
+		addr_wide_int woffset
+		  = wi::sext (wi::address (index) - wi::address (low_bound),
 			      TYPE_PRECISION (TREE_TYPE (index)));
-		woffset *= addr_wide_int (unit_size);
+		woffset *= wi::address (unit_size);
 		woffset = wi::lshift (woffset,
 				      (BITS_PER_UNIT == 8
 				       ? 3 : exact_log2 (BITS_PER_UNIT)));
Index: gcc/tree-predcom.c
===================================================================
--- gcc/tree-predcom.c	2013-10-22 10:18:04.473965862 +0100
+++ gcc/tree-predcom.c	2013-10-22 10:18:20.392102226 +0100
@@ -618,7 +618,7 @@  aff_combination_dr_offset (struct data_r
 
   tree_to_aff_combination_expand (DR_OFFSET (dr), type, offset,
 				  &name_expansions);
-  aff_combination_const (&delta, type, DR_INIT (dr));
+  aff_combination_const (&delta, type, wi::extend (DR_INIT (dr)));
   aff_combination_add (offset, &delta);
 }
 
Index: gcc/tree-pretty-print.c
===================================================================
--- gcc/tree-pretty-print.c	2013-10-22 10:18:04.474965870 +0100
+++ gcc/tree-pretty-print.c	2013-10-22 10:18:20.392102226 +0100
@@ -1509,7 +1509,7 @@  dump_generic_node (pretty_printer *buffe
 	  {
 	    tree minv = TYPE_MIN_VALUE (TYPE_DOMAIN (TREE_TYPE (node)));
 	    is_array_init = true;
-	    curidx = minv;
+	    curidx = wi::extend (minv);
 	  }
 	FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (node), ix, field, val)
 	  {
@@ -1523,7 +1523,7 @@  dump_generic_node (pretty_printer *buffe
 		  }
 		else if (is_array_init
 			 && (TREE_CODE (field) != INTEGER_CST
-			     || curidx != field))
+			     || curidx != wi::extend (field)))
 		  {
 		    pp_left_bracket (buffer);
 		    if (TREE_CODE (field) == RANGE_EXPR)
@@ -1534,12 +1534,12 @@  dump_generic_node (pretty_printer *buffe
 			dump_generic_node (buffer, TREE_OPERAND (field, 1), spc,
 					   flags, false);
 			if (TREE_CODE (TREE_OPERAND (field, 1)) == INTEGER_CST)
-			  curidx = TREE_OPERAND (field, 1);
+			  curidx = wi::extend (TREE_OPERAND (field, 1));
 		      }
 		    else
 		      dump_generic_node (buffer, field, spc, flags, false);
 		    if (TREE_CODE (field) == INTEGER_CST)
-		      curidx = field;
+		      curidx = wi::extend (field);
 		    pp_string (buffer, "]=");
 		  }
 	      }
Index: gcc/tree-ssa-address.c
===================================================================
--- gcc/tree-ssa-address.c	2013-10-22 10:18:04.474965870 +0100
+++ gcc/tree-ssa-address.c	2013-10-22 10:18:20.392102226 +0100
@@ -203,8 +203,7 @@  addr_for_mem_ref (struct mem_address *ad
 
   if (addr->offset && !integer_zerop (addr->offset))
     {
-      addr_wide_int dc = wi::sext (addr_wide_int (addr->offset),
-				   TYPE_PRECISION (TREE_TYPE (addr->offset)));
+      addr_wide_int dc = addr_wide_int::from (addr->offset, SIGNED);
       off = immed_wide_int_const (dc, pointer_mode);
     }
   else
Index: gcc/tree-ssa-ccp.c
===================================================================
--- gcc/tree-ssa-ccp.c	2013-10-22 10:18:04.475965878 +0100
+++ gcc/tree-ssa-ccp.c	2013-10-22 10:18:20.393102234 +0100
@@ -202,7 +202,7 @@  dump_lattice_value (FILE *outf, const ch
 	}
       else
 	{
-	  wide_int cval = wi::bit_and_not (val.value, val.mask);
+	  wide_int cval = wi::bit_and_not (wi::extend (val.value), val.mask);
 	  fprintf (outf, "%sCONSTANT ", prefix);
 	  print_hex (cval, outf);
 	  fprintf (outf, " (");
@@ -432,8 +432,8 @@  valid_lattice_transition (prop_value_t o
   /* Bit-lattices have to agree in the still valid bits.  */
   if (TREE_CODE (old_val.value) == INTEGER_CST
       && TREE_CODE (new_val.value) == INTEGER_CST)
-    return (wi::bit_and_not (old_val.value, new_val.mask)
-	    == wi::bit_and_not (new_val.value, new_val.mask));
+    return (wi::bit_and_not (wi::extend (old_val.value), new_val.mask)
+	    == wi::bit_and_not (wi::extend (new_val.value), new_val.mask));
 
   /* Otherwise constant values have to agree.  */
   return operand_equal_p (old_val.value, new_val.value, 0);
@@ -458,7 +458,8 @@  set_lattice_value (tree var, prop_value_
       && TREE_CODE (new_val.value) == INTEGER_CST
       && TREE_CODE (old_val->value) == INTEGER_CST)
     {
-      max_wide_int diff = wi::bit_xor (new_val.value, old_val->value);
+      max_wide_int diff = (wi::extend (new_val.value)
+			   ^ wi::extend (old_val->value));
       new_val.mask = new_val.mask | old_val->mask | diff;
     }
 
@@ -505,7 +506,7 @@  value_to_wide_int (prop_value_t val)
 {
   if (val.value
       && TREE_CODE (val.value) == INTEGER_CST)
-    return val.value;
+    return wi::extend (val.value);
 
   return 0;
 }
@@ -908,7 +909,7 @@  ccp_lattice_meet (prop_value_t *val1, pr
          For INTEGER_CSTs mask unequal bits.  If no equal bits remain,
 	 drop to varying.  */
       val1->mask = (val1->mask | val2->mask
-		    | (wi::bit_xor (val1->value, val2->value)));
+		    | (wi::extend (val1->value) ^ wi::extend (val2->value)));
       if (val1->mask == -1)
 	{
 	  val1->lattice_val = VARYING;
Index: gcc/tree-ssa-loop-ivcanon.c
===================================================================
--- gcc/tree-ssa-loop-ivcanon.c	2013-10-22 10:18:04.476965887 +0100
+++ gcc/tree-ssa-loop-ivcanon.c	2013-10-22 10:18:20.393102234 +0100
@@ -927,7 +927,7 @@  canonicalize_loop_induction_variables (s
      by find_loop_niter_by_eval.  Be sure to keep it for future.  */
   if (niter && TREE_CODE (niter) == INTEGER_CST)
     {
-      record_niter_bound (loop, niter,
+      record_niter_bound (loop, wi::extend (niter),
 			  exit == single_likely_exit (loop), true);
     }
 
Index: gcc/tree-ssa-loop-ivopts.c
===================================================================
--- gcc/tree-ssa-loop-ivopts.c	2013-10-22 10:18:04.478965904 +0100
+++ gcc/tree-ssa-loop-ivopts.c	2013-10-22 10:18:20.394102243 +0100
@@ -1590,7 +1590,7 @@  constant_multiple_of (tree top, tree bot
       if (!constant_multiple_of (TREE_OPERAND (top, 0), bot, &res))
 	return false;
 
-      *mul = wi::sext (res * mby, precision);
+      *mul = wi::sext (res * wi::extend (mby), precision);
       return true;
 
     case PLUS_EXPR:
@@ -1608,8 +1608,8 @@  constant_multiple_of (tree top, tree bot
       if (TREE_CODE (bot) != INTEGER_CST)
 	return false;
 
-      p0 = wi::sext (top, precision);
-      p1 = wi::sext (bot, precision);
+      p0 = max_wide_int::from (top, SIGNED);
+      p1 = max_wide_int::from (bot, SIGNED);
       if (p1 == 0)
 	return false;
       *mul = wi::sext (wi::divmod_trunc (p0, p1, SIGNED, &res), precision);
@@ -4632,7 +4632,7 @@  may_eliminate_iv (struct ivopts_data *da
       max_niter = desc->max;
       if (stmt_after_increment (loop, cand, use->stmt))
         max_niter += 1;
-      period_value = period;
+      period_value = wi::extend (period);
       if (wi::gtu_p (max_niter, period_value))
         {
           /* See if we can take advantage of inferred loop bound information.  */
Index: gcc/tree-ssa-loop-niter.c
===================================================================
--- gcc/tree-ssa-loop-niter.c	2013-10-22 10:18:04.480965921 +0100
+++ gcc/tree-ssa-loop-niter.c	2013-10-22 10:18:20.394102243 +0100
@@ -69,7 +69,6 @@  split_to_var_and_offset (tree expr, tree
 {
   tree type = TREE_TYPE (expr);
   tree op0, op1;
-  max_wide_int off;
   bool negate = false;
 
   *var = expr;
@@ -90,18 +89,15 @@  split_to_var_and_offset (tree expr, tree
 	break;
 
       *var = op0;
-      off = op1;
       /* Always sign extend the offset.  */
-      off = wi::sext (off, TYPE_PRECISION (type));
-      wi::to_mpz (off, offset, SIGNED);
+      wi::to_mpz (op1, offset, SIGNED);
       if (negate)
 	mpz_neg (offset, offset);
       break;
 
     case INTEGER_CST:
       *var = build_int_cst_type (type, 0);
-      off = expr;
-      wi::to_mpz (off, offset, TYPE_SIGN (type));
+      wi::to_mpz (expr, offset, TYPE_SIGN (type));
       break;
 
     default:
@@ -810,7 +806,7 @@  number_of_iterations_lt_to_ne (tree type
     niter->may_be_zero = fold_build2 (TRUTH_OR_EXPR, boolean_type_node,
 				      niter->may_be_zero,
 				      noloop);
-  bounds_add (bnds, mod, type);
+  bounds_add (bnds, wi::extend (mod), type);
   *delta = fold_build2 (PLUS_EXPR, niter_type, *delta, mod);
 
   ret = true;
@@ -926,10 +922,10 @@  assert_loop_rolls_lt (tree type, affine_
   /* First check whether the answer does not follow from the bounds we gathered
      before.  */
   if (integer_nonzerop (iv0->step))
-    dstep = iv0->step;
+    dstep = wi::extend (iv0->step);
   else
     {
-      dstep = wi::sext (iv1->step, TYPE_PRECISION (type));
+      dstep = wi::sext (wi::extend (iv1->step), TYPE_PRECISION (type));
       dstep = -dstep;
     }
 
@@ -1913,7 +1909,7 @@  number_of_iterations_exit (struct loop *
 
   /* If NITER has simplified into a constant, update MAX.  */
   if (TREE_CODE (niter->niter) == INTEGER_CST)
-    niter->max = niter->niter;
+    niter->max = wi::extend (niter->niter);
 
   if (integer_onep (niter->assumptions))
     return true;
@@ -2387,12 +2383,12 @@  derive_constant_upper_bound_ops (tree ty
   else
     maxt = upper_bound_in_type (type, type);
 
-  max = maxt;
+  max = wi::extend (maxt);
 
   switch (code)
     {
     case INTEGER_CST:
-      return op0;
+      return wi::extend (op0);
 
     CASE_CONVERT:
       subtype = TREE_TYPE (op0);
@@ -2429,8 +2425,7 @@  derive_constant_upper_bound_ops (tree ty
       /* Canonicalize to OP0 - CST.  Consider CST to be signed, in order to
 	 choose the most logical way how to treat this constant regardless
 	 of the signedness of the type.  */
-      cst = op1;
-      cst = wi::sext (cst, TYPE_PRECISION (type));
+      cst = wi::sext (wi::extend (op1), TYPE_PRECISION (type));
       if (code != MINUS_EXPR)
 	cst = -cst;
 
@@ -2490,13 +2485,13 @@  derive_constant_upper_bound_ops (tree ty
 	return max;
 
       bnd = derive_constant_upper_bound (op0);
-      return wi::udiv_floor (bnd, op1);
+      return wi::udiv_floor (bnd, wi::extend (op1));
 
     case BIT_AND_EXPR:
       if (TREE_CODE (op1) != INTEGER_CST
 	  || tree_int_cst_sign_bit (op1))
 	return max;
-      return op1;
+      return wi::extend (op1);
 
     case SSA_NAME:
       stmt = SSA_NAME_DEF_STMT (op0);
@@ -2575,7 +2570,7 @@  record_estimate (struct loop *loop, tree
   if (TREE_CODE (bound) != INTEGER_CST)
     realistic = false;
   else
-    gcc_checking_assert (i_bound == bound);
+    gcc_checking_assert (i_bound == wi::extend (bound));
   if (!upper && !realistic)
     return;
 
@@ -3363,7 +3358,7 @@  estimate_numbers_of_iterations_loop (str
       && TREE_CODE (loop->nb_iterations) == INTEGER_CST)
     {
       loop->any_upper_bound = true;
-      loop->nb_iterations_upper_bound = loop->nb_iterations;
+      loop->nb_iterations_upper_bound = wi::extend (loop->nb_iterations);
     }
 }
 
Index: gcc/tree-ssa-pre.c
===================================================================
--- gcc/tree-ssa-pre.c	2013-10-22 10:18:04.481965930 +0100
+++ gcc/tree-ssa-pre.c	2013-10-22 10:18:20.395102252 +0100
@@ -1581,9 +1581,9 @@  phi_translate_1 (pre_expr expr, bitmap_s
 		&& TREE_CODE (op[1]) == INTEGER_CST
 		&& TREE_CODE (op[2]) == INTEGER_CST)
 	      {
-		addr_wide_int off = op[0];
-		off += -addr_wide_int (op[1]);
-		off *= addr_wide_int (op[2]);
+		addr_wide_int off = ((wi::address (op[0])
+				      - wi::address (op[1]))
+				     * wi::address (op[2]));
 		if (wi::fits_shwi_p (off))
 		  newop.off = off.to_shwi ();
 	      }
Index: gcc/tree-ssa-sccvn.c
===================================================================
--- gcc/tree-ssa-sccvn.c	2013-10-22 10:18:04.483965947 +0100
+++ gcc/tree-ssa-sccvn.c	2013-10-22 10:18:20.396102260 +0100
@@ -801,8 +801,8 @@  copy_reference_ops_from_ref (tree ref, v
 		if (tree_to_hwi (bit_offset) % BITS_PER_UNIT == 0)
 		  {
 		    addr_wide_int off
-		      = (addr_wide_int (this_offset)
-			 + wi::lrshift (addr_wide_int (bit_offset),
+		      = (wi::address (this_offset)
+			 + wi::lrshift (wi::address (bit_offset),
 					BITS_PER_UNIT == 8
 					? 3 : exact_log2 (BITS_PER_UNIT)));
 		    if (wi::fits_shwi_p (off))
@@ -822,9 +822,9 @@  copy_reference_ops_from_ref (tree ref, v
 	      && TREE_CODE (temp.op1) == INTEGER_CST
 	      && TREE_CODE (temp.op2) == INTEGER_CST)
 	    {
-	      addr_wide_int off = temp.op0;
-	      off += -addr_wide_int (temp.op1);
-	      off *= addr_wide_int (temp.op2);
+	      addr_wide_int off = ((wi::address (temp.op0)
+				    - wi::address (temp.op1))
+				   * wi::address (temp.op2));
 	      if (wi::fits_shwi_p (off))
 		temp.off = off.to_shwi();
 	    }
@@ -1146,8 +1146,7 @@  vn_reference_fold_indirect (vec<vn_refer
   gcc_checking_assert (addr_base && TREE_CODE (addr_base) != MEM_REF);
   if (addr_base != TREE_OPERAND (op->op0, 0))
     {
-      addr_wide_int off = wi::sext (addr_wide_int (mem_op->op0),
-				    TYPE_PRECISION (TREE_TYPE (mem_op->op0)));
+      addr_wide_int off = addr_wide_int::from (mem_op->op0, SIGNED);
       off += addr_offset;
       mem_op->op0 = wide_int_to_tree (TREE_TYPE (mem_op->op0), off);
       op->op0 = build_fold_addr_expr (addr_base);
@@ -1180,8 +1179,7 @@  vn_reference_maybe_forwprop_address (vec
       && code != POINTER_PLUS_EXPR)
     return;
 
-  off = wi::sext (addr_wide_int (mem_op->op0),
-		  TYPE_PRECISION (TREE_TYPE (mem_op->op0)));
+  off = addr_wide_int::from (mem_op->op0, SIGNED);
 
   /* The only thing we have to do is from &OBJ.foo.bar add the offset
      from .foo.bar to the preceding MEM_REF offset and replace the
@@ -1211,7 +1209,7 @@  vn_reference_maybe_forwprop_address (vec
 	  || TREE_CODE (ptroff) != INTEGER_CST)
 	return;
 
-      off += ptroff;
+      off += wi::address (ptroff);
       op->op0 = ptr;
     }
 
@@ -1369,9 +1367,9 @@  valueize_refs_1 (vec<vn_reference_op_s>
 	       && TREE_CODE (vro->op1) == INTEGER_CST
 	       && TREE_CODE (vro->op2) == INTEGER_CST)
 	{
-	  addr_wide_int off = vro->op0;
-	  off += -addr_wide_int (vro->op1);
-	  off *= addr_wide_int (vro->op2);
+	  addr_wide_int off = ((wi::address (vro->op0)
+				- wi::address (vro->op1))
+			       * wi::address (vro->op2));
 	  if (wi::fits_shwi_p (off))
 	    vro->off = off.to_shwi ();
 	}
Index: gcc/tree-ssa-structalias.c
===================================================================
--- gcc/tree-ssa-structalias.c	2013-10-22 10:18:04.485965964 +0100
+++ gcc/tree-ssa-structalias.c	2013-10-22 10:18:20.397102269 +0100
@@ -3012,8 +3012,7 @@  get_constraint_for_ptr_offset (tree ptr,
   else
     {
       /* Sign-extend the offset.  */
-      addr_wide_int soffset = wi::sext (addr_wide_int (offset),
-					TYPE_PRECISION (TREE_TYPE (offset)));
+      addr_wide_int soffset = addr_wide_int::from (offset, SIGNED);
       if (!wi::fits_shwi_p (soffset))
 	rhsoffset = UNKNOWN_OFFSET;
       else
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	2013-10-22 10:18:04.488965990 +0100
+++ gcc/tree-vrp.c	2013-10-22 10:30:40.213441286 +0100
@@ -3849,7 +3849,7 @@  adjust_range_with_scev (value_range_t *v
 	  signop sgn = TYPE_SIGN (TREE_TYPE (step));
 	  bool overflow;
 	  
-	  wtmp = wi::mul (step, nit, sgn, &overflow);
+	  wtmp = wi::mul (wi::extend (step), nit, sgn, &overflow);
 	  /* If the multiplication overflowed we can't do a meaningful
 	     adjustment.  Likewise if the result doesn't fit in the type
 	     of the induction variable.  For a signed type we have to
@@ -6292,7 +6292,7 @@  search_for_addr_array (tree t, location_
 	return;
 
       idx = mem_ref_offset (t);
-      idx = wi::sdiv_trunc (idx, el_sz);
+      idx = wi::sdiv_trunc (idx, wi::address (el_sz));
       if (wi::lts_p (idx, 0))
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
@@ -6305,7 +6305,8 @@  search_for_addr_array (tree t, location_
 		      "array subscript is below array bounds");
 	  TREE_NO_WARNING (t) = 1;
 	}
-      else if (wi::gts_p (idx, addr_wide_int (up_bound) - low_bound + 1))
+      else if (wi::gts_p (idx, (wi::address (up_bound)
+				- wi::address (low_bound) + 1)))
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 	    {
@@ -8731,11 +8732,11 @@  range_fits_type_p (value_range_t *vr, un
 
   /* Then we can perform the conversion on both ends and compare
      the result for equality.  */
-  tem = wi::ext (vr->min, dest_precision, dest_sgn);
-  if (tem != vr->min)
+  tem = wi::ext (wi::extend (vr->min), dest_precision, dest_sgn);
+  if (tem != wi::extend (vr->min))
     return false;
-  tem = wi::ext (vr->max, dest_precision, dest_sgn);
-  if (tem != vr->max)
+  tem = wi::ext (wi::extend (vr->max), dest_precision, dest_sgn);
+  if (tem != wi::extend (vr->max))
     return false;
 
   return true;
@@ -9021,8 +9022,8 @@  simplify_conversion_using_ranges (gimple
 
   /* Simulate the conversion chain to check if the result is equal if
      the middle conversion is removed.  */
-  innermin = innervr->min;
-  innermax = innervr->max;
+  innermin = wi::extend (innervr->min);
+  innermax = wi::extend (innervr->max);
 
   inner_prec = TYPE_PRECISION (TREE_TYPE (innerop));
   middle_prec = TYPE_PRECISION (TREE_TYPE (middleop));
@@ -9482,7 +9483,8 @@  vrp_finalize (void)
 	    && (TREE_CODE (vr_value[i]->max) == INTEGER_CST))
 	  {
 	    if (vr_value[i]->type == VR_RANGE)
-	      set_range_info (name, vr_value[i]->min, vr_value[i]->max);
+	      set_range_info (name, wi::extend (vr_value[i]->min),
+			      wi::extend (vr_value[i]->max));
 	    else if (vr_value[i]->type == VR_ANTI_RANGE)
 	      {
 		/* VR_ANTI_RANGE ~[min, max] is encoded compactly as
@@ -9496,16 +9498,14 @@  vrp_finalize (void)
 		    && integer_zerop (vr_value[i]->min)
 		    && integer_zerop (vr_value[i]->max))
 		  {
-		    max_wide_int tmmwi
-		      = max_wide_int::from (wi::max_value (TYPE_PRECISION (TREE_TYPE (name)),
-							   UNSIGNED),
-					    UNSIGNED);
-		    set_range_info (name, 1, tmmwi);
+		    unsigned prec = TYPE_PRECISION (TREE_TYPE (name));
+		    set_range_info (name, 1,
+				    wi::mask <max_wide_int> (prec, false));
 		  }
 		else
 		  set_range_info (name,
-				  max_wide_int (vr_value[i]->max) + 1,
-				  max_wide_int (vr_value[i]->min) - 1);
+				  wi::extend (vr_value[i]->max) + 1,
+				  wi::extend (vr_value[i]->min) - 1);
 	      }
 	  }
       }
Index: gcc/tree.c
===================================================================
--- gcc/tree.c	2013-10-22 10:18:04.492966024 +0100
+++ gcc/tree.c	2013-10-22 10:18:20.400102295 +0100
@@ -4317,8 +4317,7 @@  build_simple_mem_ref_loc (location_t loc
 addr_wide_int
 mem_ref_offset (const_tree t)
 {
-  tree toff = TREE_OPERAND (t, 1);
-  return wi::sext (addr_wide_int (toff), TYPE_PRECISION (TREE_TYPE (toff)));
+  return addr_wide_int::from (TREE_OPERAND (t, 1), SIGNED);
 }
 
 /* Return an invariant ADDR_EXPR of type TYPE taking the address of BASE
@@ -6891,26 +6890,17 @@  type_num_arguments (const_tree type)
 int
 tree_int_cst_equal (const_tree t1, const_tree t2)
 {
-  unsigned int prec1, prec2;
   if (t1 == t2)
     return 1;
 
   if (t1 == 0 || t2 == 0)
     return 0;
 
-  if (TREE_CODE (t1) != INTEGER_CST
-      || TREE_CODE (t2) != INTEGER_CST)
-    return 0;
-
-  prec1 = TYPE_PRECISION (TREE_TYPE (t1));
-  prec2 = TYPE_PRECISION (TREE_TYPE (t2));
+  if (TREE_CODE (t1) == INTEGER_CST
+      && TREE_CODE (t2) == INTEGER_CST
+      && wi::extend (t1) == wi::extend (t2))
+    return 1;
 
-  if (prec1 == prec2)
-    return wi::eq_p (t1, t2);
-  else if (prec1 < prec2)
-    return wide_int::from (t1, prec2, TYPE_SIGN (TREE_TYPE (t1))) == t2;
-  else
-    return wide_int::from (t2, prec1, TYPE_SIGN (TREE_TYPE (t2))) == t1;
   return 0;
 }
 
@@ -7080,7 +7070,7 @@  simple_cst_equal (const_tree t1, const_t
   switch (code1)
     {
     case INTEGER_CST:
-      return wi::eq_p (t1, t2);
+      return wi::extend (t1) == wi::extend (t2);
 
     case REAL_CST:
       return REAL_VALUES_IDENTICAL (TREE_REAL_CST (t1), TREE_REAL_CST (t2));
Index: gcc/tree.h
===================================================================
--- gcc/tree.h	2013-10-22 10:18:04.494966041 +0100
+++ gcc/tree.h	2013-10-22 10:18:20.400102295 +0100
@@ -5239,7 +5239,7 @@  #define ANON_AGGRNAME_FORMAT "__anon_%d"
   template <>
   struct int_traits <const_tree>
   {
-    static const enum precision_type precision_type = FLEXIBLE_PRECISION;
+    static const enum precision_type precision_type = VAR_PRECISION;
     static const bool host_dependent_precision = false;
     static unsigned int get_precision (const_tree);
     static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
@@ -5248,6 +5248,34 @@  #define ANON_AGGRNAME_FORMAT "__anon_%d"
 
   template <>
   struct int_traits <tree> : public int_traits <const_tree> {};
+
+  template <int N>
+  class extended_tree
+  {
+  private:
+    const_tree m_t;
+
+  public:
+    extended_tree (const_tree);
+
+    unsigned int get_precision () const;
+    const HOST_WIDE_INT *get_val () const;
+    unsigned int get_len () const;
+  };
+
+  template <>
+  template <int N>
+  struct int_traits <extended_tree <N> >
+  {
+    static const enum precision_type precision_type = CONST_PRECISION;
+    static const bool host_dependent_precision = false;
+    static const unsigned int precision = N;
+  };
+
+  generic_wide_int <extended_tree <MAX_BITSIZE_MODE_ANY_INT> >
+  extend (const_tree);
+
+  generic_wide_int <extended_tree <ADDR_MAX_PRECISION> > address (const_tree);
 }
 
 inline unsigned int
@@ -5265,9 +5293,8 @@  wi::int_traits <const_tree>::decompose (
   const HOST_WIDE_INT *val = (const HOST_WIDE_INT *) &TREE_INT_CST_ELT (x, 0);
   unsigned int max_len = ((precision + HOST_BITS_PER_WIDE_INT - 1)
 			  / HOST_BITS_PER_WIDE_INT);
-  unsigned int xprecision = get_precision (x);
 
-  gcc_assert (precision >= xprecision);
+  gcc_checking_assert (precision == get_precision (x));
 
   /* If an unsigned constant occupies a whole number of HWIs and has the
      upper bit set, its representation includes an extra zero HWI,
@@ -5282,6 +5309,46 @@  wi::int_traits <const_tree>::decompose (
   return wi::storage_ref (val, len, precision);
 }
 
+inline generic_wide_int <wi::extended_tree <MAX_BITSIZE_MODE_ANY_INT> >
+wi::extend (const_tree t)
+{
+  return t;
+}
+
+inline generic_wide_int <wi::extended_tree <ADDR_MAX_PRECISION> >
+wi::address (const_tree t)
+{
+  return t;
+}
+
+template <int N>
+inline wi::extended_tree <N>::extended_tree (const_tree t)
+  : m_t (t)
+{
+  gcc_checking_assert (TYPE_PRECISION (TREE_TYPE (t)) <= N);
+}
+
+template <int N>
+inline unsigned int
+wi::extended_tree <N>::get_precision () const
+{
+  return N;
+}
+
+template <int N>
+inline const HOST_WIDE_INT *
+wi::extended_tree <N>::get_val () const
+{
+  return &TREE_INT_CST_ELT (m_t, 0);
+}
+
+template <int N>
+inline unsigned int
+wi::extended_tree <N>::get_len () const
+{
+  return TREE_INT_CST_NUNITS (m_t);
+}
+
 namespace wi
 {
   template <typename T>
Index: gcc/varasm.c
===================================================================
--- gcc/varasm.c	2013-10-22 10:18:04.495966050 +0100
+++ gcc/varasm.c	2013-10-22 10:18:20.401102303 +0100
@@ -4812,10 +4812,10 @@  array_size_for_constructor (tree val)
 
   /* Compute the total number of array elements.  */
   tmp = TYPE_MIN_VALUE (TYPE_DOMAIN (TREE_TYPE (val)));
-  i = addr_wide_int (max_index) - tmp + 1;
+  i = wi::address (max_index) - wi::address (tmp) + 1;
 
   /* Multiply by the array element unit size to find number of bytes.  */
-  i *= addr_wide_int (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (val))));
+  i *= wi::address (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (val))));
 
   gcc_assert (wi::fits_uhwi_p (i));
   return i.to_uhwi ();
@@ -4899,8 +4899,10 @@  output_constructor_regular_field (oc_loc
 	 but we are using an unsigned sizetype.  */
       unsigned prec = TYPE_PRECISION (sizetype);
       addr_wide_int idx 
-	= wi::sext (addr_wide_int (local->index) - local->min_index, prec);
-      fieldpos = (idx * TYPE_SIZE_UNIT (TREE_TYPE (local->val))).to_shwi ();
+	= wi::sext (wi::address (local->index)
+		    - wi::address (local->min_index), prec);
+      fieldpos = (idx * wi::address (TYPE_SIZE_UNIT (TREE_TYPE (local->val))))
+	.to_shwi ();
     }
   else if (local->field != NULL_TREE)
     fieldpos = int_byte_position (local->field);
Index: gcc/wide-int.h
===================================================================
--- gcc/wide-int.h	2013-10-22 10:17:29.995670564 +0100
+++ gcc/wide-int.h	2013-10-22 10:18:20.402102312 +0100
@@ -251,6 +251,46 @@  #define ADDR_MAX_BITSIZE 64
 #define ADDR_MAX_PRECISION \
   ((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) & ~(HOST_BITS_PER_WIDE_INT - 1))
 
+/* The type of result produced by a binary operation on types T1 and T2.
+   Defined purely for brevity.  */
+#define WI_BINARY_RESULT(T1, T2) \
+  typename wi::binary_traits <T1, T2>::result_type
+
+/* The type of result produced by a unary operation on type T.  */
+#define WI_UNARY_RESULT(T) \
+  typename wi::unary_traits <T>::result_type
+
+/* Define a variable RESULT to hold the result of a binary operation on
+   X and Y, which have types T1 and T2 respectively.  Define VAL to
+   point to the blocks of RESULT.  Once the user of the macro has
+   filled in VAL, it should call RESULT.set_len to set the number
+   of initialized blocks.  */
+#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
+  WI_BINARY_RESULT (T1, T2) RESULT = \
+    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
+  HOST_WIDE_INT *VAL = RESULT.write_val ()
+
+/* Similar for the result of a unary operation on X, which has type T.  */
+#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
+  WI_UNARY_RESULT (T) RESULT = \
+    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
+  HOST_WIDE_INT *VAL = RESULT.write_val ()
+
+template <typename T> struct generic_wide_int;
+template <int N> struct fixed_wide_int_storage;
+struct wide_int_storage;
+
+/* An N-bit integer.  Until we can use typedef templates, use this instead.  */
+#define FIXED_WIDE_INT(N) \
+  generic_wide_int < fixed_wide_int_storage <N> >
+
+typedef generic_wide_int <wide_int_storage> wide_int;
+typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) addr_wide_int;
+typedef FIXED_WIDE_INT (MAX_BITSIZE_MODE_ANY_INT) max_wide_int;
+
+struct wide_int_ref_storage;
+typedef generic_wide_int <wide_int_ref_storage> wide_int_ref;
+
 namespace wi
 {
   /* Classifies an integer based on its precision.  */
@@ -303,40 +343,70 @@  #define ADDR_MAX_PRECISION \
      a binary operation on two values of type T.  */
   template <typename T>
   struct unary_traits : public binary_traits <T, T> {};
-}
 
-/* The type of result produced by a binary operation on types T1 and T2.
-   Defined purely for brevity.  */
-#define WI_BINARY_RESULT(T1, T2) \
-  typename wi::binary_traits <T1, T2>::result_type
+  /* Specify the result type for each supported combination of binary
+     inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
+     mixed, in order to give stronger type checking.  When both inputs
+     are CONST_PRECISION, they must have the same precision.  */
+  template <>
+  template <typename T1, typename T2>
+  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
+  {
+    typedef max_wide_int result_type;
+  };
 
-/* The type of result produced by a unary operation on type T.  */
-#define WI_UNARY_RESULT(T) \
-  typename wi::unary_traits <T>::result_type
+  template <>
+  template <typename T1, typename T2>
+  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
+  {
+    typedef wide_int result_type;
+  };
 
-/* Define a variable RESULT to hold the result of a binary operation on
-   X and Y, which have types T1 and T2 respectively.  Define VAR to
-   point to the blocks of RESULT.  Once the user of the macro has
-   filled in VAR, it should call RESULT.set_len to set the number
-   of initialized blocks.  */
-#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
-  WI_BINARY_RESULT (T1, T2) RESULT = \
-    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
-  HOST_WIDE_INT *VAL = RESULT.write_val ()
+  template <>
+  template <typename T1, typename T2>
+  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
+  {
+    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
+       so as not to confuse gengtype.  */
+    typedef generic_wide_int < fixed_wide_int_storage
+			       <int_traits <T2>::precision> > result_type;
+  };
 
-/* Similar for the result of a unary operation on X, which has type T.  */
-#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
-  WI_UNARY_RESULT (T) RESULT = \
-    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
-  HOST_WIDE_INT *VAL = RESULT.write_val ()
+  template <>
+  template <typename T1, typename T2>
+  struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
+  {
+    typedef wide_int result_type;
+  };
 
-template <typename T> struct generic_wide_int;
+  template <>
+  template <typename T1, typename T2>
+  struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
+  {
+    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
+       so as not to confuse gengtype.  */
+    typedef generic_wide_int < fixed_wide_int_storage
+			       <int_traits <T1>::precision> > result_type;
+  };
 
-struct wide_int_storage;
-typedef generic_wide_int <wide_int_storage> wide_int;
+  template <>
+  template <typename T1, typename T2>
+  struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
+  {
+    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
+       so as not to confuse gengtype.  */
+    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
+    typedef generic_wide_int < fixed_wide_int_storage
+			       <int_traits <T1>::precision> > result_type;
+  };
 
-struct wide_int_ref_storage;
-typedef generic_wide_int <wide_int_ref_storage> wide_int_ref;
+  template <>
+  template <typename T1, typename T2>
+  struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
+  {
+    typedef wide_int result_type;
+  };
+}
 
 /* Public functions for querying and operating on integers.  */
 namespace wi
@@ -572,38 +642,39 @@  #define BINARY_PREDICATE(OP, F) \
   bool OP (const T &c) const { return wi::F (*this, c); }
 
 #define UNARY_OPERATOR(OP, F) \
-  generic_wide_int OP () const { return wi::F (*this); }
+  WI_UNARY_RESULT (generic_wide_int) OP () const { return wi::F (*this); }
 
 #define BINARY_OPERATOR(OP, F) \
   template <typename T> \
-  generic_wide_int OP (const T &c) const { return wi::F (*this, c); }
+    WI_BINARY_RESULT (generic_wide_int, T) \
+    OP (const T &c) const { return wi::F (*this, c); }
 
 #define ASSIGNMENT_OPERATOR(OP, F) \
   template <typename T> \
-  generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }
+    generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }
 
 #define INCDEC_OPERATOR(OP, DELTA) \
   generic_wide_int &OP () { *this += DELTA; return *this; }
 
-  UNARY_OPERATOR (operator ~, bit_not) \
-  UNARY_OPERATOR (operator -, neg) \
-  BINARY_PREDICATE (operator ==, eq_p) \
-  BINARY_PREDICATE (operator !=, ne_p) \
-  BINARY_OPERATOR (operator &, bit_and) \
-  BINARY_OPERATOR (and_not, bit_and_not) \
-  BINARY_OPERATOR (operator |, bit_or) \
-  BINARY_OPERATOR (or_not, bit_or_not) \
-  BINARY_OPERATOR (operator ^, bit_xor) \
-  BINARY_OPERATOR (operator +, add) \
-  BINARY_OPERATOR (operator -, sub) \
-  BINARY_OPERATOR (operator *, mul) \
-  ASSIGNMENT_OPERATOR (operator &=, bit_and) \
-  ASSIGNMENT_OPERATOR (operator |=, bit_or) \
-  ASSIGNMENT_OPERATOR (operator ^=, bit_xor) \
-  ASSIGNMENT_OPERATOR (operator +=, add) \
-  ASSIGNMENT_OPERATOR (operator -=, sub) \
-  ASSIGNMENT_OPERATOR (operator *=, mul) \
-  INCDEC_OPERATOR (operator ++, 1) \
+  UNARY_OPERATOR (operator ~, bit_not)
+  UNARY_OPERATOR (operator -, neg)
+  BINARY_PREDICATE (operator ==, eq_p)
+  BINARY_PREDICATE (operator !=, ne_p)
+  BINARY_OPERATOR (operator &, bit_and)
+  BINARY_OPERATOR (and_not, bit_and_not)
+  BINARY_OPERATOR (operator |, bit_or)
+  BINARY_OPERATOR (or_not, bit_or_not)
+  BINARY_OPERATOR (operator ^, bit_xor)
+  BINARY_OPERATOR (operator +, add)
+  BINARY_OPERATOR (operator -, sub)
+  BINARY_OPERATOR (operator *, mul)
+  ASSIGNMENT_OPERATOR (operator &=, bit_and)
+  ASSIGNMENT_OPERATOR (operator |=, bit_or)
+  ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
+  ASSIGNMENT_OPERATOR (operator +=, add)
+  ASSIGNMENT_OPERATOR (operator -=, sub)
+  ASSIGNMENT_OPERATOR (operator *=, mul)
+  INCDEC_OPERATOR (operator ++, 1)
   INCDEC_OPERATOR (operator --, -1)
 
 #undef BINARY_PREDICATE
@@ -848,6 +919,19 @@  class GTY(()) wide_int_storage
   wide_int bswap () const;
 };
 
+namespace wi
+{
+  template <>
+  struct int_traits <wide_int_storage>
+  {
+    static const enum precision_type precision_type = VAR_PRECISION;
+    /* Guaranteed by a static assert in the wide_int_storage constructor.  */
+    static const bool host_dependent_precision = false;
+    template <typename T1, typename T2>
+    static wide_int get_binary_result (const T1 &, const T2 &);
+  };
+}
+
 inline wide_int_storage::wide_int_storage () {}
 
 /* Initialize the storage from integer X, in its natural precision.
@@ -933,19 +1017,6 @@  wide_int_storage::create (unsigned int p
   return x;
 }
 
-namespace wi
-{
-  template <>
-  struct int_traits <wide_int_storage>
-  {
-    static const enum precision_type precision_type = VAR_PRECISION;
-    /* Guaranteed by a static assert in the wide_int_storage constructor.  */
-    static const bool host_dependent_precision = false;
-    template <typename T1, typename T2>
-    static wide_int get_binary_result (const T1 &, const T2 &);
-  };
-}
-
 template <typename T1, typename T2>
 inline wide_int
 wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
@@ -959,10 +1030,6 @@  wi::int_traits <wide_int_storage>::get_b
     return wide_int::create (wi::get_precision (x));
 }
 
-/* An N-bit integer.  Until we can use typedef templates, use this instead.  */
-#define FIXED_WIDE_INT(N) \
-  generic_wide_int < fixed_wide_int_storage <N> >
-
 /* The storage used by FIXED_WIDE_INT (N).  */
 template <int N>
 class GTY(()) fixed_wide_int_storage
@@ -988,8 +1055,19 @@  class GTY(()) fixed_wide_int_storage
 					bool = true);
 };
 
-typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) addr_wide_int;
-typedef FIXED_WIDE_INT (MAX_BITSIZE_MODE_ANY_INT) max_wide_int;
+namespace wi
+{
+  template <>
+  template <int N>
+  struct int_traits < fixed_wide_int_storage <N> >
+  {
+    static const enum precision_type precision_type = CONST_PRECISION;
+    static const bool host_dependent_precision = false;
+    static const unsigned int precision = N;
+    template <typename T1, typename T2>
+    static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
+  };
+}
 
 template <int N>
 inline fixed_wide_int_storage <N>::fixed_wide_int_storage () {}
@@ -1071,20 +1149,6 @@  fixed_wide_int_storage <N>::from_array (
   return result;
 }
 
-namespace wi
-{
-  template <>
-  template <int N>
-  struct int_traits < fixed_wide_int_storage <N> >
-  {
-    static const enum precision_type precision_type = CONST_PRECISION;
-    static const bool host_dependent_precision = false;
-    static const unsigned int precision = N;
-    template <typename T1, typename T2>
-    static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
-  };
-}
-
 template <int N>
 template <typename T1, typename T2>
 inline FIXED_WIDE_INT (N)
@@ -1094,72 +1158,6 @@  get_binary_result (const T1 &, const T2
   return FIXED_WIDE_INT (N) ();
 }
 
-/* Specify the result type for each supported combination of binary
-   inputs.  Note that CONST_PRECISION and VAR_PRECISION cannot be
-   mixed, in order to give stronger type checking.  When both inputs
-   are CONST_PRECISION, they must have the same precision.  */
-namespace wi
-{
-  template <>
-  template <typename T1, typename T2>
-  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
-  {
-    typedef max_wide_int result_type;
-  };
-
-  template <>
-  template <typename T1, typename T2>
-  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
-  {
-    typedef wide_int result_type;
-  };
-
-  template <>
-  template <typename T1, typename T2>
-  struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
-  {
-    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
-       so as not to confuse gengtype.  */
-    typedef generic_wide_int < fixed_wide_int_storage
-			       <int_traits <T2>::precision> > result_type;
-  };
-
-  template <>
-  template <typename T1, typename T2>
-  struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
-  {
-    typedef wide_int result_type;
-  };
-
-  template <>
-  template <typename T1, typename T2>
-  struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
-  {
-    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
-       so as not to confuse gengtype.  */
-    typedef generic_wide_int < fixed_wide_int_storage
-			       <int_traits <T1>::precision> > result_type;
-  };
-
-  template <>
-  template <typename T1, typename T2>
-  struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
-  {
-    /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
-       so as not to confuse gengtype.  */
-    STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
-    typedef generic_wide_int < fixed_wide_int_storage
-			       <int_traits <T1>::precision> > result_type;
-  };
-
-  template <>
-  template <typename T1, typename T2>
-  struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
-  {
-    typedef wide_int result_type;
-  };
-}
-
 namespace wi
 {
   /* Implementation of int_traits for primitive integer types like "int".  */
@@ -1288,9 +1286,7 @@  wi::two (unsigned int precision)
   template <>
   struct int_traits <wi::hwi_with_prec>
   {
-    /* Since we have a sign, we can extend or truncate the integer to
-       other precisions where necessary.  */
-    static const enum precision_type precision_type = FLEXIBLE_PRECISION;
+    static const enum precision_type precision_type = VAR_PRECISION;
     /* hwi_with_prec has an explicitly-given precision, rather than the
        precision of HOST_WIDE_INT.  */
     static const bool host_dependent_precision = false;