Patchwork patch to fix constant math - 5th patch, rtl

Submitter Kenneth Zadeck
Date April 16, 2013, 8:17 p.m.
Message ID <516DB1F3.8090908@naturalbridge.com>
Permalink /patch/237079/
State New

Comments

Kenneth Zadeck - April 16, 2013, 8:17 p.m.
Here is a refreshed version of the rtl changes for wide-int.  The only
change from the previous versions is that the wide-int binary operations
have been simplified to use the new wide-int binary templates.

kenny
2013-04-16  Kenneth Zadeck <zadeck@naturalbridge.com>

	* alias.c (rtx_equal_for_memref_p): Fixed comment.
	* builtins.c (c_getstr, c_readstr, expand_builtin_signbit):
	Made to work with any size int.
	* combine.c (try_combine, subst): Changed to support any 
	size integer.
	* coretypes.h (hwivec_def, hwivec, const_hwivec): New.
	* cse.c (hash_rtx_cb): Added CONST_WIDE_INT case and
	modified DOUBLE_INT case.
	* cselib.c (rtx_equal_for_cselib_1): Converted cases to 
	CASE_CONST_UNIQUE.
	(cselib_hash_rtx): Added CONST_WIDE_INT case.
	* defaults.h (TARGET_SUPPORTS_WIDE_INT): New.
	* doc/rtl.texi (CONST_DOUBLE, CONST_WIDE_INT): Updated.
	* doc/tm.texi (TARGET_SUPPORTS_WIDE_INT): New.	
	* doc/tm.texi.in (TARGET_SUPPORTS_WIDE_INT): New.
	* dojump.c (prefer_and_bit_test): Use wide int api.
	* dwarf2out.c (get_full_len): New function.
	(dw_val_equal_p, size_of_loc_descr,
	output_loc_operands, print_die, attr_checksum, same_dw_val_p,
	size_of_die, value_format, output_die, mem_loc_descriptor,
	loc_descriptor, extract_int, add_const_value_attribute,
	hash_loc_operands, compare_loc_operands): Add support for wide-ints.
	(add_AT_wide): New function.
	* dwarf2out.h (enum dw_val_class): Added dw_val_class_wide_int.
	* emit-rtl.c (const_wide_int_htab): Add marking.
	(const_wide_int_htab_hash, const_wide_int_htab_eq,
	lookup_const_wide_int, immed_wide_int_const): New functions.
	(const_double_htab_hash, const_double_htab_eq,
	rtx_to_double_int, immed_double_const): Conditionally 
	changed CONST_DOUBLE behavior.
 	(immed_double_const, init_emit_once): Changed to support wide-int.
	* explow.c (plus_constant): Now uses wide-int api.
	* expmed.c (mask_rtx, lshift_value): Now uses wide-int.
	(expand_mult, expand_smod_pow2): Made to work with any size int.
	(make_tree): Added CONST_WIDE_INT case.
	* expr.c (convert_modes): Added support for any size int.
	(emit_group_load_1): Added todo for place that still does not
	allow large ints.
	(store_expr, expand_constructor): Fixed comments.
	(expand_expr_real_2, expand_expr_real_1,
	reduce_to_bit_field_precision, const_vector_from_tree):
	Converted to use wide-int api.
	* final.c (output_addr_const): Added CONST_WIDE_INT case.
	* genemit.c (gen_exp): Added CONST_WIDE_INT case.
	* gengenrtl.c (excluded_rtx): Added CONST_WIDE_INT case.
	* gengtype.c (wide-int): New type.
	* genpreds.c (write_one_predicate_function): Fixed comment.
	(add_constraint): Added CONST_WIDE_INT test.
	(write_tm_constrs_h): Do not emit hval or lval if target
	supports wide integers.
	* gensupport.c (std_preds): Added const_wide_int_operand and
	const_scalar_int_operand.
	* optabs.c (expand_subword_shift, expand_doubleword_shift,
	expand_absneg_bit, expand_copysign_absneg,
	expand_copysign_bit): Made to work with any size int.
	* postreload.c (reload_cse_simplify_set):  Now uses wide-int api.
	* print-rtl.c (print_rtx): Added CONST_WIDE_INT case.
	* read-rtl.c (validate_const_wide_int): New function.
	(read_rtx_code): Added CONST_WIDE_INT case.
	* recog.c (const_scalar_int_operand, const_double_operand):
	New versions if target supports wide integers.
	(const_wide_int_operand): New function.
	* rtl.c (DEF_RTL_EXPR): Added CONST_WIDE_INT case.
	(rtx_size): Ditto.
	(rtx_alloc_stat, hwivec_output_hex, hwivec_check_failed_bounds):
	New functions.
	(iterative_hash_rtx): Added CONST_WIDE_INT case.
	* rtl.def (CONST_WIDE_INT): New.
	* rtl.h (hwivec_def): New function.
	(HWI_GET_NUM_ELEM, HWI_PUT_NUM_ELEM, CONST_WIDE_INT_P,
	CONST_SCALAR_INT_P, XHWIVEC_ELT, HWIVEC_CHECK, CONST_WIDE_INT_VEC,
	CONST_WIDE_INT_NUNITS, CONST_WIDE_INT_ELT, rtx_alloc_v): New macros.
	(chain_next): Added hwiv case.
	(CASE_CONST_SCALAR_INT, CONST_INT, CONST_WIDE_INT):  Added new
	defs if target supports wide ints.
	* rtlanal.c (commutative_operand_precedence, split_double):
	Added CONST_WIDE_INT case.
	* sched-vis.c (print_value): Added CONST_WIDE_INT case and
	modified DOUBLE_INT case.
	* sel-sched-ir.c (lhs_and_rhs_separable_p): Fixed comment.
	* simplify-rtx.c (mode_signbit_p,
	simplify_const_unary_operation, simplify_binary_operation_1,
	simplify_const_binary_operation,
	simplify_const_relational_operation, simplify_immed_subreg):
	Made to work with any size int.
	* tree-ssa-address.c (addr_for_mem_ref): Changes to use
	wide-int rather than double-int.
	* tree.c (wide_int_to_tree): New function.
	* var-tracking.c (loc_cmp): Added CONST_WIDE_INT case.
	* varasm.c (const_rtx_hash_1): Added CONST_WIDE_INT case.
Richard Guenther - April 24, 2013, 12:09 p.m.
On Tue, Apr 16, 2013 at 10:17 PM, Kenneth Zadeck
<zadeck@naturalbridge.com> wrote:
> Here is a refreshed version of the rtl changes for wide-int.   the only
> change from the previous versions is that the wide-int binary operations
> have been simplified to use the new wide-int binary templates.

Looking for from_rtx calls (to see where we get the mode/precision from) I
see for example

-         o = rtx_to_double_int (outer);
-         i = rtx_to_double_int (inner);
-
-         m = double_int::mask (width);
-         i &= m;
-         m = m.llshift (offset, HOST_BITS_PER_DOUBLE_INT);
-         i = i.llshift (offset, HOST_BITS_PER_DOUBLE_INT);
-         o = o.and_not (m) | i;
-
+
+         o = (wide_int::from_rtx (outer, GET_MODE (SET_DEST (temp)))
+              .insert (wide_int::from_rtx (inner, GET_MODE (dest)),
+                       offset, width));

where I'd rather have the original code preserved as much as possible
and not introduce a new primitive wide_int::insert for this.  The conversion
and review process will be much more error-prone if we do multiple
things at once (and it might keep the wide_int initial interface leaner).

Btw, the wide_int::insert implementation doesn't assert anything about
the inputs precision.  Instead it reads

+  if (start + width >= precision)
+    width = precision - start;
+
+  mask = shifted_mask (start, width, false, precision);
+  tmp = op0.lshift (start, 0, precision, NONE);
+  result = tmp & mask;
+
+  tmp = and_not (mask);
+  result = result | tmp;

which eventually ends up performing everything in target precision.  So
we don't really care about the mode or precision of inner.

Then I see

diff --git a/gcc/dwarf2out.h b/gcc/dwarf2out.h
index ad03a34..531a7c1 100644
@@ -180,6 +182,7 @@ typedef struct GTY(()) dw_val_struct {
       HOST_WIDE_INT GTY ((default)) val_int;
       unsigned HOST_WIDE_INT GTY ((tag
("dw_val_class_unsigned_const"))) val_unsigned;
       double_int GTY ((tag ("dw_val_class_const_double"))) val_double;
+      wide_int GTY ((tag ("dw_val_class_wide_int"))) val_wide;
       dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
       struct dw_val_die_union
        {

ick.  That makes dw_val_struct really large ... (and thus dw_attr_struct).
You need to make this a pointer to a wide_int at least.

-/* Return a CONST_INT or CONST_DOUBLE corresponding to target reading
+/* Return a constant integer corresponding to target reading
    GET_MODE_BITSIZE (MODE) bits from string constant STR.  */

 static rtx
 c_readstr (const char *str, enum machine_mode mode)
 {
-  HOST_WIDE_INT c[2];
+  wide_int c;
...
-  return immed_double_const (c[0], c[1], mode);
+
+  c = wide_int::from_array (tmp, len, mode);
+  return immed_wide_int_const (c, mode);
 }

err - what's this good for?  It doesn't look necessary as part of the initial
wide-int conversion at least.  (please audit your patches for such cases)

@@ -4994,12 +4999,12 @@ expand_builtin_signbit (tree exp, rtx target)

   if (bitpos < GET_MODE_BITSIZE (rmode))
     {
-      double_int mask = double_int_zero.set_bit (bitpos);
+      wide_int mask = wide_int::set_bit_in_zero (bitpos, rmode);

       if (GET_MODE_SIZE (imode) > GET_MODE_SIZE (rmode))
        temp = gen_lowpart (rmode, temp);
       temp = expand_binop (rmode, and_optab, temp,
-                          immed_double_int_const (mask, rmode),
+                          immed_wide_int_const (mask, rmode),
                           NULL_RTX, 1, OPTAB_LIB_WIDEN);
     }
   else

Likewise.  I suppose you remove immed_double_int_const but I see no
reason to do that.  It just makes your patch larger than necessary.

[what was the reason again to have TARGET_SUPPORTS_WIDE_INT at all?
It's supposed to be a no-op conversion, right?]

@@ -95,38 +95,9 @@ plus_constant (enum machine_mode mode, rtx x,
HOST_WIDE_INT c)

   switch (code)
     {
-    case CONST_INT:
-      if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
-       {
-         double_int di_x = double_int::from_shwi (INTVAL (x));
-         double_int di_c = double_int::from_shwi (c);
-
-         bool overflow;
-         double_int v = di_x.add_with_sign (di_c, false, &overflow);
-         if (overflow)
-           gcc_unreachable ();
-
-         return immed_double_int_const (v, VOIDmode);
-       }
-
-      return GEN_INT (INTVAL (x) + c);
-
-    case CONST_DOUBLE:
-      {
-       double_int di_x = double_int::from_pair (CONST_DOUBLE_HIGH (x),
-                                                CONST_DOUBLE_LOW (x));
-       double_int di_c = double_int::from_shwi (c);
-
-       bool overflow;
-       double_int v = di_x.add_with_sign (di_c, false, &overflow);
-       if (overflow)
-         /* Sorry, we have no way to represent overflows this wide.
-            To fix, add constant support wider than CONST_DOUBLE.  */
-         gcc_assert (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT);
-
-       return immed_double_int_const (v, VOIDmode);
-      }
-
+    CASE_CONST_SCALAR_INT:
+      return immed_wide_int_const (wide_int::from_rtx (x, mode)
+                                  + wide_int::from_shwi (c, mode), mode);

you said you didn't want to convert CONST_INT to wide-int.  But the above
is certainly a lot less efficient than before - given your change to support
operator+ RTX even less efficient than possible.  The above also shows
three 'mode' arguments while one to immed_wide_int_const would be
enough (given it would truncate the arbitrary precision result from the
addition to modes precision).

That is, I see no reason to remove the CONST_INT case or the CONST_DOUBLE
case.  [why is the above not in any way guarded with TARGET_SUPPORTS_WIDE_INT?]

What happens with overflows in the wide-int case?  The double-int case
asserted that there is no overflow across 2 * hwi precision, the wide-int
case does not.  Still the wide-int case now truncates to 'mode' precision
while the CONST_DOUBLE case did not.

That's a change in behavior, no?  Effectively the code for CONST_INT
and CONST_DOUBLE did "arbitrary" precision arithmetic (up to the
precision they can encode) which wide-int changes.

Can we in such cases please do a preparatory patch and change the
CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
mode precision first?  What does wide-int do with VOIDmode mode inputs?
It seems to ICE on them for from_rtx and use garbage (0) for from_shwi.  Ugh.
Btw, plus_constant asserts that mode is either VOIDmode (I suppose semantically
do "arbitrary precision") or the same mode as the mode of x (I suppose
semantically do "mode precision").  Neither the current nor your implementation
seems to do something consistent here :/

So please, for those cases (I suppose there are many more, eventually
one of the reasons why you think that requiring a mode for all CONST_DOUBLEs
is impossible), can we massage the current code to 1) document what is
desired, 2) follow that specification with regard to computation
mode / precision
and result mode / precision?

Thanks,
Richard.

> kenny
>
Richard Sandiford - April 24, 2013, 12:44 p.m.
Richard Biener <richard.guenther@gmail.com> writes:
> Can we in such cases please do a preparatory patch and change the
> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
> mode precision first?

I'm not sure what you mean here.  CONST_INT HWIs are already sign-extended
from mode precision to HWI precision.  The 8-bit value 0b10000000 must be
represented as (const_int -128); nothing else is allowed.  E.g. (const_int 128)
is not a valid QImode value on BITS_PER_UNIT==8 targets.

> What does wide-int do with VOIDmode mode inputs?
> It seems to ICE on them for from_rtx and use garbage (0) for from_shwi.  Ugh.

ICEing is right.  As mentioned before, every rtx constant has a mode,
whether it's stored in the rtx or not.  Callers must keep track of
what that mode is.

> Btw, plus_constant asserts that mode is either VOIDmode (I suppose
> semantically do "arbitrary precision")

No, not arbitrary precision.  It's always the precision specified
by the "mode" parameter.  The assert is:

  gcc_assert (GET_MODE (x) == VOIDmode || GET_MODE (x) == mode);

This is because GET_MODE always returns VOIDmode for CONST_INT and
CONST_DOUBLE integers.  The mode parameter is needed to tell us what
precision those CONST_INTs and CONST_DOUBLEs actually have, because
the rtx itself doesn't tell us.  The mode parameter serves no purpose
beyond that.

So if the rtx does specify a mode (everything except CONST_INT and
CONST_DOUBLE), the assert is making sure that the caller has correctly
tracked the rtx's mode and provided the right mode parameter.  The caller
must do that for all rtxes, it's just that we can't assert for it in the
CONST_INT and CONST_DOUBLE case, because the rtx has no mode to check
against.  If CONST_INT and CONST_DOUBLE did have a mode to check against,
there would be no need for the mode parameter at all.  Likewise there
would be no need for wide_int::from_rtx to have a mode parameter.

Richard
Richard Guenther - April 24, 2013, 1:36 p.m.
On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> Can we in such cases please do a preparatory patch and change the
>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>> mode precision first?
>
> I'm not sure what you mean here.  CONST_INT HWIs are already sign-extended
> from mode precision to HWI precision.  The 8-bit value 0b10000000 must be
> represented as (const_int -128); nothing else is allowed.  E.g. (const_int 128)
> is not a valid QImode value on BITS_PER_UNIT==8 targets.

Yes, that's what I understand.  But consider you get a CONST_INT that is
_not_ a valid QImode value.  Current code simply trusts that it is, given
the context from ...

>> What does wide-int do with VOIDmode mode inputs?
>> It seems to ICE on them for from_rtx and use garbage (0) for from_shwi.  Ugh.
>
> ICEing is right.  As mentioned before, every rtx constant has a mode,
> whether it's stored in the rtx or not.  Callers must keep track of
> what that mode is.

... here.  So I see that both CONST_INT and CONST_DOUBLE get their
mode (or in wide-int speak precision) from the context.

Effectively a CONST_INT or CONST_DOUBLE is valid in multiple
modes and thus "arbitrary precision" with a limit set by the limit
of the encoding.

>> Btw, plus_constant asserts that mode is either VOIDmode (I suppose
>> semantically do "arbitrary precision")
>
> No, not arbitrary precision.  It's always the precision specified
> by the "mode" parameter.  The assert is:
>
>   gcc_assert (GET_MODE (x) == VOIDmode || GET_MODE (x) == mode);
>
> This is because GET_MODE always returns VOIDmode for CONST_INT and
> CONST_DOUBLE integers.  The mode parameter is needed to tell us what
> precision those CONST_INTs and CONST_DOUBLEs actually have, because
> the rtx itself doesn't tell us.  The mode parameter serves no purpose
> beyond that.

That doesn't make sense.  The only thing we could then do with the mode
is assert that the CONST_INT/CONST_DOUBLE is valid for mode.

mode does not constrain the result in any way, thus it happily produces
a CONST_INT (128) from QImode CONST_INT (127) + 1.  So, does the
caller of plus_constant have to verify the result is actually valid in the
mode it expects?  And what should it do if the result is not "valid"?

Given that we do not verify the input values and do not care for mode for
producing the output value, the current behavior of plus_constant is to
compute in arbitrary precision.

wide-int changes the above to produce a different result (CONST_INT (-128)).
I'd rather not have this patch-set introduce such subtle differences.

I'd like to see the following:

1) strip away 'precision' from wide-int, make the sign/zero-extend operations
    that are now implicitly done explicit in the same way as done currently,
    thus, ...
2) do a more-or-less 1:1 conversion of existing double-int code.  double-int
    code already has all sign/zero-extensions that are required for correctness.

and after merging in wide-int

3) see what common code can be factored out (wide_int::insert and friends)
4) if it seems fit, introduce a wide_int_with_precision class that provides
    a wrapper around wide_int and carries out operations in a fixed precision,
    doing sign-/zero-extends after each operation (I suppose not much code
    will be simplified by that)

before merging, converting all targets is necessary - this isn't a part of
the infrastructure that can stand a partial conversion.  I suspect that
conversion of all targets is much easier after 2), especially if most
of the double-int interfaces are not removed but their implementation
changed to work on wide-ints (just as I mentioned for the immed_double_int_const
case, but likely not restricted to that - CONST_DOUBLE_LOW/HIGH
can be converted to code that asserts the encoding is sufficiently small
for example).  Thus,

5) piecewise remove legacy code dealing with CONST_DOUBLE

Btw, on 64bit hosts the rtx_def struct has 32bits padding before
the rtunion.  I think 32bit hosts are legacy now, so using that 32bits
padding by storing 'len' there will make space-efficient conversion
of CONST_INT possible.  Well, and avoids wasting another 32bits
of padding for CONST_WIDE on 64bit hosts.

> So if the rtx does specify a mode (everything except CONST_INT and
> CONST_DOUBLE), the assert is making sure that the caller has correctly
> tracked the rtx's mode and provided the right mode parameter.  The caller
> must do that for all rtxes, it's just that we can't assert for it in the
> CONST_INT and CONST_DOUBLE case, because the rtx has no mode to check
> against.  If CONST_INT and CONST_DOUBLE did have a mode to check against,
> there would be no need for the mode parameter at all.  Likewise there
> would be no need for wide_int::from_rtx to have a mode parameter.

constants do not have an intrinsic mode or precision.  They either do or
do not fit into a specific mode or precision.  If an operation is to be carried
out in a specific mode or precision I expect the result of the operation
fits into that specific mode or precision (which oddly isn't what it does now,
apparently).

Richard.

> Richard
Richard Sandiford - April 24, 2013, 2:03 p.m.
Richard Biener <richard.guenther@gmail.com> writes:
> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>> Can we in such cases please do a preparatory patch and change the
>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>> mode precision first?
>>
>> I'm not sure what you mean here.  CONST_INT HWIs are already sign-extended
>> from mode precision to HWI precision.  The 8-bit value 0b10000000 must be
>> represented as (const_int -128); nothing else is allowed.
>> E.g. (const_int 128)
>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
>
> Yes, that's what I understand.  But consider you get a CONST_INT that is
> _not_ a valid QImode value.

But that's invalid :-)  It is not valid to call:

      plus_constant (QImode, GEN_INT (128), 1)

The point is that, even though it's invalid, we can't assert for it.

plus_constant is not for arbitrary precision arithmetic.  It's for
arithmetic in a given non-VOIDmode mode.

> Effectively a CONST_INT or CONST_DOUBLE is valid in multiple
> modes and thus "arbitrary precision" with a limit set by the limit
> of the encoding.

The same CONST_INT and CONST_DOUBLE can be shared for several constants
in different modes, yes, which is presumably what motivated making them
VOIDmode in the first place.  E.g. zero is const0_rtx for every integer
mode.  But in any given context, including plus_constant, the CONST_INT
or CONST_DOUBLE has a specific mode.

>>> Btw, plus_constant asserts that mode is either VOIDmode (I suppose
>>> semantically do "arbitrary precision")
>>
>> No, not arbitrary precision.  It's always the precision specified
>> by the "mode" parameter.  The assert is:
>>
>>   gcc_assert (GET_MODE (x) == VOIDmode || GET_MODE (x) == mode);
>>
>> This is because GET_MODE always returns VOIDmode for CONST_INT and
>> CONST_DOUBLE integers.  The mode parameter is needed to tell us what
>> precision those CONST_INTs and CONST_DOUBLEs actually have, because
>> the rtx itself doesn't tell us.  The mode parameter serves no purpose
>> beyond that.
>
> That doesn't make sense.  The only thing we could then do with the mode
> is assert that the CONST_INT/CONST_DOUBLE is valid for mode.

No, we have to generate a correct CONST_INT or CONST_DOUBLE result.
If we are adding 1 to a QImode (const_int 127), we must return
(const_int -128).  If we are adding 1 to HImode (const_int 127),
we must return (const_int 128).  However...

> mode does not constrain the result in any way, thus it happily produces
> a CONST_INT (128) from QImode CONST_INT (127) + 1.  So, does the
> caller of plus_constant have to verify the result is actually valid in the
> mode it expects?  And what should it do if the result is not "valid"?

...good spot.  That's a bug.  It should be:

      return gen_int_mode (INTVAL (x) + c, mode);

rather than:

      return GEN_INT (INTVAL (x) + c);

It's a long-standing bug, because in the old days we didn't have
the mode to hand.  It was missed when the mode was added.

But the mode is also used in:

      if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
	{
	  double_int di_x = double_int::from_shwi (INTVAL (x));
	  double_int di_c = double_int::from_shwi (c);

	  bool overflow;
	  double_int v = di_x.add_with_sign (di_c, false, &overflow);
	  if (overflow)
	    gcc_unreachable ();

	  return immed_double_int_const (v, VOIDmode);
	}

which is deciding whether the result should be kept as a HWI even
in cases where the addition overflows.  It isn't arbitrary precision.

Richard
Richard Guenther - April 24, 2013, 2:13 p.m.
On Wed, Apr 24, 2013 at 4:03 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
>> <rdsandiford@googlemail.com> wrote:
>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>> Can we in such cases please do a preparatory patch and change the
>>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>>> mode precision first?
>>>
>>> I'm not sure what you mean here.  CONST_INT HWIs are already sign-extended
>>> from mode precision to HWI precision.  The 8-bit value 0b10000000 must be
>>> represented as (const_int -128); nothing else is allowed.
>>> E.g. (const_int 128)
>>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
>>
>> Yes, that's what I understand.  But consider you get a CONST_INT that is
>> _not_ a valid QImode value.
>
> But that's invalid :-)  It is not valid to call:
>
>       plus_constant (QImode, GEN_INT (128), 1)
>
> The point is that, even though it's invalid, we can't assert for it.

Why can't we assert for it?

> plus_constant is not for arbitrary precision arithmetic.  It's for
> arithmetic in a given non-VOIDmode mode.
>
>> Effectively a CONST_INT or CONST_DOUBLE is valid in multiple
>> modes and thus "arbitrary precision" with a limit set by the limit
>> of the encoding.
>
> The same CONST_INT and CONST_DOUBLE can be shared for several constants
> in different modes, yes, which is presumably what motivated making them
> VOIDmode in the first place.  E.g. zero is const0_rtx for every integer
> mode.  But in any given context, including plus_constant, the CONST_INT
> or CONST_DOUBLE has a specific mode.
>
>>>> Btw, plus_constant asserts that mode is either VOIDmode (I suppose
>>>> semantically do "arbitrary precision")
>>>
>>> No, not arbitrary precision.  It's always the precision specified
>>> by the "mode" parameter.  The assert is:
>>>
>>>   gcc_assert (GET_MODE (x) == VOIDmode || GET_MODE (x) == mode);
>>>
>>> This is because GET_MODE always returns VOIDmode for CONST_INT and
>>> CONST_DOUBLE integers.  The mode parameter is needed to tell us what
>>> precision those CONST_INTs and CONST_DOUBLEs actually have, because
>>> the rtx itself doesn't tell us.  The mode parameter serves no purpose
>>> beyond that.
>>
>> That doesn't make sense.  The only thing we could then do with the mode
>> is assert that the CONST_INT/CONST_DOUBLE is valid for mode.
>
> No, we have to generate a correct CONST_INT or CONST_DOUBLE result.
> If we are adding 1 to a QImode (const_int 127), we must return
> (const_int -128).  If we are adding 1 to HImode (const_int 127),
> we must return (const_int 128).  However...
>
>> mode does not constrain the result in any way, thus it happily produces
>> a CONST_INT (128) from QImode CONST_INT (127) + 1.  So, does the
>> caller of plus_constant have to verify the result is actually valid in the
>> mode it expects?  And what should it do if the result is not "valid"?
>
> ...good spot.  That's a bug.  It should be:
>
>       return gen_int_mode (INTVAL (x) + c, mode);
>
> rather than:
>
>       return GEN_INT (INTVAL (x) + c);
>
> It's a long-standing bug, because in the old days we didn't have
> the mode to hand.  It was missed when the mode was added.
>
> But the mode is also used in:
>
>       if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
>         {
>           double_int di_x = double_int::from_shwi (INTVAL (x));
>           double_int di_c = double_int::from_shwi (c);
>
>           bool overflow;
>           double_int v = di_x.add_with_sign (di_c, false, &overflow);
>           if (overflow)
>             gcc_unreachable ();
>
>           return immed_double_int_const (v, VOIDmode);
>         }
>
> which is deciding whether the result should be kept as a HWI even
> in cases where the addition overflows.  It isn't arbitrary precision.

The above is wrong for SImode HOST_WIDE_INT and 0x7fffffff + 1
in the same way as the QImode case above.  It will produce
0x80000000.  The ICEing on "overflow" is odd as well, as I'd have
expected twos-complement behavior which double-int, when
overflowing its 2 * HWI precision, provides.

I suppose the above should use immed_double_int_const (v, mode), too,
which oddly only ever truncates to mode for modes <= HOST_BITS_PER_WIDE_INT
via gen_int_mode.

Same of course for the code for CONST_DOUBLE.

Richard.

> Richard
Richard Sandiford - April 24, 2013, 2:29 p.m.
Richard Biener <richard.guenther@gmail.com> writes:
> I suppose the above should use immed_double_int_const (v, mode), too,

In practice it doesn't matter, because...

> which oddly only ever truncates to mode for modes <= HOST_BITS_PER_WIDE_INT
> via gen_int_mode.

...right.  That's because there's not really any proper support for
non-power-of-2 modes.  Partial modes like PDI are specified by things like:

PARTIAL_INT_MODE (DI);

which is glaringly absent of any bit width.  So if the constant is big
enough to need 2 HWIs, it in practice must be exactly 2 HWIs wide.
One of the advantages of wide_int is that it allows us to relax this
restriction: we can have both (a) mode widths greater than
HOST_BITS_PER_WIDE_INT*2 and (b) mode widths that are greater than
HOST_BITS_PER_WIDE_INT while not being a multiple of it.

In other words, one of the reasons wide_int can't be exactly 1:1
in practice is because it is clearing out these mistakes (GEN_INT
rather than gen_int_mode) and missing features (non-power-of-2 widths).

Richard
Kenneth Zadeck - April 24, 2013, 2:35 p.m.
On 04/24/2013 09:36 AM, Richard Biener wrote:
> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>> Can we in such cases please do a preparatory patch and change the
>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>> mode precision first?
>> I'm not sure what you mean here.  CONST_INT HWIs are already sign-extended
>> from mode precision to HWI precision.  The 8-bit value 0b10000000 must be
>> represented as (const_int -128); nothing else is allowed.  E.g. (const_int 128)
>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
> Yes, that's what I understand.  But consider you get a CONST_INT that is
> _not_ a valid QImode value.  Current code simply trusts that it is, given
> the context from ...
And the fact that we have to trust but cannot verify is a severe
problem at the rtl level that is not going to go away.  What I have
been strongly objecting to is your idea that just because we cannot
verify it, we can thus go change it in some completely different way
(i.e. the infinite precision nonsense that you keep hitting us with) and
it will all be ok.

I have three problems with this.

1) Even if we could do this, it gives us answers that are not what the
programmer expects!
Understand this: programmers expect the code to behave the same way
whether they optimize it or not.  If you do infinite precision arithmetic
you get different answers than the machine may give you.  While the C and
C++ standards allow this, it is NOT desirable.  While there are some
optimizations that must make visible changes to be effective, this is
certainly not the case with infinite precision math.  Making the change
to infinite precision math only because you think it is pretty is NOT best
practice and will only give GCC a bad reputation in the community.

Each programming language defines what it means to do constant
arithmetic and, by and large, our front ends do this the way they say.
But once you go beyond that, you are squarely in the realm where an
optimizer is expected to try to make the program run fast without
changing the results.  Infinite precision math in the optimizations is
visible in that A * B / C may get different answers between an infinite
precision evaluation and one that is finite precision as specified by
the types.  And all of this without any possible upside to the
programmer.  Why would we want to drive people to use llvm?  This
is my primary objection.  If you ever gave any reason for infinite
precision aside from that you consider it pretty, then I would consider
it.  BUT THIS IS NOT WHAT PROGRAMMERS WANT!

2) The rtl level of GCC does not follow best practices by today's
standards.  It is quite fragile.  At this point, the best that can
be said is that it generally seems to work.  What you are asking is for
us to make the assumption that the code is in fact in better shape than
it is.  I understand that in your mind, you are objecting to letting
the back ends hold back something that you believe the middle ends
should do, but the truth is that this is a bad idea for the middle ends.

3) If i am on a 32 bit machine and i say GEN_INT (0xffffffff), i get a 
32 bit word with 32 1s in it.   There is no other information. In 
particular there is no information that tells me was that a -1 or was 
that the largest positive integer.   We do not have GEN_INTS and a 
GEN_INTU, we just have GEN_INT.  Your desire is that we can take those 
32 bits and apply the lt_p function, not the ltu_p or lts_p function, 
but an lu_p function and use that to compare those 32 bits to 
something.   At the rtl level  there is simply not enough information 
there to sign extend this value.   This will never work without a major 
rewrite of the back ends.
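In plain C the point looks like this (a sketch; 'bits' merely stands in for what GEN_INT (0xffffffff) records on a 32-bit host, nothing more):

```c
#include <stdint.h>

/* The 32 stored bits, with no signedness information attached.  */
static const uint32_t bits = 0xffffffffu;

/* Two equally plausible 64-bit extensions of those same bits.  */
int64_t
extend_signed (void)
{
  return (int32_t) bits;                /* -1 */
}

int64_t
extend_unsigned (void)
{
  return bits;                          /* 4294967295 */
}
```

Nothing in the 32 bits themselves picks between -1 and 4294967295; only the surrounding mode/signedness context could, and that context is exactly what is missing.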
Richard Guenther - April 24, 2013, 2:42 p.m.
On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> I suppose the above should use immed_double_int_const (v, mode), too,
>
> In practice it doesn't matter, because...
>
>> which oddly only ever truncates to mode for modes <= HOST_BITS_PER_WIDE_INT
>> via gen_int_mode.
>
> ...right.  That's because there's not really any proper support for
> non-power-of-2 modes.  Partial modes like PDI are specified by things like:
>
> PARTIAL_INT_MODE (DI);
>
> which is glaringly absent of any bit width.  So if the constant is big
> enough to need 2 HWIs, it in practice must be exactly 2 HWIs wide.

Ah, of course.

> One of the advantages of wide_int is that it allows us to relax this
> restriction: we can have both (a) mode widths greater than
> HOST_BITS_PER_WIDE_INT*2 and (b) mode widths that are greater than
> HOST_BITS_PER_WIDE_INT while not being a multiple of it.
>
> In other words, one of the reasons wide_int can't be exactly 1:1
> in practice is because it is clearing out these mistakes (GEN_INT
> rather than gen_int_mode) and missing features (non-power-of-2 widths).

Note that the argument should be about CONST_WIDE_INT here,
not wide-int.  Indeed CONST_WIDE_INT has the desired feature
and can be properly truncated/extended according to mode at the time we build it
via immed_wide_int_cst (w, mode).  I don't see the requirement that
wide-int itself is automagically providing that truncation/extension
(though it is a possibility, one that does not match existing behavior of
HWI for CONST_INT or double-int for CONST_DOUBLE).

Richard.

> Richard
Kenneth Zadeck - April 24, 2013, 2:53 p.m.
On 04/24/2013 10:42 AM, Richard Biener wrote:
> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>> I suppose the above should use immed_double_int_const (v, mode), too,
>> In practice it doesn't matter, because...
>>
>>> which oddly only ever truncates to mode for modes <= HOST_BITS_PER_WIDE_INT
>>> via gen_int_mode.
>> ...right.  That's because there's not really any proper support for
>> non-power-of-2 modes.  Partial modes like PDI are specified by things like:
>>
>> PARTIAL_INT_MODE (DI);
>>
>> which is glaringly absent of any bit width.  So if the constant is big
>> enough to need 2 HWIs, it in practice must be exactly 2 HWIs wide.
> Ah, of course.
>
>> One of the advantages of wide_int is that it allows us to relax this
>> restriction: we can have both (a) mode widths greater than
>> HOST_BITS_PER_WIDE_INT*2 and (b) mode widths that are greater than
>> HOST_BITS_PER_WIDE_INT while not being a multiple of it.
>>
>> In other words, one of the reasons wide_int can't be exactly 1:1
>> in practice is because it is clearing out these mistakes (GEN_INT
>> rather than gen_int_mode) and missing features (non-power-of-2 widths).
> Note that the argument should be about CONST_WIDE_INT here,
> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
> and can be properly truncated/extended according to mode at the time we build it
> via immed_wide_int_cst (w, mode).  I don't see the requirement that
> wide-int itself is automagically providing that truncation/extension
> (though it is a possibility, one that does not match existing behavior of
> HWI for CONST_INT or double-int for CONST_DOUBLE).
>
> Richard.
>
yes but you still have the problem with partial ints having no 
length.    Our plan was to be very careful and make sure that at no 
point were we doing anything that makes it harder to put modes in 
const_ints, but that is different from going through everything and doing it.

Partially because of this discussion and some issues that you brought up 
with patch 4, i am removing the trick of being able to say 'wi + rtl' 
because there is no mode for the rtl.
i am leaving the 'wi + tree' because there is enough info in the 
tree cst to make this work,
but you are going to have to say something like wi::add (mode, rtl).

see, i am willing to do things that work better in the tree world than 
in the rtl world.

kenny

>> Richard
Richard Sandiford - April 24, 2013, 3 p.m.
Richard Biener <richard.guenther@gmail.com> writes:
> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> In other words, one of the reasons wide_int can't be exactly 1:1
>> in practice is because it is clearing out these mistakes (GEN_INT
>> rather than gen_int_mode) and missing features (non-power-of-2 widths).
>
> Note that the argument should be about CONST_WIDE_INT here,
> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
> and can be properly truncated/extended according to mode at the time we build it
> via immed_wide_int_cst (w, mode).  I don't see the requirement that
> wide-int itself is automagically providing that truncation/extension
> (though it is a possibility, one that does not match existing behavior of
> HWI for CONST_INT or double-int for CONST_DOUBLE).

I agree it doesn't match the existing behaviour of HWI for CONST_INT or
double-int for CONST_DOUBLE, but I think that's very much a good thing.
The model for HWIs at the moment is that you have to truncate results
to the canonical form after every operation where it matters.  As you
proved in your earlier message about the plus_constant bug, that's easily
forgotten.  I don't think the rtl code is doing all CONST_INT arithmetic
on full HWIs because it wants to: it's doing it because that's the way
C/C++ arithmetic on primitive types works.  In other words, the current
CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime N)
using a single primitive integer type.  wide_int gives us N-bit arithmetic
directly; no emulation is needed.
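A sketch of that canonicalization step (modeled on GCC's trunc_int_for_mode; the helper name and the 64-bit HWI are assumptions):

```c
#include <stdint.h>

/* Canonicalize a raw HWI result to the sign-extended form a
   CONST_INT of the given precision must have (sketch; compare
   trunc_int_for_mode).  */
static int64_t
canonicalize (int64_t x, int prec)
{
  if (prec >= 64)
    return x;
  uint64_t mask = ((uint64_t) 1 << prec) - 1;
  uint64_t low = (uint64_t) x & mask;
  /* Sign-extend from bit PREC - 1.  */
  if (low & ((uint64_t) 1 << (prec - 1)))
    low |= ~mask;
  return (int64_t) low;
}
```

A QImode 127 + 1 yields the raw HWI 128, which must be canonicalized to -128 before it can live in a CONST_INT; that is exactly the step that is easy to forget.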

If your point is that an arbitrary-precision wide_int could be used by
other (non-rtl, and probably non-tree) clients, then I don't really
see the need.  We already have mpz_t for that.  What we don't have,
and what we IMO need, is something that performs N-bit arithmetic
for runtime N.  It seems better to have a single class that does
that for us (wide_int), rather than scatter N-bit emulation throughout
the codebase, which is what we do now.

Richard
Richard Guenther - April 24, 2013, 3:04 p.m.
On Wed, Apr 24, 2013 at 4:35 PM, Kenneth Zadeck
<zadeck@naturalbridge.com> wrote:
> On 04/24/2013 09:36 AM, Richard Biener wrote:
>>
>> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
>> <rdsandiford@googlemail.com> wrote:
>>>
>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>>
>>>> Can we in such cases please to a preparatory patch and change the
>>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>>> mode precision first?
>>>
>>> I'm not sure what you mean here.  CONST_INT HWIs are already
>>> sign-extended
>>> from mode precision to HWI precision.  The 8-bit value 0b10000000 must
>>> be
>>> represented as (const_int -128); nothing else is allowed.  E.g.
>>> (const_int 128)
>>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
>>
>> Yes, that's what I understand.  But consider you get a CONST_INT that is
>> _not_ a valid QImode value.  Current code simply trusts that it is, given
>> the context from ...
>
> And the fact that we have to trust but cannot verify is a severe problem
> at the rtl level that is not going to go away.    what i have been strongly
> objecting to is your idea that just because we cannot verify it, we can thus
> go change it in some completely different way (i.e. the infinite precision
> nonsense that you keep hitting us with) and it will all be ok.

Apparently it is all ok because that's exactly what we have today (and
had for the last 25 years).  CONST_INT encodes infinite precision signed
values (with the complication that a QImode 0x80 isn't valid, thus all
modes are signed as well it seems).  CONST_DOUBLE encodes infinite
precision signed values as well.  Just the "infinite" is limited by the size
of the encoding, one and two HOST_WIDE_INTs.

The interpretation of those "infinite" precision constants is based on
the context (the operation mode of the operation we apply to a CONST_INT
or CONST_DOUBLE).  Thus CONST_INT and CONST_DOUBLE do
not have a mode or precision but VOIDmode so "different mode" 1
can be shared (which is probably the original reason of that design
decision).

> I have three problems with this.
>
> 1) Even if we could do this, it gives us answers that are not what the
> programmer expects!!!!!!
> Understand this!!!  Programmers expect the code to behave the same way if
> they optimize it or not.   If you do infinite precision arithmetic you get
> different answers than the machine may give you. While the C and C++
> standards allow this, it is NOT desirable. While there are some
> optimizations that must make visible changes to be effective, this is
> certainly not the case with infinite precision math    Making the change to
> infinite precision math only because you think is pretty is NOT best
> practices and will only give GCC a bad reputation in the community.

Note that as I tried to explain above this isn't a change.  _You_ are
proposing a change here!  Namely to associate a precision with a _constant_.
What precision does a '1' have?  What precision does a '12374' have?
It doesn't have any.  With this proposed change we will have the possibility
to explicitly program mismatches like

  simplify_binary_operation (PLUS_EXPR, HImode,
                             wide_int_rtx (SImode, 27),
                             wide_int_rtx (QImode, 1))

even if _only_ the desired mode of the result matters!  Because given the
invariant that a wide-int is "valid" (it doesn't have bits outside of
its precision), its precision no longer matters!

> Each programming language defines what it means to do constant arithmetic
> and by and large, our front ends do this the way they say.  But once you go
> beyond that, you are squarely in the realm where an optimizer is expected to
> try to make the program run fast without changing the results.  Infinite
> precision math in the optimizations is visible in that A * B / C may get
> different answers between an infinite precision evaluation and one that is
> finite precision as specified by the types.  And all of this without any
> possible upside to the programmer.   Why would we want to drive people to
> use llvm?????   This is my primary objection.    If you ever gave any reason
> for infinite precision aside from that you consider it pretty, then i would
> consider it.    BUT THIS IS NOT WHAT PROGRAMMERS WANT!!!!

Programming languages or prettiness is not in any way a reason to do
infinite precision math.  All-caps or pretty punctuation does not change that.

Infinite precision math is what we do now.  What I ask for is to make
separate changes separately.  You want larger and host independent
integer constants.  Fine - do that.  You want to change how we do
arithmetic?  Fine - do that.  But please separate the two.  (well, I'm
likely still going to object to the latter)

> 2) The rtl level of GCC does not follow best practices by today's standards.
> It is quite fragile.

It works quite well.

>     At this point, the best that can be said is that it
> generally seems to work.   What you are asking is for us to make the
> assumption that the code is in fact in better shape than it is.    I
> understand that in your mind, you are objecting to letting the back ends
> hold back something that you believe the middle ends should do, but the
> truth is that this is a bad idea for the middle ends.

I don't quite understand this.  What am I objecting to letting the back ends
hold back?

> 3) If i am on a 32 bit machine and i say GEN_INT (0xffffffff), i get a 32
> bit word with 32 1s in it.   There is no other information. In particular
> there is no information that tells me was that a -1 or was that the largest
> positive integer.   We do not have GEN_INTS and a GEN_INTU, we just have
> GEN_INT.  Your desire is that we can take those 32 bits and apply the lt_p
> function, not the ltu_p or lts_p function, but an lu_p function and use that
> to compare those 32 bits to something.   At the rtl level  there is simply
> not enough information there to sign extend this value.   This will never
> work without a major rewrite of the back ends.

Nono, I did not request that you get away with ltu_p or lts_p.  I said
it would be _possible_ to do that.  Currently (I just believe Richard here)
a positive QImode value 255 CONST_INT does not exist (well, it
does, but sign-extended and thus not distinguishable from -1) - correct?
Given the wide-int encoding in your last patch there is no reason to
disallow a positive QImode value 255 CONST_INT as we can perfectly
distinguish all values from -128 to 255 for QImode values as the encoding
uses extra bits to carry that information.  What is complicating things
is that to properly sign- or zero-extend a result according to the operation
mode you also need a desired signedness (that's an issue currently, too,
of course - nothing new here).  If you want a QImode add of 127 and 1
then you have to know whether the result is to be interpreted as
signed 8-bit value or unsigned 8-bit value.  Because that has an influence
on the encoding result (it doesn't matter if you view the 8 lower bits of the
"arbitrary precision" result 128 in twos-complement terms, of course).
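A sketch of that encoding point (an illustrative struct, not the patch's real types): tracking the precision separately from a wide HWI value is what keeps QImode -1 and QImode 255 distinct.

```c
#include <stdint.h>

/* Sketch, not the real wide-int layout: a value plus its precision.
   Unlike a bare sign-extended CONST_INT, the 64-bit val field has
   room to distinguish QImode -1 from QImode 255.  */
typedef struct
{
  int64_t val;
  int precision;
} qimode_const;

static const qimode_const minus_one = { -1, 8 };
static const qimode_const u255 = { 255, 8 };
```

The two share their low 8 bits but have different encodings, which is how the extra bits carry the information a plain CONST_INT loses.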

Richard.
Richard Guenther - April 24, 2013, 3:13 p.m.
On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>> <rdsandiford@googlemail.com> wrote:
>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>> in practice is because it is clearing out these mistakes (GEN_INT
>>> rather than gen_int_mode) and missing features (non-power-of-2 widths).
>>
>> Note that the argument should be about CONST_WIDE_INT here,
>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>> and can be properly truncated/extended according to mode at the time we build it
>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>> wide-int itself is automagically providing that truncation/extension
>> (though it is a possibility, one that does not match existing behavior of
>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>
> I agree it doesn't match the existing behaviour of HWI for CONST_INT or
> double-int for CONST_DOUBLE, but I think that's very much a good thing.
> The model for HWIs at the moment is that you have to truncate results
> to the canonical form after every operation where it matters.  As you
> proved in your earlier message about the plus_constant bug, that's easily
> forgotten.  I don't think the rtl code is doing all CONST_INT arithmetic
> on full HWIs because it wants to: it's doing it because that's the way
> C/C++ arithmetic on primitive types works.  In other words, the current
> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime N)
> using a single primitive integer type.  wide_int gives us N-bit arithmetic
> directly; no emulation is needed.

Ok, so what wide-int provides is integer values encoded in 'len' HWI
words that fit in 'precision' or more bits (and often in less).  wide-int
also provides N-bit arithmetic operations.  IMHO both are tied
too closely together.  A given constant doesn't really have a precision.
Associating one with it to give a precision to an arithmetic operation
looks wrong to me and is a source of mismatches.

What RTL currently has looks better to me - operations have
explicitly specified precisions.

> If your point is that an arbitrary-precision wide_int could be used by
> other (non-rtl, and probably non-tree) clients, then I don't really
> see the need.  We already have mpz_t for that.  What we don't have,
> and what we IMO need, is something that performs N-bit arithmetic
> for runtime N.  It seems better to have a single class that does
> that for us (wide_int), rather than scatter N-bit emulation throughout
> the codebase, which is what we do now.

mpz_t is not suitable here - it's way too expensive.  double-int
was the "suitable" bit for now, but given its host dependency and
inability to handle larger ints (VRP ...) the ability to use wide-ints
for this looks appealing.

Richard.

> Richard
Richard Sandiford - April 24, 2013, 3:29 p.m.
Richard Biener <richard.guenther@gmail.com> writes:
> On Wed, Apr 24, 2013 at 4:35 PM, Kenneth Zadeck
> <zadeck@naturalbridge.com> wrote:
>> On 04/24/2013 09:36 AM, Richard Biener wrote:
>>>
>>> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
>>> <rdsandiford@googlemail.com> wrote:
>>>>
>>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>>>
>>>>> Can we in such cases please to a preparatory patch and change the
>>>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>>>> mode precision first?
>>>>
>>>> I'm not sure what you mean here.  CONST_INT HWIs are already
>>>> sign-extended
>>>> from mode precision to HWI precision.  The 8-bit value 0b10000000 must
>>>> be
>>>> represented as (const_int -128); nothing else is allowed.  E.g.
>>>> (const_int 128)
>>>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
>>>
>>> Yes, that's what I understand.  But consider you get a CONST_INT that is
>>> _not_ a valid QImode value.  Current code simply trusts that it is, given
>>> the context from ...
>>
>> And the fact that it we have to trust but cannot verify is a severe problem
>> at the rtl level that is not going to go away.    what i have been strongly
>> objecting to is your idea that just because we cannot verify it, we can thus
>> go change it in some completely different way (i.e. the infinite precision
>> nonsense that you keep hitting us with) and it will all be ok.
>
> Appearantly it is all ok because that's exactly what we have today (and
> had for the last 25 years).  CONST_INT encodes infinite precision signed
> values (with the complication that a QImode 0x80 isn't valid, thus all
> modes are signed as well it seems).

I think this is the fundamental disagreement.  Your last step doesn't
follow.  RTL integer modes are neither signed nor unsigned.  They are
just a collection of N bits.  The fact that CONST_INTs represent
smaller-than-HWI integers in sign-extended form is purely a representational
detail.  There are no semantics attached to it.  We could just as easily
have decided to extend with zeros or ones instead of sign bits.

Although the decision was made before my time, I'm pretty sure the
point of having a canonical representation (which happened to be sign
extension) was to make sure that any given rtl constant has only a
single representation.  It would be too confusing if a QImode 0x80 could
be represented as either (const_int 128) or (const_int -128) (would
(const_int 384) then also be OK?).

And that's the problem with using an infinite-precision wide_int.
If you directly convert a CONST_INT representation of 0x80 into a
wide_int, you will always get infinite-precision -128, thanks to the
CONST_INT canonicalisation rule.  But if you arrive at 0x80 through
arithmetic, you might get infinite-precision 128 instead.  These two
values would not compare equal.
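A sketch of the mismatch in plain C (the int64_t values stand in for infinite-precision integers):

```c
#include <stdint.h>

/* Route 1: QImode 0x80 read from its canonical CONST_INT form.  */
static const int64_t from_const_int = -128;

/* Route 2: the same 8-bit pattern reached by arithmetic.  */
static const int64_t from_arith = 127 + 1;      /* 128 */
```

As infinite-precision values, -128 and 128 do not compare equal, even though a (uint8_t) view of either is the same 0x80.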

> CONST_DOUBLE encodes infinite precision signed values as well.  Just
> the "infinite" is limited by the size of the encoding, one and two
> HOST_WIDE_INTs.

It encodes an N-bit integer.  It's just that (assuming non-power-of-2
modes) several N-bit integers (with varying N) can be encoded using the
same CONST_DOUBLE representation.  That might be what you meant, sorry,
and so might seem pedantic, but I wasn't sure.

Richard
Richard Sandiford - April 24, 2013, 3:55 p.m.
Richard Biener <richard.guenther@gmail.com> writes:
> On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>>> <rdsandiford@googlemail.com> wrote:
>>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>>> in practice is because it is clearing out these mistakes (GEN_INT
>>>> rather than gen_int_mode) and missing features (non-power-of-2 widths).
>>>
>>> Note that the argument should be about CONST_WIDE_INT here,
>>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>>> and can be properly truncated/extended according to mode at the time
>>> we build it
>>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>>> wide-int itself is automagically providing that truncation/extension
>>> (though it is a possibility, one that does not match existing behavior of
>>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>>
>> I agree it doesn't match the existing behaviour of HWI for CONST_INT or
>> double-int for CONST_DOUBLE, but I think that's very much a good thing.
>> The model for HWIs at the moment is that you have to truncate results
>> to the canonical form after every operation where it matters.  As you
>> proved in your earlier message about the plus_constant bug, that's easily
>> forgotten.  I don't think the rtl code is doing all CONST_INT arithmetic
>> on full HWIs because it wants to: it's doing it because that's the way
>> C/C++ arithmetic on primitive types works.  In other words, the current
>> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime N)
>> using a single primitive integer type.  wide_int gives us N-bit arithmetic
>> directly; no emulation is needed.
>
> Ok, so what wide-int provides is integer values encoded in 'len' HWI
> words that fit in 'precision' or more bits (and often in less).  wide-int
> also provides N-bit arithmetic operations.  IMHO both are tied
> too closely together.  A give constant doesn't really have a precision.

I disagree.  All rtl objects have a precision.  REGs, MEMs, SYMBOL_REFs,
LABEL_REFs and CONSTs all have precisions, and the last three are
run-time constants.  Why should CONST_INT and CONST_DOUBLE be different?

See e.g. the hoops that cselib has to jump through:

/* We need to pass down the mode of constants through the hash table
   functions.  For that purpose, wrap them in a CONST of the appropriate
   mode.  */
static rtx
wrap_constant (enum machine_mode mode, rtx x)
{
  if ((!CONST_SCALAR_INT_P (x)) && GET_CODE (x) != CONST_FIXED)
    return x;
  gcc_assert (mode != VOIDmode);
  return gen_rtx_CONST (mode, x);
}

That is, cselib locally converts (const_int X) into (const:M (const_int X)),
purely so that it doesn't lose track of the CONST_INT's mode.
(const:M (const_int ...)) is invalid rtl elsewhere, but a necessary
hack here all the same.

> What RTL currently has looks better to me - operations have
> explicitely specified precisions.

But that isn't enough to determine the precision of all operands.
A classic case is ZERO_EXTEND.  Something like:

   (zero_extend:DI (reg:SI X))

is unambiguous.  But if you substitute (reg:SI X) with a CONST_INT,
the result becomes ambiguous.  E.g. we could end up with:

   (zero_extend:DI (const_int -1))

The ZERO_EXTEND operand still has SImode, but that fact is not explicit
in the rtl, and is certainly not explicit in the ZERO_EXTEND operation.
So if we just see the result above, we no longer know whether the result
should be (const_int 0xff), (const_int 0xffff), or what.  The same goes for:

   (zero_extend:DI (const_int 256))

where (const_int 0) and (const_int 256) are both potential results.
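The missing information can be written out explicitly (C sketch; the prec parameter is precisely the operand precision the rtl no longer records, assumed here to be below 64):

```c
#include <stdint.h>

/* Zero-extend a canonically sign-extended constant, given the operand
   precision that (zero_extend:DI (const_int -1)) itself no longer
   tells us.  PREC must be less than 64 in this sketch.  */
uint64_t
zext (int64_t sign_extended, int prec)
{
  return (uint64_t) sign_extended & (((uint64_t) 1 << prec) - 1);
}
```

zext (-1, 8) gives 0xff while zext (-1, 16) gives 0xffff, and zext (256, 8) gives 0 while zext (256, 16) gives 256: different results, all reachable from the same rtl.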

It's not just ZERO_EXTEND.  E.g.:

  (zero_extract:SI ...)

tells you that an SImode value is being extracted, but it doesn't tell
you what precision you're extracting from.  So for:

  (zero_extract:SI (const_int -1) (const_int X) (const_int 3))

how many 1 bits should the result have?  Because of the sign-extension
canonicalisation, the answer depends on the precision of the (const_int -1),
which has now been lost.  If instead CONST_INTs were stored in zero-extended
form, the same ambiguity would apply to SIGN_EXTRACT.

This sort of thing has been a constant headache in rtl.  I can't stress
how much I feel it is _not_ better than recording the precision of
the constant :-)

Richard
Kenneth Zadeck - April 24, 2013, 11:18 p.m.
On 04/24/2013 11:13 AM, Richard Biener wrote:
> On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
> <rdsandiford@googlemail.com>  wrote:
>> Richard Biener<richard.guenther@gmail.com>  writes:
>>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>>> <rdsandiford@googlemail.com>  wrote:
>>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>>> in practice is because it is clearing out these mistakes (GEN_INT
>>>> rather than gen_int_mode) and missing features (non-power-of-2 widths).
>>> Note that the argument should be about CONST_WIDE_INT here,
>>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>>> and can be properly truncated/extended according to mode at the time we build it
>>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>>> wide-int itself is automagically providing that truncation/extension
>>> (though it is a possibility, one that does not match existing behavior of
>>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>> I agree it doesn't match the existing behaviour of HWI for CONST_INT or
>> double-int for CONST_DOUBLE, but I think that's very much a good thing.
>> The model for HWIs at the moment is that you have to truncate results
>> to the canonical form after every operation where it matters.  As you
>> proved in your earlier message about the plus_constant bug, that's easily
>> forgotten.  I don't think the rtl code is doing all CONST_INT arithmetic
>> on full HWIs because it wants to: it's doing it because that's the way
>> C/C++ arithmetic on primitive types works.  In other words, the current
>> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime N)
>> using a single primitive integer type.  wide_int gives us N-bit arithmetic
>> directly; no emulation is needed.
> Ok, so what wide-int provides is integer values encoded in 'len' HWI
> words that fit in 'precision' or more bits (and often in less).  wide-int
> also provides N-bit arithmetic operations.  IMHO both are tied
> too closely together.  A give constant doesn't really have a precision.
> Associating one with it to give a precision to an arithmetic operation
> looks wrong to me and are a source of mismatches.
>
> What RTL currently has looks better to me - operations have
> explicitely specified precisions.
I have tried very hard to make wide-int work very efficiently with both 
tree and rtl without biasing the rep towards either representation.  
Both rtl and trees constants have a precision.   In tree, constants are 
done better than in rtl because the tree really does have a field that 
is filled in that points to a type. However, that does not mean that rtl 
constants do not have a precision: currently you have to look around at 
the context to find the mode of a constant that is in your hand, but it 
is in fact always there.   At the rtl level, you can see the entire 
patch - we always find an appropriate mode.

In the future, this may change.   Wide-int moves one step closer in that 
ports that support it will not expect that double-ints never have a 
mode.   But that is a long way from having the mode attached.

What is not stored with the constant is an indication of the signedness. 
Unlike a desire to add modes to rtl constants, there is no one even 
thinking about the sign.   The sign is implicit in the operator, just as 
it is at the tree level.

So when i designed wide-int, i assumed that i could get precisions from 
the variables or at least "close to" them.

As far as the question of infinite precision, 99% of the uses of 
double-int today are "get in, do a single operation and get out". If 
this is all that we plan to do, then it does not really matter if it is 
infinite precision or not, because at both the rtl and tree level, we 
truncate on the way out.     However, the use of double-int accounts for 
only a small percentage of the math done in the compiler.   My wide-int 
port converts a substantial portion of the math from inline code that is 
guarded by checks to the precision against HOST_WIDE_BITS_PER_INT or 
calls to host_integerp.   The conversion of this code has substantial 
potential to expose the differences between the fixed precision and 
infinite precision representations.

The only justification that you have ever given for wanting to use 
infinite precision is that it is cleaner.   You have never directly 
addressed my point that it gives surprising answers except to say that 
the user would have to put in explicit intermediate truncations.    It 
is hard for me to imagine buggering up something as badly as having to put 
in explicit intermediate truncations.   When i write a * b / c, it 
should really look something like the expression.



>> If your point is that an arbitrary-precision wide_int could be used by
>> other (non-rtl, and probably non-tree) clients, then I don't really
>> see the need.  We already have mpz_t for that.  What we don't have,
>> and what we IMO need, is something that performs N-bit arithmetic
>> for runtime N.  It seems better to have a single class that does
>> that for us (wide_int), rather than scatter N-bit emulation throughout
>> the codebase, which is what we do now.
> mpz_t is not suitable here - it's way too expensive.  double-int
> was the "suitable" bit for now, but given it's host dependency and
> inability to handle larger ints (VRP ...) the ability to use wide-ints
> for this looks appealing.
and it is expensive why?   Because it is not tightly integrated into 
tree and rtl as you have fought me tooth and nail about?  Because the 
people who did mpz were idiots and you feel that i am god's gift to 
programming and will do a better job?   Or because infinite precision 
arithmetic might just be more expensive.

I vote for the last option.    Being able to exit out inline for the 
math that can be done in a HWI is actually a big win!!!!
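That fast path might look like this (purely illustrative; the function name and 64-bit HWI are assumptions, not the patch's API):

```c
#include <stdint.h>

/* Sketch: fixed-precision add with an inline single-HWI fast path.
   Only when the precision exceeds one host word would a multi-word
   loop (the expensive, mpz-like case) be needed.  */
int64_t
wide_add (int64_t a, int64_t b, int prec)
{
  if (prec < 64)
    {
      /* One native add, then canonicalize to PREC bits.  */
      uint64_t mask = ((uint64_t) 1 << prec) - 1;
      uint64_t low = ((uint64_t) a + (uint64_t) b) & mask;
      if (low & ((uint64_t) 1 << (prec - 1)))
        low |= ~mask;
      return (int64_t) low;
    }
  /* Multi-word fallback elided.  */
  return a + b;
}
```

For the common case where precision fits in one host word, this is a handful of inline instructions, with no multi-word machinery touched at all.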

> Richard.
>
>> Richard
Richard Guenther - May 3, 2013, 11:19 a.m.
On Wed, Apr 24, 2013 at 5:29 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Wed, Apr 24, 2013 at 4:35 PM, Kenneth Zadeck
>> <zadeck@naturalbridge.com> wrote:
>>> On 04/24/2013 09:36 AM, Richard Biener wrote:
>>>>
>>>> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
>>>> <rdsandiford@googlemail.com> wrote:
>>>>>
>>>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>>>>
>>>>>> Can we in such cases please to a preparatory patch and change the
>>>>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>>>>> mode precision first?
>>>>>
>>>>> I'm not sure what you mean here.  CONST_INT HWIs are already
>>>>> sign-extended
>>>>> from mode precision to HWI precision.  The 8-bit value 0b10000000 must
>>>>> be
>>>>> represented as (const_int -128); nothing else is allowed.  E.g.
>>>>> (const_int 128)
>>>>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
>>>>
>>>> Yes, that's what I understand.  But consider you get a CONST_INT that is
>>>> _not_ a valid QImode value.  Current code simply trusts that it is, given
>>>> the context from ...
>>>
>>> And the fact that it we have to trust but cannot verify is a severe problem
>>> at the rtl level that is not going to go away.    what i have been strongly
>>> objecting to is your idea that just because we cannot verify it, we can thus
>>> go change it in some completely different way (i.e. the infinite precision
>>> nonsense that you keep hitting us with) and it will all be ok.
>>
>> Apparently it is all ok because that's exactly what we have today (and
>> had for the last 25 years).  CONST_INT encodes infinite precision signed
>> values (with the complication that a QImode 0x80 isn't valid, thus all
>> modes are signed as well it seems).
>
> I think this is the fundamental disagreement.  Your last step doesn't
> follow.  RTL integer modes are neither signed nor unsigned.  They are
> just a collection of N bits.  The fact that CONST_INTs represent
> smaller-than-HWI integers in sign-extended form is purely a representational
> detail.  There are no semantics attached to it.  We could just as easily
> have decided to extend with zeros or ones instead of sign bits.
>
> Although the decision was made before my time, I'm pretty sure the
> point of having a canonical representation (which happened to be sign
> extension) was to make sure that any given rtl constant has only a
> single representation.  It would be too confusing if a QImode 0x80 could
> be represented as either (const_int 128) or (const_int -128) (would
> (const_int 384) then also be OK?).

No, not as value for a QImode as it doesn't fit there.

> And that's the problem with using an infinite-precision wide_int.
> If you directly convert a CONST_INT representation of 0x80 into a
> wide_int, you will always get infinite-precision -128, thanks to the
> CONST_INT canonicalisation rule.  But if you arrive at 0x80 through
> arithmetic, you might get infinite-precision 128 instead.  These two
> values would not compare equal.

That's true.  Note that I am not objecting to the canonicalization choice
for the RTL object.  On trees we do have -128 and 128 QImode integers
as tree constants have a sign.

So we clearly cannot have wide_int make that choice, but those that
create either a tree object or an RTL object have to do additional
canonicalization (or truncation, to not allow a QImode 384).

Yes, I'm again arguing that making choices for wide_int shouldn't be
done because it seems right for RTL or right for how a CPU operates.
But we are mixing two things in this series of patches - introduction
of an additional RTX object kind CONST_WIDE_INT together with
deciding on its encoding of constant values, and introduction of
a wide_int class as a vehicle to do arithmetic on the host for larger
than HOST_WIDE_INT values.

The latter could be separated by dropping CONST_DOUBLE in favor
of CONST_WIDE_INT everywhere and simply providing a
CONST_WIDE_INT <-> double-int interface (both ways, so you'd
actually never generate a CONST_WIDE_INT that doesn't fit a double-int).

>> CONST_DOUBLE encodes infinite precision signed values as well.  Just
>> the "infinite" is limited by the size of the encoding, one and two
>> HOST_WIDE_INTs.
>
> It encodes an N-bit integer.  It's just that (assuming non-power-of-2
> modes) several N-bit integers (with varying N) can be encoded using the
> same CONST_DOUBLE representation.  That might be what you meant, sorry,
> and so might seem pedantic, but I wasn't sure.

Yes, that's what I meant.  Being able to share the same RTX object for
constants with the same representation but a different mode is nice
and looks appealing (of course it works only when the actual mode stored
in the RTX object is something like VOIDmode ...).  That we have gazillions
of NULL pointer constants on trees (for each pointer type) isn't.

Richard.

> Richard
Richard Guenther - May 3, 2013, 11:28 a.m.
On Wed, Apr 24, 2013 at 5:55 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
>> <rdsandiford@googlemail.com> wrote:
>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>>>> <rdsandiford@googlemail.com> wrote:
>>>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>>>> in practice is because it is clearing out these mistakes (GEN_INT
>>>>> rather than gen_int_mode) and missing features (non-power-of-2 widths).
>>>>
>>>> Note that the argument should be about CONST_WIDE_INT here,
>>>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>>>> and can be properly truncated/extended according to mode at the time
>>>> we build it
>>>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>>>> wide-int itself is automagically providing that truncation/extension
>>>> (though it is a possibility, one that does not match existing behavior of
>>>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>>>
>>> I agree it doesn't match the existing behaviour of HWI for CONST_INT or
>>> double-int for CONST_DOUBLE, but I think that's very much a good thing.
>>> The model for HWIs at the moment is that you have to truncate results
>>> to the canonical form after every operation where it matters.  As you
>>> proved in your earlier message about the plus_constant bug, that's easily
>>> forgotten.  I don't think the rtl code is doing all CONST_INT arithmetic
>>> on full HWIs because it wants to: it's doing it because that's the way
>>> C/C++ arithmetic on primitive types works.  In other words, the current
>>> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime N)
>>> using a single primitive integer type.  wide_int gives us N-bit arithmetic
>>> directly; no emulation is needed.
>>
>> Ok, so what wide-int provides is integer values encoded in 'len' HWI
>> words that fit in 'precision' or more bits (and often in less).  wide-int
>> also provides N-bit arithmetic operations.  IMHO both are tied
>> too closely together.  A given constant doesn't really have a precision.
>
> I disagree.  All rtl objects have a precision.  REGs, MEMs, SYMBOL_REFs,
> LABEL_REFs and CONSTs all have precisions, and the last three are
> run-time constants.  Why should CONST_INT and CONST_DOUBLE be different?

Well - they _are_ different.  They don't even have a mode at the moment.
If you want to change that be my guest (or well, it's unfortunate to lose
the sharing then) - but Kenny always repeats that this is "impossible to fix".
Having CONST_INT and CONST_DOUBLE without a precision but
CONST_WIDE_INT with a precision would be at least odd.

> See e.g. the hoops that cselib has to jump through:
>
> /* We need to pass down the mode of constants through the hash table
>    functions.  For that purpose, wrap them in a CONST of the appropriate
>    mode.  */
> static rtx
> wrap_constant (enum machine_mode mode, rtx x)
> {
>   if ((!CONST_SCALAR_INT_P (x)) && GET_CODE (x) != CONST_FIXED)
>     return x;
>   gcc_assert (mode != VOIDmode);
>   return gen_rtx_CONST (mode, x);
> }
>
> That is, cselib locally converts (const_int X) into (const:M (const_int X)),
> purely so that it doesn't lose track of the CONST_INT's mode.
> (const:M (const_int ...)) is invalid rtl elsewhere, but a necessary
> hack here all the same.

Indeed ugly.  But I wonder why cselib needs to store constants in
hashtables at all ... they should be VALUEs themselves.  So the fix
for the above might not necessarily be to assign the CONST_INT
a mode (not that CONST_WIDE_INT would fix the above).

>> What RTL currently has looks better to me - operations have
>> explicitly specified precisions.
>
> But that isn't enough to determine the precision of all operands.
> A classic case is ZERO_EXTEND.  Something like:
>
>    (zero_extend:DI (reg:SI X))
>
> is unambiguous.  But if you substitute (reg:SI X) with a CONST_INT,
> the result becomes ambiguous.  E.g. we could end up with:
>
>    (zero_extend:DI (const_int -1))
>
> The ZERO_EXTEND operand still has SImode, but that fact is not explicit
> in the rtl, and is certainly not explicit in the ZERO_EXTEND operation.
> So if we just see the result above, we no longer know whether the result
> should be (const_int 0xff), (const_int 0xffff), or what.  The same goes for:

That situation only occurs when you have "unfolded" RTX.  You should
have never generated the above (and hopefully the RTL verifier doesn't
allow it), but instead called something like

  simplify_gen_zero_extend (DImode, SImode, x);

with x being the constant substituted for X.  It's probably unfortunate
that parts of the RTL machinery work like (if I remember correctly)
for_each_rtx (x, replace-interesting-stuff); simplify (x);

>    (zero_extend:DI (const_int 256))
>
> where (const_int 0) and (const_int 256) are both potential results.
>
> It's not just ZERO_EXTEND.  E.g.:
>
>   (zero_extract:SI ...)
>
> tells you that an SImode value is being extracted, but it doesn't tell
> you what precision you're extracting from.  So for:
>
>   (zero_extract:SI (const_int -1) (const_int X) (const_int 3))
>
> how many 1 bits should the result have?  Because of the sign-extension
> canonicalisation, the answer depends on the precision of the (const_int -1),
> which has now been lost.  If instead CONST_INTs were stored in zero-extended
> form, the same ambiguity would apply to SIGN_EXTRACT.
>
> This sort of thing has been a constant headache in rtl.  I can't stress
> how much I feel it is _not_ better than recording the precision of
> the constant :-)

Ok, so please then make all CONST_INTs and CONST_DOUBLEs have
a mode!

The solution is not to have a CONST_WIDE_INT (again with VOIDmode
and no precision in the RTX object!) while only wide_int carries a
precision.

So, the proper order to fix things is to get CONST_INTs and CONST_DOUBLEs
to have a mode.  And to make CONST_WIDE_INT inherit that property.

Richard.

> Richard
Richard Guenther - May 3, 2013, 11:34 a.m.
On Thu, Apr 25, 2013 at 1:18 AM, Kenneth Zadeck
<zadeck@naturalbridge.com> wrote:
> On 04/24/2013 11:13 AM, Richard Biener wrote:
>>
>> On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
>> <rdsandiford@googlemail.com>  wrote:
>>>
>>> Richard Biener<richard.guenther@gmail.com>  writes:
>>>>
>>>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>
>>>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>>>> in practice is because it is clearing out these mistakes (GEN_INT
>>>>> rather than gen_int_mode) and missing features (non-power-of-2 widths).
>>>>
>>>> Note that the argument should be about CONST_WIDE_INT here,
>>>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>>>> and can be properly truncated/extended according to mode at the time we
>>>> build it
>>>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>>>> wide-int itself is automagically providing that truncation/extension
>>>> (though it is a possibility, one that does not match existing behavior
>>>> of
>>>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>>>
>>> I agree it doesn't match the existing behaviour of HWI for CONST_INT or
>>> double-int for CONST_DOUBLE, but I think that's very much a good thing.
>>> The model for HWIs at the moment is that you have to truncate results
>>> to the canonical form after every operation where it matters.  As you
>>> proved in your earlier message about the plus_constant bug, that's easily
>>> forgotten.  I don't think the rtl code is doing all CONST_INT arithmetic
>>> on full HWIs because it wants to: it's doing it because that's the way
>>> C/C++ arithmetic on primitive types works.  In other words, the current
>>> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime N)
>>> using a single primitive integer type.  wide_int gives us N-bit
>>> arithmetic
>>> directly; no emulation is needed.
>>
>> Ok, so what wide-int provides is integer values encoded in 'len' HWI
>> words that fit in 'precision' or more bits (and often in less).  wide-int
>> also provides N-bit arithmetic operations.  IMHO both are tied
>> too closely together.  A given constant doesn't really have a precision.
>> Associating one with it to give a precision to an arithmetic operation
>> looks wrong to me and is a source of mismatches.
>>
>> What RTL currently has looks better to me - operations have
>> explicitly specified precisions.
>
> I have tried very hard to make wide-int work very efficiently with both tree
> and rtl without biasing the rep towards either representation.  Both rtl and
> trees constants have a precision.   In tree, constants are done better than
> in rtl because the tree really does have a field that is filled in that
> points to a type. However, that does not mean that rtl constants do not have
> a precision: currently you have to look around at the context to find the
> mode of a constant that is in your hand, but it is in fact always there.
> At the rtl level, you can see the entire patch - we always find an
> appropriate mode.

Apparently you cannot.  See Richard S.'s examples.

As for "better", the tree has the issue that we have so many unshared
constants because they only differ in type but not in their representation.
That's the nice part of RTL constants all having VOIDmode ...

Richard.
Kenneth Zadeck - May 3, 2013, 11:49 a.m.
On 05/03/2013 07:34 AM, Richard Biener wrote:
> On Thu, Apr 25, 2013 at 1:18 AM, Kenneth Zadeck
> <zadeck@naturalbridge.com> wrote:
>> On 04/24/2013 11:13 AM, Richard Biener wrote:
>>> On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
>>> <rdsandiford@googlemail.com>  wrote:
>>>> Richard Biener<richard.guenther@gmail.com>  writes:
>>>>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>>>>> in practice is because it is clearing out these mistakes (GEN_INT
>>>>>> rather than gen_int_mode) and missing features (non-power-of-2 widths).
>>>>> Note that the argument should be about CONST_WIDE_INT here,
>>>>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>>>>> and can be properly truncated/extended according to mode at the time we
>>>>> build it
>>>>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>>>>> wide-int itself is automagically providing that truncation/extension
>>>>> (though it is a possibility, one that does not match existing behavior
>>>>> of
>>>>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>>>> I agree it doesn't match the existing behaviour of HWI for CONST_INT or
>>>> double-int for CONST_DOUBLE, but I think that's very much a good thing.
>>>> The model for HWIs at the moment is that you have to truncate results
>>>> to the canonical form after every operation where it matters.  As you
>>>> proved in your earlier message about the plus_constant bug, that's easily
>>>> forgotten.  I don't think the rtl code is doing all CONST_INT arithmetic
>>>> on full HWIs because it wants to: it's doing it because that's the way
>>>> C/C++ arithmetic on primitive types works.  In other words, the current
>>>> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime N)
>>>> using a single primitive integer type.  wide_int gives us N-bit
>>>> arithmetic
>>>> directly; no emulation is needed.
>>> Ok, so what wide-int provides is integer values encoded in 'len' HWI
>>> words that fit in 'precision' or more bits (and often in less).  wide-int
>>> also provides N-bit arithmetic operations.  IMHO both are tied
>>> too closely together.  A given constant doesn't really have a precision.
>>> Associating one with it to give a precision to an arithmetic operation
>>> looks wrong to me and is a source of mismatches.
>>>
>>> What RTL currently has looks better to me - operations have
>>> explicitly specified precisions.
>> I have tried very hard to make wide-int work very efficiently with both tree
>> and rtl without biasing the rep towards either representation.  Both rtl and
>> trees constants have a precision.   In tree, constants are done better than
>> in rtl because the tree really does have a field that is filled in that
>> points to a type. However, that does not mean that rtl constants do not have
>> a precision: currently you have to look around at the context to find the
>> mode of a constant that is in your hand, but it is in fact always there.
>> At the rtl level, you can see the entire patch - we always find an
>> appropriate mode.
> Apparently you cannot.  See Richard S.'s examples.
>
> As for "better", the tree has the issue that we have so many unshared
> constants because they only differ in type but not in their representation.
> That's the nice part of RTL constants all having VOIDmode ...
>
> Richard.
I said we could always find a mode; I did not say that in order to find 
the mode we did not have to stand on our head, juggle chainsaws and say 
"mother may I".   The decision to leave the mode as VOIDmode in rtl integer 
constants was made to save space, but it comes with an otherwise very high 
cost, and in today's world of cheap memory it seems fairly dated.   It is a 
decision that I and others would love to change, and the truth is wide-int 
is one step in that direction (in that it gets rid of the pun of 
using double-int for both integers and floats, where the discriminator is 
VOIDmode for ints).  But for now we have to live with that poor decision.
Richard Guenther - May 3, 2013, 12:12 p.m.
On Fri, May 3, 2013 at 1:49 PM, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
> On 05/03/2013 07:34 AM, Richard Biener wrote:
>>
>> On Thu, Apr 25, 2013 at 1:18 AM, Kenneth Zadeck
>> <zadeck@naturalbridge.com> wrote:
>>>
>>> On 04/24/2013 11:13 AM, Richard Biener wrote:
>>>>
>>>> On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>
>>>>> Richard Biener<richard.guenther@gmail.com>  writes:
>>>>>>
>>>>>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>>>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>>>
>>>>>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>>>>>> in practice is because it is clearing out these mistakes (GEN_INT
>>>>>>> rather than gen_int_mode) and missing features (non-power-of-2
>>>>>>> widths).
>>>>>>
>>>>>> Note that the argument should be about CONST_WIDE_INT here,
>>>>>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>>>>>> and can be properly truncated/extended according to mode at the time
>>>>>> we
>>>>>> build it
>>>>>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>>>>>> wide-int itself is automagically providing that truncation/extension
>>>>>> (though it is a possibility, one that does not match existing behavior
>>>>>> of
>>>>>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>>>>>
>>>>> I agree it doesn't match the existing behaviour of HWI for CONST_INT or
>>>>> double-int for CONST_DOUBLE, but I think that's very much a good thing.
>>>>> The model for HWIs at the moment is that you have to truncate results
>>>>> to the canonical form after every operation where it matters.  As you
>>>>> proved in your earlier message about the plus_constant bug, that's
>>>>> easily
>>>>> forgotten.  I don't think the rtl code is doing all CONST_INT
>>>>> arithmetic
>>>>> on full HWIs because it wants to: it's doing it because that's the way
>>>>> C/C++ arithmetic on primitive types works.  In other words, the current
>>>>> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime
>>>>> N)
>>>>> using a single primitive integer type.  wide_int gives us N-bit
>>>>> arithmetic
>>>>> directly; no emulation is needed.
>>>>
>>>> Ok, so what wide-int provides is integer values encoded in 'len' HWI
>>>> words that fit in 'precision' or more bits (and often in less).
>>>> wide-int
>>>> also provides N-bit arithmetic operations.  IMHO both are tied
>>>> too closely together.  A given constant doesn't really have a precision.
>>>> Associating one with it to give a precision to an arithmetic operation
>>>> looks wrong to me and is a source of mismatches.
>>>>
>>>> What RTL currently has looks better to me - operations have
>>>> explicitly specified precisions.
>>>
>>> I have tried very hard to make wide-int work very efficiently with both
>>> tree
>>> and rtl without biasing the rep towards either representation.  Both rtl
>>> and
>>> trees constants have a precision.   In tree, constants are done better
>>> than
>>> in rtl because the tree really does have a field that is filled in that
>>> points to a type. However, that does not mean that rtl constants do not
>>> have
>>> a precision: currently you have to look around at the context to find the
>>> mode of a constant that is in your hand, but it is in fact always there.
>>> At the rtl level, you can see the entire patch - we always find an
>>> appropriate mode.
>>
>> Apparently you cannot.  See Richard S.'s examples.
>>
>> As for "better", the tree has the issue that we have so many unshared
>> constants because they only differ in type but not in their
>> representation.
>> That's the nice part of RTL constants all having VOIDmode ...
>>
>> Richard.
>
> I said we could always find a mode; I did not say that in order to find the
> mode we did not have to stand on our head, juggle chainsaws and say "mother
> may I".   The decision to leave the mode as VOIDmode in rtl integer constants
> was made to save space, but it comes with an otherwise very high cost, and in
> today's world of cheap memory it seems fairly dated.   It is a decision that I
> and others would love to change, and the truth is wide-int is one step in
> that direction (in that it gets rid of the pun of using double-int for both
> integers and floats, where the discriminator is VOIDmode for ints).  But for
> now we have to live with that poor decision.

As far as I have read your wide-int patches the CONST_WIDE_INT RTX
object does not include a mode.  So I don't see it as a step forward in
any way (other than that it makes it explicit that you _do_ need a mode
to do any operation on a constant).

Richard.
Kenneth Zadeck - May 3, 2013, 12:31 p.m.
On 05/03/2013 08:12 AM, Richard Biener wrote:
> On Fri, May 3, 2013 at 1:49 PM, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
>> On 05/03/2013 07:34 AM, Richard Biener wrote:
>>> On Thu, Apr 25, 2013 at 1:18 AM, Kenneth Zadeck
>>> <zadeck@naturalbridge.com> wrote:
>>>> On 04/24/2013 11:13 AM, Richard Biener wrote:
>>>>> On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
>>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>> Richard Biener<richard.guenther@gmail.com>  writes:
>>>>>>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>>>>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>>>>>>> in practice is because it is clearing out these mistakes (GEN_INT
>>>>>>>> rather than gen_int_mode) and missing features (non-power-of-2
>>>>>>>> widths).
>>>>>>> Note that the argument should be about CONST_WIDE_INT here,
>>>>>>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>>>>>>> and can be properly truncated/extended according to mode at the time
>>>>>>> we
>>>>>>> build it
>>>>>>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>>>>>>> wide-int itself is automagically providing that truncation/extension
>>>>>>> (though it is a possibility, one that does not match existing behavior
>>>>>>> of
>>>>>>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>>>>>> I agree it doesn't match the existing behaviour of HWI for CONST_INT or
>>>>>> double-int for CONST_DOUBLE, but I think that's very much a good thing.
>>>>>> The model for HWIs at the moment is that you have to truncate results
>>>>>> to the canonical form after every operation where it matters.  As you
>>>>>> proved in your earlier message about the plus_constant bug, that's
>>>>>> easily
>>>>>> forgotten.  I don't think the rtl code is doing all CONST_INT
>>>>>> arithmetic
>>>>>> on full HWIs because it wants to: it's doing it because that's the way
>>>>>> C/C++ arithmetic on primitive types works.  In other words, the current
>>>>>> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime
>>>>>> N)
>>>>>> using a single primitive integer type.  wide_int gives us N-bit
>>>>>> arithmetic
>>>>>> directly; no emulation is needed.
>>>>> Ok, so what wide-int provides is integer values encoded in 'len' HWI
>>>>> words that fit in 'precision' or more bits (and often in less).
>>>>> wide-int
>>>>> also provides N-bit arithmetic operations.  IMHO both are tied
>>>>> too closely together.  A given constant doesn't really have a precision.
>>>>> Associating one with it to give a precision to an arithmetic operation
>>>>> looks wrong to me and is a source of mismatches.
>>>>>
>>>>> What RTL currently has looks better to me - operations have
>>>>> explicitly specified precisions.
>>>> I have tried very hard to make wide-int work very efficiently with both
>>>> tree
>>>> and rtl without biasing the rep towards either representation.  Both rtl
>>>> and
>>>> trees constants have a precision.   In tree, constants are done better
>>>> than
>>>> in rtl because the tree really does have a field that is filled in that
>>>> points to a type. However, that does not mean that rtl constants do not
>>>> have
>>>> a precision: currently you have to look around at the context to find the
>>>> mode of a constant that is in your hand, but it is in fact always there.
>>>> At the rtl level, you can see the entire patch - we always find an
>>>> appropriate mode.
>>> Apparently you cannot.  See Richard S.'s examples.
>>>
>>> As for "better", the tree has the issue that we have so many unshared
>>> constants because they only differ in type but not in their
>>> representation.
>>> That's the nice part of RTL constants all having VOIDmode ...
>>>
>>> Richard.
>> I said we could always find a mode; I did not say that in order to find the
>> mode we did not have to stand on our head, juggle chainsaws and say "mother
>> may I".   The decision to leave the mode as VOIDmode in rtl integer constants
>> was made to save space, but it comes with an otherwise very high cost, and in
>> today's world of cheap memory it seems fairly dated.   It is a decision that I
>> and others would love to change, and the truth is wide-int is one step in
>> that direction (in that it gets rid of the pun of using double-int for both
>> integers and floats, where the discriminator is VOIDmode for ints).  But for
>> now we have to live with that poor decision.
> As far as I have read your wide-int patches the CONST_WIDE_INT RTX
> object does not include a mode.  So I don't see it as a step forward in
> any way (other than that it makes it explicit that you _do_ need a mode
> to do any operation on a constant).
>
> Richard.
There are several problems with just dropping a mode into the already 
existing mode field of an rtx constant.
1) There may be places where a back end is testing equality to see 
if constants of different modes are in fact the same value.
2) Most of the places that build int constants use GEN_INT, which does 
not take a mode, even though about 95% of those places have a mode right 
there and the rest just take a little work.    There are constructors 
that do take a mode, but in the end they just throw the mode on the floor.
3) The canonical test to see if a CONST_DOUBLE contains an int or float 
is to test if the mode is VOIDmode.

Any port that is converted to have TARGET_SUPPORTS_WIDE_INT no longer 
has problem (3).   I admit that rooting out (1) is likely to be the worst 
of the problems.   But we were careful to at least make this work move 
us in the correct direction.
Richard Sandiford - May 3, 2013, 12:37 p.m.
Richard Biener <richard.guenther@gmail.com> writes:
>> See e.g. the hoops that cselib has to jump through:
>>
>> /* We need to pass down the mode of constants through the hash table
>>    functions.  For that purpose, wrap them in a CONST of the appropriate
>>    mode.  */
>> static rtx
>> wrap_constant (enum machine_mode mode, rtx x)
>> {
>>   if ((!CONST_SCALAR_INT_P (x)) && GET_CODE (x) != CONST_FIXED)
>>     return x;
>>   gcc_assert (mode != VOIDmode);
>>   return gen_rtx_CONST (mode, x);
>> }
>>
>> That is, cselib locally converts (const_int X) into (const:M (const_int X)),
>> purely so that it doesn't lose track of the CONST_INT's mode.
>> (const:M (const_int ...)) is invalid rtl elsewhere, but a necessary
>> hack here all the same.
>
> Indeed ugly.  But I wonder why cselib needs to store constants in
> hashtables at all ... they should be VALUEs themselves.  So the fix
> for the above might not necessarily be to assign the CONST_INT
> a mode (not that CONST_WIDE_INT would fix the above).

I don't understand.  Do you mean that cselib values ought to have
a field to say whether the value is constant or not, and if so, what
constant that is?  That feels like just the same kind of hack as the above.
The current idea of chaining all known equivalent rtxes in a list seems
more natural than having a list of all known equivalent rtxes except
CONST_INT and CONST_DOUBLE, which have to be stored separately instead.
(Again, we have runtime constants like SYMBOL_REF, which store modes,
and which would presumably still be in the normal rtx list.)

CONST_WIDE_INT was never supposed to solve this problem.  I'm just giving
it as an example to back up the argument that rtx constants do in fact
have modes (although those modes are not stored in the rtx).  The code
above is there to make sure that equivalence stays transitive.
Without it we could have bogus equivalences like:

  (A) (reg:DI X) == (const_int Y) == (reg:SI Z)

even though it cannot be the case that:

  (B) (reg:DI X) == (reg:SI Z)

My point is that, semantically, (A) did not come from X and Z being
equivalent to the "same" constant.  X was equivalent to (const_int:DI Y)
and Z was equivalent to (const_int:SI Y).  (A) only came about because
we happen to use the same rtx object to represent those two semantically-
distinct constants.

The idea isn't to make CONST_WIDE_INT get rid of the code above.
The idea is to make sure that wide_int has a precision and so doesn't
require code like the above to be written when dealing with wide_ints.

In other words, I got the impression your argument was "the fact
that CONST_INT and CONST_DOUBLE don't store a mode shows that
wide_int shouldn't store a precision".  But the fact that CONST_INT
and CONST_DOUBLE don't store a mode doesn't mean they don't _have_
a mode.  You just have to keep track of that mode separately.
And the same would apply to wide_int if we did the same thing there.

What I was trying to argue was that storing the mode/precision
separately is not always easy.  It's also much less robust,
because getting the wrong mode or precision will only show up
for certain values.  If the precision is stored in the wide_int,
mismatches can be asserted for based on precision alone, regardless
of the value.

> Ok, so please then make all CONST_INTs and CONST_DOUBLEs have
> a mode!

I'm saying that CONST_INT and CONST_DOUBLE already have a mode, but that
mode is not stored in the rtx.  So if you're saying "make all CONST_INTs
and CONST_DOUBLEs _store_ a mode", then yeah, I'd like to :-)  But I see
Kenny's patch as a prerequisite for that, because it consolidates the
CONST_INT and CONST_DOUBLE code so that the choice of rtx code is
less special.  Lots more work is needed after that.

Although TBH, the huge pushback that Kenny has got from this patch
puts me off ever trying that change.

But storing the mode in the rtx is orthogonal to what Kenny is doing.
The mode of each rtx constant is already available in the places
that Kenny is changing, because we already do the work to keep track
of the mode separately.  Being able to get the mode directly from the
rtx would be simpler and IMO better, but the semantics are the same
either way.

Kenny's patch is not designed to "fix" the CONST_INT representation
(although the patch does make it easier to "fix" the representation
in future).  Kenny's patch is about representing and handling constants
that we can't at the moment.

The argument isn't whether CONST_WIDE_INT repeats "mistakes" made for
CONST_INT and CONST_DOUBLE; I hope we agree that CONST_WIDE_INT should
behave like the other two, whatever that is.  The argument is about
whether we copy the "mistake" into the wide_int class.

Storing a precision in wide_int in no way requires CONST_WIDE_INT
to store a mode.  They are separate choices.

> The solution is not to have a CONST_WIDE_INT (again with VOIDmode
> and no precision in the RTX object(!)) and only have wide_int have a
> precision.

Why is having a VOIDmode CONST_WIDE_INT any worse than having
a VOIDmode CONST_INT or CONST_DOUBLE?  In all three cases the mode
is being obtained/inferred from the same external source.

Richard
Richard Guenther - May 3, 2013, 12:40 p.m.
On Fri, May 3, 2013 at 2:31 PM, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
> On 05/03/2013 08:12 AM, Richard Biener wrote:
>>
>> On Fri, May 3, 2013 at 1:49 PM, Kenneth Zadeck <zadeck@naturalbridge.com>
>> wrote:
>>>
>>> On 05/03/2013 07:34 AM, Richard Biener wrote:
>>>>
>>>> On Thu, Apr 25, 2013 at 1:18 AM, Kenneth Zadeck
>>>> <zadeck@naturalbridge.com> wrote:
>>>>>
>>>>> On 04/24/2013 11:13 AM, Richard Biener wrote:
>>>>>>
>>>>>> On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
>>>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>>>
>>>>>>> Richard Biener<richard.guenther@gmail.com>  writes:
>>>>>>>>
>>>>>>>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>>>>>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>>>>>
>>>>>>>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>>>>>>>> in practice is because it is clearing out these mistakes (GEN_INT
>>>>>>>>> rather than gen_int_mode) and missing features (non-power-of-2
>>>>>>>>> widths).
>>>>>>>>
>>>>>>>> Note that the argument should be about CONST_WIDE_INT here,
>>>>>>>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>>>>>>>> and can be properly truncated/extended according to mode at the time
>>>>>>>> we
>>>>>>>> build it
>>>>>>>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>>>>>>>> wide-int itself is automagically providing that truncation/extension
>>>>>>>> (though it is a possibility, one that does not match existing
>>>>>>>> behavior
>>>>>>>> of
>>>>>>>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>>>>>>>
>>>>>>> I agree it doesn't match the existing behaviour of HWI for CONST_INT
>>>>>>> or
>>>>>>> double-int for CONST_DOUBLE, but I think that's very much a good
>>>>>>> thing.
>>>>>>> The model for HWIs at the moment is that you have to truncate results
>>>>>>> to the canonical form after every operation where it matters.  As you
>>>>>>> proved in your earlier message about the plus_constant bug, that's
>>>>>>> easily
>>>>>>> forgotten.  I don't think the rtl code is doing all CONST_INT
>>>>>>> arithmetic
>>>>>>> on full HWIs because it wants to: it's doing it because that's the
>>>>>>> way
>>>>>>> C/C++ arithmetic on primitive types works.  In other words, the
>>>>>>> current
>>>>>>> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime
>>>>>>> N)
>>>>>>> using a single primitive integer type.  wide_int gives us N-bit
>>>>>>> arithmetic
>>>>>>> directly; no emulation is needed.
>>>>>>
>>>>>> Ok, so what wide-int provides is integer values encoded in 'len' HWI
>>>>>> words that fit in 'precision' or more bits (and often in less).
>>>>>> wide-int
>>>>>> also provides N-bit arithmetic operations.  IMHO both are tied
>>>>>> too closely together.  A given constant doesn't really have a
>>>>>> precision.
>>>>>> Associating one with it to give a precision to an arithmetic operation
>>>>>> looks wrong to me and is a source of mismatches.
>>>>>>
>>>>>> What RTL currently has looks better to me - operations have
>>>>>> explicitly specified precisions.
>>>>>
>>>>> I have tried very hard to make wide-int work very efficiently with both
>>>>> tree
>>>>> and rtl without biasing the rep towards either representation.  Both
>>>>> rtl
>>>>> and
>>>>> trees constants have a precision.   In tree, constants are done better
>>>>> than
>>>>> in rtl because the tree really does have a field that is filled in that
>>>>> points to a type. However, that does not mean that rtl constants do not
>>>>> have
>>>>> a precision: currently you have to look around at the context to find
>>>>> the
>>>>> mode of a constant that is in your hand, but it is in fact always
>>>>> there.
>>>>> At the rtl level, you can see the entire patch - we always find an
>>>>> appropriate mode.
>>>>
>>>> Apparently you cannot.  See Richard S.'s examples.
>>>>
>>>> As of "better", the tree has the issue that we have so many unshared
>>>> constants because they only differ in type but not in their
>>>> representation.
>>>> That's the nice part of RTL constants all having VOIDmode ...
>>>>
>>>> Richard.
>>>
>>> I said we could always find a mode, I did not say that in order to find
>>> the
>>> mode we did not have to stand on our head, juggle chainsaws and say
>>> "mother
>>> may i".   The decision to leave the mode as void in rtl integer constants
>>> was made to save space, but comes with an otherwise very high cost and in
>>> today's world of cheap memory seems fairly dated.   It is a decision that
>>> I
>>> and others would love to change and the truth is wide int is one step in
>>> that direction (in that it gets rid of the pun of using double-int for
>>> both
>>> integers and floats where the discriminator is voidmode for ints.) But
>>> for
>>> now we have to live with that poor decision.
>>
>> As far as I have read your wide-int patches the CONST_WIDE_INT RTX
>> object does not include a mode.  So I don't see it as a step forward in
>> any way (other than that it makes it explicit that you _do_ need a mode
>> to do any operation on a constant).
>>
>> Richard.
>
> There are several problems with just dropping a mode into the already
> existing mode field of an rtx constant.
> 1) There may be places where a back end is testing equality to see if
> constants of different modes are in fact the same value.

That supposedly only happens in places where both RTX objects are
known to be constants.  Which makes me guess that it's in 99% of the
cases a comparison against one of the static RTX objects like
const0_rtx - thus easily greppable for (and easily converted, similarly
to the tree case where we have predicates for such tests like integer_zerop ()).
The remaining cases would be missed optimizations at most.
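As a hedged illustration of the kind of conversion meant here (the struct and names below are stand-ins, not GCC's rtx or its API): once integer constants carry a mode, zeros of different modes need not be the same object, so pointer comparisons against a shared const0_rtx stop working, and a value predicate in the style of the tree-level integer_zerop () does the job instead.

```c
#include <assert.h>

/* Hedged, illustrative stand-ins -- not GCC's rtx representation.
   With per-mode zero constants, identity comparison against one
   shared zero object no longer suffices.  */
struct rtx_stub { int mode; long ival; };

static struct rtx_stub zero_si = { 1 /* hypothetical SImode id */, 0 };
static struct rtx_stub zero_di = { 2 /* hypothetical DImode id */, 0 };

/* Mode-agnostic test for a zero constant, analogous to the tree-level
   integer_zerop () predicate.  */
static int
const_zerop (const struct rtx_stub *x)
{
  return x->ival == 0;
}
```

The two zero objects compare unequal as pointers, yet both satisfy the predicate, which is exactly the behavior the grep-and-convert pass would preserve.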

> 2) Most of the places that build int constants use GEN_INT which does not
> take a mode, even though about 95% of those places have a mode right there
> and the rest just take a little work.    There are constructor that do take
> a mode, but in the end they just throw the mode on the floor.

The fix is easy - make GEN_INT take a mandatory mode argument.
(and fix the fallout ...)
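For reference, the canonicalization that a mode-taking constructor performs and that bare GEN_INT skips can be sketched as follows (a simplified stand-in; in GCC the real worker behind gen_int_mode is trunc_int_for_mode):

```c
#include <assert.h>

/* Simplified stand-in for mode-aware constant canonicalization.
   A constant is stored sign-extended from its mode's precision.
   Assumes 1 <= precision <= the bit width of unsigned long.  */
static long
sign_extend_to_precision (unsigned long c, unsigned precision)
{
  unsigned long sign = 1UL << (precision - 1);
  c &= (sign << 1) - 1;                /* keep the low 'precision' bits */
  return (long) ((c ^ sign) - sign);   /* well-defined sign extension   */
}
```

With this rule the QImode bit pattern 0x80 is always stored as -128 and never as 128, giving each constant the single canonical representation discussed in the thread.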

> 3) The canonical test to see if a CONST_DOUBLE contains an int or float is
> to test if the mode is VOIDmode.

I think you addressed this already by introducing CONST_DOUBLE_AS_INT_P ().

> Any port that is converted to have TARGET_SUPPORTS_WIDE_INT no longer has
> problem (3).   I admit that rooting out (1) is likely to be the worst of the
> problems.   But we were careful to at least make this work move us in the
> correct direction.

Well, you were careful to not walk in the wrong direction.  But I cannot see
where you get closer to fixing any of 1-3 (apart from considering the new predicates
being that, or not overloading CONST_DOUBLE with floats and ints).

Richard.
Kenneth Zadeck - May 3, 2013, 12:45 p.m.
On 05/03/2013 07:19 AM, Richard Biener wrote:
> On Wed, Apr 24, 2013 at 5:29 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>> On Wed, Apr 24, 2013 at 4:35 PM, Kenneth Zadeck
>>> <zadeck@naturalbridge.com> wrote:
>>>> On 04/24/2013 09:36 AM, Richard Biener wrote:
>>>>> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
>>>>> <rdsandiford@googlemail.com> wrote:
>>>>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>>>>> Can we in such cases please do a preparatory patch and change the
>>>>>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>>>>>> mode precision first?
>>>>>> I'm not sure what you mean here.  CONST_INT HWIs are already
>>>>>> sign-extended
>>>>>> from mode precision to HWI precision.  The 8-bit value 0x80 must
>>>>>> be
>>>>>> represented as (const_int -128); nothing else is allowed.  E.g.
>>>>>> (const_int 128)
>>>>>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
>>>>> Yes, that's what I understand.  But consider you get a CONST_INT that is
>>>>> _not_ a valid QImode value.  Current code simply trusts that it is, given
>>>>> the context from ...
>>>> And the fact that we have to trust but cannot verify is a severe problem
>>>> at the rtl level that is not going to go away.    what i have been strongly
>>>> objecting to is your idea that just because we cannot verify it, we can thus
>>>> go change it in some completely different way (i.e. the infinite precision
>>>> nonsense that you keep hitting us with) and it will all be ok.
>>> Apparently it is all ok because that's exactly what we have today (and
>>> had for the last 25 years).  CONST_INT encodes infinite precision signed
>>> values (with the complication that a QImode 0x80 isn't valid, thus all
>>> modes are signed as well it seems).
>> I think this is the fundamental disagreement.  Your last step doesn't
>> follow.  RTL integer modes are neither signed nor unsigned.  They are
>> just a collection of N bits.  The fact that CONST_INTs represent
>> smaller-than-HWI integers in sign-extended form is purely a representational
>> detail.  There are no semantics attached to it.  We could just as easily
>> have decided to extend with zeros or ones instead of sign bits.
>>
>> Although the decision was made before my time, I'm pretty sure the
>> point of having a canonical representation (which happened to be sign
>> extension) was to make sure that any given rtl constant has only a
>> single representation.  It would be too confusing if a QImode 0x80 could
>> be represented as either (const_int 128) or (const_int -128) (would
>> (const_int 384) then also be OK?).
> No, not as value for a QImode as it doesn't fit there.
>
>> And that's the problem with using an infinite-precision wide_int.
>> If you directly convert a CONST_INT representation of 0x80 into a
>> wide_int, you will always get infinite-precision -128, thanks to the
>> CONST_INT canonicalisation rule.  But if you arrive at 0x80 through
>> arithmetic, you might get infinite-precision 128 instead.  These two
>> values would not compare equal.
> That's true.  Note that I am not objecting to the canonicalization choice
> for the RTL object.  On trees we do have -128 and 128 QImode integers
> as tree constants have a sign.
>
> So we clearly cannot have wide_int make that choice, but those that
> create either a tree object or a RTL object have to do additional
> canonicalization (or truncation to not allow a QImode 384).
>
> Yes, I'm again arguing that making choices for wide_int shouldn't be
> done because it seems right for RTL or right for how a CPU operates.
> But we are mixing two things in this series of patches - introduction
> of an additional RTX object kind CONST_WIDE_INT together with
> deciding on its encoding of constant values, and introduction of
> a wide_int class as a vehicle to do arithmetic on the host for larger
> than HOST_WIDE_INT values.
>
> The latter could be separated by dropping CONST_DOUBLE in favor
> of CONST_WIDE_INT everywhere and simply providing a
> CONST_WIDE_INT <-> double-int interface (both ways, so you'd
> actually never generate a CONST_WIDE_INT that doesn't fit a double-int).
Given the tree world, I am surprised that you would push in this
direction.   While I do see some benefit for having two reps for ints at
the rtl level, I understand the argument that it is one too many.

TARGET_SUPPORTS_WIDE_INT is a transitional trick.   The idea is to
move the ports away from using CONST_DOUBLE at all for ints.  Not only is
this a step towards putting a mode in an rtl int const, but it also 
would allow the floating point world to move beyond the limits that 
sharing the rep with integers imposes.

One of the big goals of this cleanup is to get rid of the three path 
implementation for integer math:
constant fits in HWI - good implementation.
constant fits in 2 HWIs - spotty implementation.
constant needs more than 2 HWIs - ice or get wrong answer.

Doing a trick like this just makes it harder to unify everything into 
the good implementation category.

>
>>> CONST_DOUBLE encodes infinite precision signed values as well.  Just
>>> the "infinite" is limited by the size of the encoding, one and two
>>> HOST_WIDE_INTs.
>> It encodes an N-bit integer.  It's just that (assuming non-power-of-2
>> modes) several N-bit integers (with varying N) can be encoded using the
>> same CONST_DOUBLE representation.  That might be what you meant, sorry,
>> and so might seem pedantic, but I wasn't sure.
> Yes, that's what I meant.  Being able to share the same RTX object for
> constants with the same representation but a different mode is nice
> and looks appealing (of course works only when the actual mode stored
> in the RTX object is then sth like VOIDmode ...).  That we have gazillions
> of NULL pointer constants on trees (for each pointer type) isn't.
>
> Richard.
4 GB of memory is less than $30 US.   We need to move on.   The pain that
no mode causes is significant.
>
>> Richard
Richard Sandiford - May 3, 2013, 12:48 p.m.
Kenneth Zadeck <zadeck@naturalbridge.com> writes:
> There are several problems with just dropping a mode into the already 
> existing mode field of an rtx constant.
> 1) There may be places where a back end is testing equality to see
> if constants of different modes are in fact the same value.
> 2) Most of the places that build int constants use GEN_INT which does
> not take a mode, even though about 95% of those places have a mode right 
> there and the rest just take a little work.    There are constructors
> that do take a mode, but in the end they just throw the mode on the floor.
> 3) The canonical test to see if a CONST_DOUBLE contains an int or float 
> is to test if the mode is VOIDmode.
>
> Any port that is converted to have TARGET_SUPPORTS_WIDE_INT no longer
> has problem (3).   I admit that rooting out (1) is likely to be the worst
> of the problems.   But we were careful to at least make this work move 
> us in the correct direction.

I agree with that, and FWIW, there are others.  Two off the top of my head:

4) Many places use const0_rtx instead of CONST0_RTX (mode) (correctly,
   according to current representation)
5) All const_ints in the .md files would need to be given a mode
   (except for those places where const_int actually represents
   a C++ constant, such as in attributes).

I realise your list wasn't supposed to be exhaustive, and neither's mine :-)

Richard
Richard Guenther - May 3, 2013, 12:53 p.m.
On Fri, May 3, 2013 at 2:37 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>>> See e.g. the hoops that cselib has to jump through:
>>>
>>> /* We need to pass down the mode of constants through the hash table
>>>    functions.  For that purpose, wrap them in a CONST of the appropriate
>>>    mode.  */
>>> static rtx
>>> wrap_constant (enum machine_mode mode, rtx x)
>>> {
>>>   if ((!CONST_SCALAR_INT_P (x)) && GET_CODE (x) != CONST_FIXED)
>>>     return x;
>>>   gcc_assert (mode != VOIDmode);
>>>   return gen_rtx_CONST (mode, x);
>>> }
>>>
>>> That is, cselib locally converts (const_int X) into (const:M (const_int X)),
>>> purely so that it doesn't lose track of the CONST_INT's mode.
>>> (const:M (const_int ...)) is invalid rtl elsewhere, but a necessary
>>> hack here all the same.
>>
>> Indeed ugly.  But I wonder why cselib needs to store constants in
>> hashtables at all ... they should be VALUEs themselves.  So the fix
>> for the above might not necessarily be to assign the CONST_INT
>> a mode (not that CONST_WIDE_INT would fix the above).
>
> I don't understand.  Do you mean that cselib values ought to have
> a field to say whether the value is constant or not, and if so, what
> constant that is?  That feels like just the same kind of hack as the above.
> The current idea of chaining all known equivalent rtxes in a list seems
> more natural than having a list of all known equivalent rtxes except
> CONST_INT and CONST_DOUBLE, which have to be stored separately instead.
> (Again, we have runtime constants like SYMBOL_REF, which store modes,
> and which would presumably still be in the normal rtx list.)
>
> CONST_WIDE_INT was never supposed to solve this problem.  I'm just giving
> it as an example to back up the argument that rtx constants do in fact
> have modes (although those modes are not stored in the rtx).  The code
> above is there to make sure that equivalence stays transitive.
> Without it we could have bogus equivalences like:
>
>   (A) (reg:DI X) == (const_int Y) == (reg:SI Z)
>
> even though it cannot be the case that:
>
>   (B) (reg:DI X) == (reg:SI Z)
>
> My point is that, semantically, (A) did not come from X and Z being
> equivalent to the "same" constant.  X was equivalent to (const_int:DI Y)
> and Z was equivalent to (const_int:SI Y).  (A) only came about because
> we happen to use the same rtx object to represent those two semantically-
> distinct constants.
>
> The idea isn't to make CONST_WIDE_INT get rid of the code above.
> The idea is to make sure that wide_int has a precision and so doesn't
> require code like the above to be written when dealing with wide_ints.
>
> In other words, I got the impression your argument was "the fact
> that CONST_INT and CONST_DOUBLE don't store a mode shows that
> wide_int shouldn't store a precision".  But the fact that CONST_INT
> and CONST_DOUBLE don't store a mode doesn't mean they don't _have_
> a mode.  You just have to keep track of that mode separately.
> And the same would apply to wide_int if we did the same thing there.
>
> What I was trying to argue was that storing the mode/precision
> separately is not always easy.  It's also much less robust,
> because getting the wrong mode or precision will only show up
> for certain values.  If the precision is stored in the wide_int,
> mismatches can be asserted for based on precision alone, regardless
> of the value.

I was just arguing that pointing out facts in RTL land doesn't
necessarily influence wide-int, which is purely separate.  So if you
argue that having a mode in RTL constants would be soo nice and
thus that is why you want a precision in wide-int then I don't follow
that argument.  If you want a mode in RTL constants then get a mode
in RTL constants!

This would make it immediately obvious where to get the precision
for wide-ints - something you do not address at all (and as you don't
I sort of cannot believe the 'it would be so nice to have a mode on RTL
constants').

That said, if modes on RTL constants were so useful then why not
have them on CONST_WIDE_INT at least?  Please.  Only sticking
them onto wide-int in the form of a precision is completely backward to me
(and I still think the core wide-int shouldn't have a precision, and if
you really want a wide-int-with-precision simply derive from wide-int).

>> Ok, so please then make all CONST_INTs and CONST_DOUBLEs have
>> a mode!
>
> I'm saying that CONST_INT and CONST_DOUBLE already have a mode, but that
> mode is not stored in the rtx.  So if you're saying "make all CONST_INTs
> and CONST_DOUBLEs _store_ a mode", then yeah, I'd like to :-)  But I see
> Kenny's patch as a prerequisite for that, because it consolidates the
> CONST_INT and CONST_DOUBLE code so that the choice of rtx code is
> less special.  Lots more work is needed after that.

If there were a separate patch consolidating the paths I'd be all for
doing that.
I don't see a reason that this cannot be done even with the current
code using double-ints.

> Although TBH, the huge pushback that Kenny has got from this patch
> puts me off ever trying that change.

Well.  The patch does so much at once and is so large that it is
basically unreviewable (or very hard to review at least).

> But storing the mode in the rtx is orthogonal to what Kenny is doing.
> The mode of each rtx constant is already available in the places
> that Kenny is changing, because we already do the work to keep track
> of the mode separately.  Being able to get the mode directly from the
> rtx would be simpler and IMO better, but the semantics are the same
> either way.

Well, you showed examples where it is impossible to get at the mode.

> Kenny's patch is not designed to "fix" the CONST_INT representation
> (although the patch does make it easier to "fix" the representation
> in future).  Kenny's patch is about representing and handling constants
> that we can't at the moment.

No, it is about much more.

> The argument isn't whether CONST_WIDE_INT repeats "mistakes" made for
> CONST_INT and CONST_DOUBLE; I hope we agree that CONST_WIDE_INT should
> behave like the other two, whatever that is.  The argument is about
> whether we copy the "mistake" into the wide_int class.

I don't see how CONST_WIDE_INT is in any way related to wide_int other
than that you use wide_int to operate on the constants encoded in
CONST_WIDE_INT.  As you have a mode available at the point you
create a wide_int from a CONST_WIDE_INT you can very easily just
use that mode's precision to specify the precision of an operation
(or zero/sign-extend the result).  That's what happens hidden in the
wide-int implementation currently, but in the awkward way that allows
precision mismatches and leads to odd things like having a wide-int
1 constant with a precision.

> Storing a precision in wide_int in no way requires CONST_WIDE_INT
> to store a mode.  They are separate choices.

Yes.  And I obviously would have chosen to store a mode in CONST_WIDE_INT
and no precision in wide_int.  And I cannot see a good reason to
do it the way you did it ;)

>> The solution is not to have a CONST_WIDE_INT (again with VOIDmode
>> and no precision in the RTX object(!)) and only have wide_int have a
>> precision.
>
> Why is having a VOIDmode CONST_WIDE_INT any worse than having
> a VOIDmode CONST_INT or CONST_DOUBLE?  In all three cases the mode
> is being obtained/inferred from the same external source.

Well, we're arguing in circles - the argument that VOIDmode CONST_INT/DOUBLE
are bad is yours.  And if that's not bad I can't see why it is bad for wide-int
to not have a mode (or precision).

Richard.

> Richard
Richard Guenther - May 3, 2013, 1:02 p.m.
On Fri, May 3, 2013 at 2:45 PM, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
> On 05/03/2013 07:19 AM, Richard Biener wrote:
>>
>> On Wed, Apr 24, 2013 at 5:29 PM, Richard Sandiford
>> <rdsandiford@googlemail.com> wrote:
>>>
>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>>
>>>> On Wed, Apr 24, 2013 at 4:35 PM, Kenneth Zadeck
>>>> <zadeck@naturalbridge.com> wrote:
>>>>>
>>>>> On 04/24/2013 09:36 AM, Richard Biener wrote:
>>>>>>
>>>>>> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
>>>>>> <rdsandiford@googlemail.com> wrote:
>>>>>>>
>>>>>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>>>>>>
>>>>>>>> Can we in such cases please do a preparatory patch and change the
>>>>>>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>>>>>>> mode precision first?
>>>>>>>
>>>>>>> I'm not sure what you mean here.  CONST_INT HWIs are already
>>>>>>> sign-extended
>>>>>>> from mode precision to HWI precision.  The 8-bit value 0x80
>>>>>>> must
>>>>>>> be
>>>>>>> represented as (const_int -128); nothing else is allowed.  E.g.
>>>>>>> (const_int 128)
>>>>>>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
>>>>>>
>>>>>> Yes, that's what I understand.  But consider you get a CONST_INT that
>>>>>> is
>>>>>> _not_ a valid QImode value.  Current code simply trusts that it is,
>>>>>> given
>>>>>> the context from ...
>>>>>
>>>>> And the fact that we have to trust but cannot verify is a severe
>>>>> problem
>>>>> at the rtl level that is not going to go away.    what i have been
>>>>> strongly
>>>>> objecting to is your idea that just because we cannot verify it, we can
>>>>> thus
>>>>> go change it in some completely different way (i.e. the infinite
>>>>> precision
>>>>> nonsense that you keep hitting us with) and it will all be ok.
>>>>
>>>> Apparently it is all ok because that's exactly what we have today (and
>>>> had for the last 25 years).  CONST_INT encodes infinite precision signed
>>>> values (with the complication that a QImode 0x80 isn't valid, thus all
>>>> modes are signed as well it seems).
>>>
>>> I think this is the fundamental disagreement.  Your last step doesn't
>>> follow.  RTL integer modes are neither signed nor unsigned.  They are
>>> just a collection of N bits.  The fact that CONST_INTs represent
>>> smaller-than-HWI integers in sign-extended form is purely a representational
>>> detail.  There are no semantics attached to it.  We could just as easily
>>> have decided to extend with zeros or ones instead of sign bits.
>>>
>>> Although the decision was made before my time, I'm pretty sure the
>>> point of having a canonical representation (which happened to be sign
>>> extension) was to make sure that any given rtl constant has only a
>>> single representation.  It would be too confusing if a QImode 0x80 could
>>> be represented as either (const_int 128) or (const_int -128) (would
>>> (const_int 384) then also be OK?).
>>
>> No, not as value for a QImode as it doesn't fit there.
>>
>>> And that's the problem with using an infinite-precision wide_int.
>>> If you directly convert a CONST_INT representation of 0x80 into a
>>> wide_int, you will always get infinite-precision -128, thanks to the
>>> CONST_INT canonicalisation rule.  But if you arrive at 0x80 through
>>> arithmetic, you might get infinite-precision 128 instead.  These two
>>> values would not compare equal.
>>
>> That's true.  Note that I am not objecting to the canonicalization choice
>> for the RTL object.  On trees we do have -128 and 128 QImode integers
>> as tree constants have a sign.
>>
>> So we clearly cannot have wide_int make that choice, but those that
>> create either a tree object or a RTL object have to do additional
>> canonicalization (or truncation to not allow a QImode 384).
>>
>> Yes, I'm again arguing that making choices for wide_int shouldn't be
>> done because it seems right for RTL or right for how a CPU operates.
>> But we are mixing two things in this series of patches - introduction
>> of an additional RTX object kind CONST_WIDE_INT together with
>> deciding on its encoding of constant values, and introduction of
>> a wide_int class as a vehicle to do arithmetic on the host for larger
>> than HOST_WIDE_INT values.
>>
>> The latter could be separated by dropping CONST_DOUBLE in favor
>> of CONST_WIDE_INT everywhere and simply providing a
>> CONST_WIDE_INT <-> double-int interface (both ways, so you'd
>> actually never generate a CONST_WIDE_INT that doesn't fit a double-int).
>
> Given the tree world, I am surprised that you would push in this direction.
> While I do see some benefit for having two reps for ints at the rtl level, I
> understand the argument that it is one too many.
>
> TARGET_SUPPORTS_WIDE_INT is a transitional trick.   The idea is to move
> the ports away from using CONST_DOUBLE at all for ints. Not only is this a
> step towards putting a mode in an rtl int const, but it also would allow the
> floating point world to move beyond the limits that sharing the rep with
> integers imposes.
>
> One of the big goals of this cleanup is to get rid of the three path
> implementation for integer math:
> constant fits in HWI - good implementation.
> constant fits in 2 HWIs - spotty implementation.
> constant needs more than 2 HWIs - ice or get wrong answer.
>
> Doing a trick like this just makes it harder to unify everything into the
> good implementation category.
>
>
>>
>>>> CONST_DOUBLE encodes infinite precision signed values as well.  Just
>>>> the "infinite" is limited by the size of the encoding, one and two
>>>> HOST_WIDE_INTs.
>>>
>>> It encodes an N-bit integer.  It's just that (assuming non-power-of-2
>>> modes) several N-bit integers (with varying N) can be encoded using the
>>> same CONST_DOUBLE representation.  That might be what you meant, sorry,
>>> and so might seem pedantic, but I wasn't sure.
>>
>> Yes, that's what I meant.  Being able to share the same RTX object for
>> constants with the same representation but a different mode is nice
>> and looks appealing (of course works only when the actual mode stored
>> in the RTX object is then sth like VOIDmode ...).  That we have gazillions
>> of NULL pointer constants on trees (for each pointer type) isn't.
>>
>> Richard.
>
> 4 gb of memory is less than $30US.   We need to move on.   The pain that no
> mode causes is significant.

I think you should stop arguing that way as you two confuse me with
two opposite views here.  Richard says having no mode on constants
is very much fine (but it would be convenient to have one).  You say
you absolutely want a mode but you do not add one to CONST_WIDE_INT.

?

Btw, I arrived at reviewing the patches for the introduction of wide_int
(separate from the RTL side) as vehicle of eventually replacing double-int.
It has good design goals but you get yourself too much influenced by
what you think are RTL/tree weaknesses or strengths.

Now I feel being dragged into a RTL IL discussion ... which isn't my
primary area of knowledge (nor interest).  Unfortunately nobody else
but the patch authors and me seems to be keen enough to get involved here ...

Just to ask again - is there a branch to look at the patches and produce
patches against?

Richard.
Richard Guenther - May 3, 2013, 1:06 p.m.
On Fri, May 3, 2013 at 2:48 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>> There are several problems with just dropping a mode into the already
>> existing mode field of an rtx constant.
>> 1) There may be places where a back end is testing equality to see
>> if constants of different modes are in fact the same value.
>> 2) Most of the places that build int constants use GEN_INT which does
>> not take a mode, even though about 95% of those places have a mode right
>> there and the rest just take a little work.    There are constructor
>> that do take a mode, but in the end they just throw the mode on the floor.
>> 3) The canonical test to see if a CONST_DOUBLE contains an int or float
>> is to test if the mode is VOIDmode.
>>
>> Any port that is converted to have TARGET_SUPPORTS_WIDE_INT no longer
>> has problem (3).   I admit that rooting out (1) is likely to be the worst
>> of the problems.   But we were careful to at least make this work move
>> us in the correct direction.
>
> I agree with that, and FWIW, there are others.  Two off the top of my head:
>
> 4) Many places use const0_rtx instead of CONST0_RTX (mode) (correctly,
>    according to current representation)

As it's easy to get at a mode from the context, just drop const0_rtx
and fix the fallout (and complicate the CONST0_RTX macro to
dispatch to const_int_rtx for integer modes)?

> 5) All const_ints in the .md files would need to be given a mode
>    (except for those places where const_int actually represents
>    a C++ constant, such as in attributes).
>
> I realise your list wasn't supposed to be exhaustive, and neither's mine :-)

Now, do you think it is a good idea to assign integer constants a mode
or not?

Richard.

> Richard
Richard Sandiford - May 3, 2013, 1:23 p.m.
Richard Biener <richard.guenther@gmail.com> writes:
>> 5) All const_ints in the .md files would need to be given a mode
>>    (except for those places where const_int actually represents
>>    a C++ constant, such as in attributes).
>>
>> I realise your list wasn't supposed to be exhaustive, and neither's mine :-)
>
> Now, do you think it is a good idea to assign integer constants a mode
> or not?

I think the answer was obvious: good idea.  But it's an aspiration.
The huge amount of work involved means that it's out of scope unless
someone has several spare months on their hands.

The equivalent choice for wide_ints is not an aspiration.  The choice
is in our hands.  So why force the class to use the old, problematic
model that rtl followed when (a) there are no compatibility reasons
to do so and (b) Kenny has already written an implementation that
does it the better way?

The main reason we share the same CONST_INT and CONST_DOUBLE representation
for distinct constants is to save memory.  Even that's not an advantage
for wide_int, since wide_ints are supposed to be short-lived stack objects.

Richard
Richard Sandiford - May 3, 2013, 1:50 p.m.
Richard Biener <richard.guenther@gmail.com> writes:
>> But storing the mode in the rtx is orthogonal to what Kenny is doing.
>> The mode of each rtx constant is already available in the places
>> that Kenny is changing, because we already do the work to keep track
>> of the mode separately.  Being able to get the mode directly from the
>> rtx would be simpler and IMO better, but the semantics are the same
>> either way.
>
> Well, you showed examples where it is impossible to get at the mode.

No, I showed examples where the mode is not inherent in the rtl.
That's a very different thing.  I wrote that in response to:

Richard Biener <richard.guenther@gmail.com> writes:
> Ok, so what wide-int provides is integer values encoded in 'len' HWI
> words that fit in 'precision' or more bits (and often in less).  wide-int
> also provides N-bit arithmetic operations.  IMHO both are tied
> too closely together.  A given constant doesn't really have a precision.
> Associating one with it to give a precision to an arithmetic operation
> looks wrong to me and is a source of mismatches.
>
> What RTL currently has looks better to me - operations have
> explicitly specified precisions.

That is, you seemed to be arguing that constants don't need a precision
because, whenever you do anything with them, the operator tells you
what precision the constant has.  And you seemed to be citing rtl
as proof of that.

What I was trying to show is that the operator _doesn't_ tell you the
precision in all cases.  Instead, the operands always have their own
precision, and there are rules about which combinations of operand and
operator precision are allowed.  For most binary operations the three
precisions have to be the same.  For things like popcount there's no
real restriction: the precision of the thing being counted and the
precision of the result can be arbitrarily different.  For things like
zero_extend the operator precision must be greater than the operand
precision.  Etc.

The onus is then on the rtl code to keep track of both the operator
and operand precisions where necessary.  _And the current rtl code
already tries to do that_[*].  The cselib example I gave is one place
where we take special measures.  See also things like:

  /* Now recursively process each operand of this operation.  We need to
     handle ZERO_EXTEND specially so that we don't lose track of the
     inner mode.  */
  if (GET_CODE (x) == ZERO_EXTEND)
    {
      new_rtx = make_compound_operation (XEXP (x, 0), next_code);
      tem = simplify_const_unary_operation (ZERO_EXTEND, GET_MODE (x),
					    new_rtx, GET_MODE (XEXP (x, 0)));
      if (tem)
	return tem;
      SUBST (XEXP (x, 0), new_rtx);
      return x;
    }

in combine.c, which is there specifically because this code still knows
the mode of both the operand and operator.

So all this was trying to dispel the idea that:

(a) rtl constants don't have a mode
(b) the mode of an operator tells you the mode of the operands

Neither is really true.  Instead, every rtl constant has a precision/mode.
Every tree constant likewise has a precision.  The main purpose of wide_int
is to handle compile-time arithmetic on rtl constants and tree constants,
and if both of those have a precision, it seems strange that wide_int
shouldn't.  It just pushes the onus of tracking the precision onto the
callers, like the current rtl representation does.  And the examples
I've been giving were supposed to show what a hassle that can be.

  [*] Highlighted because that's why storing a mode in a CONST_INT or
      CONST_DOUBLE isn't a prerequisite for Kenny's patch.  The mode
      is already to hand where it needs to be.

Thanks,
Richard
Kenneth Zadeck - May 3, 2013, 2:08 p.m.
On 05/03/2013 08:40 AM, Richard Biener wrote:
> On Fri, May 3, 2013 at 2:31 PM, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
>> On 05/03/2013 08:12 AM, Richard Biener wrote:
>>> On Fri, May 3, 2013 at 1:49 PM, Kenneth Zadeck <zadeck@naturalbridge.com>
>>> wrote:
>>>> On 05/03/2013 07:34 AM, Richard Biener wrote:
>>>>> On Thu, Apr 25, 2013 at 1:18 AM, Kenneth Zadeck
>>>>> <zadeck@naturalbridge.com> wrote:
>>>>>> On 04/24/2013 11:13 AM, Richard Biener wrote:
>>>>>>> On Wed, Apr 24, 2013 at 5:00 PM, Richard Sandiford
>>>>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>>>> Richard Biener<richard.guenther@gmail.com>  writes:
>>>>>>>>> On Wed, Apr 24, 2013 at 4:29 PM, Richard Sandiford
>>>>>>>>> <rdsandiford@googlemail.com>  wrote:
>>>>>>>>>> In other words, one of the reasons wide_int can't be exactly 1:1
>>>>>>>>>> in practice is because it is clearing out these mistakes (GEN_INT
>>>>>>>>>> rather than gen_int_mode) and missing features (non-power-of-2
>>>>>>>>>> widths).
>>>>>>>>> Note that the argument should be about CONST_WIDE_INT here,
>>>>>>>>> not wide-int.  Indeed CONST_WIDE_INT has the desired feature
>>>>>>>>> and can be properly truncated/extended according to mode at the time
>>>>>>>>> we
>>>>>>>>> build it
>>>>>>>>> via immed_wide_int_cst (w, mode).  I don't see the requirement that
>>>>>>>>> wide-int itself is automagically providing that truncation/extension
>>>>>>>>> (though it is a possibility, one that does not match existing
>>>>>>>>> behavior
>>>>>>>>> of
>>>>>>>>> HWI for CONST_INT or double-int for CONST_DOUBLE).
>>>>>>>> I agree it doesn't match the existing behaviour of HWI for CONST_INT
>>>>>>>> or
>>>>>>>> double-int for CONST_DOUBLE, but I think that's very much a good
>>>>>>>> thing.
>>>>>>>> The model for HWIs at the moment is that you have to truncate results
>>>>>>>> to the canonical form after every operation where it matters.  As you
>>>>>>>> proved in your earlier message about the plus_constant bug, that's
>>>>>>>> easily
>>>>>>>> forgotten.  I don't think the rtl code is doing all CONST_INT
>>>>>>>> arithmetic
>>>>>>>> on full HWIs because it wants to: it's doing it because that's the
>>>>>>>> way
>>>>>>>> C/C++ arithmetic on primitive types works.  In other words, the
>>>>>>>> current
>>>>>>>> CONST_INT code is trying to emulate N-bit arithmetic (for gcc runtime
>>>>>>>> N)
>>>>>>>> using a single primitive integer type.  wide_int gives us N-bit
>>>>>>>> arithmetic
>>>>>>>> directly; no emulation is needed.
>>>>>>> Ok, so what wide-int provides is integer values encoded in 'len' HWI
>>>>>>> words that fit in 'precision' or more bits (and often in less).
>>>>>>> wide-int
>>>>>>> also provides N-bit arithmetic operations.  IMHO both are tied
>>>>>>> too closely together.  A given constant doesn't really have a
>>>>>>> precision.
>>>>>>> Associating one with it to give a precision to an arithmetic operation
>>>>>>> looks wrong to me and is a source of mismatches.
>>>>>>>
>>>>>>> What RTL currently has looks better to me - operations have
>>>>>>> explicitly specified precisions.
>>>>>> I have tried very hard to make wide-int work very efficiently with both
>>>>>> tree
>>>>>> and rtl without biasing the rep towards either representation.  Both
>>>>>> rtl
>>>>>> and
>>>>>> trees constants have a precision.   In tree, constants are done better
>>>>>> than
>>>>>> in rtl because the tree really does have a field that is filled in that
>>>>>> points to a type. However, that does not mean that rtl constants do not
>>>>>> have
>>>>>> a precision: currently you have to look around at the context to find
>>>>>> the
>>>>>> mode of a constant that is in your hand, but it is in fact always
>>>>>> there.
>>>>>> At the rtl level, you can see the entire patch - we always find an
>>>>>> appropriate mode.
>>>>> Apparently you cannot.  See Richard S.'s examples.
>>>>>
>>>>> As of "better", the tree has the issue that we have so many unshared
>>>>> constants because they only differ in type but not in their
>>>>> representation.
>>>>> That's the nice part of RTL constants all having VOIDmode ...
>>>>>
>>>>> Richard.
>>>> I said we could always find a mode, i did not say that in order to find
>>>> the
>>>> mode we did not have to stand on our head, juggle chainsaws and say
>>>> "mother
>>>> may i".   The decision to leave the mode as void in rtl integer constants
>>>> was made to save space, but comes with an otherwise very high cost and in
>>>> today's world of cheap memory seems fairly dated.   It is a decision that
>>>> i
>>>> and others would love to change and the truth is wide int is one step in
>>>> that direction (in that it gets rid of the pun of using double-int for
>>>> both
>>>> integers and floats where the discriminator is voidmode for ints.) But
>>>> for
>>>> now we have to live with that poor decision.
>>> As far as I have read your wide-int patches the CONST_WIDE_INT RTX
>>> object does not include a mode.  So I don't see it as a step forward in
>>> any way (other than that it makes it explicit that you _do_ need a mode
>>> to do any operation on a constant).
>>>
>>> Richard.
>> There are several problems with just dropping a mode into the already
>> existing mode field of an rtx constant.
>> 1) There may be places where a back end is testing equality to see if
>> constants of different modes are in fact the same value.
> That supposedly only happens in places where both RTX objects are
> known to be constants.  Which makes me guess that it's in 99% of the
> cases a comparison against one of the static RTX objects like
> const0_rtx - thus easily greppable for (and easily converted similarly
> to the tree case where we have predicates for such tests like integer_zerop ()).
> The remaining cases would be missed optimizations at most.
>
>> 2) Most of the places that build int constants use GEN_INT, which does not
>> take a mode, even though about 95% of those places have a mode right there
>> and the rest just take a little work.    There are constructors that do take
>> a mode, but in the end they just throw the mode on the floor.
> The fix is easy - make GEN_INT take a mandatory mode argument.
> (and fix the fallout ...)
>
>> 3) The canonical test to see if a CONST_DOUBLE contains an int or float is
>> to test if the mode is VOIDmode.
> I think you addressed this already by introducing CONST_DOUBLE_AS_INT_P ().
>
>> Any port that is converted to have TARGET_SUPPORTS_WIDE_INT has no more of
>> problem (3).   I admit that rooting out (1) is likely to be the worst of the
>> problems.   But we were careful to at least make this work move us in the
>> correct direction.
> Well, you were careful to not walk in the wrong direction.  But I cannot see
> where you get closer to fix any of 1-3 (apart from considering the new predicates
> being that, or not overloading CONST_DOUBLE with floats and ints).
>
> Richard.
I understand the process, but it is unreasonable to expect me to do that 
for this.
Kenneth Zadeck - May 3, 2013, 2:27 p.m.
On 05/03/2013 08:53 AM, Richard Biener wrote:
> On Fri, May 3, 2013 at 2:37 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>>> See e.g. the hoops that cselib has to jump through:
>>>>
>>>> /* We need to pass down the mode of constants through the hash table
>>>>     functions.  For that purpose, wrap them in a CONST of the appropriate
>>>>     mode.  */
>>>> static rtx
>>>> wrap_constant (enum machine_mode mode, rtx x)
>>>> {
>>>>    if ((!CONST_SCALAR_INT_P (x)) && GET_CODE (x) != CONST_FIXED)
>>>>      return x;
>>>>    gcc_assert (mode != VOIDmode);
>>>>    return gen_rtx_CONST (mode, x);
>>>> }
>>>>
>>>> That is, cselib locally converts (const_int X) into (const:M (const_int X)),
>>>> purely so that it doesn't lose track of the CONST_INT's mode.
>>>> (const:M (const_int ...)) is invalid rtl elsewhere, but a necessary
>>>> hack here all the same.
>>> Indeed ugly.  But I wonder why cselib needs to store constants in
>>> hashtables at all ... they should be VALUEs themselves.  So the fix
>>> for the above might not necessarily be to assign the CONST_INT
>>> a mode (not that CONST_WIDE_INT would fix the above).
>> I don't understand.  Do you mean that cselib values ought to have
>> a field to say whether the value is constant or not, and if so, what
>> constant that is?  That feels like just the same kind of hack as the above.
>> The current idea of chaining all known equivalent rtxes in a list seems
>> more natural than having a list of all known equivalent rtxes except
>> CONST_INT and CONST_DOUBLE, which have to be stored separately instead.
>> (Again, we have runtime constants like SYMBOL_REF, which store modes,
>> and which would presumably still be in the normal rtx list.)
>>
>> CONST_WIDE_INT was never supposed to solve this problem.  I'm just giving
>> it as an example to back up the argument that rtx constants do in fact
>> have modes (although those modes are not stored in the rtx).  The code
>> above is there to make sure that equivalence stays transitive.
>> Without it we could have bogus equivalences like:
>>
>>    (A) (reg:DI X) == (const_int Y) == (reg:SI Z)
>>
>> even though it cannot be the case that:
>>
>>    (B) (reg:DI X) == (reg:SI Z)
>>
>> My point is that, semantically, (A) did not come from X and Z being
>> equivalent to the "same" constant.  X was equivalent to (const_int:DI Y)
>> and Z was equivalent to (const_int:SI Y).  (A) only came about because
>> we happen to use the same rtx object to represent those two semantically-
>> distinct constants.
>>
>> The idea isn't to make CONST_WIDE_INT get rid of the code above.
>> The idea is to make sure that wide_int has a precision and so doesn't
>> require code like the above to be written when dealing with wide_ints.
>>
>> In other words, I got the impression your argument was "the fact
>> that CONST_INT and CONST_DOUBLE don't store a mode shows that
>> wide_int shouldn't store a precision".  But the fact that CONST_INT
>> and CONST_DOUBLE don't store a mode doesn't mean they don't _have_
>> a mode.  You just have to keep track of that mode separately.
>> And the same would apply to wide_int if we did the same thing there.
>>
>> What I was trying to argue was that storing the mode/precision
>> separately is not always easy.  It's also much less robust,
>> because getting the wrong mode or precision will only show up
>> for certain values.  If the precision is stored in the wide_int,
>> mismatches can be asserted for based on precision alone, regardless
>> of the value.
> I was just arguing that pointing out facts in the RTL land doesn't
> necessarily influence wide-int which is purely separate.  So if you
> argue that having a mode in RTL constants would be soo nice and
> thus that is why you want a precision in wide-int then I don't follow
> that argument.  If you want a mode in RTL constants then get a mode
> in RTL constants!
>
> This would make it immediately obvious where to get the precision
> for wide-ints - something you do not address at all (and as you don't
> I sort of cannot believe the 'it would be so nice to have a mode on RTL
> constants').
>
> That said, if modes on RTL constants were so useful then why not
> have them on CONST_WIDE_INT at least?  Please.  Only sticking
> them onto wide-int in the form of a precision is completely backward to me
> (and I still think the core wide-int shouldn't have a precision, and if
> you really want a wide-int-with-precision simply derive from wide-int).
>
>>> Ok, so please then make all CONST_INTs and CONST_DOUBLEs have
>>> a mode!
>> I'm saying that CONST_INT and CONST_DOUBLE already have a mode, but that
>> mode is not stored in the rtx.  So if you're saying "make all CONST_INTs
>> and CONST_DOUBLEs _store_ a mode", then yeah, I'd like to :-)  But I see
>> Kenny's patch as a prerequisite for that, because it consolidates the
>> CONST_INT and CONST_DOUBLE code so that the choice of rtx code is
>> less special.  Lots more work is needed after that.
> If there were a separate patch consolidating the paths I'd be all for
> doing that.
> I don't see a reason that this cannot be done even with the current
> code using double-ints.
>
>> Although TBH, the huge pushback that Kenny has got from this patch
>> puts me off ever trying that change.
> Well.  The patch does so much together and is so large that it is
> basically unreviewable (or very hard to review at least).
>
>> But storing the mode in the rtx is orthogonal to what Kenny is doing.
>> The mode of each rtx constant is already available in the places
>> that Kenny is changing, because we already do the work to keep track
>> of the mode separately.  Being able to get the mode directly from the
>> rtx would be simpler and IMO better, but the semantics are the same
>> either way.
> Well, you showed examples where it is impossible to get at the mode.
>
>> Kenny's patch is not designed to "fix" the CONST_INT representation
>> (although the patch does make it easier to "fix" the representation
>> in future).  Kenny's patch is about representing and handling constants
>> that we can't at the moment.
> No, it is about much more.
>
>> The argument isn't whether CONST_WIDE_INT repeats "mistakes" made for
>> CONST_INT and CONST_DOUBLE; I hope we agree that CONST_WIDE_INT should
>> behave like the other two, whatever that is.  The argument is about
>> whether we copy the "mistake" into the wide_int class.
> I don't see how CONST_WIDE_INT is in any way related to wide_int other
> than that you use wide_int to operate on the constants encoded in
> CONST_WIDE_INT.  As you have a mode available at the point you
> create a wide_int from a CONST_WIDE_INT you can very easily just
> use that modes precision to specify the precision of an operation
> (or zero/sign-extend the result).  That's what happens hidden in the
> wide-int implementation currently, but in the awkward way that allows
> precision mismatches and leads to odd things like having a wide-int
> 1 constant with a precision.
i do not have a problem with putting the mode into CONST_WIDE_INT.  It is 
just that it would not help anything now.    As you may have seen, the 
idiom that i use is to have a single wide-int constructor that does the 
correct thing no matter which of the three forms is passed in.   So the 
fact that only one of the three forms has the mode is of little use.

Again, if you want infinite precision then use mpc.   Given some of your 
comments on this patch, i do not think that you actually appreciate how 
much mileage we get out of having the precision.  Doing fixed precision 
math allows me to handle the precision-fits-in-a-HWI case inline, with no 
function calls and no checking for carries and such.

If i did infinite precision then i am stuck with a loop around every 
operation to take it as far as it needs to go.   Then, for performance 
reasons, that is going to force me to live in the world where we have one 
implementation for things that fit in hwi and things that do not, and 
then people will, as they currently do, not do the work for the longer 
types.

>> Storing a precision in wide_int in no way requires CONST_WIDE_INT
>> to store a mode.  They are separate choices.
> Yes.  And I obviously would have chosen to store a mode in CONST_WIDE_INT
> and no precision in wide_int.  And I cannot see a good reason to
> do it the way you did it ;)
because if i say a * b + c, i need the precision in the middle; otherwise 
i am stuck doing infinite precision arithmetic, which is not what i want 
and will perform too slowly to be generally useful.   I do not 
understand why you do not get this!!!!    I did not do infinite 
precision because i was lazy or because i was forced to by some 
weirdness in rtl.   I did it because it is the right way to do the math 
in the compiler after the front ends do the language specified "constant 
math" and because infinite precision is too expensive.    Double int is 
not infinite precision, it is fixed precision at 128 bits and if the 
number does not fit in that, the compiler ices.

>
>>> The solution is not to have a CONST_WIDE_INT (again with VOIDmode
>>> and no precision in the RTX object(!)) and only have wide_int have a
>>> precision.
>> Why is having a VOIDmode CONST_WIDE_INT any worse than having
>> a VOIDmode CONST_INT or CONST_DOUBLE?  In all three cases the mode
>> is being obtained/inferred from the same external source.
> Well, we're arguing in circles - the argument that VOIDmode CONST_INT/DOUBLE
> are bad is yours.  And if that's not bad I can't see why it is bad for wide-int
> to not have a mode (or precision).
I have said enough on this.


>
> Richard.
>
>> Richard
Kenneth Zadeck - May 3, 2013, 2:34 p.m.
On 05/03/2013 09:02 AM, Richard Biener wrote:
> On Fri, May 3, 2013 at 2:45 PM, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
>> On 05/03/2013 07:19 AM, Richard Biener wrote:
>>> On Wed, Apr 24, 2013 at 5:29 PM, Richard Sandiford
>>> <rdsandiford@googlemail.com> wrote:
>>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>>> On Wed, Apr 24, 2013 at 4:35 PM, Kenneth Zadeck
>>>>> <zadeck@naturalbridge.com> wrote:
>>>>>> On 04/24/2013 09:36 AM, Richard Biener wrote:
>>>>>>> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
>>>>>>> <rdsandiford@googlemail.com> wrote:
>>>>>>>> Richard Biener <richard.guenther@gmail.com> writes:
>>>>>>>>> Can we in such cases please to a preparatory patch and change the
>>>>>>>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>>>>>>>> mode precision first?
>>>>>>>> I'm not sure what you mean here.  CONST_INT HWIs are already
>>>>>>>> sign-extended
>>>>>>>> from mode precision to HWI precision.  The 8-bit value 0b10000000
>>>>>>>> must
>>>>>>>> be
>>>>>>>> represented as (const_int -128); nothing else is allowed.  E.g.
>>>>>>>> (const_int 128)
>>>>>>>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
>>>>>>> Yes, that's what I understand.  But consider you get a CONST_INT that
>>>>>>> is
>>>>>>> _not_ a valid QImode value.  Current code simply trusts that it is,
>>>>>>> given
>>>>>>> the context from ...
>>>>>> And the fact that it we have to trust but cannot verify is a severe
>>>>>> problem
>>>>>> at the rtl level that is not going to go away.    what i have been
>>>>>> strongly
>>>>>> objecting to is your idea that just because we cannot verify it, we can
>>>>>> thus
>>>>>> go change it in some completely different way (i.e. the infinite
>>>>>> precision
>>>>>> nonsense that you keep hitting us with) and it will all be ok.
>>>>> Apparently it is all ok because that's exactly what we have today (and
>>>>> had for the last 25 years).  CONST_INT encodes infinite precision signed
>>>>> values (with the complication that a QImode 0x80 isn't valid, thus all
>>>>> modes are signed as well it seems).
>>>> I think this is the fundamental disagreement.  Your last step doesn't
>>>> follow.  RTL integer modes are neither signed nor unsigned.  They are
>>>> just a collection of N bits.  The fact that CONST_INTs represent
>>>> smaller-than-HWI integers in sign-extended form is purely a representational
>>>> detail.  There are no semantics attached to it.  We could just as easily
>>>> have decided to extend with zeros or ones instead of sign bits.
>>>>
>>>> Although the decision was made before my time, I'm pretty sure the
>>>> point of having a canonical representation (which happened to be sign
>>>> extension) was to make sure that any given rtl constant has only a
>>>> single representation.  It would be too confusing if a QImode 0x80 could
>>>> be represented as either (const_int 128) or (const_int -128) (would
>>>> (const_int 384) then also be OK?).
>>> No, not as a value for a QImode as it doesn't fit there.
>>>
>>>> And that's the problem with using an infinite-precision wide_int.
>>>> If you directly convert a CONST_INT representation of 0x80 into a
>>>> wide_int, you will always get infinite-precision -128, thanks to the
>>>> CONST_INT canonicalisation rule.  But if you arrive at 0x80 though
>>>> arithmetic, you might get infinite-precision 128 instead.  These two
>>>> values would not compare equal.
>>> That's true.  Note that I am not objecting to the canonicalization choice
>>> for the RTL object.  On trees we do have -128 and 128 QImode integers
>>> as tree constants have a sign.
>>>
>>> So we clearly cannot have wide_int make that choice, but those that
>>> create either a tree object or an RTL object have to do additional
>>> canonicalization (or truncation to not allow a QImode 384).
>>>
>>> Yes, I'm again arguing that making choices for wide_int shouldn't be
>>> done because it seems right for RTL or right for how a CPU operates.
>>> But we are mixing two things in this series of patches - introduction
>>> of an additional RTX object kind CONST_WIDE_INT together with
>>> deciding on its encoding of constant values, and introduction of
>>> a wide_int class as a vehicle to do arithmetic on the host for larger
>>> than HOST_WIDE_INT values.
>>>
>>> The latter could be separated by dropping CONST_DOUBLE in favor
>>> of CONST_WIDE_INT everywhere and simply providing a
>>> CONST_WIDE_INT <-> double-int interface (both ways, so you'd
>>> actually never generate a CONST_WIDE_INT that doesn't fit a double-int).
>> Given the tree world, i am surprised that you would push in this direction.
>> While i do see some benefit for having two reps for ints at the rtl level, I
>> understand the argument that it is one too many.
>>
>> The target_supports_wide_int is a transitional trick.   The idea is to move
>> the ports away from using CONST_DOUBLE at all for ints. Not only is this a
>> step towards putting a mode in an rtl int const, but it also would allow the
>> floating point world to move beyond the limits that sharing the rep with
>> integers imposes.
>>
>> One of the big goals of this cleanup is to get rid of the three path
>> implementation for integer math:
>> constant fits in HWI - good implementation.
>> constant fits in 2 HWIs - spotty implementation.
>> constant needs more than 2 HWIs - ice or get wrong answer.
>>
>> Doing a trick like this just makes it harder to unify everything into the
>> good implementation category.
>>
>>
>>>>> CONST_DOUBLE encodes infinite precision signed values as well.  Just
>>>>> the "infinite" is limited by the size of the encoding, one and two
>>>>> HOST_WIDE_INTs.
>>>> It encodes an N-bit integer.  It's just that (assuming non-power-of-2
>>>> modes) several N-bit integers (with varying N) can be encoded using the
>>>> same CONST_DOUBLE representation.  That might be what you meant, sorry,
>>>> and so might seem pedantic, but I wasn't sure.
>>> Yes, that's what I meant.  Being able to share the same RTX object for
>>> constants with the same representation but a different mode is nice
>>> and looks appealing (of course works only when the actual mode stored
>>> in the RTX object is then sth like VOIDmode ...).  That we have gazillions
>>> of NULL pointer constants on trees (for each pointer type) isn't.
>>>
>>> Richard.
>> 4 gb of memory is less than $30US.   We need to move on.   The pain that no
>> mode causes is significant.
> I think you should stop arguing that way as you two confuse me with
> two opposite views here.  Richard says having no mode on constants
> is very much fine (but it would be convenient to have one).  You say
> you absolutely want a mode but you do not add one to CONST_WIDE_INT.
>
> ?
different people have different opinions.    however, the reality is 
that rtl constants have an implied mode; it is just not stored with 
the constant.
>
> Btw, I arrived at reviewing the patches for the introduction of wide_int
> (separate from the RTL side) as vehicle of eventually replacing double-int.
> It has good design goals but you get yourself too much influenced by
> what you think are RTL/tree weaknesses or strengths.
Replacing double-int is my subgoal.   It is what is necessary to get to 
the place where gcc has robust support for any width integer. There are 
large parts of the compiler, at both the tree and rtl level that do not 
even use double-int: they just do the operations inline for the fits in 
HWI case and fail to try any transformation if it does not fit.    And i 
would point out that all of those places explicitly do math within a 
fixed precision.

>
> Now I feel being dragged into a RTL IL discussion ... which isn't my
> primary area of knowledge (nor interest).  Unfortunately nobody else
> but the patch authors and me seem to be keen enough to get involved here ...
I agree.
> Just to ask again - is there a branch to look at the patches and produce
> patches against?
I answered that at the bottom of the refresh of the 4th patch yesterday.

> Richard.
Kenneth Zadeck - May 3, 2013, 3:32 p.m.
Richi,

I also think that it is a digression to have this discussion about 
rtl.    The root problem is really that Mike, Richard, and I do not 
believe that infinite precision math is the proper way to do math for 
the majority of the compiler.   Most of the compiler, at both the rtl 
and tree level just does the math inline.   There are 314 places at the 
tree level where we ask if the value fits in a hwi and then we do the 
hwi inline math.   The rtl level is even more skewed towards this style 
of programming.  While you view replacing double-int as my primary goal, 
it accounts for the minority of the places in the code where wide-int 
needs to be used.

Furthermore, to call what is done in double-int infinite precision is 
really pushing it, because it certainly is not infinite if you happen 
to have a TImode variable.

What i did when i designed wide-int was to come up with a mechanism 
where i could preserve the performance of that inline math while 
generalizing it so that it worked correctly for any width. That is why 
the precision is there.   It allows me to avoid the hard work 99% of the 
time, with an inline test of the precision and then a branch-free 
calculation of the answer.   For instance there is no loop checking for 
carries and propagating them.

I also feel strongly that it is our responsibility to preserve, to the 
extent possible, the notion that optimization never changes the output 
of a program, except of course for timing.   We, in the optimization 
community, do not always do so well here, but at the very least, we 
should always try.   Having said that, there are optimizations like VRP 
that really do need to do math larger than the precision defined in the 
type.   I get this, and always have.   I understand that if you truncate 
multiplies, add or subtracts, in VRP then the resulting range is not 
simple and becomes too difficult to reasonably represent.    I have no 
intention of giving up anything in VRP.   My plan for that is to look at 
the types used in the function being compiled and take the largest type, 
double the precision, and do all of the math within VRP at that expanded 
fixed precision.   We can always guarantee that we can do this in 
wide-int since the size of the buffer is computed by looking at the 
target's modes and taking the largest one times a comfortable 
multiplier.   Since you cannot have a type without a corresponding mode, 
this will always work.   This scheme preserves the behavior of VRP while 
making it work with any sized integer.
The alternative is to use a true infinite-precision package for VRP, 
but I think that is overkill.
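As a back-of-the-envelope check on the doubled-precision plan (again a 
sketch with made-up names, not VRP code): the product of two p-bit 
values always fits in 2p bits, so range arithmetic performed at twice 
the widest precision in the function can never wrap.

```cpp
#include <cassert>
#include <cstdint>

/* Working precision for range math: double the widest type precision
   seen in the function (hypothetical helper name).  */
static unsigned
vrp_working_precision (unsigned widest_type_bits)
{
  return 2 * widest_type_bits;
}

/* Multiply two unsigned 16-bit range bounds at 32 bits: the result is
   exact, so the resulting range stays a simple interval.  */
static uint32_t
range_mul_hi (uint16_t hi_a, uint16_t hi_b)
{
  return (uint32_t) hi_a * (uint32_t) hi_b;  /* cannot wrap in 2p bits  */
}
```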

Kenny

Patch

diff --git a/gcc/alias.c b/gcc/alias.c
index ef11c6a..ed5ceb4 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -1471,9 +1471,7 @@  rtx_equal_for_memref_p (const_rtx x, const_rtx y)
 
     case VALUE:
     CASE_CONST_UNIQUE:
-      /* There's no need to compare the contents of CONST_DOUBLEs or
-	 CONST_INTs because pointer equality is a good enough
-	 comparison for these nodes.  */
+      /* Pointer equality guarantees equality for these nodes.  */
       return 0;
 
     default:
diff --git a/gcc/builtins.c b/gcc/builtins.c
index efab82e..ed5a6b3 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -672,20 +672,24 @@  c_getstr (tree src)
   return TREE_STRING_POINTER (src) + tree_low_cst (offset_node, 1);
 }
 
-/* Return a CONST_INT or CONST_DOUBLE corresponding to target reading
+/* Return a constant integer corresponding to target reading
    GET_MODE_BITSIZE (MODE) bits from string constant STR.  */
 
 static rtx
 c_readstr (const char *str, enum machine_mode mode)
 {
-  HOST_WIDE_INT c[2];
+  wide_int c;
   HOST_WIDE_INT ch;
   unsigned int i, j;
+  HOST_WIDE_INT tmp[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT];
+  unsigned int len = (GET_MODE_PRECISION (mode) + HOST_BITS_PER_WIDE_INT - 1)
+    / HOST_BITS_PER_WIDE_INT;
+
+  for (i = 0; i < len; i++)
+    tmp[i] = 0;
 
   gcc_assert (GET_MODE_CLASS (mode) == MODE_INT);
 
-  c[0] = 0;
-  c[1] = 0;
   ch = 1;
   for (i = 0; i < GET_MODE_SIZE (mode); i++)
     {
@@ -696,13 +700,14 @@  c_readstr (const char *str, enum machine_mode mode)
 	  && GET_MODE_SIZE (mode) >= UNITS_PER_WORD)
 	j = j + UNITS_PER_WORD - 2 * (j % UNITS_PER_WORD) - 1;
       j *= BITS_PER_UNIT;
-      gcc_assert (j < HOST_BITS_PER_DOUBLE_INT);
 
       if (ch)
 	ch = (unsigned char) str[i];
-      c[j / HOST_BITS_PER_WIDE_INT] |= ch << (j % HOST_BITS_PER_WIDE_INT);
+      tmp[j / HOST_BITS_PER_WIDE_INT] |= ch << (j % HOST_BITS_PER_WIDE_INT);
     }
-  return immed_double_const (c[0], c[1], mode);
+  
+  c = wide_int::from_array (tmp, len, mode);
+  return immed_wide_int_const (c, mode);
 }
 
 /* Cast a target constant CST to target CHAR and if that value fits into
@@ -4994,12 +4999,12 @@  expand_builtin_signbit (tree exp, rtx target)
 
   if (bitpos < GET_MODE_BITSIZE (rmode))
     {
-      double_int mask = double_int_zero.set_bit (bitpos);
+      wide_int mask = wide_int::set_bit_in_zero (bitpos, rmode);
 
       if (GET_MODE_SIZE (imode) > GET_MODE_SIZE (rmode))
 	temp = gen_lowpart (rmode, temp);
       temp = expand_binop (rmode, and_optab, temp,
-			   immed_double_int_const (mask, rmode),
+			   immed_wide_int_const (mask, rmode),
 			   NULL_RTX, 1, OPTAB_LIB_WIDEN);
     }
   else
diff --git a/gcc/combine.c b/gcc/combine.c
index 6d58b19..bfa151d 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -2669,23 +2669,15 @@  try_combine (rtx i3, rtx i2, rtx i1, rtx i0, int *new_direct_jump_p,
 	    offset = -1;
 	}
 
-      if (offset >= 0
-	  && (GET_MODE_PRECISION (GET_MODE (SET_DEST (temp)))
-	      <= HOST_BITS_PER_DOUBLE_INT))
+      if (offset >= 0)
 	{
-	  double_int m, o, i;
+	  wide_int o;
 	  rtx inner = SET_SRC (PATTERN (i3));
 	  rtx outer = SET_SRC (temp);
-
-	  o = rtx_to_double_int (outer);
-	  i = rtx_to_double_int (inner);
-
-	  m = double_int::mask (width);
-	  i &= m;
-	  m = m.llshift (offset, HOST_BITS_PER_DOUBLE_INT);
-	  i = i.llshift (offset, HOST_BITS_PER_DOUBLE_INT);
-	  o = o.and_not (m) | i;
-
+	  
+	  o = (wide_int::from_rtx (outer, GET_MODE (SET_DEST (temp)))
+	       .insert (wide_int::from_rtx (inner, GET_MODE (dest)),
+			offset, width));
 	  combine_merges++;
 	  subst_insn = i3;
 	  subst_low_luid = DF_INSN_LUID (i2);
@@ -2696,8 +2688,8 @@  try_combine (rtx i3, rtx i2, rtx i1, rtx i0, int *new_direct_jump_p,
 	  /* Replace the source in I2 with the new constant and make the
 	     resulting insn the new pattern for I3.  Then skip to where we
 	     validate the pattern.  Everything was set up above.  */
-	  SUBST (SET_SRC (temp),
-		 immed_double_int_const (o, GET_MODE (SET_DEST (temp))));
+	  SUBST (SET_SRC (temp), 
+		 immed_wide_int_const (o, GET_MODE (SET_DEST (temp))));
 
 	  newpat = PATTERN (i2);
 
@@ -5112,7 +5104,7 @@  subst (rtx x, rtx from, rtx to, int in_dest, int in_cond, int unique_copy)
 		  if (! x)
 		    x = gen_rtx_CLOBBER (mode, const0_rtx);
 		}
-	      else if (CONST_INT_P (new_rtx)
+	      else if (CONST_SCALAR_INT_P (new_rtx)
 		       && GET_CODE (x) == ZERO_EXTEND)
 		{
 		  x = simplify_unary_operation (ZERO_EXTEND, GET_MODE (x),
diff --git a/gcc/coretypes.h b/gcc/coretypes.h
index 320b4dd..3ea8920 100644
--- a/gcc/coretypes.h
+++ b/gcc/coretypes.h
@@ -55,6 +55,9 @@  typedef const struct rtx_def *const_rtx;
 struct rtvec_def;
 typedef struct rtvec_def *rtvec;
 typedef const struct rtvec_def *const_rtvec;
+struct hwivec_def;
+typedef struct hwivec_def *hwivec;
+typedef const struct hwivec_def *const_hwivec;
 union tree_node;
 typedef union tree_node *tree;
 typedef const union tree_node *const_tree;
diff --git a/gcc/cse.c b/gcc/cse.c
index f2c8f63..8e3bb88 100644
--- a/gcc/cse.c
+++ b/gcc/cse.c
@@ -2331,15 +2331,23 @@  hash_rtx_cb (const_rtx x, enum machine_mode mode,
                + (unsigned int) INTVAL (x));
       return hash;
 
+    case CONST_WIDE_INT:
+      {
+	int i;
+	for (i = 0; i < CONST_WIDE_INT_NUNITS (x); i++)
+	  hash += CONST_WIDE_INT_ELT (x, i);
+      }
+      return hash;
+
     case CONST_DOUBLE:
       /* This is like the general case, except that it only counts
 	 the integers representing the constant.  */
       hash += (unsigned int) code + (unsigned int) GET_MODE (x);
-      if (GET_MODE (x) != VOIDmode)
-	hash += real_hash (CONST_DOUBLE_REAL_VALUE (x));
-      else
+      if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (x) == VOIDmode)
 	hash += ((unsigned int) CONST_DOUBLE_LOW (x)
 		 + (unsigned int) CONST_DOUBLE_HIGH (x));
+      else
+	hash += real_hash (CONST_DOUBLE_REAL_VALUE (x));
       return hash;
 
     case CONST_FIXED:
@@ -3756,6 +3764,7 @@  equiv_constant (rtx x)
 
       /* See if we previously assigned a constant value to this SUBREG.  */
       if ((new_rtx = lookup_as_function (x, CONST_INT)) != 0
+	  || (new_rtx = lookup_as_function (x, CONST_WIDE_INT)) != 0
           || (new_rtx = lookup_as_function (x, CONST_DOUBLE)) != 0
           || (new_rtx = lookup_as_function (x, CONST_FIXED)) != 0)
         return new_rtx;
diff --git a/gcc/cselib.c b/gcc/cselib.c
index dcad9741..3f7c156 100644
--- a/gcc/cselib.c
+++ b/gcc/cselib.c
@@ -923,8 +923,7 @@  rtx_equal_for_cselib_1 (rtx x, rtx y, enum machine_mode memmode)
   /* These won't be handled correctly by the code below.  */
   switch (GET_CODE (x))
     {
-    case CONST_DOUBLE:
-    case CONST_FIXED:
+    CASE_CONST_UNIQUE:
     case DEBUG_EXPR:
       return 0;
 
@@ -1118,15 +1117,23 @@  cselib_hash_rtx (rtx x, int create, enum machine_mode memmode)
       hash += ((unsigned) CONST_INT << 7) + INTVAL (x);
       return hash ? hash : (unsigned int) CONST_INT;
 
+    case CONST_WIDE_INT:
+      {
+	int i;
+	for (i = 0; i < CONST_WIDE_INT_NUNITS (x); i++)
+	  hash += CONST_WIDE_INT_ELT (x, i);
+      }
+      return hash;
+
     case CONST_DOUBLE:
       /* This is like the general case, except that it only counts
 	 the integers representing the constant.  */
       hash += (unsigned) code + (unsigned) GET_MODE (x);
-      if (GET_MODE (x) != VOIDmode)
-	hash += real_hash (CONST_DOUBLE_REAL_VALUE (x));
-      else
+      if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (x) == VOIDmode)
 	hash += ((unsigned) CONST_DOUBLE_LOW (x)
 		 + (unsigned) CONST_DOUBLE_HIGH (x));
+      else
+	hash += real_hash (CONST_DOUBLE_REAL_VALUE (x));
       return hash ? hash : (unsigned int) CONST_DOUBLE;
 
     case CONST_FIXED:
diff --git a/gcc/defaults.h b/gcc/defaults.h
index 4f43f6f0..0801073 100644
--- a/gcc/defaults.h
+++ b/gcc/defaults.h
@@ -1404,6 +1404,14 @@  see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
 #define SWITCHABLE_TARGET 0
 #endif
 
+/* If the target supports integers that are wider than two
+   HOST_WIDE_INTs on the host compiler, then the target should define
+   TARGET_SUPPORTS_WIDE_INT and make the appropriate fixups.
+   Otherwise the compiler really is not robust.  */
+#ifndef TARGET_SUPPORTS_WIDE_INT
+#define TARGET_SUPPORTS_WIDE_INT 0
+#endif
+
 #endif /* GCC_INSN_FLAGS_H  */
 
 #endif  /* ! GCC_DEFAULTS_H */
diff --git a/gcc/doc/rtl.texi b/gcc/doc/rtl.texi
index 8829b0e..2254e2f 100644
--- a/gcc/doc/rtl.texi
+++ b/gcc/doc/rtl.texi
@@ -1525,17 +1525,22 @@  Similarly, there is only one object for the integer whose value is
 
 @findex const_double
 @item (const_double:@var{m} @var{i0} @var{i1} @dots{})
-Represents either a floating-point constant of mode @var{m} or an
-integer constant too large to fit into @code{HOST_BITS_PER_WIDE_INT}
-bits but small enough to fit within twice that number of bits (GCC
-does not provide a mechanism to represent even larger constants).  In
-the latter case, @var{m} will be @code{VOIDmode}.  For integral values
-constants for modes with more bits than twice the number in
-@code{HOST_WIDE_INT} the implied high order bits of that constant are
-copies of the top bit of @code{CONST_DOUBLE_HIGH}.  Note however that
-integral values are neither inherently signed nor inherently unsigned;
-where necessary, signedness is determined by the rtl operation
-instead.
+This represents either a floating-point constant of mode @var{m} or
+(on older ports that do not define
+@code{TARGET_SUPPORTS_WIDE_INT}) an integer constant too large to fit
+into @code{HOST_BITS_PER_WIDE_INT} bits but small enough to fit within
+twice that number of bits (GCC does not provide a mechanism to
+represent even larger constants).  In the latter case, @var{m} will be
+@code{VOIDmode}.  For integer constants of modes with more bits
+than twice the number in @code{HOST_WIDE_INT}, the implied high
+order bits of that constant are copies of the top bit of
+@code{CONST_DOUBLE_HIGH}.  Note however that integral values are
+neither inherently signed nor inherently unsigned; where necessary,
+signedness is determined by the rtl operation instead.
+
+On more modern ports, @code{CONST_DOUBLE} only represents floating
+point values.  New ports define @code{TARGET_SUPPORTS_WIDE_INT} to
+make this designation.
 
 @findex CONST_DOUBLE_LOW
 If @var{m} is @code{VOIDmode}, the bits of the value are stored in
@@ -1550,6 +1555,37 @@  machine's or host machine's floating point format.  To convert them to
 the precise bit pattern used by the target machine, use the macro
 @code{REAL_VALUE_TO_TARGET_DOUBLE} and friends (@pxref{Data Output}).
 
+@findex CONST_WIDE_INT
+@item (const_wide_int:@var{m} @var{nunits} @var{elt0} @dots{})
+This contains an array of @code{HOST_WIDE_INT}s that is large enough
+to hold any constant that can be represented on the target.  This form
+of rtl is only used on targets that define
+@code{TARGET_SUPPORTS_WIDE_INT} to be nonzero; on such targets,
+@code{CONST_DOUBLE}s are only used to hold floating point values.  If
+the target leaves @code{TARGET_SUPPORTS_WIDE_INT} defined as 0,
+@code{CONST_WIDE_INT}s are not used and @code{CONST_DOUBLE}s are as
+they were before.
+
+The values are stored in a compressed format.  The higher-order
+0s or -1s are not represented if they are just the logical sign
+extension of the number that is represented.
+
+@findex CONST_WIDE_INT_VEC
+@item CONST_WIDE_INT_VEC (@var{code})
+Returns the entire array of @code{HOST_WIDE_INT}s that are used to
+store the value.   This macro should be rarely used.
+
+@findex CONST_WIDE_INT_NUNITS
+@item CONST_WIDE_INT_NUNITS (@var{code})
+The number of @code{HOST_WIDE_INT}s used to represent the number.
+Note that this is generally smaller than the number of
+@code{HOST_WIDE_INT}s implied by the mode size.
+
+@findex CONST_WIDE_INT_ELT
+@item CONST_WIDE_INT_ELT (@var{code},@var{i})
+Returns the @var{i}th element of the array.  Element 0 contains
+the low-order bits of the constant.
+
 @findex const_fixed
 @item (const_fixed:@var{m} @dots{})
 Represents a fixed-point constant of mode @var{m}.
diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi
index c88a89f..116864b 100644
--- a/gcc/doc/tm.texi
+++ b/gcc/doc/tm.texi
@@ -11352,3 +11352,50 @@  It returns true if the target supports GNU indirect functions.
 The support includes the assembler, linker and dynamic linker.
 The default value of this hook is based on target's libc.
 @end deftypefn
+
+@defmac TARGET_SUPPORTS_WIDE_INT
+
+On older ports, large integers are stored in @code{CONST_DOUBLE} rtl
+objects.  Newer ports define @code{TARGET_SUPPORTS_WIDE_INT} to be
+nonzero to indicate that large integers are stored in
+@code{CONST_WIDE_INT} rtl objects.  The @code{CONST_WIDE_INT} allows
+very large integer constants to be represented.  @code{CONST_DOUBLE}s
+are limited to twice the size of the host's @code{HOST_WIDE_INT}
+representation.
+
+Converting a port mostly requires looking for the places where
+@code{CONST_DOUBLE}s are used with @code{VOIDmode} and replacing that
+code with code that accesses @code{CONST_WIDE_INT}s.  @samp{"grep -i
+const_double"} at the port level gets you to 95% of the changes that
+need to be made.  There are a few places that require a deeper look.
+
+@itemize @bullet
+@item
+There is no equivalent to @code{hval} and @code{lval} for
+@code{CONST_WIDE_INT}s.  This would be difficult to express in the md
+language since there are a variable number of elements.
+
+Most ports only check that @code{hval} is either 0 or -1 to see if the
+value is small.  As mentioned above, this will no longer be necessary
+since small constants are always @code{CONST_INT}.  Of course there
+are still a few exceptions; the alpha's constraint used by the zap
+instruction certainly requires careful examination by C code.
+However, all the current code does is pass the hval and lval to C
+code, so evolving the C code to look at the @code{CONST_WIDE_INT} is
+not really a large change.
+
+@item
+Because there is no standard template that ports use to materialize
+constants, there is likely to be some futzing that is unique to each
+port in this code.
+
+@item
+The rtx costs may have to be adjusted to properly account for larger
+constants that are represented as @code{CONST_WIDE_INT}.
+@end itemize
+
+All in all, it does not take long to convert ports that the
+maintainer is familiar with.
+
+@end defmac
+
diff --git a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in
index d70ce4c..f59ffde 100644
--- a/gcc/doc/tm.texi.in
+++ b/gcc/doc/tm.texi.in
@@ -11182,3 +11182,50 @@  memory model bits are allowed.
 @hook TARGET_ATOMIC_TEST_AND_SET_TRUEVAL
 
 @hook TARGET_HAS_IFUNC_P
+
+@defmac TARGET_SUPPORTS_WIDE_INT
+
+On older ports, large integers are stored in @code{CONST_DOUBLE} rtl
+objects.  Newer ports define @code{TARGET_SUPPORTS_WIDE_INT} to be
+nonzero to indicate that large integers are stored in
+@code{CONST_WIDE_INT} rtl objects.  The @code{CONST_WIDE_INT} allows
+very large integer constants to be represented.  @code{CONST_DOUBLE}s
+are limited to twice the size of the host's @code{HOST_WIDE_INT}
+representation.
+
+Converting a port mostly requires looking for the places where
+@code{CONST_DOUBLE}s are used with @code{VOIDmode} and replacing that
+code with code that accesses @code{CONST_WIDE_INT}s.  @samp{"grep -i
+const_double"} at the port level gets you to 95% of the changes that
+need to be made.  There are a few places that require a deeper look.
+
+@itemize @bullet
+@item
+There is no equivalent to @code{hval} and @code{lval} for
+@code{CONST_WIDE_INT}s.  This would be difficult to express in the md
+language since there are a variable number of elements.
+
+Most ports only check that @code{hval} is either 0 or -1 to see if the
+value is small.  As mentioned above, this will no longer be necessary
+since small constants are always @code{CONST_INT}.  Of course there
+are still a few exceptions; the alpha's constraint used by the zap
+instruction certainly requires careful examination by C code.
+However, all the current code does is pass the hval and lval to C
+code, so evolving the C code to look at the @code{CONST_WIDE_INT} is
+not really a large change.
+
+@item
+Because there is no standard template that ports use to materialize
+constants, there is likely to be some futzing that is unique to each
+port in this code.
+
+@item
+The rtx costs may have to be adjusted to properly account for larger
+constants that are represented as @code{CONST_WIDE_INT}.
+@end itemize
+
+All in all, it does not take long to convert ports that the
+maintainer is familiar with.
+
+@end defmac
+
diff --git a/gcc/dojump.c b/gcc/dojump.c
index 3f04eac..ecbec40 100644
--- a/gcc/dojump.c
+++ b/gcc/dojump.c
@@ -142,6 +142,7 @@  static bool
 prefer_and_bit_test (enum machine_mode mode, int bitnum)
 {
   bool speed_p;
+  wide_int mask = wide_int::set_bit_in_zero (bitnum, mode);
 
   if (and_test == 0)
     {
@@ -162,8 +163,7 @@  prefer_and_bit_test (enum machine_mode mode, int bitnum)
     }
 
   /* Fill in the integers.  */
-  XEXP (and_test, 1)
-    = immed_double_int_const (double_int_zero.set_bit (bitnum), mode);
+  XEXP (and_test, 1) = immed_wide_int_const (mask, mode);
   XEXP (XEXP (shift_test, 0), 1) = GEN_INT (bitnum);
 
   speed_p = optimize_insn_for_speed_p ();
diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c
index 2475ade..6fd3eae 100644
--- a/gcc/dwarf2out.c
+++ b/gcc/dwarf2out.c
@@ -323,6 +323,17 @@  dump_struct_debug (tree type, enum debug_info_usage usage,
 
 #endif
 
+
+/* Get the number of host wide ints needed to represent the precision
+   of the number.  */
+
+static unsigned int
+get_full_len (const wide_int &op)
+{
+  return ((op.get_precision () + HOST_BITS_PER_WIDE_INT - 1)
+	  / HOST_BITS_PER_WIDE_INT);
+}
+
 static bool
 should_emit_struct_debug (tree type, enum debug_info_usage usage)
 {
@@ -1354,6 +1365,9 @@  dw_val_equal_p (dw_val_node *a, dw_val_node *b)
       return (a->v.val_double.high == b->v.val_double.high
 	      && a->v.val_double.low == b->v.val_double.low);
 
+    case dw_val_class_wide_int:
+      return a->v.val_wide == b->v.val_wide;
+
     case dw_val_class_vec:
       {
 	size_t a_len = a->v.val_vec.elt_size * a->v.val_vec.length;
@@ -1610,6 +1624,10 @@  size_of_loc_descr (dw_loc_descr_ref loc)
 	  case dw_val_class_const_double:
 	    size += HOST_BITS_PER_DOUBLE_INT / BITS_PER_UNIT;
 	    break;
+	  case dw_val_class_wide_int:
+	    size += (get_full_len (loc->dw_loc_oprnd2.v.val_wide)
+		     * HOST_BITS_PER_WIDE_INT / BITS_PER_UNIT);
+	    break;
 	  default:
 	    gcc_unreachable ();
 	  }
@@ -1787,6 +1805,20 @@  output_loc_operands (dw_loc_descr_ref loc, int for_eh_or_skip)
 				 second, NULL);
 	  }
 	  break;
+	case dw_val_class_wide_int:
+	  {
+	    int i;
+	    int len = get_full_len (val2->v.val_wide);
+	    if (WORDS_BIG_ENDIAN)
+	      for (i = len - 1; i >= 0; --i)
+		dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR,
+				     val2->v.val_wide.elt (i), NULL);
+	    else
+	      for (i = 0; i < len; ++i)
+		dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR,
+				     val2->v.val_wide.elt (i), NULL);
+	  }
+	  break;
 	case dw_val_class_addr:
 	  gcc_assert (val1->v.val_unsigned == DWARF2_ADDR_SIZE);
 	  dw2_asm_output_addr_rtx (DWARF2_ADDR_SIZE, val2->v.val_addr, NULL);
@@ -1996,6 +2028,21 @@  output_loc_operands (dw_loc_descr_ref loc, int for_eh_or_skip)
 	      dw2_asm_output_data (l, second, NULL);
 	    }
 	    break;
+	  case dw_val_class_wide_int:
+	    {
+	      int i;
+	      int len = get_full_len (val2->v.val_wide);
+	      l = HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR;
+
+	      dw2_asm_output_data (1, len * l, NULL);
+	      if (WORDS_BIG_ENDIAN)
+		for (i = len - 1; i >= 0; --i)
+		  dw2_asm_output_data (l, val2->v.val_wide.elt (i), NULL);
+	      else
+		for (i = 0; i < len; ++i)
+		  dw2_asm_output_data (l, val2->v.val_wide.elt (i), NULL);
+	    }
+	    break;
 	  default:
 	    gcc_unreachable ();
 	  }
@@ -3095,7 +3142,7 @@  static void add_AT_location_description	(dw_die_ref, enum dwarf_attribute,
 static void add_data_member_location_attribute (dw_die_ref, tree);
 static bool add_const_value_attribute (dw_die_ref, rtx);
 static void insert_int (HOST_WIDE_INT, unsigned, unsigned char *);
-static void insert_double (double_int, unsigned char *);
+static void insert_wide_int (const wide_int &, unsigned char *);
 static void insert_float (const_rtx, unsigned char *);
 static rtx rtl_for_decl_location (tree);
 static bool add_location_or_const_value_attribute (dw_die_ref, tree, bool,
@@ -3720,6 +3767,20 @@  AT_unsigned (dw_attr_ref a)
 /* Add an unsigned double integer attribute value to a DIE.  */
 
 static inline void
+add_AT_wide (dw_die_ref die, enum dwarf_attribute attr_kind,
+	     wide_int w)
+{
+  dw_attr_node attr;
+
+  attr.dw_attr = attr_kind;
+  attr.dw_attr_val.val_class = dw_val_class_wide_int;
+  attr.dw_attr_val.v.val_wide = w;
+  add_dwarf_attr (die, &attr);
+}
+
+/* Add an unsigned double integer attribute value to a DIE.  */
+
+static inline void
 add_AT_double (dw_die_ref die, enum dwarf_attribute attr_kind,
 	       HOST_WIDE_INT high, unsigned HOST_WIDE_INT low)
 {
@@ -5273,6 +5334,19 @@  print_die (dw_die_ref die, FILE *outfile)
 		   a->dw_attr_val.v.val_double.high,
 		   a->dw_attr_val.v.val_double.low);
 	  break;
+	case dw_val_class_wide_int:
+	  {
+	    int i = a->dw_attr_val.v.val_wide.get_len ();
+	    fprintf (outfile, "constant (");
+	    gcc_assert (i > 0);
+	    if (a->dw_attr_val.v.val_wide.elt (i - 1) == 0)
+	      fprintf (outfile, "0x");
+	    fprintf (outfile, HOST_WIDE_INT_PRINT_HEX, a->dw_attr_val.v.val_wide.elt (--i));
+	    while (-- i >= 0)
+	      fprintf (outfile, HOST_WIDE_INT_PRINT_PADDED_HEX, a->dw_attr_val.v.val_wide.elt (i));
+	    fprintf (outfile, ")");
+	    break;
+	  }
 	case dw_val_class_vec:
 	  fprintf (outfile, "floating-point or vector constant");
 	  break;
@@ -5444,6 +5518,9 @@  attr_checksum (dw_attr_ref at, struct md5_ctx *ctx, int *mark)
     case dw_val_class_const_double:
       CHECKSUM (at->dw_attr_val.v.val_double);
       break;
+    case dw_val_class_wide_int:
+      CHECKSUM (at->dw_attr_val.v.val_wide);
+      break;
     case dw_val_class_vec:
       CHECKSUM (at->dw_attr_val.v.val_vec);
       break;
@@ -5714,6 +5791,12 @@  attr_checksum_ordered (enum dwarf_tag tag, dw_attr_ref at,
       CHECKSUM (at->dw_attr_val.v.val_double);
       break;
 
+    case dw_val_class_wide_int:
+      CHECKSUM_ULEB128 (DW_FORM_block);
+      CHECKSUM_ULEB128 (sizeof (at->dw_attr_val.v.val_wide));
+      CHECKSUM (at->dw_attr_val.v.val_wide);
+      break;
+
     case dw_val_class_vec:
       CHECKSUM_ULEB128 (DW_FORM_block);
       CHECKSUM_ULEB128 (sizeof (at->dw_attr_val.v.val_vec));
@@ -6178,6 +6261,8 @@  same_dw_val_p (const dw_val_node *v1, const dw_val_node *v2, int *mark)
     case dw_val_class_const_double:
       return v1->v.val_double.high == v2->v.val_double.high
 	     && v1->v.val_double.low == v2->v.val_double.low;
+    case dw_val_class_wide_int:
+      return v1->v.val_wide == v2->v.val_wide;
     case dw_val_class_vec:
       if (v1->v.val_vec.length != v2->v.val_vec.length
 	  || v1->v.val_vec.elt_size != v2->v.val_vec.elt_size)
@@ -7640,6 +7725,13 @@  size_of_die (dw_die_ref die)
 	  if (HOST_BITS_PER_WIDE_INT >= 64)
 	    size++; /* block */
 	  break;
+	case dw_val_class_wide_int:
+	  size += (get_full_len (a->dw_attr_val.v.val_wide)
+		   * HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR);
+	  if (get_full_len (a->dw_attr_val.v.val_wide) * HOST_BITS_PER_WIDE_INT
+	      > 64)
+	    size++; /* block */
+	  break;
 	case dw_val_class_vec:
 	  size += constant_size (a->dw_attr_val.v.val_vec.length
 				 * a->dw_attr_val.v.val_vec.elt_size)
@@ -7978,6 +8070,20 @@  value_format (dw_attr_ref a)
 	default:
 	  return DW_FORM_block1;
 	}
+    case dw_val_class_wide_int:
+      switch (get_full_len (a->dw_attr_val.v.val_wide) * HOST_BITS_PER_WIDE_INT)
+	{
+	case 8:
+	  return DW_FORM_data1;
+	case 16:
+	  return DW_FORM_data2;
+	case 32:
+	  return DW_FORM_data4;
+	case 64:
+	  return DW_FORM_data8;
+	default:
+	  return DW_FORM_block1;
+	}
     case dw_val_class_vec:
       switch (constant_size (a->dw_attr_val.v.val_vec.length
 			     * a->dw_attr_val.v.val_vec.elt_size))
@@ -8417,6 +8523,32 @@  output_die (dw_die_ref die)
 	  }
 	  break;
 
+	case dw_val_class_wide_int:
+	  {
+	    int i;
+	    int len = get_full_len (a->dw_attr_val.v.val_wide);
+	    int l = HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR;
+	    if (len * HOST_BITS_PER_WIDE_INT > 64)
+	      dw2_asm_output_data (1, get_full_len (a->dw_attr_val.v.val_wide) * l,
+				   NULL);
+
+	    if (WORDS_BIG_ENDIAN)
+	      for (i = len - 1; i >= 0; --i)
+		{
+		  dw2_asm_output_data (l, a->dw_attr_val.v.val_wide.elt (i),
+				       name);
+		  name = NULL;
+		}
+	    else
+	      for (i = 0; i < len; ++i)
+		{
+		  dw2_asm_output_data (l, a->dw_attr_val.v.val_wide.elt (i),
+				       name);
+		  name = NULL;
+		}
+	  }
+	  break;
+
 	case dw_val_class_vec:
 	  {
 	    unsigned int elt_size = a->dw_attr_val.v.val_vec.elt_size;
@@ -11550,9 +11682,8 @@  clz_loc_descriptor (rtx rtl, enum machine_mode mode,
     msb = GEN_INT ((unsigned HOST_WIDE_INT) 1
 		   << (GET_MODE_BITSIZE (mode) - 1));
   else
-    msb = immed_double_const (0, (unsigned HOST_WIDE_INT) 1
-				  << (GET_MODE_BITSIZE (mode)
-				      - HOST_BITS_PER_WIDE_INT - 1), mode);
+    msb = immed_wide_int_const 
+      (wide_int::set_bit_in_zero (GET_MODE_PRECISION (mode) - 1, mode), mode);
   if (GET_CODE (msb) == CONST_INT && INTVAL (msb) < 0)
     tmp = new_loc_descr (HOST_BITS_PER_WIDE_INT == 32
 			 ? DW_OP_const4u : HOST_BITS_PER_WIDE_INT == 64
@@ -12493,7 +12624,16 @@  mem_loc_descriptor (rtx rtl, enum machine_mode mode,
 	  mem_loc_result->dw_loc_oprnd1.val_class = dw_val_class_die_ref;
 	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.die = type_die;
 	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0;
-	  if (SCALAR_FLOAT_MODE_P (mode))
+#if TARGET_SUPPORTS_WIDE_INT == 0
+	  if (!SCALAR_FLOAT_MODE_P (mode))
+	    {
+	      mem_loc_result->dw_loc_oprnd2.val_class
+		= dw_val_class_const_double;
+	      mem_loc_result->dw_loc_oprnd2.v.val_double
+		= rtx_to_double_int (rtl);
+	    }
+	  else
+#endif
 	    {
 	      unsigned int length = GET_MODE_SIZE (mode);
 	      unsigned char *array
@@ -12505,13 +12645,26 @@  mem_loc_descriptor (rtx rtl, enum machine_mode mode,
 	      mem_loc_result->dw_loc_oprnd2.v.val_vec.elt_size = 4;
 	      mem_loc_result->dw_loc_oprnd2.v.val_vec.array = array;
 	    }
-	  else
-	    {
-	      mem_loc_result->dw_loc_oprnd2.val_class
-		= dw_val_class_const_double;
-	      mem_loc_result->dw_loc_oprnd2.v.val_double
-		= rtx_to_double_int (rtl);
-	    }
+	}
+      break;
+
+    case CONST_WIDE_INT:
+      if (!dwarf_strict)
+	{
+	  dw_die_ref type_die;
+
+	  type_die = base_type_for_mode (mode,
+					 GET_MODE_CLASS (mode) == MODE_INT);
+	  if (type_die == NULL)
+	    return NULL;
+	  mem_loc_result = new_loc_descr (DW_OP_GNU_const_type, 0, 0);
+	  mem_loc_result->dw_loc_oprnd1.val_class = dw_val_class_die_ref;
+	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.die = type_die;
+	  mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0;
+	  mem_loc_result->dw_loc_oprnd2.val_class
+	    = dw_val_class_wide_int;
+	  mem_loc_result->dw_loc_oprnd2.v.val_wide
+	    = wide_int::from_rtx (rtl, mode);
 	}
       break;
 
@@ -12982,7 +13135,15 @@  loc_descriptor (rtx rtl, enum machine_mode mode,
 	     adequately represented.  We output CONST_DOUBLEs as blocks.  */
 	  loc_result = new_loc_descr (DW_OP_implicit_value,
 				      GET_MODE_SIZE (mode), 0);
-	  if (SCALAR_FLOAT_MODE_P (mode))
+#if TARGET_SUPPORTS_WIDE_INT == 0
+	  if (!SCALAR_FLOAT_MODE_P (mode))
+	    {
+	      loc_result->dw_loc_oprnd2.val_class = dw_val_class_const_double;
+	      loc_result->dw_loc_oprnd2.v.val_double
+	        = rtx_to_double_int (rtl);
+	    }
+	  else
+#endif
 	    {
 	      unsigned int length = GET_MODE_SIZE (mode);
 	      unsigned char *array
@@ -12994,12 +13155,26 @@  loc_descriptor (rtx rtl, enum machine_mode mode,
 	      loc_result->dw_loc_oprnd2.v.val_vec.elt_size = 4;
 	      loc_result->dw_loc_oprnd2.v.val_vec.array = array;
 	    }
-	  else
-	    {
-	      loc_result->dw_loc_oprnd2.val_class = dw_val_class_const_double;
-	      loc_result->dw_loc_oprnd2.v.val_double
-	        = rtx_to_double_int (rtl);
-	    }
+	}
+      break;
+
+    case CONST_WIDE_INT:
+      if (mode == VOIDmode)
+	mode = GET_MODE (rtl);
+
+      if (mode != VOIDmode && (dwarf_version >= 4 || !dwarf_strict))
+	{
+	  gcc_assert (mode == GET_MODE (rtl) || VOIDmode == GET_MODE (rtl));
+
+	  /* Note that a CONST_DOUBLE rtx could represent either an integer
+	     or a floating-point constant.  A CONST_DOUBLE is used whenever
+	     the constant requires more than one word in order to be
+	     adequately represented.  We output CONST_DOUBLEs as blocks.  */
+	  loc_result = new_loc_descr (DW_OP_implicit_value,
+				      GET_MODE_SIZE (mode), 0);
+	  loc_result->dw_loc_oprnd2.val_class = dw_val_class_wide_int;
+	  loc_result->dw_loc_oprnd2.v.val_wide
+	    = wide_int::from_rtx (rtl, mode);
 	}
       break;
 
@@ -13015,6 +13190,7 @@  loc_descriptor (rtx rtl, enum machine_mode mode,
 	    ggc_alloc_atomic (length * elt_size);
 	  unsigned int i;
 	  unsigned char *p;
+	  enum machine_mode imode = GET_MODE_INNER (mode);
 
 	  gcc_assert (mode == GET_MODE (rtl) || VOIDmode == GET_MODE (rtl));
 	  switch (GET_MODE_CLASS (mode))
@@ -13023,15 +13199,8 @@  loc_descriptor (rtx rtl, enum machine_mode mode,
 	      for (i = 0, p = array; i < length; i++, p += elt_size)
 		{
 		  rtx elt = CONST_VECTOR_ELT (rtl, i);
-		  double_int val = rtx_to_double_int (elt);
-
-		  if (elt_size <= sizeof (HOST_WIDE_INT))
-		    insert_int (val.to_shwi (), elt_size, p);
-		  else
-		    {
-		      gcc_assert (elt_size == 2 * sizeof (HOST_WIDE_INT));
-		      insert_double (val, p);
-		    }
+		  wide_int val = wide_int::from_rtx (elt, imode);
+		  insert_wide_int (val, p);
 		}
 	      break;
 
@@ -14656,22 +14825,27 @@  extract_int (const unsigned char *src, unsigned int size)
   return val;
 }
 
-/* Writes double_int values to dw_vec_const array.  */
+/* Writes wide_int values to dw_vec_const array.  */
 
 static void
-insert_double (double_int val, unsigned char *dest)
+insert_wide_int (const wide_int &val, unsigned char *dest)
 {
-  unsigned char *p0 = dest;
-  unsigned char *p1 = dest + sizeof (HOST_WIDE_INT);
+  int i;
 
   if (WORDS_BIG_ENDIAN)
-    {
-      p0 = p1;
-      p1 = dest;
-    }
-
-  insert_int ((HOST_WIDE_INT) val.low, sizeof (HOST_WIDE_INT), p0);
-  insert_int ((HOST_WIDE_INT) val.high, sizeof (HOST_WIDE_INT), p1);
+    for (i = (int)get_full_len (val) - 1; i >= 0; i--)
+      {
+	insert_int ((HOST_WIDE_INT) val.elt (i), 
+		    sizeof (HOST_WIDE_INT), dest);
+	dest += sizeof (HOST_WIDE_INT);
+      }
+  else
+    for (i = 0; i < (int)get_full_len (val); i++)
+      {
+	insert_int ((HOST_WIDE_INT) val.elt (i), 
+		    sizeof (HOST_WIDE_INT), dest);
+	dest += sizeof (HOST_WIDE_INT);
+      }
 }
 
 /* Writes floating point values to dw_vec_const array.  */
@@ -14716,6 +14890,11 @@  add_const_value_attribute (dw_die_ref die, rtx rtl)
       }
       return true;
 
+    case CONST_WIDE_INT:
+      add_AT_wide (die, DW_AT_const_value,
+		   wide_int::from_rtx (rtl, GET_MODE (rtl)));
+      return true;
+
     case CONST_DOUBLE:
       /* Note that a CONST_DOUBLE rtx could represent either an integer or a
 	 floating-point constant.  A CONST_DOUBLE is used whenever the
@@ -14724,7 +14903,10 @@  add_const_value_attribute (dw_die_ref die, rtx rtl)
       {
 	enum machine_mode mode = GET_MODE (rtl);
 
-	if (SCALAR_FLOAT_MODE_P (mode))
+	if (TARGET_SUPPORTS_WIDE_INT == 0 && !SCALAR_FLOAT_MODE_P (mode))
+	  add_AT_double (die, DW_AT_const_value,
+			 CONST_DOUBLE_HIGH (rtl), CONST_DOUBLE_LOW (rtl));
+	else
 	  {
 	    unsigned int length = GET_MODE_SIZE (mode);
 	    unsigned char *array = (unsigned char *) ggc_alloc_atomic (length);
@@ -14732,9 +14914,6 @@  add_const_value_attribute (dw_die_ref die, rtx rtl)
 	    insert_float (rtl, array);
 	    add_AT_vec (die, DW_AT_const_value, length / 4, 4, array);
 	  }
-	else
-	  add_AT_double (die, DW_AT_const_value,
-			 CONST_DOUBLE_HIGH (rtl), CONST_DOUBLE_LOW (rtl));
       }
       return true;
 
@@ -14747,6 +14926,7 @@  add_const_value_attribute (dw_die_ref die, rtx rtl)
 	  (length * elt_size);
 	unsigned int i;
 	unsigned char *p;
+	enum machine_mode imode = GET_MODE_INNER (mode);
 
 	switch (GET_MODE_CLASS (mode))
 	  {
@@ -14754,15 +14934,8 @@  add_const_value_attribute (dw_die_ref die, rtx rtl)
 	    for (i = 0, p = array; i < length; i++, p += elt_size)
 	      {
 		rtx elt = CONST_VECTOR_ELT (rtl, i);
-		double_int val = rtx_to_double_int (elt);
-
-		if (elt_size <= sizeof (HOST_WIDE_INT))
-		  insert_int (val.to_shwi (), elt_size, p);
-		else
-		  {
-		    gcc_assert (elt_size == 2 * sizeof (HOST_WIDE_INT));
-		    insert_double (val, p);
-		  }
+		wide_int val = wide_int::from_rtx (elt, imode);
+		insert_wide_int (val, p);
 	      }
 	    break;
 
@@ -23091,6 +23264,9 @@  hash_loc_operands (dw_loc_descr_ref loc, hashval_t hash)
 	  hash = iterative_hash_object (val2->v.val_double.low, hash);
 	  hash = iterative_hash_object (val2->v.val_double.high, hash);
 	  break;
+	case dw_val_class_wide_int:
+	  hash = iterative_hash_object (val2->v.val_wide, hash);
+	  break;
 	case dw_val_class_addr:
 	  hash = iterative_hash_rtx (val2->v.val_addr, hash);
 	  break;
@@ -23180,6 +23356,9 @@  hash_loc_operands (dw_loc_descr_ref loc, hashval_t hash)
 	    hash = iterative_hash_object (val2->v.val_double.low, hash);
 	    hash = iterative_hash_object (val2->v.val_double.high, hash);
 	    break;
+	  case dw_val_class_wide_int:
+	    hash = iterative_hash_object (val2->v.val_wide, hash);
+	    break;
 	  default:
 	    gcc_unreachable ();
 	  }
@@ -23328,6 +23507,8 @@  compare_loc_operands (dw_loc_descr_ref x, dw_loc_descr_ref y)
 	case dw_val_class_const_double:
 	  return valx2->v.val_double.low == valy2->v.val_double.low
 		 && valx2->v.val_double.high == valy2->v.val_double.high;
+	case dw_val_class_wide_int:
+	  return valx2->v.val_wide == valy2->v.val_wide;
 	case dw_val_class_addr:
 	  return rtx_equal_p (valx2->v.val_addr, valy2->v.val_addr);
 	default:
@@ -23371,6 +23552,8 @@  compare_loc_operands (dw_loc_descr_ref x, dw_loc_descr_ref y)
 	case dw_val_class_const_double:
 	  return valx2->v.val_double.low == valy2->v.val_double.low
 		 && valx2->v.val_double.high == valy2->v.val_double.high;
+	case dw_val_class_wide_int:
+	  return valx2->v.val_wide == valy2->v.val_wide;
 	default:
 	  gcc_unreachable ();
 	}
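The new `insert_wide_int` above writes each HOST_WIDE_INT element of the value into the dw_vec_const byte array, starting from the most-significant element on word-big-endian targets. A minimal standalone sketch of that word-ordering loop, using `memcpy` and `int64_t` in place of GCC's `insert_int` and HOST_WIDE_INT (so host byte order within each word is assumed, unlike the real `insert_int`):

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for GCC's HOST_WIDE_INT; the function mirrors the
   insert_wide_int loop, not the real insert_int byte encoding.  */
typedef int64_t host_wide_int;

static void
insert_words (const host_wide_int *val, int len, unsigned char *dest,
	      int words_big_endian)
{
  int i;

  if (words_big_endian)
    /* Most-significant word first.  */
    for (i = len - 1; i >= 0; i--)
      {
	memcpy (dest, &val[i], sizeof (host_wide_int));
	dest += sizeof (host_wide_int);
      }
  else
    /* Least-significant word first.  */
    for (i = 0; i < len; i++)
      {
	memcpy (dest, &val[i], sizeof (host_wide_int));
	dest += sizeof (host_wide_int);
      }
}
```

Either way the destination pointer only ever advances, which is what lets the patch drop the old `p0`/`p1` pointer swap.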
diff --git a/gcc/dwarf2out.h b/gcc/dwarf2out.h
index ad03a34..531a7c1 100644
--- a/gcc/dwarf2out.h
+++ b/gcc/dwarf2out.h
@@ -21,6 +21,7 @@  along with GCC; see the file COPYING3.  If not see
 #define GCC_DWARF2OUT_H 1
 
 #include "dwarf2.h"	/* ??? Remove this once only used by dwarf2foo.c.  */
+#include "wide-int.h"
 
 typedef struct die_struct *dw_die_ref;
 typedef const struct die_struct *const_dw_die_ref;
@@ -139,6 +140,7 @@  enum dw_val_class
   dw_val_class_const,
   dw_val_class_unsigned_const,
   dw_val_class_const_double,
+  dw_val_class_wide_int,
   dw_val_class_vec,
   dw_val_class_flag,
   dw_val_class_die_ref,
@@ -180,6 +182,7 @@  typedef struct GTY(()) dw_val_struct {
       HOST_WIDE_INT GTY ((default)) val_int;
       unsigned HOST_WIDE_INT GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned;
       double_int GTY ((tag ("dw_val_class_const_double"))) val_double;
+      wide_int GTY ((tag ("dw_val_class_wide_int"))) val_wide;
       dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
       struct dw_val_die_union
 	{
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index 5a24a791..dfb0abc 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -124,6 +124,9 @@  rtx cc0_rtx;
 static GTY ((if_marked ("ggc_marked_p"), param_is (struct rtx_def)))
      htab_t const_int_htab;
 
+static GTY ((if_marked ("ggc_marked_p"), param_is (struct rtx_def)))
+     htab_t const_wide_int_htab;
+
 /* A hash table storing memory attribute structures.  */
 static GTY ((if_marked ("ggc_marked_p"), param_is (struct mem_attrs)))
      htab_t mem_attrs_htab;
@@ -149,6 +152,11 @@  static void set_used_decls (tree);
 static void mark_label_nuses (rtx);
 static hashval_t const_int_htab_hash (const void *);
 static int const_int_htab_eq (const void *, const void *);
+#if TARGET_SUPPORTS_WIDE_INT
+static hashval_t const_wide_int_htab_hash (const void *);
+static int const_wide_int_htab_eq (const void *, const void *);
+static rtx lookup_const_wide_int (rtx);
+#endif
 static hashval_t const_double_htab_hash (const void *);
 static int const_double_htab_eq (const void *, const void *);
 static rtx lookup_const_double (rtx);
@@ -185,6 +193,43 @@  const_int_htab_eq (const void *x, const void *y)
   return (INTVAL ((const_rtx) x) == *((const HOST_WIDE_INT *) y));
 }
 
+#if TARGET_SUPPORTS_WIDE_INT
+/* Returns a hash code for X (which is really a CONST_WIDE_INT).  */
+
+static hashval_t
+const_wide_int_htab_hash (const void *x)
+{
+  int i;
+  HOST_WIDE_INT hash = 0;
+  const_rtx xr = (const_rtx) x;
+
+  for (i = 0; i < CONST_WIDE_INT_NUNITS (xr); i++)
+    hash += CONST_WIDE_INT_ELT (xr, i);
+
+  return (hashval_t) hash;
+}
+
+/* Returns nonzero if the value represented by X (which is really a
+   CONST_WIDE_INT) is the same as that given by Y (which is really a
+   CONST_WIDE_INT).  */
+
+static int
+const_wide_int_htab_eq (const void *x, const void *y)
+{
+  int i;
+  const_rtx xr = (const_rtx) x;
+  const_rtx yr = (const_rtx) y;
+  if (CONST_WIDE_INT_NUNITS (xr) != CONST_WIDE_INT_NUNITS (yr))
+    return 0;
+
+  for (i = 0; i < CONST_WIDE_INT_NUNITS (xr); i++)
+    if (CONST_WIDE_INT_ELT (xr, i) != CONST_WIDE_INT_ELT (yr, i))
+      return 0;
+  
+  return 1;
+}
+#endif
+
 /* Returns a hash code for X (which is really a CONST_DOUBLE).  */
 static hashval_t
 const_double_htab_hash (const void *x)
@@ -192,7 +237,7 @@  const_double_htab_hash (const void *x)
   const_rtx const value = (const_rtx) x;
   hashval_t h;
 
-  if (GET_MODE (value) == VOIDmode)
+  if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (value) == VOIDmode)
     h = CONST_DOUBLE_LOW (value) ^ CONST_DOUBLE_HIGH (value);
   else
     {
@@ -212,7 +257,7 @@  const_double_htab_eq (const void *x, const void *y)
 
   if (GET_MODE (a) != GET_MODE (b))
     return 0;
-  if (GET_MODE (a) == VOIDmode)
+  if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (a) == VOIDmode)
     return (CONST_DOUBLE_LOW (a) == CONST_DOUBLE_LOW (b)
 	    && CONST_DOUBLE_HIGH (a) == CONST_DOUBLE_HIGH (b));
   else
@@ -478,6 +523,7 @@  const_fixed_from_fixed_value (FIXED_VALUE_TYPE value, enum machine_mode mode)
   return lookup_const_fixed (fixed);
 }
 
+#if TARGET_SUPPORTS_WIDE_INT == 0
 /* Constructs double_int from rtx CST.  */
 
 double_int
@@ -497,17 +543,60 @@  rtx_to_double_int (const_rtx cst)
   
   return r;
 }
+#endif
 
+#if TARGET_SUPPORTS_WIDE_INT
+/* Determine whether WIDE_INT already exists in the hash table.  If
+   so, return its counterpart; otherwise add it to the hash table and
+   return it.  */
+
+static rtx
+lookup_const_wide_int (rtx wint)
+{
+  void **slot = htab_find_slot (const_wide_int_htab, wint, INSERT);
+  if (*slot == 0)
+    *slot = wint;
 
-/* Return a CONST_DOUBLE or CONST_INT for a value specified as
-   a double_int.  */
+  return (rtx) *slot;
+}
+#endif
 
+/* V contains a wide_int.  Produce a CONST_INT if the value fits in a
+   single HOST_WIDE_INT in compact form; otherwise produce a
+   CONST_WIDE_INT (if TARGET_SUPPORTS_WIDE_INT is defined) or a
+   CONST_DOUBLE (if it is not).  */
 rtx
-immed_double_int_const (double_int i, enum machine_mode mode)
+immed_wide_int_const (const wide_int &v, enum machine_mode mode)
 {
-  return immed_double_const (i.low, i.high, mode);
+  unsigned int len = v.get_len ();
+
+  if (len < 2)
+    return gen_int_mode (v.elt (0), mode);
+
+  gcc_assert (GET_MODE_PRECISION (mode) == v.get_precision ());
+
+#if TARGET_SUPPORTS_WIDE_INT
+  {
+    rtx value = const_wide_int_alloc (len);
+    unsigned int i;
+
+    /* It is so tempting to just put the mode in here.  Must control
+       myself ... */
+    PUT_MODE (value, VOIDmode);
+    HWI_PUT_NUM_ELEM (CONST_WIDE_INT_VEC (value), len);
+
+    for (i = 0; i < len; i++)
+      CONST_WIDE_INT_ELT (value, i) = v.elt (i);
+
+    return lookup_const_wide_int (value);
+  }
+#else
+  return immed_double_const (v.elt (0), v.elt (1), mode);
+#endif
 }
 
+#if TARGET_SUPPORTS_WIDE_INT == 0
 /* Return a CONST_DOUBLE or CONST_INT for a value specified as a pair
    of ints: I0 is the low-order word and I1 is the high-order word.
    For values that are larger than HOST_BITS_PER_DOUBLE_INT, the
@@ -559,6 +648,7 @@  immed_double_const (HOST_WIDE_INT i0, HOST_WIDE_INT i1, enum machine_mode mode)
 
   return lookup_const_double (value);
 }
+#endif
 
 rtx
 gen_rtx_REG (enum machine_mode mode, unsigned int regno)
@@ -5616,11 +5706,15 @@  init_emit_once (void)
   enum machine_mode mode;
   enum machine_mode double_mode;
 
-  /* Initialize the CONST_INT, CONST_DOUBLE, CONST_FIXED, and memory attribute
-     hash tables.  */
+  /* Initialize the CONST_INT, CONST_WIDE_INT, CONST_DOUBLE,
+     CONST_FIXED, and memory attribute hash tables.  */
   const_int_htab = htab_create_ggc (37, const_int_htab_hash,
 				    const_int_htab_eq, NULL);
 
+#if TARGET_SUPPORTS_WIDE_INT
+  const_wide_int_htab = htab_create_ggc (37, const_wide_int_htab_hash,
+					 const_wide_int_htab_eq, NULL);
+#endif
   const_double_htab = htab_create_ggc (37, const_double_htab_hash,
 				       const_double_htab_eq, NULL);
 
diff --git a/gcc/explow.c b/gcc/explow.c
index 08a6653..c154472 100644
--- a/gcc/explow.c
+++ b/gcc/explow.c
@@ -95,38 +95,9 @@  plus_constant (enum machine_mode mode, rtx x, HOST_WIDE_INT c)
 
   switch (code)
     {
-    case CONST_INT:
-      if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
-	{
-	  double_int di_x = double_int::from_shwi (INTVAL (x));
-	  double_int di_c = double_int::from_shwi (c);
-
-	  bool overflow;
-	  double_int v = di_x.add_with_sign (di_c, false, &overflow);
-	  if (overflow)
-	    gcc_unreachable ();
-
-	  return immed_double_int_const (v, VOIDmode);
-	}
-
-      return GEN_INT (INTVAL (x) + c);
-
-    case CONST_DOUBLE:
-      {
-	double_int di_x = double_int::from_pair (CONST_DOUBLE_HIGH (x),
-						 CONST_DOUBLE_LOW (x));
-	double_int di_c = double_int::from_shwi (c);
-
-	bool overflow;
-	double_int v = di_x.add_with_sign (di_c, false, &overflow);
-	if (overflow)
-	  /* Sorry, we have no way to represent overflows this wide.
-	     To fix, add constant support wider than CONST_DOUBLE.  */
-	  gcc_assert (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT);
-
-	return immed_double_int_const (v, VOIDmode);
-      }
-
+    CASE_CONST_SCALAR_INT:
+      return immed_wide_int_const (wide_int::from_rtx (x, mode) 
+				   + wide_int::from_shwi (c, mode), mode);
     case MEM:
       /* If this is a reference to the constant pool, try replacing it with
 	 a reference to a new constant.  If the resulting address isn't
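The plus_constant hunk folds both the CONST_INT and CONST_DOUBLE cases into a single wide-int addition. The core of such an addition is a word-wise add with carry, sign-extending the single-word addend across the high words; a sketch under the assumption of little-endian word order, with `uint64_t` standing in for unsigned HOST_WIDE_INT:

```c
#include <stdint.h>

/* Add the signed single-word constant C to the N-word value VAL
   (least-significant word first), propagating carries.  A sketch of
   the arithmetic wide_int performs for plus_constant.  */
static void
wide_add_hwi (uint64_t *val, int n, int64_t c)
{
  /* Sign extension of C, used for every word above the first.  */
  uint64_t ext = c < 0 ? ~(uint64_t) 0 : 0;
  uint64_t carry = 0;
  int i;

  for (i = 0; i < n; i++)
    {
      uint64_t x = (i == 0 ? (uint64_t) c : ext);
      uint64_t sum = val[i] + x;
      uint64_t sum2 = sum + carry;

      /* At most one of the two additions can wrap around.  */
      carry = (sum < val[i]) | (sum2 < sum);
      val[i] = sum2;
    }
}
```

Unlike the deleted double_int code, nothing here asserts on overflow; wrap-around at the top word is simply truncation to the mode's precision, which is the wide-int model.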
diff --git a/gcc/expmed.c b/gcc/expmed.c
index 3c3a179..ae726c2 100644
--- a/gcc/expmed.c
+++ b/gcc/expmed.c
@@ -55,7 +55,6 @@  static void store_split_bit_field (rtx, unsigned HOST_WIDE_INT,
 static rtx extract_fixed_bit_field (enum machine_mode, rtx,
 				    unsigned HOST_WIDE_INT,
 				    unsigned HOST_WIDE_INT, rtx, int, bool);
-static rtx mask_rtx (enum machine_mode, int, int, int);
 static rtx lshift_value (enum machine_mode, rtx, int, int);
 static rtx extract_split_bit_field (rtx, unsigned HOST_WIDE_INT,
 				    unsigned HOST_WIDE_INT, int);
@@ -63,6 +62,18 @@  static void do_cmp_and_jump (rtx, rtx, enum rtx_code, enum machine_mode, rtx);
 static rtx expand_smod_pow2 (enum machine_mode, rtx, HOST_WIDE_INT);
 static rtx expand_sdiv_pow2 (enum machine_mode, rtx, HOST_WIDE_INT);
 
+/* Return a constant integer mask value of mode MODE with BITSIZE ones
+   followed by BITPOS zeros, or the complement of that if COMPLEMENT.
+   The mask is truncated if necessary to the width of mode MODE.  The
+   mask is zero-extended if BITSIZE+BITPOS is too small for MODE.  */
+
+static inline rtx 
+mask_rtx (enum machine_mode mode, int bitpos, int bitsize, bool complement)
+{
+  return immed_wide_int_const 
+    (wide_int::shifted_mask (bitpos, bitsize, complement, mode), mode);
+}
+
 /* Test whether a value is zero of a power of two.  */
 #define EXACT_POWER_OF_2_OR_ZERO_P(x) \
   (((x) & ((x) - (unsigned HOST_WIDE_INT) 1)) == 0)
@@ -1832,39 +1843,15 @@  extract_fixed_bit_field (enum machine_mode tmode, rtx op0,
   return expand_shift (RSHIFT_EXPR, mode, op0,
 		       GET_MODE_BITSIZE (mode) - bitsize, target, 0);
 }
-
-/* Return a constant integer (CONST_INT or CONST_DOUBLE) mask value
-   of mode MODE with BITSIZE ones followed by BITPOS zeros, or the
-   complement of that if COMPLEMENT.  The mask is truncated if
-   necessary to the width of mode MODE.  The mask is zero-extended if
-   BITSIZE+BITPOS is too small for MODE.  */
-
-static rtx
-mask_rtx (enum machine_mode mode, int bitpos, int bitsize, int complement)
-{
-  double_int mask;
-
-  mask = double_int::mask (bitsize);
-  mask = mask.llshift (bitpos, HOST_BITS_PER_DOUBLE_INT);
-
-  if (complement)
-    mask = ~mask;
-
-  return immed_double_int_const (mask, mode);
-}
-
-/* Return a constant integer (CONST_INT or CONST_DOUBLE) rtx with the value
-   VALUE truncated to BITSIZE bits and then shifted left BITPOS bits.  */
+/* Return a constant integer rtx with the value VALUE truncated to
+   BITSIZE bits and then shifted left BITPOS bits.  */
 
 static rtx
 lshift_value (enum machine_mode mode, rtx value, int bitpos, int bitsize)
 {
-  double_int val;
-  
-  val = double_int::from_uhwi (INTVAL (value)).zext (bitsize);
-  val = val.llshift (bitpos, HOST_BITS_PER_DOUBLE_INT);
-
-  return immed_double_int_const (val, mode);
+  return 
+    immed_wide_int_const (wide_int::from_rtx (value, mode)
+			  .zext (bitsize).lshift (bitpos), mode);
 }
 
 /* Extract a bit field that is split across two words
@@ -3069,37 +3056,41 @@  expand_mult (enum machine_mode mode, rtx op0, rtx op1, rtx target,
 	 only if the constant value exactly fits in an `unsigned int' without
 	 any truncation.  This means that multiplying by negative values does
 	 not work; results are off by 2^32 on a 32 bit machine.  */
-
       if (CONST_INT_P (scalar_op1))
 	{
 	  coeff = INTVAL (scalar_op1);
 	  is_neg = coeff < 0;
 	}
+#if TARGET_SUPPORTS_WIDE_INT
+      else if (CONST_WIDE_INT_P (scalar_op1))
+#else
       else if (CONST_DOUBLE_AS_INT_P (scalar_op1))
+#endif
 	{
-	  /* If we are multiplying in DImode, it may still be a win
-	     to try to work with shifts and adds.  */
-	  if (CONST_DOUBLE_HIGH (scalar_op1) == 0
-	      && (CONST_DOUBLE_LOW (scalar_op1) > 0
-		  || (CONST_DOUBLE_LOW (scalar_op1) < 0
-		      && EXACT_POWER_OF_2_OR_ZERO_P
-			   (CONST_DOUBLE_LOW (scalar_op1)))))
+	  int p = GET_MODE_PRECISION (mode);
+	  wide_int val = wide_int::from_rtx (scalar_op1, mode);
+	  int shift = val.exact_log2 ().to_shwi (); 
+	  /* Perfect power of 2.  */
+	  is_neg = false;
+	  if (shift > 0)
 	    {
-	      coeff = CONST_DOUBLE_LOW (scalar_op1);
-	      is_neg = false;
+	      /* Do the shift count truncation against the bitsize, not
+		 the precision.  See the comment above
+		 wide-int.c:trunc_shift for details.  */
+	      if (SHIFT_COUNT_TRUNCATED)
+		shift &= GET_MODE_BITSIZE (mode) - 1;
+	      /* We could consider adding just a move of 0 to target
+		 if the shift >= p.  */
+	      if (shift < p)
+		return expand_shift (LSHIFT_EXPR, mode, op0, 
+				     shift, target, unsignedp);
+	      /* Any positive number that fits in a word.  */
+	      coeff = CONST_WIDE_INT_ELT (scalar_op1, 0);
 	    }
-	  else if (CONST_DOUBLE_LOW (scalar_op1) == 0)
+	  else if (val.sign_mask () == 0)
 	    {
-	      coeff = CONST_DOUBLE_HIGH (scalar_op1);
-	      if (EXACT_POWER_OF_2_OR_ZERO_P (coeff))
-		{
-		  int shift = floor_log2 (coeff) + HOST_BITS_PER_WIDE_INT;
-		  if (shift < HOST_BITS_PER_DOUBLE_INT - 1
-		      || mode_bitsize <= HOST_BITS_PER_DOUBLE_INT)
-		    return expand_shift (LSHIFT_EXPR, mode, op0,
-					 shift, target, unsignedp);
-		}
-	      goto skip_synth;
+	      /* Any positive number that fits in a word.  */
+	      coeff = CONST_WIDE_INT_ELT (scalar_op1, 0);
 	    }
 	  else
 	    goto skip_synth;
@@ -3601,9 +3592,10 @@  expmed_mult_highpart (enum machine_mode mode, rtx op0, rtx op1,
 static rtx
 expand_smod_pow2 (enum machine_mode mode, rtx op0, HOST_WIDE_INT d)
 {
-  unsigned HOST_WIDE_INT masklow, maskhigh;
   rtx result, temp, shift, label;
   int logd;
+  wide_int mask;
+  int prec = GET_MODE_PRECISION (mode);
 
   logd = floor_log2 (d);
   result = gen_reg_rtx (mode);
@@ -3616,8 +3608,8 @@  expand_smod_pow2 (enum machine_mode mode, rtx op0, HOST_WIDE_INT d)
 				      mode, 0, -1);
       if (signmask)
 	{
+	  HOST_WIDE_INT masklow = ((HOST_WIDE_INT) 1 << logd) - 1;
 	  signmask = force_reg (mode, signmask);
-	  masklow = ((HOST_WIDE_INT) 1 << logd) - 1;
 	  shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd);
 
 	  /* Use the rtx_cost of a LSHIFTRT instruction to determine
@@ -3662,19 +3654,11 @@  expand_smod_pow2 (enum machine_mode mode, rtx op0, HOST_WIDE_INT d)
      modulus.  By including the signbit in the operation, many targets
      can avoid an explicit compare operation in the following comparison
      against zero.  */
-
-  masklow = ((HOST_WIDE_INT) 1 << logd) - 1;
-  if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT)
-    {
-      masklow |= (HOST_WIDE_INT) -1 << (GET_MODE_BITSIZE (mode) - 1);
-      maskhigh = -1;
-    }
-  else
-    maskhigh = (HOST_WIDE_INT) -1
-		 << (GET_MODE_BITSIZE (mode) - HOST_BITS_PER_WIDE_INT - 1);
+  mask = wide_int::mask (logd, false, mode);
+  mask = mask.set_bit (prec - 1);
 
   temp = expand_binop (mode, and_optab, op0,
-		       immed_double_const (masklow, maskhigh, mode),
+		       immed_wide_int_const (mask, mode),
 		       result, 1, OPTAB_LIB_WIDEN);
   if (temp != result)
     emit_move_insn (result, temp);
@@ -3684,10 +3668,10 @@  expand_smod_pow2 (enum machine_mode mode, rtx op0, HOST_WIDE_INT d)
 
   temp = expand_binop (mode, sub_optab, result, const1_rtx, result,
 		       0, OPTAB_LIB_WIDEN);
-  masklow = (HOST_WIDE_INT) -1 << logd;
-  maskhigh = -1;
+
+  mask = wide_int::mask (logd, true, mode); 
   temp = expand_binop (mode, ior_optab, temp,
-		       immed_double_const (masklow, maskhigh, mode),
+		       immed_wide_int_const (mask, mode),
 		       result, 1, OPTAB_LIB_WIDEN);
   temp = expand_binop (mode, add_optab, temp, const1_rtx, result,
 		       0, OPTAB_LIB_WIDEN);
@@ -4940,8 +4924,12 @@  make_tree (tree type, rtx x)
 	return t;
       }
 
+    case CONST_WIDE_INT:
+      t = wide_int_to_tree (type, wide_int::from_rtx (x, TYPE_MODE (type)));
+      return t;
+
     case CONST_DOUBLE:
-      if (GET_MODE (x) == VOIDmode)
+      if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (x) == VOIDmode)
 	t = build_int_cst_wide (type,
 				CONST_DOUBLE_LOW (x), CONST_DOUBLE_HIGH (x));
       else
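With the expmed.c changes, `mask_rtx` becomes a thin wrapper around `wide_int::shifted_mask`. For values that fit in one word, the mask it describes (BITSIZE ones followed by BITPOS zeros, optionally complemented, truncated to the mode's precision) can be sketched as follows; this is a single-word illustration only, assuming `0 <= bitpos < 64` and `prec <= 64`:

```c
#include <stdint.h>

/* BITSIZE ones followed by BITPOS zeros, optionally complemented,
   truncated to PREC bits.  Single-word sketch of what
   wide_int::shifted_mask computes for any width.  */
static uint64_t
shifted_mask (int bitpos, int bitsize, int complement, int prec)
{
  uint64_t ones = (bitsize >= 64
		   ? ~(uint64_t) 0
		   : ((uint64_t) 1 << bitsize) - 1);
  uint64_t mask = ones << bitpos;

  if (complement)
    mask = ~mask;
  /* Truncate to the mode's precision; bits shifted past PREC are
     simply dropped, which also handles bitsize + bitpos > prec.  */
  if (prec < 64)
    mask &= ((uint64_t) 1 << prec) - 1;
  return mask;
}
```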
diff --git a/gcc/expr.c b/gcc/expr.c
index e3fb0b6..6c8b1b5 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -710,23 +710,23 @@  convert_modes (enum machine_mode mode, enum machine_mode oldmode, rtx x, int uns
   if (mode == oldmode)
     return x;
 
-  /* There is one case that we must handle specially: If we are converting
-     a CONST_INT into a mode whose size is twice HOST_BITS_PER_WIDE_INT and
-     we are to interpret the constant as unsigned, gen_lowpart will do
-     the wrong if the constant appears negative.  What we want to do is
-     make the high-order word of the constant zero, not all ones.  */
+  /* There is one case that we must handle specially: If we are
+     converting a CONST_INT into a mode whose size is larger than
+     HOST_BITS_PER_WIDE_INT and we are to interpret the constant as
+     unsigned, gen_lowpart will do the wrong thing if the constant appears
+     negative.  What we want to do is make the high-order word of the
+     constant zero, not all ones.  */
 
   if (unsignedp && GET_MODE_CLASS (mode) == MODE_INT
-      && GET_MODE_BITSIZE (mode) == HOST_BITS_PER_DOUBLE_INT
+      && GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT
       && CONST_INT_P (x) && INTVAL (x) < 0)
     {
-      double_int val = double_int::from_uhwi (INTVAL (x));
-
+      HOST_WIDE_INT val = INTVAL (x);
       /* We need to zero extend VAL.  */
       if (oldmode != VOIDmode)
-	val = val.zext (GET_MODE_BITSIZE (oldmode));
+	val &= GET_MODE_MASK (oldmode);
 
-      return immed_double_int_const (val, mode);
+      return immed_wide_int_const (wide_int::from_uhwi (val, mode), mode);
     }
 
   /* We can do this with a gen_lowpart if both desired and current modes
@@ -738,7 +738,11 @@  convert_modes (enum machine_mode mode, enum machine_mode oldmode, rtx x, int uns
        && GET_MODE_PRECISION (mode) <= HOST_BITS_PER_WIDE_INT)
       || (GET_MODE_CLASS (mode) == MODE_INT
 	  && GET_MODE_CLASS (oldmode) == MODE_INT
-	  && (CONST_DOUBLE_AS_INT_P (x) 
+#if TARGET_SUPPORTS_WIDE_INT
+	  && (CONST_WIDE_INT_P (x)
+#else
+ 	  && (CONST_DOUBLE_AS_INT_P (x)
+#endif
 	      || (GET_MODE_PRECISION (mode) <= GET_MODE_PRECISION (oldmode)
 		  && ((MEM_P (x) && ! MEM_VOLATILE_P (x)
 		       && direct_load[(int) mode])
@@ -1743,6 +1747,7 @@  emit_group_load_1 (rtx *tmps, rtx dst, rtx orig_src, tree type, int ssize)
 	    {
 	      rtx first, second;
 
+	      /* TODO: const_wide_int can have sizes other than this...  */
 	      gcc_assert (2 * len == ssize);
 	      split_double (src, &first, &second);
 	      if (i)
@@ -5239,10 +5244,10 @@  store_expr (tree exp, rtx target, int call_param_p, bool nontemporal)
 			       &alt_rtl);
     }
 
-  /* If TEMP is a VOIDmode constant and the mode of the type of EXP is not
-     the same as that of TARGET, adjust the constant.  This is needed, for
-     example, in case it is a CONST_DOUBLE and we want only a word-sized
-     value.  */
+  /* If TEMP is a VOIDmode constant and the mode of the type of EXP is
+     not the same as that of TARGET, adjust the constant.  This is
+     needed, for example, in case it is a CONST_DOUBLE or
+     CONST_WIDE_INT and we want only a word-sized value.  */
   if (CONSTANT_P (temp) && GET_MODE (temp) == VOIDmode
       && TREE_CODE (exp) != ERROR_MARK
       && GET_MODE (target) != TYPE_MODE (TREE_TYPE (exp)))
@@ -7741,11 +7746,12 @@  expand_constructor (tree exp, rtx target, enum expand_modifier modifier,
 
   /* All elts simple constants => refer to a constant in memory.  But
      if this is a non-BLKmode mode, let it store a field at a time
-     since that should make a CONST_INT or CONST_DOUBLE when we
-     fold.  Likewise, if we have a target we can use, it is best to
-     store directly into the target unless the type is large enough
-     that memcpy will be used.  If we are making an initializer and
-     all operands are constant, put it in memory as well.
+     since that should make a CONST_INT, CONST_WIDE_INT or
+     CONST_DOUBLE when we fold.  Likewise, if we have a target we can
+     use, it is best to store directly into the target unless the type
+     is large enough that memcpy will be used.  If we are making an
+     initializer and all operands are constant, put it in memory as
+     well.
 
      FIXME: Avoid trying to fill vector constructors piece-meal.
      Output them with output_constant_def below unless we're sure
@@ -8215,17 +8221,18 @@  expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode,
 	      && TREE_CONSTANT (treeop1))
 	    {
 	      rtx constant_part;
+	      HOST_WIDE_INT wc;
+	      enum machine_mode wmode = TYPE_MODE (TREE_TYPE (treeop1));
 
 	      op1 = expand_expr (treeop1, subtarget, VOIDmode,
 				 EXPAND_SUM);
-	      /* Use immed_double_const to ensure that the constant is
+	      /* Use wide_int::from_shwi to ensure that the constant is
 		 truncated according to the mode of OP1, then sign extended
 		 to a HOST_WIDE_INT.  Using the constant directly can result
 		 in non-canonical RTL in a 64x32 cross compile.  */
-	      constant_part
-		= immed_double_const (TREE_INT_CST_LOW (treeop0),
-				      (HOST_WIDE_INT) 0,
-				      TYPE_MODE (TREE_TYPE (treeop1)));
+	      wc = TREE_INT_CST_LOW (treeop0);
+	      constant_part 
+		= immed_wide_int_const (wide_int::from_shwi (wc, wmode), wmode);
 	      op1 = plus_constant (mode, op1, INTVAL (constant_part));
 	      if (modifier != EXPAND_SUM && modifier != EXPAND_INITIALIZER)
 		op1 = force_operand (op1, target);
@@ -8237,7 +8244,8 @@  expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode,
 		   && TREE_CONSTANT (treeop0))
 	    {
 	      rtx constant_part;
-
+	      HOST_WIDE_INT wc;
+	      enum machine_mode wmode = TYPE_MODE (TREE_TYPE (treeop0));
 	      op0 = expand_expr (treeop0, subtarget, VOIDmode,
 				 (modifier == EXPAND_INITIALIZER
 				 ? EXPAND_INITIALIZER : EXPAND_SUM));
@@ -8251,14 +8259,13 @@  expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode,
 		    return simplify_gen_binary (PLUS, mode, op0, op1);
 		  goto binop2;
 		}
-	      /* Use immed_double_const to ensure that the constant is
+	      /* Use wide_int::from_shwi to ensure that the constant is
 		 truncated according to the mode of OP1, then sign extended
 		 to a HOST_WIDE_INT.  Using the constant directly can result
 		 in non-canonical RTL in a 64x32 cross compile.  */
-	      constant_part
-		= immed_double_const (TREE_INT_CST_LOW (treeop1),
-				      (HOST_WIDE_INT) 0,
-				      TYPE_MODE (TREE_TYPE (treeop0)));
+	      wc = TREE_INT_CST_LOW (treeop1);
+	      constant_part 
+		= immed_wide_int_const (wide_int::from_shwi (wc, wmode), wmode);
 	      op0 = plus_constant (mode, op0, INTVAL (constant_part));
 	      if (modifier != EXPAND_SUM && modifier != EXPAND_INITIALIZER)
 		op0 = force_operand (op0, target);
@@ -8760,10 +8767,13 @@  expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode,
 	 for unsigned bitfield expand this as XOR with a proper constant
 	 instead.  */
       if (reduce_bit_field && TYPE_UNSIGNED (type))
-	temp = expand_binop (mode, xor_optab, op0,
-			     immed_double_int_const
-			       (double_int::mask (TYPE_PRECISION (type)), mode),
-			     target, 1, OPTAB_LIB_WIDEN);
+	{
+	  wide_int mask = wide_int::mask (TYPE_PRECISION (type), false, mode);
+
+	  temp = expand_binop (mode, xor_optab, op0,
+			       immed_wide_int_const (mask, mode),
+			       target, 1, OPTAB_LIB_WIDEN);
+	}
       else
 	temp = expand_unop (mode, one_cmpl_optab, op0, target, 1);
       gcc_assert (temp);
@@ -9396,9 +9406,8 @@  expand_expr_real_1 (tree exp, rtx target, enum machine_mode tmode,
       return decl_rtl;
 
     case INTEGER_CST:
-      temp = immed_double_const (TREE_INT_CST_LOW (exp),
-				 TREE_INT_CST_HIGH (exp), mode);
-
+      temp = immed_wide_int_const (wide_int::from_tree (exp), 
+				   TYPE_MODE (TREE_TYPE (exp)));
       return temp;
 
     case VECTOR_CST:
@@ -9630,8 +9639,9 @@  expand_expr_real_1 (tree exp, rtx target, enum machine_mode tmode,
 	op0 = memory_address_addr_space (address_mode, op0, as);
 	if (!integer_zerop (TREE_OPERAND (exp, 1)))
 	  {
-	    rtx off
-	      = immed_double_int_const (mem_ref_offset (exp), address_mode);
+	    wide_int wi = wide_int::from_double_int
+	      (mem_ref_offset (exp), address_mode);
+	    rtx off = immed_wide_int_const (wi, address_mode);
 	    op0 = simplify_gen_binary (PLUS, address_mode, op0, off);
 	  }
 	op0 = memory_address_addr_space (mode, op0, as);
@@ -10510,9 +10520,10 @@  reduce_to_bit_field_precision (rtx exp, rtx target, tree type)
     }
   else if (TYPE_UNSIGNED (type))
     {
-      rtx mask = immed_double_int_const (double_int::mask (prec),
-					 GET_MODE (exp));
-      return expand_and (GET_MODE (exp), exp, mask, target);
+      enum machine_mode mode = GET_MODE (exp);
+      rtx mask = immed_wide_int_const 
+	(wide_int::mask (prec, false, mode), mode);
+      return expand_and (mode, exp, mask, target);
     }
   else
     {
@@ -11084,8 +11095,9 @@  const_vector_from_tree (tree exp)
 	RTVEC_ELT (v, i) = CONST_FIXED_FROM_FIXED_VALUE (TREE_FIXED_CST (elt),
 							 inner);
       else
-	RTVEC_ELT (v, i) = immed_double_int_const (tree_to_double_int (elt),
-						   inner);
+	RTVEC_ELT (v, i) 
+	  = immed_wide_int_const (wide_int::from_tree (elt),
+				  TYPE_MODE (TREE_TYPE (elt)));
     }
 
   return gen_rtx_CONST_VECTOR (mode, v);
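The convert_modes hunk zero-extends a negative CONST_INT to the precision of `oldmode` before widening it, replacing the old `double_int::zext` call. Single-word zero extension is just masking off everything above the low PREC bits; a sketch assuming `1 <= prec <= 64`:

```c
#include <stdint.h>

/* Keep the low PREC bits of VAL and clear everything above them,
   i.e. reinterpret VAL as a PREC-bit unsigned value.  */
static int64_t
zext_hwi (int64_t val, int prec)
{
  uint64_t mask = (prec >= 64
		   ? ~(uint64_t) 0
		   : ((uint64_t) 1 << prec) - 1);
  return (int64_t) ((uint64_t) val & mask);
}
```

The `prec >= 64` branch matters: shifting a 64-bit one by 64 is undefined in C, which is why GCC's own GET_MODE_MASK macro special-cases the full-width mode the same way.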
diff --git a/gcc/final.c b/gcc/final.c
index f6974f4..053aebc 100644
--- a/gcc/final.c
+++ b/gcc/final.c
@@ -3789,8 +3789,16 @@  output_addr_const (FILE *file, rtx x)
       output_addr_const (file, XEXP (x, 0));
       break;
 
+    case CONST_WIDE_INT:
+      /* This should be ok for a while.  */
+      gcc_assert (CONST_WIDE_INT_NUNITS (x) == 2);
+      fprintf (file, HOST_WIDE_INT_PRINT_DOUBLE_HEX,
+	       (unsigned HOST_WIDE_INT) CONST_WIDE_INT_ELT (x, 1),
+	       (unsigned HOST_WIDE_INT) CONST_WIDE_INT_ELT (x, 0));
+      break;
+
     case CONST_DOUBLE:
-      if (GET_MODE (x) == VOIDmode)
+      if (CONST_DOUBLE_AS_INT_P (x))
 	{
 	  /* We can use %d if the number is one word and positive.  */
 	  if (CONST_DOUBLE_HIGH (x))
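The output_addr_const hunk prints a two-element CONST_WIDE_INT with HOST_WIDE_INT_PRINT_DOUBLE_HEX: high word first without padding, low word zero-padded to a full word. A sketch of that format for 64-bit host words (the helper name is made up here):

```c
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Format HIGH:LOW as one hex literal: the high word unpadded, the
   low word zero-padded to 16 digits, mirroring what
   HOST_WIDE_INT_PRINT_DOUBLE_HEX expands to on a 64-bit host.  */
static void
print_double_hex (char *buf, size_t size, uint64_t high, uint64_t low)
{
  snprintf (buf, size, "0x%" PRIx64 "%016" PRIx64, high, low);
}
```

Zero-padding the low word is essential: without it, `high = 1, low = 0xf` would print as `0x1f` instead of the 128-bit value.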
diff --git a/gcc/genemit.c b/gcc/genemit.c
index 692ef52..7b1e471 100644
--- a/gcc/genemit.c
+++ b/gcc/genemit.c
@@ -204,6 +204,7 @@  gen_exp (rtx x, enum rtx_code subroutine_type, char *used)
 
     case CONST_DOUBLE:
     case CONST_FIXED:
+    case CONST_WIDE_INT:
       /* These shouldn't be written in MD files.  Instead, the appropriate
 	 routines in varasm.c should be called.  */
       gcc_unreachable ();
diff --git a/gcc/gengenrtl.c b/gcc/gengenrtl.c
index 5b5a3ca..1f93dd5 100644
--- a/gcc/gengenrtl.c
+++ b/gcc/gengenrtl.c
@@ -142,6 +142,7 @@  static int
 excluded_rtx (int idx)
 {
   return ((strcmp (defs[idx].enumname, "CONST_DOUBLE") == 0)
+	  || (strcmp (defs[idx].enumname, "CONST_WIDE_INT") == 0)
 	  || (strcmp (defs[idx].enumname, "CONST_FIXED") == 0));
 }
 
diff --git a/gcc/gengtype.c b/gcc/gengtype.c
index eede798..ca2ee25 100644
--- a/gcc/gengtype.c
+++ b/gcc/gengtype.c
@@ -5442,6 +5442,7 @@  main (int argc, char **argv)
       POS_HERE (do_scalar_typedef ("REAL_VALUE_TYPE", &pos));
       POS_HERE (do_scalar_typedef ("FIXED_VALUE_TYPE", &pos));
       POS_HERE (do_scalar_typedef ("double_int", &pos));
+      POS_HERE (do_scalar_typedef ("wide_int", &pos));
       POS_HERE (do_scalar_typedef ("uint64_t", &pos));
       POS_HERE (do_scalar_typedef ("uint8", &pos));
       POS_HERE (do_scalar_typedef ("uintptr_t", &pos));
diff --git a/gcc/genpreds.c b/gcc/genpreds.c
index 98488e3..29fafbe 100644
--- a/gcc/genpreds.c
+++ b/gcc/genpreds.c
@@ -612,7 +612,7 @@  write_one_predicate_function (struct pred_data *p)
   add_mode_tests (p);
 
   /* A normal predicate can legitimately not look at enum machine_mode
-     if it accepts only CONST_INTs and/or CONST_DOUBLEs.  */
+     if it accepts only CONST_INTs, CONST_WIDE_INTs and/or CONST_DOUBLEs.  */
   printf ("int\n%s (rtx op, enum machine_mode mode ATTRIBUTE_UNUSED)\n{\n",
 	  p->name);
   write_predicate_stmts (p->exp);
@@ -809,8 +809,11 @@  add_constraint (const char *name, const char *regclass,
   if (is_const_int || is_const_dbl)
     {
       enum rtx_code appropriate_code
+#if TARGET_SUPPORTS_WIDE_INT
+	= is_const_int ? CONST_INT : CONST_WIDE_INT;
+#else
 	= is_const_int ? CONST_INT : CONST_DOUBLE;
-
+#endif
       /* Consider relaxing this requirement in the future.  */
       if (regclass
 	  || GET_CODE (exp) != AND
@@ -1075,12 +1078,17 @@  write_tm_constrs_h (void)
 	if (needs_ival)
 	  puts ("  if (CONST_INT_P (op))\n"
 		"    ival = INTVAL (op);");
+#if TARGET_SUPPORTS_WIDE_INT
+	if (needs_lval || needs_hval)
+	  error ("cannot use lval or hval when TARGET_SUPPORTS_WIDE_INT is set");
+#else
 	if (needs_hval)
 	  puts ("  if (GET_CODE (op) == CONST_DOUBLE && mode == VOIDmode)"
 		"    hval = CONST_DOUBLE_HIGH (op);");
 	if (needs_lval)
 	  puts ("  if (GET_CODE (op) == CONST_DOUBLE && mode == VOIDmode)"
 		"    lval = CONST_DOUBLE_LOW (op);");
+#endif
 	if (needs_rval)
 	  puts ("  if (GET_CODE (op) == CONST_DOUBLE && mode != VOIDmode)"
 		"    rval = CONST_DOUBLE_REAL_VALUE (op);");
diff --git a/gcc/gensupport.c b/gcc/gensupport.c
index 9b9a03e..638e051 100644
--- a/gcc/gensupport.c
+++ b/gcc/gensupport.c
@@ -2775,7 +2775,13 @@  static const struct std_pred_table std_preds[] = {
   {"scratch_operand", false, false, {SCRATCH, REG}},
   {"immediate_operand", false, true, {UNKNOWN}},
   {"const_int_operand", false, false, {CONST_INT}},
+#if TARGET_SUPPORTS_WIDE_INT
+  {"const_wide_int_operand", false, false, {CONST_WIDE_INT}},
+  {"const_scalar_int_operand", false, false, {CONST_INT, CONST_WIDE_INT}},
+  {"const_double_operand", false, false, {CONST_DOUBLE}},
+#else
   {"const_double_operand", false, false, {CONST_INT, CONST_DOUBLE}},
+#endif
   {"nonimmediate_operand", false, false, {SUBREG, REG, MEM}},
   {"nonmemory_operand", false, true, {SUBREG, REG}},
   {"push_operand", false, false, {MEM}},
diff --git a/gcc/optabs.c b/gcc/optabs.c
index a3051ad..3b534b2 100644
--- a/gcc/optabs.c
+++ b/gcc/optabs.c
@@ -851,7 +851,8 @@  expand_subword_shift (enum machine_mode op1_mode, optab binoptab,
   if (CONSTANT_P (op1) || shift_mask >= BITS_PER_WORD)
     {
       carries = outof_input;
-      tmp = immed_double_const (BITS_PER_WORD, 0, op1_mode);
+      tmp = immed_wide_int_const (wide_int::from_shwi (BITS_PER_WORD,
+						       op1_mode), op1_mode);
       tmp = simplify_expand_binop (op1_mode, sub_optab, tmp, op1,
 				   0, true, methods);
     }
@@ -866,13 +867,14 @@  expand_subword_shift (enum machine_mode op1_mode, optab binoptab,
 			      outof_input, const1_rtx, 0, unsignedp, methods);
       if (shift_mask == BITS_PER_WORD - 1)
 	{
-	  tmp = immed_double_const (-1, -1, op1_mode);
+	  tmp = immed_wide_int_const (wide_int::minus_one (op1_mode), op1_mode);
 	  tmp = simplify_expand_binop (op1_mode, xor_optab, op1, tmp,
 				       0, true, methods);
 	}
       else
 	{
-	  tmp = immed_double_const (BITS_PER_WORD - 1, 0, op1_mode);
+	  tmp = immed_wide_int_const (wide_int::from_shwi (BITS_PER_WORD - 1,
+							   op1_mode), op1_mode);
 	  tmp = simplify_expand_binop (op1_mode, sub_optab, tmp, op1,
 				       0, true, methods);
 	}
@@ -1035,7 +1037,8 @@  expand_doubleword_shift (enum machine_mode op1_mode, optab binoptab,
      is true when the effective shift value is less than BITS_PER_WORD.
      Set SUPERWORD_OP1 to the shift count that should be used to shift
      OUTOF_INPUT into INTO_TARGET when the condition is false.  */
-  tmp = immed_double_const (BITS_PER_WORD, 0, op1_mode);
+  tmp = immed_wide_int_const (wide_int::from_shwi (BITS_PER_WORD, op1_mode),
+			      op1_mode);
   if (!CONSTANT_P (op1) && shift_mask == BITS_PER_WORD - 1)
     {
       /* Set CMP1 to OP1 & BITS_PER_WORD.  The result is zero iff OP1
@@ -2885,7 +2888,7 @@  expand_absneg_bit (enum rtx_code code, enum machine_mode mode,
   const struct real_format *fmt;
   int bitpos, word, nwords, i;
   enum machine_mode imode;
-  double_int mask;
+  wide_int mask;
   rtx temp, insns;
 
   /* The format has to have a simple sign bit.  */
@@ -2921,7 +2924,7 @@  expand_absneg_bit (enum rtx_code code, enum machine_mode mode,
       nwords = (GET_MODE_BITSIZE (mode) + BITS_PER_WORD - 1) / BITS_PER_WORD;
     }
 
-  mask = double_int_zero.set_bit (bitpos);
+  mask = wide_int::set_bit_in_zero (bitpos, imode);
   if (code == ABS)
     mask = ~mask;
 
@@ -2943,7 +2946,7 @@  expand_absneg_bit (enum rtx_code code, enum machine_mode mode,
 	    {
 	      temp = expand_binop (imode, code == ABS ? and_optab : xor_optab,
 				   op0_piece,
-				   immed_double_int_const (mask, imode),
+				   immed_wide_int_const (mask, imode),
 				   targ_piece, 1, OPTAB_LIB_WIDEN);
 	      if (temp != targ_piece)
 		emit_move_insn (targ_piece, temp);
@@ -2961,7 +2964,7 @@  expand_absneg_bit (enum rtx_code code, enum machine_mode mode,
     {
       temp = expand_binop (imode, code == ABS ? and_optab : xor_optab,
 			   gen_lowpart (imode, op0),
-			   immed_double_int_const (mask, imode),
+			   immed_wide_int_const (mask, imode),
 		           gen_lowpart (imode, target), 1, OPTAB_LIB_WIDEN);
       target = lowpart_subreg_maybe_copy (mode, temp, imode);
 
@@ -3560,7 +3563,7 @@  expand_copysign_absneg (enum machine_mode mode, rtx op0, rtx op1, rtx target,
     }
   else
     {
-      double_int mask;
+      wide_int mask;
 
       if (GET_MODE_SIZE (mode) <= UNITS_PER_WORD)
 	{
@@ -3582,10 +3585,9 @@  expand_copysign_absneg (enum machine_mode mode, rtx op0, rtx op1, rtx target,
 	  op1 = operand_subword_force (op1, word, mode);
 	}
 
-      mask = double_int_zero.set_bit (bitpos);
-
+      mask = wide_int::set_bit_in_zero (bitpos, imode);
       sign = expand_binop (imode, and_optab, op1,
-			   immed_double_int_const (mask, imode),
+			   immed_wide_int_const (mask, imode),
 			   NULL_RTX, 1, OPTAB_LIB_WIDEN);
     }
 
@@ -3629,7 +3631,7 @@  expand_copysign_bit (enum machine_mode mode, rtx op0, rtx op1, rtx target,
 		     int bitpos, bool op0_is_abs)
 {
   enum machine_mode imode;
-  double_int mask;
+  wide_int mask, nmask;
   int word, nwords, i;
   rtx temp, insns;
 
@@ -3653,7 +3655,7 @@  expand_copysign_bit (enum machine_mode mode, rtx op0, rtx op1, rtx target,
       nwords = (GET_MODE_BITSIZE (mode) + BITS_PER_WORD - 1) / BITS_PER_WORD;
     }
 
-  mask = double_int_zero.set_bit (bitpos);
+  mask = wide_int::set_bit_in_zero (bitpos, imode);
 
   if (target == 0
       || target == op0
@@ -3673,14 +3675,16 @@  expand_copysign_bit (enum machine_mode mode, rtx op0, rtx op1, rtx target,
 	  if (i == word)
 	    {
 	      if (!op0_is_abs)
-		op0_piece
-		  = expand_binop (imode, and_optab, op0_piece,
-				  immed_double_int_const (~mask, imode),
-				  NULL_RTX, 1, OPTAB_LIB_WIDEN);
-
+		{
+		  nmask = ~mask;
+  		  op0_piece
+		    = expand_binop (imode, and_optab, op0_piece,
+				    immed_wide_int_const (nmask, imode),
+				    NULL_RTX, 1, OPTAB_LIB_WIDEN);
+		}
 	      op1 = expand_binop (imode, and_optab,
 				  operand_subword_force (op1, i, mode),
-				  immed_double_int_const (mask, imode),
+				  immed_wide_int_const (mask, imode),
 				  NULL_RTX, 1, OPTAB_LIB_WIDEN);
 
 	      temp = expand_binop (imode, ior_optab, op0_piece, op1,
@@ -3700,15 +3704,17 @@  expand_copysign_bit (enum machine_mode mode, rtx op0, rtx op1, rtx target,
   else
     {
       op1 = expand_binop (imode, and_optab, gen_lowpart (imode, op1),
-		          immed_double_int_const (mask, imode),
+		          immed_wide_int_const (mask, imode),
 		          NULL_RTX, 1, OPTAB_LIB_WIDEN);
 
       op0 = gen_lowpart (imode, op0);
       if (!op0_is_abs)
-	op0 = expand_binop (imode, and_optab, op0,
-			    immed_double_int_const (~mask, imode),
-			    NULL_RTX, 1, OPTAB_LIB_WIDEN);
-
+	{
+	  nmask = ~mask;
+	  op0 = expand_binop (imode, and_optab, op0,
+			      immed_wide_int_const (nmask, imode),
+			      NULL_RTX, 1, OPTAB_LIB_WIDEN);
+	}
       temp = expand_binop (imode, ior_optab, op0, op1,
 			   gen_lowpart (imode, target), 1, OPTAB_LIB_WIDEN);
       target = lowpart_subreg_maybe_copy (mode, temp, imode);
diff --git a/gcc/postreload.c b/gcc/postreload.c
index 33462e4..b899fe1 100644
--- a/gcc/postreload.c
+++ b/gcc/postreload.c
@@ -295,27 +295,25 @@  reload_cse_simplify_set (rtx set, rtx insn)
 #ifdef LOAD_EXTEND_OP
 	  if (extend_op != UNKNOWN)
 	    {
-	      HOST_WIDE_INT this_val;
+	      wide_int result;
 
-	      /* ??? I'm lazy and don't wish to handle CONST_DOUBLE.  Other
-		 constants, such as SYMBOL_REF, cannot be extended.  */
-	      if (!CONST_INT_P (this_rtx))
+	      if (!CONST_SCALAR_INT_P (this_rtx))
 		continue;
 
-	      this_val = INTVAL (this_rtx);
 	      switch (extend_op)
 		{
 		case ZERO_EXTEND:
-		  this_val &= GET_MODE_MASK (GET_MODE (src));
+		  result = (wide_int::from_rtx (this_rtx, GET_MODE (src))
+			    .zext (word_mode));
 		  break;
 		case SIGN_EXTEND:
-		  /* ??? In theory we're already extended.  */
-		  if (this_val == trunc_int_for_mode (this_val, GET_MODE (src)))
-		    break;
+		  result = (wide_int::from_rtx (this_rtx, GET_MODE (src))
+			    .sext (word_mode));
+		  break;
 		default:
 		  gcc_unreachable ();
 		}
-	      this_rtx = GEN_INT (this_val);
+	      this_rtx = immed_wide_int_const (result, GET_MODE (src));
 	    }
 #endif
 	  this_cost = set_src_cost (this_rtx, speed);
diff --git a/gcc/print-rtl.c b/gcc/print-rtl.c
index d2bda9e..3620bd6 100644
--- a/gcc/print-rtl.c
+++ b/gcc/print-rtl.c
@@ -612,6 +612,12 @@  print_rtx (const_rtx in_rtx)
 	  fprintf (outfile, " [%s]", s);
 	}
       break;
+
+    case CONST_WIDE_INT:
+      if (! flag_simple)
+	fprintf (outfile, " ");
+      hwivec_output_hex (outfile, CONST_WIDE_INT_VEC (in_rtx));
+      break;
 #endif
 
     case CODE_LABEL:
diff --git a/gcc/read-rtl.c b/gcc/read-rtl.c
index cd58b1f..a73a41b 100644
--- a/gcc/read-rtl.c
+++ b/gcc/read-rtl.c
@@ -806,6 +806,29 @@  validate_const_int (const char *string)
     fatal_with_file_and_line ("invalid decimal constant \"%s\"\n", string);
 }
 
+static void
+validate_const_wide_int (const char *string)
+{
+  const char *cp;
+  int valid = 1;
+
+  cp = string;
+  while (*cp && ISSPACE (*cp))
+    cp++;
+  /* Skip the leading 0x.  */
+  if (cp[0] == '0' && cp[1] == 'x')
+    cp += 2;
+  else
+    valid = 0;
+  if (*cp == 0)
+    valid = 0;
+  for (; *cp; cp++)
+    if (! ISXDIGIT (*cp))
+      valid = 0;
+  if (!valid)
+    fatal_with_file_and_line ("invalid hex constant \"%s\"\n", string);
+}
+
 /* Record that PTR uses iterator ITERATOR.  */
 
 static void
@@ -1319,6 +1342,56 @@  read_rtx_code (const char *code_name)
 	gcc_unreachable ();
       }
 
+  if (CONST_WIDE_INT_P (return_rtx))
+    {
+      read_name (&name);
+      validate_const_wide_int (name.string);
+      {
+	hwivec hwiv;
+	const char *s = name.string;
+	int len;
+	int index = 0;
+	int gs = HOST_BITS_PER_WIDE_INT / 4;
+	int pos;
+	char *buf = XALLOCAVEC (char, gs + 1);
+	unsigned HOST_WIDE_INT wi;
+	int wlen;
+
+	/* Skip the leading spaces.  */
+	while (*s && ISSPACE (*s))
+	  s++;
+
+	/* Skip the leading 0x.  */
+	gcc_assert (s[0] == '0');
+	gcc_assert (s[1] == 'x');
+	s += 2;
+
+	len = strlen (s);
+	pos = len - gs;
+	wlen = (len + gs - 1) / gs;	/* Number of words needed.  */
+
+	return_rtx = const_wide_int_alloc (wlen);
+
+	hwiv = CONST_WIDE_INT_VEC (return_rtx);
+	while (pos > 0)
+	  {
+#if HOST_BITS_PER_WIDE_INT == 64
+	    sscanf (s + pos, "%16" HOST_WIDE_INT_PRINT "x", &wi);
+#else
+	    sscanf (s + pos, "%8" HOST_WIDE_INT_PRINT "x", &wi);
+#endif
+	    XHWIVEC_ELT (hwiv, index++) = wi;
+	    pos -= gs;
+	  }
+	strncpy (buf, s, gs + pos);
+	buf[gs + pos] = 0;
+	sscanf (buf, "%" HOST_WIDE_INT_PRINT "x", &wi);
+	XHWIVEC_ELT (hwiv, index++) = wi;
+	/* TODO: After reading, do we want to canonicalize with:
+	   value = lookup_const_wide_int (value); ? */
+      }
+    }
+
   c = read_skip_spaces ();
   /* Syntactic sugar for AND and IOR, allowing Lisp-like
      arbitrary number of arguments for them.  */
diff --git a/gcc/recog.c b/gcc/recog.c
index ed359f6..05e08e9 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -1141,7 +1141,7 @@  immediate_operand (rtx op, enum machine_mode mode)
 					    : mode, op));
 }
 
-/* Returns 1 if OP is an operand that is a CONST_INT.  */
+/* Returns 1 if OP is an operand that is a CONST_INT of mode MODE.  */
 
 int
 const_int_operand (rtx op, enum machine_mode mode)
@@ -1156,8 +1156,64 @@  const_int_operand (rtx op, enum machine_mode mode)
   return 1;
 }
 
+#if TARGET_SUPPORTS_WIDE_INT
+/* Returns 1 if OP is an operand that is a CONST_INT or CONST_WIDE_INT
+   of mode MODE.  */
+int
+const_scalar_int_operand (rtx op, enum machine_mode mode)
+{
+  if (!CONST_SCALAR_INT_P (op))
+    return 0;
+
+  if (CONST_INT_P (op))
+    return const_int_operand (op, mode);
+
+  if (mode != VOIDmode)
+    {
+      int prec = GET_MODE_PRECISION (mode);
+      int bitsize = GET_MODE_BITSIZE (mode);
+      
+      if (CONST_WIDE_INT_NUNITS (op) * HOST_BITS_PER_WIDE_INT > bitsize)
+	return 0;
+      
+      if (prec == bitsize)
+	return 1;
+      else
+	{
+	  /* Multiword partial int.  */
+	  HOST_WIDE_INT x 
+	    = CONST_WIDE_INT_ELT (op, CONST_WIDE_INT_NUNITS (op) - 1);
+	  return (wide_int::sext (x, prec & (HOST_BITS_PER_WIDE_INT - 1))
+		  == x);
+	}
+    }
+  return 1;
+}
+
+/* Returns 1 if OP is an operand that is a CONST_WIDE_INT of mode
+   MODE.  This most likely is not as useful as
+   const_scalar_int_operand, but is here for consistency.  */
+int
+const_wide_int_operand (rtx op, enum machine_mode mode)
+{
+  if (!CONST_WIDE_INT_P (op))
+    return 0;
+
+  return const_scalar_int_operand (op, mode);
+}
+
 /* Returns 1 if OP is an operand that is a constant integer or constant
-   floating-point number.  */
+   floating-point number of mode MODE.  */
+
+int
+const_double_operand (rtx op, enum machine_mode mode)
+{
+  return (GET_CODE (op) == CONST_DOUBLE
+	  && (GET_MODE (op) == mode || mode == VOIDmode));
+}
+#else
+/* Returns 1 if OP is an operand that is a constant integer or constant
+   floating-point number of mode MODE.  */
 
 int
 const_double_operand (rtx op, enum machine_mode mode)
@@ -1173,8 +1229,9 @@  const_double_operand (rtx op, enum machine_mode mode)
 	  && (mode == VOIDmode || GET_MODE (op) == mode
 	      || GET_MODE (op) == VOIDmode));
 }
-
-/* Return 1 if OP is a general operand that is not an immediate operand.  */
+#endif
+/* Return 1 if OP is a general operand that is not an immediate
+   operand of mode MODE.  */
 
 int
 nonimmediate_operand (rtx op, enum machine_mode mode)
@@ -1182,7 +1239,8 @@  nonimmediate_operand (rtx op, enum machine_mode mode)
   return (general_operand (op, mode) && ! CONSTANT_P (op));
 }
 
-/* Return 1 if OP is a register reference or immediate value of mode MODE.  */
+/* Return 1 if OP is a register reference or immediate value of mode
+   MODE.  */
 
 int
 nonmemory_operand (rtx op, enum machine_mode mode)
diff --git a/gcc/rtl.c b/gcc/rtl.c
index b2d88f7..074e425 100644
--- a/gcc/rtl.c
+++ b/gcc/rtl.c
@@ -109,7 +109,7 @@  const enum rtx_class rtx_class[NUM_RTX_CODE] = {
 const unsigned char rtx_code_size[NUM_RTX_CODE] = {
 #define DEF_RTL_EXPR(ENUM, NAME, FORMAT, CLASS)				\
   (((ENUM) == CONST_INT || (ENUM) == CONST_DOUBLE			\
-    || (ENUM) == CONST_FIXED)						\
+    || (ENUM) == CONST_FIXED || (ENUM) == CONST_WIDE_INT)		\
    ? RTX_HDR_SIZE + (sizeof FORMAT - 1) * sizeof (HOST_WIDE_INT)	\
    : RTX_HDR_SIZE + (sizeof FORMAT - 1) * sizeof (rtunion)),
 
@@ -181,18 +181,24 @@  shallow_copy_rtvec (rtvec vec)
 unsigned int
 rtx_size (const_rtx x)
 {
+  if (CONST_WIDE_INT_P (x))
+    return (RTX_HDR_SIZE
+	    + sizeof (struct hwivec_def)
+	    + ((CONST_WIDE_INT_NUNITS (x) - 1)
+	       * sizeof (HOST_WIDE_INT)));
   if (GET_CODE (x) == SYMBOL_REF && SYMBOL_REF_HAS_BLOCK_INFO_P (x))
     return RTX_HDR_SIZE + sizeof (struct block_symbol);
   return RTX_CODE_SIZE (GET_CODE (x));
 }
 
-/* Allocate an rtx of code CODE.  The CODE is stored in the rtx;
-   all the rest is initialized to zero.  */
+/* Allocate an rtx of code CODE with EXTRA bytes in it.  The CODE is
+   stored in the rtx; all the rest is initialized to zero.  */
 
 rtx
-rtx_alloc_stat (RTX_CODE code MEM_STAT_DECL)
+rtx_alloc_stat_v (RTX_CODE code MEM_STAT_DECL, int extra)
 {
-  rtx rt = ggc_alloc_rtx_def_stat (RTX_CODE_SIZE (code) PASS_MEM_STAT);
+  rtx rt = ggc_alloc_rtx_def_stat (RTX_CODE_SIZE (code) + extra
+				   PASS_MEM_STAT);
 
   /* We want to clear everything up to the FLD array.  Normally, this
      is one int, but we don't want to assume that and it isn't very
@@ -210,6 +216,29 @@  rtx_alloc_stat (RTX_CODE code MEM_STAT_DECL)
   return rt;
 }
 
+/* Allocate an rtx of code CODE.  The CODE is stored in the rtx;
+   all the rest is initialized to zero.  */
+
+rtx
+rtx_alloc_stat (RTX_CODE code MEM_STAT_DECL)
+{
+  return rtx_alloc_stat_v (code PASS_MEM_STAT, 0);
+}
+
+/* Write the wide constant OP0 to OUTFILE.  */
+
+void
+hwivec_output_hex (FILE *outfile, const_hwivec op0)
+{
+  int i = HWI_GET_NUM_ELEM (op0);
+  gcc_assert (i > 0);
+  if (XHWIVEC_ELT (op0, i-1) == 0)
+    fprintf (outfile, "0x");
+  fprintf (outfile, HOST_WIDE_INT_PRINT_HEX, XHWIVEC_ELT (op0, --i));
+  while (--i >= 0)
+    fprintf (outfile, HOST_WIDE_INT_PRINT_PADDED_HEX, XHWIVEC_ELT (op0, i));
+}
+
 
 /* Return true if ORIG is a sharable CONST.  */
 
@@ -428,7 +457,6 @@  rtx_equal_p_cb (const_rtx x, const_rtx y, rtx_equal_p_callback_function cb)
 	  if (XWINT (x, i) != XWINT (y, i))
 	    return 0;
 	  break;
-
 	case 'n':
 	case 'i':
 	  if (XINT (x, i) != XINT (y, i))
@@ -646,6 +674,10 @@  iterative_hash_rtx (const_rtx x, hashval_t hash)
       return iterative_hash_object (i, hash);
     case CONST_INT:
       return iterative_hash_object (INTVAL (x), hash);
+    case CONST_WIDE_INT:
+      for (i = 0; i < CONST_WIDE_INT_NUNITS (x); i++)
+	hash = iterative_hash_object (CONST_WIDE_INT_ELT (x, i), hash);
+      return hash;
     case SYMBOL_REF:
       if (XSTR (x, 0))
 	return iterative_hash (XSTR (x, 0), strlen (XSTR (x, 0)) + 1,
@@ -811,6 +843,16 @@  rtl_check_failed_block_symbol (const char *file, int line, const char *func)
 
 /* XXX Maybe print the vector?  */
 void
+hwivec_check_failed_bounds (const_hwivec r, int n, const char *file, int line,
+			    const char *func)
+{
+  internal_error
+    ("RTL check: access of hwi elt %d of vector with last elt %d in %s, at %s:%d",
+     n, HWI_GET_NUM_ELEM (r) - 1, func, trim_filename (file), line);
+}
+
+/* XXX Maybe print the vector?  */
+void
 rtvec_check_failed_bounds (const_rtvec r, int n, const char *file, int line,
 			   const char *func)
 {
diff --git a/gcc/rtl.def b/gcc/rtl.def
index f8aea32..4c5eb00 100644
--- a/gcc/rtl.def
+++ b/gcc/rtl.def
@@ -342,6 +342,9 @@  DEF_RTL_EXPR(TRAP_IF, "trap_if", "ee", RTX_EXTRA)
 /* numeric integer constant */
 DEF_RTL_EXPR(CONST_INT, "const_int", "w", RTX_CONST_OBJ)
 
+/* numeric integer constant of arbitrary precision */
+DEF_RTL_EXPR(CONST_WIDE_INT, "const_wide_int", "", RTX_CONST_OBJ)
+
 /* fixed-point constant */
 DEF_RTL_EXPR(CONST_FIXED, "const_fixed", "www", RTX_CONST_OBJ)
 
diff --git a/gcc/rtl.h b/gcc/rtl.h
index eea80ef..6479513 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -28,6 +28,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "fixed-value.h"
 #include "alias.h"
 #include "hashtab.h"
+#include "wide-int.h"
 #include "flags.h"
 
 /* Value used by some passes to "recognize" noop moves as valid
@@ -249,6 +250,14 @@  struct GTY(()) object_block {
   vec<rtx, va_gc> *anchors;
 };
 
+struct GTY((variable_size)) hwivec_def {
+  int num_elem;		/* number of elements */
+  HOST_WIDE_INT elem[1];
+};
+
+#define HWI_GET_NUM_ELEM(HWIVEC)	((HWIVEC)->num_elem)
+#define HWI_PUT_NUM_ELEM(HWIVEC, NUM)	((HWIVEC)->num_elem = (NUM))
+
 /* RTL expression ("rtx").  */
 
 struct GTY((chain_next ("RTX_NEXT (&%h)"),
@@ -343,6 +352,7 @@  struct GTY((chain_next ("RTX_NEXT (&%h)"),
     struct block_symbol block_sym;
     struct real_value rv;
     struct fixed_value fv;
+    struct hwivec_def hwiv;
   } GTY ((special ("rtx_def"), desc ("GET_CODE (&%0)"))) u;
 };
 
@@ -382,13 +392,13 @@  struct GTY((chain_next ("RTX_NEXT (&%h)"),
    for a variable number of things.  The principle use is inside
    PARALLEL expressions.  */
 
+#define NULL_RTVEC (rtvec) 0
+
 struct GTY((variable_size)) rtvec_def {
   int num_elem;		/* number of elements */
   rtx GTY ((length ("%h.num_elem"))) elem[1];
 };
 
-#define NULL_RTVEC (rtvec) 0
-
 #define GET_NUM_ELEM(RTVEC)		((RTVEC)->num_elem)
 #define PUT_NUM_ELEM(RTVEC, NUM)	((RTVEC)->num_elem = (NUM))
 
@@ -398,12 +408,38 @@  struct GTY((variable_size)) rtvec_def {
 /* Predicate yielding nonzero iff X is an rtx for a memory location.  */
 #define MEM_P(X) (GET_CODE (X) == MEM)
 
+#if TARGET_SUPPORTS_WIDE_INT
+
+/* Match CONST_*s that can represent compile-time constant integers.  */
+#define CASE_CONST_SCALAR_INT \
+   case CONST_INT: \
+   case CONST_WIDE_INT
+
+/* Match CONST_*s for which pointer equality corresponds to value 
+   equality.  */
+#define CASE_CONST_UNIQUE \
+   case CONST_INT: \
+   case CONST_WIDE_INT: \
+   case CONST_DOUBLE: \
+   case CONST_FIXED
+
+/* Match all CONST_* rtxes.  */
+#define CASE_CONST_ANY \
+   case CONST_INT: \
+   case CONST_WIDE_INT: \
+   case CONST_DOUBLE: \
+   case CONST_FIXED: \
+   case CONST_VECTOR
+
+#else
+
 /* Match CONST_*s that can represent compile-time constant integers.  */
 #define CASE_CONST_SCALAR_INT \
    case CONST_INT: \
    case CONST_DOUBLE
 
-/* Match CONST_*s for which pointer equality corresponds to value equality.  */
+/* Match CONST_*s for which pointer equality corresponds to value 
+   equality.  */
 #define CASE_CONST_UNIQUE \
    case CONST_INT: \
    case CONST_DOUBLE: \
@@ -415,10 +451,17 @@  struct GTY((variable_size)) rtvec_def {
    case CONST_DOUBLE: \
    case CONST_FIXED: \
    case CONST_VECTOR
+#endif
+
+
+
 
 /* Predicate yielding nonzero iff X is an rtx for a constant integer.  */
 #define CONST_INT_P(X) (GET_CODE (X) == CONST_INT)
 
+/* Predicate yielding nonzero iff X is an rtx for a constant wide integer.  */
+#define CONST_WIDE_INT_P(X) (GET_CODE (X) == CONST_WIDE_INT)
+
 /* Predicate yielding nonzero iff X is an rtx for a constant fixed-point.  */
 #define CONST_FIXED_P(X) (GET_CODE (X) == CONST_FIXED)
 
@@ -431,8 +474,13 @@  struct GTY((variable_size)) rtvec_def {
   (GET_CODE (X) == CONST_DOUBLE && GET_MODE (X) == VOIDmode)
 
 /* Predicate yielding true iff X is an rtx for a integer const.  */
+#if TARGET_SUPPORTS_WIDE_INT
+#define CONST_SCALAR_INT_P(X) \
+  (CONST_INT_P (X) || CONST_WIDE_INT_P (X))
+#else
 #define CONST_SCALAR_INT_P(X) \
   (CONST_INT_P (X) || CONST_DOUBLE_AS_INT_P (X))
+#endif
 
 /* Predicate yielding true iff X is an rtx for a double-int.  */
 #define CONST_DOUBLE_AS_FLOAT_P(X) \
@@ -593,6 +641,13 @@  struct GTY((variable_size)) rtvec_def {
 			       __FUNCTION__);				\
      &_rtx->u.hwint[_n]; }))
 
+#define XHWIVEC_ELT(HWIVEC, I) __extension__				\
+(*({ __typeof (HWIVEC) const _hwivec = (HWIVEC); const int _i = (I);	\
+     if (_i < 0 || _i >= HWI_GET_NUM_ELEM (_hwivec))			\
+       hwivec_check_failed_bounds (_hwivec, _i, __FILE__, __LINE__,	\
+				  __FUNCTION__);			\
+     &_hwivec->elem[_i]; }))
+
 #define XCWINT(RTX, N, C) __extension__					\
 (*({ __typeof (RTX) const _rtx = (RTX);					\
      if (GET_CODE (_rtx) != (C))					\
@@ -629,6 +684,11 @@  struct GTY((variable_size)) rtvec_def {
 				    __FUNCTION__);			\
    &_symbol->u.block_sym; })
 
+#define HWIVEC_CHECK(RTX,C) __extension__				\
+({ __typeof (RTX) const _symbol = (RTX);				\
+   RTL_CHECKC1 (_symbol, 0, C);						\
+   &_symbol->u.hwiv; })
+
 extern void rtl_check_failed_bounds (const_rtx, int, const char *, int,
 				     const char *)
     ATTRIBUTE_NORETURN;
@@ -649,6 +709,9 @@  extern void rtl_check_failed_code_mode (const_rtx, enum rtx_code, enum machine_m
     ATTRIBUTE_NORETURN;
 extern void rtl_check_failed_block_symbol (const char *, int, const char *)
     ATTRIBUTE_NORETURN;
+extern void hwivec_check_failed_bounds (const_hwivec, int, const char *, int,
+					const char *)
+    ATTRIBUTE_NORETURN;
 extern void rtvec_check_failed_bounds (const_rtvec, int, const char *, int,
 				       const char *)
     ATTRIBUTE_NORETURN;
@@ -661,12 +724,14 @@  extern void rtvec_check_failed_bounds (const_rtvec, int, const char *, int,
 #define RTL_CHECKC2(RTX, N, C1, C2) ((RTX)->u.fld[N])
 #define RTVEC_ELT(RTVEC, I)	    ((RTVEC)->elem[I])
 #define XWINT(RTX, N)		    ((RTX)->u.hwint[N])
+#define XHWIVEC_ELT(HWIVEC, I)	    ((HWIVEC)->elem[I])
 #define XCWINT(RTX, N, C)	    ((RTX)->u.hwint[N])
 #define XCMWINT(RTX, N, C, M)	    ((RTX)->u.hwint[N])
 #define XCNMWINT(RTX, N, C, M)	    ((RTX)->u.hwint[N])
 #define XCNMPRV(RTX, C, M)	    (&(RTX)->u.rv)
 #define XCNMPFV(RTX, C, M)	    (&(RTX)->u.fv)
 #define BLOCK_SYMBOL_CHECK(RTX)	    (&(RTX)->u.block_sym)
+#define HWIVEC_CHECK(RTX,C)	    (&(RTX)->u.hwiv)
 
 #endif
 
@@ -809,8 +874,8 @@  extern void rtl_check_failed_flag (const char *, const_rtx, const char *,
 #define XCCFI(RTX, N, C)      (RTL_CHECKC1 (RTX, N, C).rt_cfi)
 #define XCCSELIB(RTX, N, C)   (RTL_CHECKC1 (RTX, N, C).rt_cselib)
 
-#define XCVECEXP(RTX, N, M, C)	RTVEC_ELT (XCVEC (RTX, N, C), M)
-#define XCVECLEN(RTX, N, C)	GET_NUM_ELEM (XCVEC (RTX, N, C))
+#define XCVECEXP(RTX, N, M, C) RTVEC_ELT (XCVEC (RTX, N, C), M)
+#define XCVECLEN(RTX, N, C)    GET_NUM_ELEM (XCVEC (RTX, N, C))
 
 #define XC2EXP(RTX, N, C1, C2)      (RTL_CHECKC2 (RTX, N, C1, C2).rt_rtx)
 
@@ -1151,9 +1216,19 @@  rhs_regno (const_rtx x)
 #define INTVAL(RTX) XCWINT(RTX, 0, CONST_INT)
 #define UINTVAL(RTX) ((unsigned HOST_WIDE_INT) INTVAL (RTX))
 
+/* For a CONST_WIDE_INT, CONST_WIDE_INT_NUNITS is the number of
+   elements actually needed to represent the constant.
+   CONST_WIDE_INT_ELT gets one of the elements.  0 is the least
+   significant HOST_WIDE_INT.  */
+#define CONST_WIDE_INT_VEC(RTX) HWIVEC_CHECK (RTX, CONST_WIDE_INT)
+#define CONST_WIDE_INT_NUNITS(RTX) HWI_GET_NUM_ELEM (CONST_WIDE_INT_VEC (RTX))
+#define CONST_WIDE_INT_ELT(RTX, N) XHWIVEC_ELT (CONST_WIDE_INT_VEC (RTX), N) 
+
 /* For a CONST_DOUBLE:
+#if TARGET_SUPPORTS_WIDE_INT == 0
    For a VOIDmode, there are two integers CONST_DOUBLE_LOW is the
      low-order word and ..._HIGH the high-order.
+#endif
    For a float, there is a REAL_VALUE_TYPE structure, and
      CONST_DOUBLE_REAL_VALUE(r) is a pointer to it.  */
 #define CONST_DOUBLE_LOW(r) XCMWINT (r, 0, CONST_DOUBLE, VOIDmode)
@@ -1308,6 +1383,34 @@  struct address_info {
   bool autoinc_p;
 };
 
+#ifndef GENERATOR_FILE
+/* Overload of to_shwi2 function in wide-int.h for rtl.  This cannot be
+   in wide-int.h because of circular includes.  */
+
+inline const HOST_WIDE_INT *
+wide_int::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, int *l, rtx rcst)
+{
+  switch (GET_CODE (rcst))
+    {
+    case CONST_INT:
+      *l = 1;
+      return &INTVAL (rcst);
+      
+    case CONST_WIDE_INT:
+      *l = CONST_WIDE_INT_NUNITS (rcst);
+      return &CONST_WIDE_INT_ELT (rcst, 0);
+      
+    case CONST_DOUBLE:
+      *l = 2;
+      return &CONST_DOUBLE_LOW (rcst);
+      
+    default:
+      gcc_unreachable ();
+    }
+}
+#endif
+
+
 extern void init_rtlanal (void);
 extern int rtx_cost (rtx, enum rtx_code, int, bool);
 extern int address_cost (rtx, enum machine_mode, addr_space_t, bool);
@@ -1758,6 +1861,12 @@  extern rtx plus_constant (enum machine_mode, rtx, HOST_WIDE_INT);
 /* In rtl.c */
 extern rtx rtx_alloc_stat (RTX_CODE MEM_STAT_DECL);
 #define rtx_alloc(c) rtx_alloc_stat (c MEM_STAT_INFO)
+extern rtx rtx_alloc_stat_v (RTX_CODE MEM_STAT_DECL, int);
+#define rtx_alloc_v(c, SZ) rtx_alloc_stat_v (c MEM_STAT_INFO, SZ)
+#define const_wide_int_alloc(NWORDS)				\
+  rtx_alloc_v (CONST_WIDE_INT,					\
+	       (sizeof (struct hwivec_def)			\
+		+ ((NWORDS) - 1) * sizeof (HOST_WIDE_INT)))
 
 extern rtvec rtvec_alloc (int);
 extern rtvec shallow_copy_rtvec (rtvec);
@@ -1814,10 +1923,17 @@  extern void start_sequence (void);
 extern void push_to_sequence (rtx);
 extern void push_to_sequence2 (rtx, rtx);
 extern void end_sequence (void);
+#if TARGET_SUPPORTS_WIDE_INT == 0
 extern double_int rtx_to_double_int (const_rtx);
-extern rtx immed_double_int_const (double_int, enum machine_mode);
+#endif
+extern void hwivec_output_hex (FILE *, const_hwivec);
+#ifndef GENERATOR_FILE
+extern rtx immed_wide_int_const (const wide_int &cst, enum machine_mode mode);
+#endif
+#if TARGET_SUPPORTS_WIDE_INT == 0
 extern rtx immed_double_const (HOST_WIDE_INT, HOST_WIDE_INT,
 			       enum machine_mode);
+#endif
 
 /* In loop-iv.c  */
 
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index b198685..0fe1d0e 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -3091,6 +3091,8 @@  commutative_operand_precedence (rtx op)
   /* Constants always come the second operand.  Prefer "nice" constants.  */
   if (code == CONST_INT)
     return -8;
+  if (code == CONST_WIDE_INT)
+    return -8;
   if (code == CONST_DOUBLE)
     return -7;
   if (code == CONST_FIXED)
@@ -3103,6 +3105,8 @@  commutative_operand_precedence (rtx op)
     case RTX_CONST_OBJ:
       if (code == CONST_INT)
         return -6;
+      if (code == CONST_WIDE_INT)
+        return -6;
       if (code == CONST_DOUBLE)
         return -5;
       if (code == CONST_FIXED)
@@ -5289,7 +5293,10 @@  get_address_mode (rtx mem)
 /* Split up a CONST_DOUBLE or integer constant rtx
    into two rtx's for single words,
    storing in *FIRST the word that comes first in memory in the target
-   and in *SECOND the other.  */
+   and in *SECOND the other. 
+
+   TODO: This function needs to be rewritten to work on any size
+   integer.  */
 
 void
 split_double (rtx value, rtx *first, rtx *second)
@@ -5366,6 +5373,22 @@  split_double (rtx value, rtx *first, rtx *second)
 	    }
 	}
     }
+  else if (GET_CODE (value) == CONST_WIDE_INT)
+    {
+      /* All of this is scary code and needs to be converted to
+	 properly work with any size integer.  */
+      gcc_assert (CONST_WIDE_INT_NUNITS (value) == 2);
+      if (WORDS_BIG_ENDIAN)
+	{
+	  *first = GEN_INT (CONST_WIDE_INT_ELT (value, 1));
+	  *second = GEN_INT (CONST_WIDE_INT_ELT (value, 0));
+	}
+      else
+	{
+	  *first = GEN_INT (CONST_WIDE_INT_ELT (value, 0));
+	  *second = GEN_INT (CONST_WIDE_INT_ELT (value, 1));
+	}
+    }
   else if (!CONST_DOUBLE_P (value))
     {
       if (WORDS_BIG_ENDIAN)
diff --git a/gcc/sched-vis.c b/gcc/sched-vis.c
index 763230c..979aab1 100644
--- a/gcc/sched-vis.c
+++ b/gcc/sched-vis.c
@@ -432,6 +432,23 @@  print_value (pretty_printer *pp, const_rtx x, int verbose)
       pp_scalar (pp, HOST_WIDE_INT_PRINT_HEX,
 		 (unsigned HOST_WIDE_INT) INTVAL (x));
       break;
+
+    case CONST_WIDE_INT:
+      {
+	const char *sep = "<";
+	int i;
+	for (i = CONST_WIDE_INT_NUNITS (x) - 1; i >= 0; i--)
+	  {
+	    pp_string (pp, sep);
+	    sep = ",";
+	    sprintf (tmp, HOST_WIDE_INT_PRINT_HEX,
+		     (unsigned HOST_WIDE_INT) CONST_WIDE_INT_ELT (x, i));
+	    pp_string (pp, tmp);
+	  }
+        pp_greater (pp);
+      }
+      break;
+
     case CONST_DOUBLE:
       if (FLOAT_MODE_P (GET_MODE (x)))
 	{
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index 47e7695..828cac3 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -1141,10 +1141,10 @@  lhs_and_rhs_separable_p (rtx lhs, rtx rhs)
   if (lhs == NULL || rhs == NULL)
     return false;
 
-  /* Do not schedule CONST, CONST_INT and CONST_DOUBLE etc as rhs: no point
-     to use reg, if const can be used.  Moreover, scheduling const as rhs may
-     lead to mode mismatch cause consts don't have modes but they could be
-     merged from branches where the same const used in different modes.  */
+  /* Do not schedule constants as rhs: no point to use reg, if const
+     can be used.  Moreover, scheduling const as rhs may lead to mode
+     mismatch cause consts don't have modes but they could be merged
+     from branches where the same const used in different modes.  */
   if (CONSTANT_P (rhs))
     return false;
 
diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c
index 791f91a..54a34ff 100644
--- a/gcc/simplify-rtx.c
+++ b/gcc/simplify-rtx.c
@@ -86,6 +86,22 @@  mode_signbit_p (enum machine_mode mode, const_rtx x)
   if (width <= HOST_BITS_PER_WIDE_INT
       && CONST_INT_P (x))
     val = INTVAL (x);
+#if TARGET_SUPPORTS_WIDE_INT
+  else if (CONST_WIDE_INT_P (x))
+    {
+      unsigned int i;
+      unsigned int elts = CONST_WIDE_INT_NUNITS (x);
+      if (elts != (width + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT)
+	return false;
+      for (i = 0; i < elts - 1; i++)
+	if (CONST_WIDE_INT_ELT (x, i) != 0)
+	  return false;
+      val = CONST_WIDE_INT_ELT (x, elts - 1);
+      width %= HOST_BITS_PER_WIDE_INT;
+      if (width == 0)
+	width = HOST_BITS_PER_WIDE_INT;
+    }
+#else
   else if (width <= HOST_BITS_PER_DOUBLE_INT
 	   && CONST_DOUBLE_AS_INT_P (x)
 	   && CONST_DOUBLE_LOW (x) == 0)
@@ -93,8 +109,9 @@  mode_signbit_p (enum machine_mode mode, const_rtx x)
       val = CONST_DOUBLE_HIGH (x);
       width -= HOST_BITS_PER_WIDE_INT;
     }
+#endif
   else
-    /* FIXME: We don't yet have a representation for wider modes.  */
+    /* X is not an integer constant.  */
     return false;
 
   if (width < HOST_BITS_PER_WIDE_INT)
@@ -1496,7 +1513,6 @@  simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
 				rtx op, enum machine_mode op_mode)
 {
   unsigned int width = GET_MODE_PRECISION (mode);
-  unsigned int op_width = GET_MODE_PRECISION (op_mode);
 
   if (code == VEC_DUPLICATE)
     {
@@ -1570,8 +1586,19 @@  simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
       if (CONST_INT_P (op))
 	lv = INTVAL (op), hv = HWI_SIGN_EXTEND (lv);
       else
+#if TARGET_SUPPORTS_WIDE_INT
+	{
+	  /* The conversion code to floats really wants exactly 2 HWIs.
+	     This needs to be fixed.  For now, if the constant is
+	     wider than that, just return 0, which is safe.  */
+	  if (CONST_WIDE_INT_NUNITS (op) > 2)
+	    return 0;
+	  lv = CONST_WIDE_INT_ELT (op, 0);
+	  hv = CONST_WIDE_INT_ELT (op, 1);
+	}
+#else
 	lv = CONST_DOUBLE_LOW (op),  hv = CONST_DOUBLE_HIGH (op);
-
+#endif
       REAL_VALUE_FROM_INT (d, lv, hv, mode);
       d = real_value_truncate (mode, d);
       return CONST_DOUBLE_FROM_REAL_VALUE (d, mode);
@@ -1584,8 +1611,19 @@  simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
       if (CONST_INT_P (op))
 	lv = INTVAL (op), hv = HWI_SIGN_EXTEND (lv);
       else
+#if TARGET_SUPPORTS_WIDE_INT
+	{
+	  /* The conversion code to floats really wants exactly 2 HWIs.
+	     This needs to be fixed.  For now, if the constant is
+	     wider than that, just return 0, which is safe.  */
+	  if (CONST_WIDE_INT_NUNITS (op) > 2)
+	    return 0;
+	  lv = CONST_WIDE_INT_ELT (op, 0);
+	  hv = CONST_WIDE_INT_ELT (op, 1);
+	}
+#else
 	lv = CONST_DOUBLE_LOW (op),  hv = CONST_DOUBLE_HIGH (op);
-
+#endif
       if (op_mode == VOIDmode
 	  || GET_MODE_PRECISION (op_mode) > HOST_BITS_PER_DOUBLE_INT)
 	/* We should never get a negative number.  */
@@ -1598,302 +1636,82 @@  simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
       return CONST_DOUBLE_FROM_REAL_VALUE (d, mode);
     }
 
-  if (CONST_INT_P (op)
-      && width <= HOST_BITS_PER_WIDE_INT && width > 0)
+  if (CONST_SCALAR_INT_P (op) && width > 0)
     {
-      HOST_WIDE_INT arg0 = INTVAL (op);
-      HOST_WIDE_INT val;
+      wide_int result;
+      enum machine_mode imode = op_mode == VOIDmode ? mode : op_mode;
+      wide_int op0 = wide_int::from_rtx (op, imode);
+
+#if TARGET_SUPPORTS_WIDE_INT == 0
+      /* This assert keeps the simplification from producing a result
+	 that cannot be represented in a CONST_DOUBLE.  A lot of
+	 upstream callers expect that this function never fails to
+	 simplify something, so if this check were added to the test
+	 above, the code would die later anyway.  If this assert
+	 fires, you just need to make the port support wide int.  */
+      gcc_assert (width <= HOST_BITS_PER_DOUBLE_INT);
+#endif
 
       switch (code)
 	{
 	case NOT:
-	  val = ~ arg0;
+	  result = ~op0;
 	  break;
 
 	case NEG:
-	  val = - arg0;
+	  result = op0.neg ();
 	  break;
 
 	case ABS:
-	  val = (arg0 >= 0 ? arg0 : - arg0);
+	  result = op0.abs ();
 	  break;
 
 	case FFS:
-	  arg0 &= GET_MODE_MASK (mode);
-	  val = ffs_hwi (arg0);
+	  result = op0.ffs ();
 	  break;
 
 	case CLZ:
-	  arg0 &= GET_MODE_MASK (mode);
-	  if (arg0 == 0 && CLZ_DEFINED_VALUE_AT_ZERO (mode, val))
-	    ;
-	  else
-	    val = GET_MODE_PRECISION (mode) - floor_log2 (arg0) - 1;
+	  result = op0.clz ();
 	  break;
 
 	case CLRSB:
-	  arg0 &= GET_MODE_MASK (mode);
-	  if (arg0 == 0)
-	    val = GET_MODE_PRECISION (mode) - 1;
-	  else if (arg0 >= 0)
-	    val = GET_MODE_PRECISION (mode) - floor_log2 (arg0) - 2;
-	  else if (arg0 < 0)
-	    val = GET_MODE_PRECISION (mode) - floor_log2 (~arg0) - 2;
+	  result = op0.clrsb ();
 	  break;
 
 	case CTZ:
-	  arg0 &= GET_MODE_MASK (mode);
-	  if (arg0 == 0)
-	    {
-	      /* Even if the value at zero is undefined, we have to come
-		 up with some replacement.  Seems good enough.  */
-	      if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, val))
-		val = GET_MODE_PRECISION (mode);
-	    }
-	  else
-	    val = ctz_hwi (arg0);
+	  result = op0.ctz ();
 	  break;
 
 	case POPCOUNT:
-	  arg0 &= GET_MODE_MASK (mode);
-	  val = 0;
-	  while (arg0)
-	    val++, arg0 &= arg0 - 1;
+	  result = op0.popcount ();
 	  break;
 
 	case PARITY:
-	  arg0 &= GET_MODE_MASK (mode);
-	  val = 0;
-	  while (arg0)
-	    val++, arg0 &= arg0 - 1;
-	  val &= 1;
+	  result = op0.parity ();
 	  break;
 
 	case BSWAP:
-	  {
-	    unsigned int s;
-
-	    val = 0;
-	    for (s = 0; s < width; s += 8)
-	      {
-		unsigned int d = width - s - 8;
-		unsigned HOST_WIDE_INT byte;
-		byte = (arg0 >> s) & 0xff;
-		val |= byte << d;
-	      }
-	  }
+	  result = op0.bswap ();
 	  break;
 
 	case TRUNCATE:
-	  val = arg0;
+	  result = op0.zforce_to_size (mode);
 	  break;
 
 	case ZERO_EXTEND:
-	  /* When zero-extending a CONST_INT, we need to know its
-             original mode.  */
-	  gcc_assert (op_mode != VOIDmode);
-	  if (op_width == HOST_BITS_PER_WIDE_INT)
-	    {
-	      /* If we were really extending the mode,
-		 we would have to distinguish between zero-extension
-		 and sign-extension.  */
-	      gcc_assert (width == op_width);
-	      val = arg0;
-	    }
-	  else if (GET_MODE_BITSIZE (op_mode) < HOST_BITS_PER_WIDE_INT)
-	    val = arg0 & GET_MODE_MASK (op_mode);
-	  else
-	    return 0;
+	  result = op0.zforce_to_size (mode);
 	  break;
 
 	case SIGN_EXTEND:
-	  if (op_mode == VOIDmode)
-	    op_mode = mode;
-	  op_width = GET_MODE_PRECISION (op_mode);
-	  if (op_width == HOST_BITS_PER_WIDE_INT)
-	    {
-	      /* If we were really extending the mode,
-		 we would have to distinguish between zero-extension
-		 and sign-extension.  */
-	      gcc_assert (width == op_width);
-	      val = arg0;
-	    }
-	  else if (op_width < HOST_BITS_PER_WIDE_INT)
-	    {
-	      val = arg0 & GET_MODE_MASK (op_mode);
-	      if (val_signbit_known_set_p (op_mode, val))
-		val |= ~GET_MODE_MASK (op_mode);
-	    }
-	  else
-	    return 0;
+	  result = op0.sforce_to_size (mode);
 	  break;
 
 	case SQRT:
-	case FLOAT_EXTEND:
-	case FLOAT_TRUNCATE:
-	case SS_TRUNCATE:
-	case US_TRUNCATE:
-	case SS_NEG:
-	case US_NEG:
-	case SS_ABS:
-	  return 0;
-
-	default:
-	  gcc_unreachable ();
-	}
-
-      return gen_int_mode (val, mode);
-    }
-
-  /* We can do some operations on integer CONST_DOUBLEs.  Also allow
-     for a DImode operation on a CONST_INT.  */
-  else if (width <= HOST_BITS_PER_DOUBLE_INT
-	   && (CONST_DOUBLE_AS_INT_P (op) || CONST_INT_P (op)))
-    {
-      double_int first, value;
-
-      if (CONST_DOUBLE_AS_INT_P (op))
-	first = double_int::from_pair (CONST_DOUBLE_HIGH (op),
-				       CONST_DOUBLE_LOW (op));
-      else
-	first = double_int::from_shwi (INTVAL (op));
-
-      switch (code)
-	{
-	case NOT:
-	  value = ~first;
-	  break;
-
-	case NEG:
-	  value = -first;
-	  break;
-
-	case ABS:
-	  if (first.is_negative ())
-	    value = -first;
-	  else
-	    value = first;
-	  break;
-
-	case FFS:
-	  value.high = 0;
-	  if (first.low != 0)
-	    value.low = ffs_hwi (first.low);
-	  else if (first.high != 0)
-	    value.low = HOST_BITS_PER_WIDE_INT + ffs_hwi (first.high);
-	  else
-	    value.low = 0;
-	  break;
-
-	case CLZ:
-	  value.high = 0;
-	  if (first.high != 0)
-	    value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.high) - 1
-	              - HOST_BITS_PER_WIDE_INT;
-	  else if (first.low != 0)
-	    value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.low) - 1;
-	  else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, value.low))
-	    value.low = GET_MODE_PRECISION (mode);
-	  break;
-
-	case CTZ:
-	  value.high = 0;
-	  if (first.low != 0)
-	    value.low = ctz_hwi (first.low);
-	  else if (first.high != 0)
-	    value.low = HOST_BITS_PER_WIDE_INT + ctz_hwi (first.high);
-	  else if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, value.low))
-	    value.low = GET_MODE_PRECISION (mode);
-	  break;
-
-	case POPCOUNT:
-	  value = double_int_zero;
-	  while (first.low)
-	    {
-	      value.low++;
-	      first.low &= first.low - 1;
-	    }
-	  while (first.high)
-	    {
-	      value.low++;
-	      first.high &= first.high - 1;
-	    }
-	  break;
-
-	case PARITY:
-	  value = double_int_zero;
-	  while (first.low)
-	    {
-	      value.low++;
-	      first.low &= first.low - 1;
-	    }
-	  while (first.high)
-	    {
-	      value.low++;
-	      first.high &= first.high - 1;
-	    }
-	  value.low &= 1;
-	  break;
-
-	case BSWAP:
-	  {
-	    unsigned int s;
-
-	    value = double_int_zero;
-	    for (s = 0; s < width; s += 8)
-	      {
-		unsigned int d = width - s - 8;
-		unsigned HOST_WIDE_INT byte;
-
-		if (s < HOST_BITS_PER_WIDE_INT)
-		  byte = (first.low >> s) & 0xff;
-		else
-		  byte = (first.high >> (s - HOST_BITS_PER_WIDE_INT)) & 0xff;
-
-		if (d < HOST_BITS_PER_WIDE_INT)
-		  value.low |= byte << d;
-		else
-		  value.high |= byte << (d - HOST_BITS_PER_WIDE_INT);
-	      }
-	  }
-	  break;
-
-	case TRUNCATE:
-	  /* This is just a change-of-mode, so do nothing.  */
-	  value = first;
-	  break;
-
-	case ZERO_EXTEND:
-	  gcc_assert (op_mode != VOIDmode);
-
-	  if (op_width > HOST_BITS_PER_WIDE_INT)
-	    return 0;
-
-	  value = double_int::from_uhwi (first.low & GET_MODE_MASK (op_mode));
-	  break;
-
-	case SIGN_EXTEND:
-	  if (op_mode == VOIDmode
-	      || op_width > HOST_BITS_PER_WIDE_INT)
-	    return 0;
-	  else
-	    {
-	      value.low = first.low & GET_MODE_MASK (op_mode);
-	      if (val_signbit_known_set_p (op_mode, value.low))
-		value.low |= ~GET_MODE_MASK (op_mode);
-
-	      value.high = HWI_SIGN_EXTEND (value.low);
-	    }
-	  break;
-
-	case SQRT:
-	  return 0;
-
 	default:
 	  return 0;
 	}
 
-      return immed_double_int_const (value, mode);
+      return immed_wide_int_const (result, mode);
     }
 
   else if (CONST_DOUBLE_AS_FLOAT_P (op) 
@@ -1945,7 +1763,6 @@  simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
 	}
       return CONST_DOUBLE_FROM_REAL_VALUE (d, mode);
     }
-
   else if (CONST_DOUBLE_AS_FLOAT_P (op)
 	   && SCALAR_FLOAT_MODE_P (GET_MODE (op))
 	   && GET_MODE_CLASS (mode) == MODE_INT
@@ -1958,9 +1775,12 @@  simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
 
       /* This was formerly used only for non-IEEE float.
 	 eggert@twinsun.com says it is safe for IEEE also.  */
-      HOST_WIDE_INT xh, xl, th, tl;
+      HOST_WIDE_INT th, tl;
       REAL_VALUE_TYPE x, t;
+      wide_int wc;
       REAL_VALUE_FROM_CONST_DOUBLE (x, op);
+      HOST_WIDE_INT tmp[2];
+
       switch (code)
 	{
 	case FIX:
@@ -1982,8 +1802,8 @@  simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
 	  real_from_integer (&t, VOIDmode, tl, th, 0);
 	  if (REAL_VALUES_LESS (t, x))
 	    {
-	      xh = th;
-	      xl = tl;
+	      tmp[1] = th;
+	      tmp[0] = tl;
 	      break;
 	    }
 
@@ -2002,11 +1822,11 @@  simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
 	  real_from_integer (&t, VOIDmode, tl, th, 0);
 	  if (REAL_VALUES_LESS (x, t))
 	    {
-	      xh = th;
-	      xl = tl;
+	      tmp[1] = th;
+	      tmp[0] = tl;
 	      break;
 	    }
-	  REAL_VALUE_TO_INT (&xl, &xh, x);
+	  REAL_VALUE_TO_INT (&tmp[0], &tmp[1], x);
 	  break;
 
 	case UNSIGNED_FIX:
@@ -2033,18 +1853,19 @@  simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode,
 	  real_from_integer (&t, VOIDmode, tl, th, 1);
 	  if (REAL_VALUES_LESS (t, x))
 	    {
-	      xh = th;
-	      xl = tl;
+	      tmp[1] = th;
+	      tmp[0] = tl;
 	      break;
 	    }
 
-	  REAL_VALUE_TO_INT (&xl, &xh, x);
+	  REAL_VALUE_TO_INT (&tmp[0], &tmp[1], x);
 	  break;
 
 	default:
 	  gcc_unreachable ();
 	}
-      return immed_double_const (xl, xh, mode);
+      wc = wide_int::from_array (tmp, 2, mode);
+      return immed_wide_int_const (wc, mode);
     }
 
   return NULL_RTX;
@@ -2204,49 +2025,50 @@  simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode,
 
       if (SCALAR_INT_MODE_P (mode))
 	{
-	  double_int coeff0, coeff1;
+	  wide_int coeff0;
+	  wide_int coeff1;
 	  rtx lhs = op0, rhs = op1;
 
-	  coeff0 = double_int_one;
-	  coeff1 = double_int_one;
+	  coeff0 = wide_int::one (mode);
+	  coeff1 = wide_int::one (mode);
 
 	  if (GET_CODE (lhs) == NEG)
 	    {
-	      coeff0 = double_int_minus_one;
+	      coeff0 = wide_int::minus_one (mode);
 	      lhs = XEXP (lhs, 0);
 	    }
 	  else if (GET_CODE (lhs) == MULT
-		   && CONST_INT_P (XEXP (lhs, 1)))
+		   && CONST_SCALAR_INT_P (XEXP (lhs, 1)))
 	    {
-	      coeff0 = double_int::from_shwi (INTVAL (XEXP (lhs, 1)));
+	      coeff0 = wide_int::from_rtx (XEXP (lhs, 1), mode);
 	      lhs = XEXP (lhs, 0);
 	    }
 	  else if (GET_CODE (lhs) == ASHIFT
 		   && CONST_INT_P (XEXP (lhs, 1))
                    && INTVAL (XEXP (lhs, 1)) >= 0
-		   && INTVAL (XEXP (lhs, 1)) < HOST_BITS_PER_WIDE_INT)
+		   && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (mode))
 	    {
-	      coeff0 = double_int_zero.set_bit (INTVAL (XEXP (lhs, 1)));
+	      coeff0 = wide_int::set_bit_in_zero (INTVAL (XEXP (lhs, 1)), mode);
 	      lhs = XEXP (lhs, 0);
 	    }
 
 	  if (GET_CODE (rhs) == NEG)
 	    {
-	      coeff1 = double_int_minus_one;
+	      coeff1 = wide_int::minus_one (mode);
 	      rhs = XEXP (rhs, 0);
 	    }
 	  else if (GET_CODE (rhs) == MULT
 		   && CONST_INT_P (XEXP (rhs, 1)))
 	    {
-	      coeff1 = double_int::from_shwi (INTVAL (XEXP (rhs, 1)));
+	      coeff1 = wide_int::from_rtx (XEXP (rhs, 1), mode);
 	      rhs = XEXP (rhs, 0);
 	    }
 	  else if (GET_CODE (rhs) == ASHIFT
 		   && CONST_INT_P (XEXP (rhs, 1))
 		   && INTVAL (XEXP (rhs, 1)) >= 0
-		   && INTVAL (XEXP (rhs, 1)) < HOST_BITS_PER_WIDE_INT)
+		   && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (mode))
 	    {
-	      coeff1 = double_int_zero.set_bit (INTVAL (XEXP (rhs, 1)));
+	      coeff1 = wide_int::set_bit_in_zero (INTVAL (XEXP (rhs, 1)), mode);
 	      rhs = XEXP (rhs, 0);
 	    }
 
@@ -2254,11 +2076,9 @@  simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode,
 	    {
 	      rtx orig = gen_rtx_PLUS (mode, op0, op1);
 	      rtx coeff;
-	      double_int val;
 	      bool speed = optimize_function_for_speed_p (cfun);
 
-	      val = coeff0 + coeff1;
-	      coeff = immed_double_int_const (val, mode);
+	      coeff = immed_wide_int_const (coeff0 + coeff1, mode);
 
 	      tem = simplify_gen_binary (MULT, mode, lhs, coeff);
 	      return set_src_cost (tem, speed) <= set_src_cost (orig, speed)
@@ -2380,50 +2200,52 @@  simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode,
 
       if (SCALAR_INT_MODE_P (mode))
 	{
-	  double_int coeff0, negcoeff1;
+	  wide_int coeff0;
+	  wide_int negcoeff1;
 	  rtx lhs = op0, rhs = op1;
 
-	  coeff0 = double_int_one;
-	  negcoeff1 = double_int_minus_one;
+	  coeff0 = wide_int::one (mode);
+	  negcoeff1 = wide_int::minus_one (mode);
 
 	  if (GET_CODE (lhs) == NEG)
 	    {
-	      coeff0 = double_int_minus_one;
+	      coeff0 = wide_int::minus_one (mode);
 	      lhs = XEXP (lhs, 0);
 	    }
 	  else if (GET_CODE (lhs) == MULT
-		   && CONST_INT_P (XEXP (lhs, 1)))
+		   && CONST_SCALAR_INT_P (XEXP (lhs, 1)))
 	    {
-	      coeff0 = double_int::from_shwi (INTVAL (XEXP (lhs, 1)));
+	      coeff0 = wide_int::from_rtx (XEXP (lhs, 1), mode);
 	      lhs = XEXP (lhs, 0);
 	    }
 	  else if (GET_CODE (lhs) == ASHIFT
 		   && CONST_INT_P (XEXP (lhs, 1))
 		   && INTVAL (XEXP (lhs, 1)) >= 0
-		   && INTVAL (XEXP (lhs, 1)) < HOST_BITS_PER_WIDE_INT)
+		   && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (mode))
 	    {
-	      coeff0 = double_int_zero.set_bit (INTVAL (XEXP (lhs, 1)));
+	      coeff0 = wide_int::set_bit_in_zero (INTVAL (XEXP (lhs, 1)), mode);
 	      lhs = XEXP (lhs, 0);
 	    }
 
 	  if (GET_CODE (rhs) == NEG)
 	    {
-	      negcoeff1 = double_int_one;
+	      negcoeff1 = wide_int::one (mode);
 	      rhs = XEXP (rhs, 0);
 	    }
 	  else if (GET_CODE (rhs) == MULT
 		   && CONST_INT_P (XEXP (rhs, 1)))
 	    {
-	      negcoeff1 = double_int::from_shwi (-INTVAL (XEXP (rhs, 1)));
+	      negcoeff1 = wide_int::from_rtx (XEXP (rhs, 1), mode).neg ();
 	      rhs = XEXP (rhs, 0);
 	    }
 	  else if (GET_CODE (rhs) == ASHIFT
 		   && CONST_INT_P (XEXP (rhs, 1))
 		   && INTVAL (XEXP (rhs, 1)) >= 0
-		   && INTVAL (XEXP (rhs, 1)) < HOST_BITS_PER_WIDE_INT)
+		   && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (mode))
 	    {
-	      negcoeff1 = double_int_zero.set_bit (INTVAL (XEXP (rhs, 1)));
-	      negcoeff1 = -negcoeff1;
+	      negcoeff1 = wide_int::set_bit_in_zero (INTVAL (XEXP (rhs, 1)),
+						    mode);
+	      negcoeff1 = negcoeff1.neg ();
 	      rhs = XEXP (rhs, 0);
 	    }
 
@@ -2431,11 +2253,9 @@  simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode,
 	    {
 	      rtx orig = gen_rtx_MINUS (mode, op0, op1);
 	      rtx coeff;
-	      double_int val;
 	      bool speed = optimize_function_for_speed_p (cfun);
 
-	      val = coeff0 + negcoeff1;
-	      coeff = immed_double_int_const (val, mode);
+	      coeff = immed_wide_int_const (coeff0 + negcoeff1, mode);
 
 	      tem = simplify_gen_binary (MULT, mode, lhs, coeff);
 	      return set_src_cost (tem, speed) <= set_src_cost (orig, speed)
@@ -2587,26 +2407,13 @@  simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode,
 	  && trueop1 == CONST1_RTX (mode))
 	return op0;
 
-      /* Convert multiply by constant power of two into shift unless
-	 we are still generating RTL.  This test is a kludge.  */
-      if (CONST_INT_P (trueop1)
-	  && (val = exact_log2 (UINTVAL (trueop1))) >= 0
-	  /* If the mode is larger than the host word size, and the
-	     uppermost bit is set, then this isn't a power of two due
-	     to implicit sign extension.  */
-	  && (width <= HOST_BITS_PER_WIDE_INT
-	      || val != HOST_BITS_PER_WIDE_INT - 1))
-	return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val));
-
-      /* Likewise for multipliers wider than a word.  */
-      if (CONST_DOUBLE_AS_INT_P (trueop1)
-	  && GET_MODE (op0) == mode
-	  && CONST_DOUBLE_LOW (trueop1) == 0
-	  && (val = exact_log2 (CONST_DOUBLE_HIGH (trueop1))) >= 0
-	  && (val < HOST_BITS_PER_DOUBLE_INT - 1
-	      || GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT))
-	return simplify_gen_binary (ASHIFT, mode, op0,
-				    GEN_INT (val + HOST_BITS_PER_WIDE_INT));
+      /* Convert multiply by constant power of two into shift.  */
+      if (CONST_SCALAR_INT_P (trueop1))
+	{
+	  val = wide_int::from_rtx (trueop1, mode).exact_log2 ().to_shwi ();
+	  if (val >= 0 && val < GET_MODE_BITSIZE (mode))
+	    return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val));
+	}
 
       /* x*2 is x+x and x*(-1) is -x */
       if (CONST_DOUBLE_AS_FLOAT_P (trueop1)
@@ -3682,9 +3489,9 @@  rtx
 simplify_const_binary_operation (enum rtx_code code, enum machine_mode mode,
 				 rtx op0, rtx op1)
 {
-  HOST_WIDE_INT arg0, arg1, arg0s, arg1s;
-  HOST_WIDE_INT val;
+#if TARGET_SUPPORTS_WIDE_INT == 0
   unsigned int width = GET_MODE_PRECISION (mode);
+#endif
 
   if (VECTOR_MODE_P (mode)
       && code != VEC_CONCAT
@@ -3877,299 +3684,128 @@  simplify_const_binary_operation (enum rtx_code code, enum machine_mode mode,
 
   /* We can fold some multi-word operations.  */
   if (GET_MODE_CLASS (mode) == MODE_INT
-      && width == HOST_BITS_PER_DOUBLE_INT
-      && (CONST_DOUBLE_AS_INT_P (op0) || CONST_INT_P (op0))
-      && (CONST_DOUBLE_AS_INT_P (op1) || CONST_INT_P (op1)))
+      && CONST_SCALAR_INT_P (op0)
+      && CONST_SCALAR_INT_P (op1))
     {
-      double_int o0, o1, res, tmp;
-      bool overflow;
-
-      o0 = rtx_to_double_int (op0);
-      o1 = rtx_to_double_int (op1);
-
+      wide_int result;
+      wide_int wop0 = wide_int::from_rtx (op0, mode);
+      bool overflow = false;
+      unsigned int bitsize = GET_MODE_BITSIZE (mode);
+
+#if TARGET_SUPPORTS_WIDE_INT == 0
+      /* This assert keeps the simplification from producing a result
+	 that cannot be represented in a CONST_DOUBLE.  A lot of
+	 upstream callers expect that this function never fails to
+	 simplify something, so if this check were added to the test
+	 above, the code would die later anyway.  If this assert
+	 fires, you just need to make the port support wide int.  */
+      gcc_assert (width <= HOST_BITS_PER_DOUBLE_INT);
+#endif
       switch (code)
 	{
 	case MINUS:
-	  /* A - B == A + (-B).  */
-	  o1 = -o1;
-
-	  /* Fall through....  */
+	  result = wop0 - op1;
+	  break;
 
 	case PLUS:
-	  res = o0 + o1;
+	  result = wop0 + op1;
 	  break;
 
 	case MULT:
-	  res = o0 * o1;
+	  result = wop0 * op1;
 	  break;
 
 	case DIV:
-          res = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR,
-					 &tmp, &overflow);
+	  result = wop0.div_trunc (op1, wide_int::SIGNED, &overflow);
 	  if (overflow)
-	    return 0;
+	    return NULL_RTX;
 	  break;
 
 	case MOD:
-          tmp = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR,
-					 &res, &overflow);
+	  result = wop0.mod_trunc (op1, wide_int::SIGNED, &overflow);
 	  if (overflow)
-	    return 0;
+	    return NULL_RTX;
 	  break;
 
 	case UDIV:
-          res = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR,
-					 &tmp, &overflow);
+	  result = wop0.div_trunc (op1, wide_int::UNSIGNED, &overflow);
 	  if (overflow)
-	    return 0;
+	    return NULL_RTX;
 	  break;
 
 	case UMOD:
-          tmp = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR,
-					 &res, &overflow);
+	  result = wop0.mod_trunc (op1, wide_int::UNSIGNED, &overflow);
 	  if (overflow)
-	    return 0;
+	    return NULL_RTX;
 	  break;
 
 	case AND:
-	  res = o0 & o1;
+	  result = wop0 & op1;
 	  break;
 
 	case IOR:
-	  res = o0 | o1;
+	  result = wop0 | op1;
 	  break;
 
 	case XOR:
-	  res = o0 ^ o1;
+	  result = wop0 ^ op1;
 	  break;
 
 	case SMIN:
-	  res = o0.smin (o1);
+	  result = wop0.smin (op1);
 	  break;
 
 	case SMAX:
-	  res = o0.smax (o1);
+	  result = wop0.smax (op1);
 	  break;
 
 	case UMIN:
-	  res = o0.umin (o1);
+	  result = wop0.umin (op1);
 	  break;
 
 	case UMAX:
-	  res = o0.umax (o1);
-	  break;
-
-	case LSHIFTRT:   case ASHIFTRT:
-	case ASHIFT:
-	case ROTATE:     case ROTATERT:
-	  {
-	    unsigned HOST_WIDE_INT cnt;
-
-	    if (SHIFT_COUNT_TRUNCATED)
-	      {
-		o1.high = 0; 
-		o1.low &= GET_MODE_PRECISION (mode) - 1;
-	      }
-
-	    if (!o1.fits_uhwi ()
-	        || o1.to_uhwi () >= GET_MODE_PRECISION (mode))
-	      return 0;
-
-	    cnt = o1.to_uhwi ();
-	    unsigned short prec = GET_MODE_PRECISION (mode);
-
-	    if (code == LSHIFTRT || code == ASHIFTRT)
-	      res = o0.rshift (cnt, prec, code == ASHIFTRT);
-	    else if (code == ASHIFT)
-	      res = o0.alshift (cnt, prec);
-	    else if (code == ROTATE)
-	      res = o0.lrotate (cnt, prec);
-	    else /* code == ROTATERT */
-	      res = o0.rrotate (cnt, prec);
-	  }
-	  break;
-
-	default:
-	  return 0;
-	}
-
-      return immed_double_int_const (res, mode);
-    }
-
-  if (CONST_INT_P (op0) && CONST_INT_P (op1)
-      && width <= HOST_BITS_PER_WIDE_INT && width != 0)
-    {
-      /* Get the integer argument values in two forms:
-         zero-extended in ARG0, ARG1 and sign-extended in ARG0S, ARG1S.  */
-
-      arg0 = INTVAL (op0);
-      arg1 = INTVAL (op1);
-
-      if (width < HOST_BITS_PER_WIDE_INT)
-        {
-          arg0 &= GET_MODE_MASK (mode);
-          arg1 &= GET_MODE_MASK (mode);
-
-          arg0s = arg0;
-	  if (val_signbit_known_set_p (mode, arg0s))
-	    arg0s |= ~GET_MODE_MASK (mode);
-
-          arg1s = arg1;
-	  if (val_signbit_known_set_p (mode, arg1s))
-	    arg1s |= ~GET_MODE_MASK (mode);
-	}
-      else
-	{
-	  arg0s = arg0;
-	  arg1s = arg1;
-	}
-
-      /* Compute the value of the arithmetic.  */
-
-      switch (code)
-	{
-	case PLUS:
-	  val = arg0s + arg1s;
-	  break;
-
-	case MINUS:
-	  val = arg0s - arg1s;
-	  break;
-
-	case MULT:
-	  val = arg0s * arg1s;
-	  break;
-
-	case DIV:
-	  if (arg1s == 0
-	      || ((unsigned HOST_WIDE_INT) arg0s
-		  == (unsigned HOST_WIDE_INT) 1 << (HOST_BITS_PER_WIDE_INT - 1)
-		  && arg1s == -1))
-	    return 0;
-	  val = arg0s / arg1s;
-	  break;
-
-	case MOD:
-	  if (arg1s == 0
-	      || ((unsigned HOST_WIDE_INT) arg0s
-		  == (unsigned HOST_WIDE_INT) 1 << (HOST_BITS_PER_WIDE_INT - 1)
-		  && arg1s == -1))
-	    return 0;
-	  val = arg0s % arg1s;
+	  result = wop0.umax (op1);
 	  break;
 
-	case UDIV:
-	  if (arg1 == 0
-	      || ((unsigned HOST_WIDE_INT) arg0s
-		  == (unsigned HOST_WIDE_INT) 1 << (HOST_BITS_PER_WIDE_INT - 1)
-		  && arg1s == -1))
-	    return 0;
-	  val = (unsigned HOST_WIDE_INT) arg0 / arg1;
-	  break;
-
-	case UMOD:
-	  if (arg1 == 0
-	      || ((unsigned HOST_WIDE_INT) arg0s
-		  == (unsigned HOST_WIDE_INT) 1 << (HOST_BITS_PER_WIDE_INT - 1)
-		  && arg1s == -1))
-	    return 0;
-	  val = (unsigned HOST_WIDE_INT) arg0 % arg1;
-	  break;
-
-	case AND:
-	  val = arg0 & arg1;
-	  break;
-
-	case IOR:
-	  val = arg0 | arg1;
-	  break;
+	case LSHIFTRT:
+	  if (wide_int::from_rtx (op1, mode).neg_p ())
+	    return NULL_RTX;
 
-	case XOR:
-	  val = arg0 ^ arg1;
+	  result = wop0.rshiftu (op1, bitsize, wide_int::TRUNC);
 	  break;
-
-	case LSHIFTRT:
-	case ASHIFT:
+
 	case ASHIFTRT:
-	  /* Truncate the shift if SHIFT_COUNT_TRUNCATED, otherwise make sure
-	     the value is in range.  We can't return any old value for
-	     out-of-range arguments because either the middle-end (via
-	     shift_truncation_mask) or the back-end might be relying on
-	     target-specific knowledge.  Nor can we rely on
-	     shift_truncation_mask, since the shift might not be part of an
-	     ashlM3, lshrM3 or ashrM3 instruction.  */
-	  if (SHIFT_COUNT_TRUNCATED)
-	    arg1 = (unsigned HOST_WIDE_INT) arg1 % width;
-	  else if (arg1 < 0 || arg1 >= GET_MODE_BITSIZE (mode))
-	    return 0;
-
-	  val = (code == ASHIFT
-		 ? ((unsigned HOST_WIDE_INT) arg0) << arg1
-		 : ((unsigned HOST_WIDE_INT) arg0) >> arg1);
+	  if (wide_int::from_rtx (op1, mode).neg_p ())
+	    return NULL_RTX;
 
-	  /* Sign-extend the result for arithmetic right shifts.  */
-	  if (code == ASHIFTRT && arg0s < 0 && arg1 > 0)
-	    val |= ((unsigned HOST_WIDE_INT) (-1)) << (width - arg1);
+	  result = wop0.rshifts (op1, bitsize, wide_int::TRUNC);
 	  break;
+
+	case ASHIFT:
+	  if (wide_int::from_rtx (op1, mode).neg_p ())
+	    return NULL_RTX;
 
-	case ROTATERT:
-	  if (arg1 < 0)
-	    return 0;
-
-	  arg1 %= width;
-	  val = ((((unsigned HOST_WIDE_INT) arg0) << (width - arg1))
-		 | (((unsigned HOST_WIDE_INT) arg0) >> arg1));
+	  result = wop0.lshift (op1, bitsize, wide_int::TRUNC);
 	  break;
-
+
 	case ROTATE:
-	  if (arg1 < 0)
-	    return 0;
-
-	  arg1 %= width;
-	  val = ((((unsigned HOST_WIDE_INT) arg0) << arg1)
-		 | (((unsigned HOST_WIDE_INT) arg0) >> (width - arg1)));
-	  break;
-
-	case COMPARE:
-	  /* Do nothing here.  */
-	  return 0;
-
-	case SMIN:
-	  val = arg0s <= arg1s ? arg0s : arg1s;
-	  break;
-
-	case UMIN:
-	  val = ((unsigned HOST_WIDE_INT) arg0
-		 <= (unsigned HOST_WIDE_INT) arg1 ? arg0 : arg1);
-	  break;
+	  if (wide_int::from_rtx (op1, mode).neg_p ())
+	    return NULL_RTX;
 
-	case SMAX:
-	  val = arg0s > arg1s ? arg0s : arg1s;
+	  result = wop0.lrotate (op1);
 	  break;
+
+	case ROTATERT:
+	  if (wide_int::from_rtx (op1, mode).neg_p ())
+	    return NULL_RTX;
 
-	case UMAX:
-	  val = ((unsigned HOST_WIDE_INT) arg0
-		 > (unsigned HOST_WIDE_INT) arg1 ? arg0 : arg1);
+	  result = wop0.rrotate (op1);
 	  break;
 
-	case SS_PLUS:
-	case US_PLUS:
-	case SS_MINUS:
-	case US_MINUS:
-	case SS_MULT:
-	case US_MULT:
-	case SS_DIV:
-	case US_DIV:
-	case SS_ASHIFT:
-	case US_ASHIFT:
-	  /* ??? There are simplifications that can be done.  */
-	  return 0;
-
 	default:
-	  gcc_unreachable ();
+	  return NULL_RTX;
 	}
-
-      return gen_int_mode (val, mode);
+      return immed_wide_int_const (result, mode);
     }
 
   return NULL_RTX;
@@ -4837,10 +4473,11 @@  comparison_result (enum rtx_code code, int known_results)
     }
 }
 
-/* Check if the given comparison (done in the given MODE) is actually a
-   tautology or a contradiction.
-   If no simplification is possible, this function returns zero.
-   Otherwise, it returns either const_true_rtx or const0_rtx.  */
+/* Check if the given comparison (done in the given MODE) is actually
+   a tautology or a contradiction.  If the mode is VOIDmode, the
+   comparison is done in "infinite precision".  If no simplification
+   is possible, this function returns zero.  Otherwise, it returns
+   either const_true_rtx or const0_rtx.  */
 
 rtx
 simplify_const_relational_operation (enum rtx_code code,
@@ -4964,59 +4601,23 @@  simplify_const_relational_operation (enum rtx_code code,
 
   /* Otherwise, see if the operands are both integers.  */
   if ((GET_MODE_CLASS (mode) == MODE_INT || mode == VOIDmode)
-       && (CONST_DOUBLE_AS_INT_P (trueop0) || CONST_INT_P (trueop0))
-       && (CONST_DOUBLE_AS_INT_P (trueop1) || CONST_INT_P (trueop1)))
+      && CONST_SCALAR_INT_P (trueop0) && CONST_SCALAR_INT_P (trueop1))
     {
-      int width = GET_MODE_PRECISION (mode);
-      HOST_WIDE_INT l0s, h0s, l1s, h1s;
-      unsigned HOST_WIDE_INT l0u, h0u, l1u, h1u;
-
-      /* Get the two words comprising each integer constant.  */
-      if (CONST_DOUBLE_AS_INT_P (trueop0))
-	{
-	  l0u = l0s = CONST_DOUBLE_LOW (trueop0);
-	  h0u = h0s = CONST_DOUBLE_HIGH (trueop0);
-	}
-      else
-	{
-	  l0u = l0s = INTVAL (trueop0);
-	  h0u = h0s = HWI_SIGN_EXTEND (l0s);
-	}
-
-      if (CONST_DOUBLE_AS_INT_P (trueop1))
-	{
-	  l1u = l1s = CONST_DOUBLE_LOW (trueop1);
-	  h1u = h1s = CONST_DOUBLE_HIGH (trueop1);
-	}
-      else
-	{
-	  l1u = l1s = INTVAL (trueop1);
-	  h1u = h1s = HWI_SIGN_EXTEND (l1s);
-	}
-
-      /* If WIDTH is nonzero and smaller than HOST_BITS_PER_WIDE_INT,
-	 we have to sign or zero-extend the values.  */
-      if (width != 0 && width < HOST_BITS_PER_WIDE_INT)
-	{
-	  l0u &= GET_MODE_MASK (mode);
-	  l1u &= GET_MODE_MASK (mode);
-
-	  if (val_signbit_known_set_p (mode, l0s))
-	    l0s |= ~GET_MODE_MASK (mode);
-
-	  if (val_signbit_known_set_p (mode, l1s))
-	    l1s |= ~GET_MODE_MASK (mode);
-	}
-      if (width != 0 && width <= HOST_BITS_PER_WIDE_INT)
-	h0u = h1u = 0, h0s = HWI_SIGN_EXTEND (l0s), h1s = HWI_SIGN_EXTEND (l1s);
-
-      if (h0u == h1u && l0u == l1u)
+      enum machine_mode cmode = mode;
+      wide_int wo0;
+
+      /* It would be nice if we really had a mode here.  However, the
+	 largest int representable on the target is as good as
+	 infinite.  */
+      if (mode == VOIDmode)
+	cmode = MAX_MODE_INT;
+      wo0 = wide_int::from_rtx (trueop0, cmode);
+      if (wo0 == trueop1)
 	return comparison_result (code, CMP_EQ);
       else
 	{
-	  int cr;
-	  cr = (h0s < h1s || (h0s == h1s && l0u < l1u)) ? CMP_LT : CMP_GT;
-	  cr |= (h0u < h1u || (h0u == h1u && l0u < l1u)) ? CMP_LTU : CMP_GTU;
+	  int cr = wo0.lts_p (trueop1) ? CMP_LT : CMP_GT;
+	  cr |= wo0.ltu_p (trueop1) ? CMP_LTU : CMP_GTU;
 	  return comparison_result (code, cr);
 	}
     }
@@ -5472,9 +5073,9 @@  simplify_ternary_operation (enum rtx_code code, enum machine_mode mode,
   return 0;
 }
 
-/* Evaluate a SUBREG of a CONST_INT or CONST_DOUBLE or CONST_FIXED
-   or CONST_VECTOR,
-   returning another CONST_INT or CONST_DOUBLE or CONST_FIXED or CONST_VECTOR.
+/* Evaluate a SUBREG of a CONST_INT or CONST_WIDE_INT or CONST_DOUBLE
+   or CONST_FIXED or CONST_VECTOR, returning another CONST_INT or
+   CONST_WIDE_INT or CONST_DOUBLE or CONST_FIXED or CONST_VECTOR.
 
    Works by unpacking OP into a collection of 8-bit values
    represented as a little-endian array of 'unsigned char', selecting by BYTE,
@@ -5484,13 +5085,11 @@  static rtx
 simplify_immed_subreg (enum machine_mode outermode, rtx op,
 		       enum machine_mode innermode, unsigned int byte)
 {
-  /* We support up to 512-bit values (for V8DFmode).  */
   enum {
-    max_bitsize = 512,
     value_bit = 8,
     value_mask = (1 << value_bit) - 1
   };
-  unsigned char value[max_bitsize / value_bit];
+  unsigned char value[MAX_BITSIZE_MODE_ANY_MODE/value_bit];
   int value_start;
   int i;
   int elem;
@@ -5502,6 +5101,7 @@  simplify_immed_subreg (enum machine_mode outermode, rtx op,
   rtvec result_v = NULL;
   enum mode_class outer_class;
   enum machine_mode outer_submode;
+  int max_bitsize;
 
   /* Some ports misuse CCmode.  */
   if (GET_MODE_CLASS (outermode) == MODE_CC && CONST_INT_P (op))
@@ -5511,6 +5111,10 @@  simplify_immed_subreg (enum machine_mode outermode, rtx op,
   if (COMPLEX_MODE_P (outermode))
     return NULL_RTX;
 
+  /* We support any size mode.  */
+  max_bitsize = MAX (GET_MODE_BITSIZE (outermode), 
+		     GET_MODE_BITSIZE (innermode));
+
   /* Unpack the value.  */
 
   if (GET_CODE (op) == CONST_VECTOR)
@@ -5560,8 +5164,20 @@  simplify_immed_subreg (enum machine_mode outermode, rtx op,
 	    *vp++ = INTVAL (el) < 0 ? -1 : 0;
 	  break;
 
+	case CONST_WIDE_INT:
+	  {
+	    wide_int val = wide_int::from_rtx (el, innermode);
+	    unsigned char extend = val.sign_mask ();
+	    int prec = val.get_precision ();
+	    for (i = 0; i < prec && i < elem_bitsize; i += value_bit)
+	      *vp++ = val.extract_to_hwi (i, value_bit);
+	    for (; i < elem_bitsize; i += value_bit)
+	      *vp++ = extend;
+	  }
+	  break;
+
 	case CONST_DOUBLE:
-	  if (GET_MODE (el) == VOIDmode)
+	  if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (el) == VOIDmode)
 	    {
 	      unsigned char extend = 0;
 	      /* If this triggers, someone should have generated a
@@ -5584,7 +5200,8 @@  simplify_immed_subreg (enum machine_mode outermode, rtx op,
 	    }
 	  else
 	    {
-	      long tmp[max_bitsize / 32];
+	      /* This is big enough for anything on the platform.  */
+	      long tmp[MAX_BITSIZE_MODE_ANY_MODE / 32];
 	      int bitsize = GET_MODE_BITSIZE (GET_MODE (el));
 
 	      gcc_assert (SCALAR_FLOAT_MODE_P (GET_MODE (el)));
@@ -5704,24 +5321,27 @@  simplify_immed_subreg (enum machine_mode outermode, rtx op,
 	case MODE_INT:
 	case MODE_PARTIAL_INT:
 	  {
-	    unsigned HOST_WIDE_INT hi = 0, lo = 0;
-
-	    for (i = 0;
-		 i < HOST_BITS_PER_WIDE_INT && i < elem_bitsize;
-		 i += value_bit)
-	      lo |= (unsigned HOST_WIDE_INT)(*vp++ & value_mask) << i;
-	    for (; i < elem_bitsize; i += value_bit)
-	      hi |= (unsigned HOST_WIDE_INT)(*vp++ & value_mask)
-		     << (i - HOST_BITS_PER_WIDE_INT);
-
-	    /* immed_double_const doesn't call trunc_int_for_mode.  I don't
-	       know why.  */
-	    if (elem_bitsize <= HOST_BITS_PER_WIDE_INT)
-	      elems[elem] = gen_int_mode (lo, outer_submode);
-	    else if (elem_bitsize <= HOST_BITS_PER_DOUBLE_INT)
-	      elems[elem] = immed_double_const (lo, hi, outer_submode);
-	    else
-	      return NULL_RTX;
+	    int u;
+	    int base = 0;
+	    int units 
+	      = (GET_MODE_BITSIZE (outer_submode) + HOST_BITS_PER_WIDE_INT - 1) 
+	      / HOST_BITS_PER_WIDE_INT;
+	    HOST_WIDE_INT tmp[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT];
+	    wide_int r;
+
+	    for (u = 0; u < units; u++) 
+	      {
+		unsigned HOST_WIDE_INT buf = 0;
+		for (i = 0; 
+		     i < HOST_BITS_PER_WIDE_INT && base + i < elem_bitsize; 
+		     i += value_bit)
+		  buf |= (unsigned HOST_WIDE_INT)(*vp++ & value_mask) << i;
+
+		tmp[u] = buf;
+		base += HOST_BITS_PER_WIDE_INT;
+	      }
+	    r = wide_int::from_array (tmp, units, outer_submode);
+	    elems[elem] = immed_wide_int_const (r, outer_submode);
 	  }
 	  break;
 
@@ -5729,7 +5349,7 @@  simplify_immed_subreg (enum machine_mode outermode, rtx op,
 	case MODE_DECIMAL_FLOAT:
 	  {
 	    REAL_VALUE_TYPE r;
-	    long tmp[max_bitsize / 32];
+	    long tmp[MAX_BITSIZE_MODE_ANY_MODE / 32];
 
 	    /* real_from_target wants its input in words affected by
 	       FLOAT_WORDS_BIG_ENDIAN.  However, we ignore this,
diff --git a/gcc/tree-ssa-address.c b/gcc/tree-ssa-address.c
index cfd42ad..85b1552 100644
--- a/gcc/tree-ssa-address.c
+++ b/gcc/tree-ssa-address.c
@@ -189,15 +189,18 @@  addr_for_mem_ref (struct mem_address *addr, addr_space_t as,
   struct mem_addr_template *templ;
 
   if (addr->step && !integer_onep (addr->step))
-    st = immed_double_int_const (tree_to_double_int (addr->step), pointer_mode);
+    st = immed_wide_int_const (wide_int::from_tree (addr->step),
+			       TYPE_MODE (TREE_TYPE (addr->step)));
   else
     st = NULL_RTX;
 
   if (addr->offset && !integer_zerop (addr->offset))
-    off = immed_double_int_const
-	    (tree_to_double_int (addr->offset)
-	     .sext (TYPE_PRECISION (TREE_TYPE (addr->offset))),
-	     pointer_mode);
+    {
+      wide_int dc = wide_int::from_tree (addr->offset);
+      dc = dc.sforce_to_size (TREE_TYPE (addr->offset));
+      off = immed_wide_int_const (dc,
+				  TYPE_MODE (TREE_TYPE (addr->offset)));
+    }
   else
     off = NULL_RTX;
 
diff --git a/gcc/tree.c b/gcc/tree.c
index d8f2424..f40871a 100644
--- a/gcc/tree.c
+++ b/gcc/tree.c
@@ -59,6 +59,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "except.h"
 #include "debug.h"
 #include "intl.h"
+#include "wide-int.h"
 
 /* Tree code classes.  */
 
@@ -1067,6 +1068,34 @@  double_int_to_tree (tree type, double_int cst)
   return build_int_cst_wide (type, cst.low, cst.high);
 }
 
+/* Constructs a tree of type TYPE with the value given by CST.  Signedness
+   of CST is assumed to be the same as the signedness of TYPE.  */
+
+tree
+wide_int_to_tree (tree type, const wide_int &cst)
+{
+  wide_int v;
+  unsigned int new_prec = TYPE_PRECISION (type);
+
+  gcc_assert (cst.get_len () <= 2);
+  wide_int::SignOp sgn
+    = TYPE_UNSIGNED (type) ? wide_int::UNSIGNED : wide_int::SIGNED;
+
+  /* This is something of a temporary hack.  The current rep of an
+     INT_CST looks at all of the bits, even those past the precision
+     of the type.  So we have to accommodate this.  The first arm
+     truncates when the type's precision is shorter than the current
+     rep; the second simply extends what is there to the full size of
+     the INT_CST.  */
+  if (new_prec < cst.get_precision ())
+    v = cst.zext (TYPE_PRECISION (type))
+      .force_to_size (HOST_BITS_PER_DOUBLE_INT, sgn);
+  else
+    v = cst.force_to_size (HOST_BITS_PER_DOUBLE_INT, sgn);
+
+  return build_int_cst_wide (type, v.elt (0), v.elt (1));
+}
+
 /* Returns true if CST fits into range of TYPE.  Signedness of CST is assumed
    to be the same as the signedness of TYPE.  */
 
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 4855fb1..5de9d25 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -3513,6 +3513,23 @@  loc_cmp (rtx x, rtx y)
       default:
 	gcc_unreachable ();
       }
+  if (CONST_WIDE_INT_P (x))
+    {
+      /* Compare the vector lengths first.  */
+      if (CONST_WIDE_INT_NUNITS (x) > CONST_WIDE_INT_NUNITS (y))
+	return 1;
+      else if (CONST_WIDE_INT_NUNITS (x) < CONST_WIDE_INT_NUNITS (y))
+	return -1;
+
+      /* Compare the vector elements.  */
+      for (j = CONST_WIDE_INT_NUNITS (x) - 1; j >= 0; j--)
+	{
+	  if (CONST_WIDE_INT_ELT (x, j) < CONST_WIDE_INT_ELT (y, j))
+	    return -1;
+	  if (CONST_WIDE_INT_ELT (x, j) > CONST_WIDE_INT_ELT (y, j))
+	    return 1;
+	}
+    }
 
   return 0;
 }
diff --git a/gcc/varasm.c b/gcc/varasm.c
index 2532d80..7cca674 100644
--- a/gcc/varasm.c
+++ b/gcc/varasm.c
@@ -3406,6 +3406,7 @@  const_rtx_hash_1 (rtx *xp, void *data)
   enum rtx_code code;
   hashval_t h, *hp;
   rtx x;
+  int i;
 
   x = *xp;
   code = GET_CODE (x);
@@ -3416,12 +3417,12 @@  const_rtx_hash_1 (rtx *xp, void *data)
     {
     case CONST_INT:
       hwi = INTVAL (x);
+
     fold_hwi:
       {
 	int shift = sizeof (hashval_t) * CHAR_BIT;
 	const int n = sizeof (HOST_WIDE_INT) / sizeof (hashval_t);
-	int i;
-
+
 	h ^= (hashval_t) hwi;
 	for (i = 1; i < n; ++i)
 	  {
@@ -3431,8 +3432,14 @@  const_rtx_hash_1 (rtx *xp, void *data)
       }
       break;
 
+    case CONST_WIDE_INT:
+      hwi = GET_MODE_PRECISION (mode);
+      for (i = 0; i < CONST_WIDE_INT_NUNITS (x); i++)
+	hwi ^= CONST_WIDE_INT_ELT (x, i);
+      goto fold_hwi;
+
     case CONST_DOUBLE:
-      if (mode == VOIDmode)
+      if (TARGET_SUPPORTS_WIDE_INT == 0 && mode == VOIDmode)
 	{
 	  hwi = CONST_DOUBLE_LOW (x) ^ CONST_DOUBLE_HIGH (x);
 	  goto fold_hwi;