
patch to fix constant math - 8th patch - tree-vrp.c

Message ID 5092F406.9060101@naturalbridge.com
State New

Commit Message

Kenneth Zadeck Nov. 1, 2012, 10:13 p.m. UTC
This patch converts tree-vrp.c to use wide-int.  In doing so it gets rid
of all restrictions that this pass currently has on the target or source
word size.

The pass's reliance on a finite "infinite precision" representation has
been preserved.  It first scans the function being compiled to determine
the largest type that needs to be represented within that function and
then uses some multiple of that size as its definition of infinite.
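
To make this concrete, here is a standalone sketch (illustration only:
the real look_for_largest and largest_initialize named in the changelog
below walk the function's gimple; everything else here is invented):

    /* Model of the "finite infinite precision" setup: remember the
       widest precision seen in the function, then define "infinite"
       as a fixed multiple of it.  */
    #include <algorithm>
    #include <vector>

    static int largest_precision;

    /* Model of look_for_largest: track the widest precision seen.  */
    static void
    look_for_largest (int type_precision)
    {
      largest_precision = std::max (largest_precision, type_precision);
    }

    /* Model of largest_initialize: MULTIPLE (currently 4, possibly 2,
       see below) fixes the working "infinite" precision.  */
    static void
    largest_initialize (const std::vector<int> &precisions, int multiple)
    {
      largest_precision = 0;
      for (unsigned int i = 0; i < precisions.size (); i++)
        look_for_largest (precisions[i]);
      largest_precision *= multiple;
    }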

I am currently using 4 for this value.  However, Marc Glisse claims that
this may be due to a bug and that the value should be 2.  This is
something that has to be investigated further; it could easily be my
mistake or some other issue that has crept into the pass.  The value of
2 or 4 is easily changed in largest_initialize.  The only truly
non-mechanical transformation is in the code that multiplies two
ranges.  That code uses the wide-int full-multiply functions rather
than pairs of double-ints.
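
As a rough standalone model of that transformation (uint32_t/uint64_t
stand in for wide-int values and their full products; umul_full is the
primitive the patch uses, the signed-operand compensation in the real
code is omitted, and everything else is invented):

    #include <algorithm>
    #include <stdint.h>

    /* Model of the range multiply: form the four corner products in
       double width, where they are exact, then take their min/max.  */
    static void
    mul_range (uint32_t min0, uint32_t max0, uint32_t min1, uint32_t max1,
               uint64_t *lo, uint64_t *hi)
    {
      uint64_t prod[4];
      prod[0] = (uint64_t) min0 * min1;  /* exact in double width */
      prod[1] = (uint64_t) min0 * max1;
      prod[2] = (uint64_t) max0 * min1;
      prod[3] = (uint64_t) max0 * max1;
      *lo = *std::min_element (prod, prod + 4);
      *hi = *std::max_element (prod, prod + 4);
    }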

Most of the changes in the first part of the changelog relate to
changing external functions to return wide-ints rather than
double-ints, since C++ does not have result polymorphism.
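
That constraint is core C++: two functions cannot be overloaded on
their return type alone, so the double_int and wide_int flavours need
distinct names.  A minimal illustration (types reduced to stubs):

    struct double_int { long low; long high; };
    struct wide_int_stub { /* details omitted */ };
    struct tree_node;

    /* The renamed double_int flavour and the wide_int flavour can
       coexist...  */
    double_int    mem_ref_offset_as_double (const tree_node *ref);
    wide_int_stub mem_ref_offset (const tree_node *ref);

    /* ...but this would be ill-formed, since it differs from the
       declaration above only in its return type:
       double_int mem_ref_offset (const tree_node *ref);  */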

This patch depends on a freshened version of the wide-int class.

This patch has been bootstrapped and tested on x86-64.

kenny

2012-11-01  Kenneth Zadeck  <zadeck@naturalbridge.com>

     * builtins.c (get_object_alignment_2, fold_builtin_memory_op):
     Renamed mem_ref_offset to mem_ref_offset_as_double.
     * cfgloop.h (max_loop_iterations): New overload taking a wide_int
     pointer and bitsize and precision params.
     * expr.c (get_inner_reference, expand_expr_real_1): Renamed
     mem_ref_offset to mem_ref_offset_as_double.
     * gimple-fold.c (get_base_constructor): Ditto.
     * gimple-ssa-strength-reduction.c (restructure_reference): Ditto.
     * ipa-prop.c (compute_complex_assign_jump_func,
     get_ancestor_addr_info): Ditto.
     * tree-data-ref.c (dr_analyze_innermost): Ditto.
     * tree-dfa.c (get_ref_base_and_extent): Ditto.
     * tree-flow-inline.h (get_addr_base_and_unit_offset_1): Ditto.
     * tree-object-size.c (compute_object_offset, addr_object_size): Ditto.
     * tree-sra.c (sra_ipa_modify_expr): Ditto.
     * tree-ssa-address.c (copy_ref_info): Ditto.
     * tree-ssa-alias.c (indirect_ref_may_alias_decl_p): Ditto.
     * tree-ssa-forwprop.c (forward_propagate_addr_expr_1,
     constant_pointer_difference): Ditto.
     * tree-ssa-loop-niter.c (max_loop_iterations): New function.
     * tree-ssa-phiopt.c (jump_function_from_stmt): Renamed mem_ref_offset
     to mem_ref_offset_as_double.
     * tree-ssa-sccvn.c (vn_reference_maybe_forwprop_address): Ditto.
     * tree-ssa.c (non_rewritable_mem_ref_base): Ditto.
     * tree-vect-data-refs.c (vect_check_gather): Ditto.
     * tree-vrp.c (largest_precision, largest_bitsize): New variables.
     (tree_to_infinite_wide_int, look_for_largest, largest_initialize):
     New functions.
     (extract_range_from_assert, vrp_int_const_binop,
     zero_nonzero_bits_from_vr, ranges_from_anti_range,
     extract_range_from_multiplicative_op_1,
     extract_range_from_binary_expr_1, adjust_range_with_scev,
     register_new_assert_for, extract_code_and_val_from_cond_with_ops,
     register_edge_assert_for_2, search_for_addr_array,
     simplify_bit_ops_using_ranges, simplify_switch_using_ranges,
     simplify_conversion_using_ranges, range_fits_type_p,
     simplify_float_conversion_using_ranges, execute_vrp): Convert from
     double-int operations to wide-int operations.
     * tree.c (wide_int_to_infinite_tree): New function.
     * varasm.c (decode_addr_const): Renamed mem_ref_offset to
     mem_ref_offset_as_double.

Comments

Marc Glisse Nov. 1, 2012, 10:28 p.m. UTC | #1
On Thu, 1 Nov 2012, Kenneth Zadeck wrote:

> This patch converts tree-vrp.c to use wide-int.  In doing so it gets rid
> of all restrictions that this pass currently has on the target or source
> word size.
>
> The pass's reliance on a finite "infinite precision" representation has
> been preserved.  It first scans the function being compiled to determine
> the largest type that needs to be represented within that function and
> then uses some multiple of that size as its definition of infinite.
>
> I am currently using 4 for this value.  However, Marc Glisse claims that
> this may be due to a bug and that the value should be 2.  This is
> something that has to be investigated further; it could easily be my
> mistake or some other issue that has crept into the pass.  The value of
> 2 or 4 is easily changed in largest_initialize.  The only truly
> non-mechanical transformation is in the code that multiplies two ranges.
> That code uses the wide-int full-multiply functions rather than pairs of
> double-ints.

(I didn't look at the patch (yet))

Er, no, I didn't claim that using 4 was wrong; I think it is good because
it makes things easier.  I only claimed that the current implementation
jumps through enough hoops to make do with 2.
Kenneth Zadeck Nov. 1, 2012, 10:33 p.m. UTC | #2
This patch refreshes wide-int.[ch].  Most of the changes are fixes for
bugs that turned up while converting tree-vrp.c in patch 8.

There are two significant differences:

1) There are now constructors to override the precision and bitsize that
are normally taken from the type.  These are used to implement the finite
"infinite precision" arithmetic required by the tree-vrp.c pass.  The
bitsize and precision passed in are the ones necessary to compile the
current function.
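
Schematically (a sketch with an invented layout; the real class is
built from trees and modes):

    /* Model of the two construction paths: the sizes normally come
       from the value's type, but can be overridden by the caller.  */
    struct wint_model
    {
      int bitsize, precision;
      long val;

      /* Usual case: bitsize and precision taken from the type.  */
      wint_model (long v, int type_prec)
        : bitsize (type_prec), precision (type_prec), val (v) {}

      /* Override: the pass supplies the bitsize and precision needed
         to compile the current function.  */
      wint_model (long v, int bs, int prec)
        : bitsize (bs), precision (prec), val (v) {}
    };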

2) The signed and unsigned extension functions have changed a lot.  The
ones named ext do an extension, but the result always has the bitsize and
precision of this.  The functions named force_to_size now return results
based on the precision and bitsize passed in, after doing the proper
extension.
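
A standalone model of the distinction (invented representation; only
the contract matters: ext keeps this value's own precision, while
force_to_size adopts the caller's):

    #include <stdint.h>

    struct wval
    {
      int precision;   /* number of significant bits, 1..64 here */
      int64_t v;

      /* Model of ext: sign- or zero-extend from OFFSET bits, but the
         result keeps this value's own precision.  */
      wval ext (int offset, bool uns) const
      {
        uint64_t keep = offset >= 64 ? ~0ULL : (1ULL << offset) - 1;
        uint64_t low = (uint64_t) v & keep;
        uint64_t sbit = 1ULL << (offset - 1);
        wval r = *this;  /* precision unchanged */
        r.v = uns ? (int64_t) low : (int64_t) ((low ^ sbit) - sbit);
        return r;
      }

      /* Model of force_to_size: do the proper extension first, then
         the result takes the precision passed in by the caller.  */
      wval force_to_size (int new_precision, bool uns) const
      {
        wval r = ext (precision, uns);
        r.precision = new_precision;
        return r;
      }
    };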

The second change is in line with comments made by richi and others.

kenny
Kenneth Zadeck Nov. 1, 2012, 10:35 p.m. UTC | #3
Either way, this needs to be investigated.  It could just be my mistake.
On 11/01/2012 06:28 PM, Marc Glisse wrote:
> On Thu, 1 Nov 2012, Kenneth Zadeck wrote:
>
>> This patch converts tree-vrp.c to use wide-int.  In doing so it gets
>> rid of all restrictions that this pass currently has on the target or
>> source word size.
>>
>> The pass's reliance on a finite "infinite precision" representation
>> has been preserved.  It first scans the function being compiled to
>> determine the largest type that needs to be represented within that
>> function and then uses some multiple of that size as its definition
>> of infinite.
>>
>> I am currently using 4 for this value.  However, Marc Glisse claims
>> that this may be due to a bug and that the value should be 2.  This is
>> something that has to be investigated further; it could easily be my
>> mistake or some other issue that has crept into the pass.  The value
>> of 2 or 4 is easily changed in largest_initialize.  The only truly
>> non-mechanical transformation is in the code that multiplies two
>> ranges.  That code uses the wide-int full-multiply functions rather
>> than pairs of double-ints.
>
> (I didn't look at the patch (yet))
>
> Er, no, I didn't claim that using 4 was wrong; I think it is good
> because it makes things easier.  I only claimed that the current
> implementation jumps through enough hoops to make do with 2.
>

Patch

diff --git a/gcc/builtins.c b/gcc/builtins.c
index c1722ab..f5c9418 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -378,7 +378,7 @@  get_object_alignment_2 (tree exp, unsigned int *alignp,
 	  bitpos += ptr_bitpos;
 	  if (TREE_CODE (exp) == MEM_REF
 	      || TREE_CODE (exp) == TARGET_MEM_REF)
-	    bitpos += mem_ref_offset (exp).low * BITS_PER_UNIT;
+	    bitpos += mem_ref_offset_as_double (exp).low * BITS_PER_UNIT;
 	}
     }
   else if (TREE_CODE (exp) == STRING_CST)
@@ -8784,12 +8784,12 @@  fold_builtin_memory_op (location_t loc, tree dest, tree src,
 		  if (! operand_equal_p (TREE_OPERAND (src_base, 0),
 					 TREE_OPERAND (dest_base, 0), 0))
 		    return NULL_TREE;
-		  off = mem_ref_offset (src_base) +
+		  off = mem_ref_offset_as_double (src_base) +
 					double_int::from_shwi (src_offset);
 		  if (!off.fits_shwi ())
 		    return NULL_TREE;
 		  src_offset = off.low;
-		  off = mem_ref_offset (dest_base) +
+		  off = mem_ref_offset_as_double (dest_base) +
 					double_int::from_shwi (dest_offset);
 		  if (!off.fits_shwi ())
 		    return NULL_TREE;
diff --git a/gcc/cfgloop.h b/gcc/cfgloop.h
index e0a370f..dd3748a 100644
--- a/gcc/cfgloop.h
+++ b/gcc/cfgloop.h
@@ -24,6 +24,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "basic-block.h"
 #include "vecprim.h"
 #include "double-int.h"
+#include "wide-int.h"
 
 #include "bitmap.h"
 #include "sbitmap.h"
@@ -288,6 +289,7 @@  void estimate_numbers_of_iterations_loop (struct loop *);
 void record_niter_bound (struct loop *, double_int, bool, bool);
 bool estimated_loop_iterations (struct loop *, double_int *);
 bool max_loop_iterations (struct loop *, double_int *);
+bool max_loop_iterations (struct loop *, wide_int *, int, int);
 HOST_WIDE_INT estimated_loop_iterations_int (struct loop *);
 HOST_WIDE_INT max_loop_iterations_int (struct loop *);
 bool max_stmt_executions (struct loop *, double_int *);
diff --git a/gcc/expr.c b/gcc/expr.c
index fe27819..31adc26 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -6678,7 +6678,7 @@  get_inner_reference (tree exp, HOST_WIDE_INT *pbitsize,
 	      tree off = TREE_OPERAND (exp, 1);
 	      if (!integer_zerop (off))
 		{
-		  double_int boff, coff = mem_ref_offset (exp);
+		  double_int boff, coff = mem_ref_offset_as_double (exp);
 		  boff = coff.alshift (BITS_PER_UNIT == 8
 				       ? 3 : exact_log2 (BITS_PER_UNIT),
 				       HOST_BITS_PER_DOUBLE_INT);
@@ -9533,7 +9533,7 @@  expand_expr_real_1 (tree exp, rtx target, enum machine_mode tmode,
 	   might end up in a register.  */
 	if (mem_ref_refers_to_non_mem_p (exp))
 	  {
-	    HOST_WIDE_INT offset = mem_ref_offset (exp).low;
+	    HOST_WIDE_INT offset = mem_ref_offset_as_double (exp).low;
 	    tree bit_offset;
 	    tree bftype;
 	    base = TREE_OPERAND (base, 0);
@@ -9577,8 +9577,7 @@  expand_expr_real_1 (tree exp, rtx target, enum machine_mode tmode,
 	op0 = memory_address_addr_space (address_mode, op0, as);
 	if (!integer_zerop (TREE_OPERAND (exp, 1)))
 	  {
-	    wide_int wi = wide_int::from_double_int
-	      (mem_ref_offset (exp), address_mode);
+	    wide_int wi = mem_ref_offset (exp);
 	    rtx off = immed_wide_int_const (wi, address_mode);
 	    op0 = simplify_gen_binary (PLUS, address_mode, op0, off);
 	  }
diff --git a/gcc/gimple-fold.c b/gcc/gimple-fold.c
index 5ff70a2..57f3914 100644
--- a/gcc/gimple-fold.c
+++ b/gcc/gimple-fold.c
@@ -2688,7 +2688,7 @@  get_base_constructor (tree base, HOST_WIDE_INT *bit_offset,
 	{
 	  if (!tree_fits_shwi_p (TREE_OPERAND (base, 1)))
 	    return NULL_TREE;
-	  *bit_offset += (mem_ref_offset (base).low
+	  *bit_offset += (mem_ref_offset_as_double (base).low
 			  * BITS_PER_UNIT);
 	}
 
diff --git a/gcc/gimple-ssa-strength-reduction.c b/gcc/gimple-ssa-strength-reduction.c
index 6ee5c7a..eb141c0 100644
--- a/gcc/gimple-ssa-strength-reduction.c
+++ b/gcc/gimple-ssa-strength-reduction.c
@@ -557,7 +557,7 @@  restructure_reference (tree *pbase, tree *poffset, double_int *pindex,
     return false;
 
   t1 = TREE_OPERAND (base, 0);
-  c1 = mem_ref_offset (base);
+  c1 = mem_ref_offset_as_double (base);
   type = TREE_TYPE (TREE_OPERAND (base, 1));
 
   mult_op0 = TREE_OPERAND (offset, 0);
diff --git a/gcc/ipa-prop.c b/gcc/ipa-prop.c
index 633cbc4..243847e 100644
--- a/gcc/ipa-prop.c
+++ b/gcc/ipa-prop.c
@@ -933,7 +933,7 @@  compute_complex_assign_jump_func (struct ipa_node_params *info,
       || max_size == -1
       || max_size != size)
     return;
-  offset += mem_ref_offset (base).low * BITS_PER_UNIT;
+  offset += mem_ref_offset_as_double (base).low * BITS_PER_UNIT;
   ssa = TREE_OPERAND (base, 0);
   if (TREE_CODE (ssa) != SSA_NAME
       || !SSA_NAME_IS_DEFAULT_DEF (ssa)
@@ -988,7 +988,7 @@  get_ancestor_addr_info (gimple assign, tree *obj_p, HOST_WIDE_INT *offset)
       || TREE_CODE (SSA_NAME_VAR (parm)) != PARM_DECL)
     return NULL_TREE;
 
-  *offset += mem_ref_offset (expr).low * BITS_PER_UNIT;
+  *offset += mem_ref_offset_as_double (expr).low * BITS_PER_UNIT;
   *obj_p = obj;
   return expr;
 }
diff --git a/gcc/tree-data-ref.c b/gcc/tree-data-ref.c
index b606210..42ad127 100644
--- a/gcc/tree-data-ref.c
+++ b/gcc/tree-data-ref.c
@@ -720,7 +720,7 @@  dr_analyze_innermost (struct data_reference *dr, struct loop *nest)
     {
       if (!integer_zerop (TREE_OPERAND (base, 1)))
 	{
-	  double_int moff = mem_ref_offset (base);
+	  double_int moff = mem_ref_offset_as_double (base);
 	  tree mofft = double_int_to_tree (sizetype, moff);
 	  if (!poffset)
 	    poffset = mofft;
diff --git a/gcc/tree-dfa.c b/gcc/tree-dfa.c
index 9be06aa..03814bc 100644
--- a/gcc/tree-dfa.c
+++ b/gcc/tree-dfa.c
@@ -552,7 +552,7 @@  get_ref_base_and_extent (tree exp, HOST_WIDE_INT *poffset,
 		exp = TREE_OPERAND (TREE_OPERAND (exp, 0), 0);
 	      else
 		{
-		  double_int off = mem_ref_offset (exp);
+		  double_int off = mem_ref_offset_as_double (exp);
 		  off = off.alshift (BITS_PER_UNIT == 8
 				     ? 3 : exact_log2 (BITS_PER_UNIT),
 				     HOST_BITS_PER_DOUBLE_INT);
@@ -583,7 +583,7 @@  get_ref_base_and_extent (tree exp, HOST_WIDE_INT *poffset,
 		exp = TREE_OPERAND (TMR_BASE (exp), 0);
 	      else
 		{
-		  double_int off = mem_ref_offset (exp);
+		  double_int off = mem_ref_offset_as_double (exp);
 		  off = off.alshift (BITS_PER_UNIT == 8
 				     ? 3 : exact_log2 (BITS_PER_UNIT),
 				     HOST_BITS_PER_DOUBLE_INT);
diff --git a/gcc/tree-flow-inline.h b/gcc/tree-flow-inline.h
index a433223..1e59aa1 100644
--- a/gcc/tree-flow-inline.h
+++ b/gcc/tree-flow-inline.h
@@ -1269,7 +1269,7 @@  get_addr_base_and_unit_offset_1 (tree exp, HOST_WIDE_INT *poffset,
 	      {
 		if (!integer_zerop (TREE_OPERAND (exp, 1)))
 		  {
-		    double_int off = mem_ref_offset (exp);
+		    double_int off = mem_ref_offset_as_double (exp);
 		    gcc_assert (off.high == -1 || off.high == 0);
 		    byte_offset += off.to_shwi ();
 		  }
@@ -1292,7 +1292,7 @@  get_addr_base_and_unit_offset_1 (tree exp, HOST_WIDE_INT *poffset,
 		  return NULL_TREE;
 		if (!integer_zerop (TMR_OFFSET (exp)))
 		  {
-		    double_int off = mem_ref_offset (exp);
+		    double_int off = mem_ref_offset_as_double (exp);
 		    gcc_assert (off.high == -1 || off.high == 0);
 		    byte_offset += off.to_shwi ();
 		  }
diff --git a/gcc/tree-object-size.c b/gcc/tree-object-size.c
index c9e30f7..1a4a4d9 100644
--- a/gcc/tree-object-size.c
+++ b/gcc/tree-object-size.c
@@ -142,7 +142,7 @@  compute_object_offset (const_tree expr, const_tree var)
 
     case MEM_REF:
       gcc_assert (TREE_CODE (TREE_OPERAND (expr, 0)) == ADDR_EXPR);
-      return double_int_to_tree (sizetype, mem_ref_offset (expr));
+      return double_int_to_tree (sizetype, mem_ref_offset_as_double (expr));
 
     default:
       return error_mark_node;
@@ -192,7 +192,7 @@  addr_object_size (struct object_size_info *osi, const_tree ptr,
 	}
       if (sz != unknown[object_size_type])
 	{
-	  double_int dsz = double_int::from_uhwi (sz) - mem_ref_offset (pt_var);
+	  double_int dsz = double_int::from_uhwi (sz) - mem_ref_offset_as_double (pt_var);
 	  if (dsz.is_negative ())
 	    sz = 0;
 	  else if (dsz.fits_uhwi ())
diff --git a/gcc/tree-sra.c b/gcc/tree-sra.c
index a172d91..5ccecbc 100644
--- a/gcc/tree-sra.c
+++ b/gcc/tree-sra.c
@@ -4304,7 +4304,7 @@  sra_ipa_modify_expr (tree *expr, bool convert,
 
   if (TREE_CODE (base) == MEM_REF)
     {
-      offset += mem_ref_offset (base).low * BITS_PER_UNIT;
+      offset += mem_ref_offset_as_double (base).low * BITS_PER_UNIT;
       base = TREE_OPERAND (base, 0);
     }
 
diff --git a/gcc/tree-ssa-address.c b/gcc/tree-ssa-address.c
index 932c1e1..c7875ee 100644
--- a/gcc/tree-ssa-address.c
+++ b/gcc/tree-ssa-address.c
@@ -869,8 +869,8 @@  copy_ref_info (tree new_ref, tree old_ref)
 			   && (tree_to_shwi (TMR_STEP (new_ref))
 			       < align)))))
 	    {
-	      unsigned int inc = (mem_ref_offset (old_ref)
-				  - mem_ref_offset (new_ref)).low;
+	      unsigned int inc = (mem_ref_offset_as_double (old_ref)
+				  - mem_ref_offset_as_double (new_ref)).low;
 	      adjust_ptr_info_misalignment (new_pi, inc);
 	    }
 	  else
diff --git a/gcc/tree-ssa-alias.c b/gcc/tree-ssa-alias.c
index 6eabb70..8624ce2 100644
--- a/gcc/tree-ssa-alias.c
+++ b/gcc/tree-ssa-alias.c
@@ -755,7 +755,7 @@  indirect_ref_may_alias_decl_p (tree ref1 ATTRIBUTE_UNUSED, tree base1,
 
   /* The offset embedded in MEM_REFs can be negative.  Bias them
      so that the resulting offset adjustment is positive.  */
-  moff = mem_ref_offset (base1);
+  moff = mem_ref_offset_as_double (base1);
   moff = moff.alshift (BITS_PER_UNIT == 8
 		       ? 3 : exact_log2 (BITS_PER_UNIT),
 		       HOST_BITS_PER_DOUBLE_INT);
@@ -833,7 +833,7 @@  indirect_ref_may_alias_decl_p (tree ref1 ATTRIBUTE_UNUSED, tree base1,
   if (TREE_CODE (dbase2) == MEM_REF
       || TREE_CODE (dbase2) == TARGET_MEM_REF)
     {
-      double_int moff = mem_ref_offset (dbase2);
+      double_int moff = mem_ref_offset_as_double (dbase2);
       moff = moff.alshift (BITS_PER_UNIT == 8
 			   ? 3 : exact_log2 (BITS_PER_UNIT),
 			   HOST_BITS_PER_DOUBLE_INT);
@@ -929,7 +929,7 @@  indirect_refs_may_alias_p (tree ref1 ATTRIBUTE_UNUSED, tree base1,
       double_int moff;
       /* The offset embedded in MEM_REFs can be negative.  Bias them
 	 so that the resulting offset adjustment is positive.  */
-      moff = mem_ref_offset (base1);
+      moff = mem_ref_offset_as_double (base1);
       moff = moff.alshift (BITS_PER_UNIT == 8
 			   ? 3 : exact_log2 (BITS_PER_UNIT),
 			   HOST_BITS_PER_DOUBLE_INT);
@@ -937,7 +937,7 @@  indirect_refs_may_alias_p (tree ref1 ATTRIBUTE_UNUSED, tree base1,
 	offset2 += (-moff).low;
       else
 	offset1 += moff.low;
-      moff = mem_ref_offset (base2);
+      moff = mem_ref_offset_as_double (base2);
       moff = moff.alshift (BITS_PER_UNIT == 8
 			   ? 3 : exact_log2 (BITS_PER_UNIT),
 			   HOST_BITS_PER_DOUBLE_INT);
diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c
index 703325a..601641e 100644
--- a/gcc/tree-ssa-forwprop.c
+++ b/gcc/tree-ssa-forwprop.c
@@ -799,12 +799,12 @@  forward_propagate_addr_expr_1 (tree name, tree def_rhs,
       if ((def_rhs_base = get_addr_base_and_unit_offset (TREE_OPERAND (def_rhs, 0),
 							 &def_rhs_offset)))
 	{
-	  double_int off = mem_ref_offset (lhs);
+	  double_int off = mem_ref_offset_as_double (lhs);
 	  tree new_ptr;
 	  off += double_int::from_shwi (def_rhs_offset);
 	  if (TREE_CODE (def_rhs_base) == MEM_REF)
 	    {
-	      off += mem_ref_offset (def_rhs_base);
+	      off += mem_ref_offset_as_double (def_rhs_base);
 	      new_ptr = TREE_OPERAND (def_rhs_base, 0);
 	    }
 	  else
@@ -883,12 +883,12 @@  forward_propagate_addr_expr_1 (tree name, tree def_rhs,
       if ((def_rhs_base = get_addr_base_and_unit_offset (TREE_OPERAND (def_rhs, 0),
 							 &def_rhs_offset)))
 	{
-	  double_int off = mem_ref_offset (rhs);
+	  double_int off = mem_ref_offset_as_double (rhs);
 	  tree new_ptr;
 	  off += double_int::from_shwi (def_rhs_offset);
 	  if (TREE_CODE (def_rhs_base) == MEM_REF)
 	    {
-	      off += mem_ref_offset (def_rhs_base);
+	      off += mem_ref_offset_as_double (def_rhs_base);
 	      new_ptr = TREE_OPERAND (def_rhs_base, 0);
 	    }
 	  else
@@ -1352,7 +1352,7 @@  constant_pointer_difference (tree p1, tree p2)
 		  p = TREE_OPERAND (q, 0);
 		  off = size_binop (PLUS_EXPR, off,
 				    double_int_to_tree (sizetype,
-							mem_ref_offset (q)));
+							mem_ref_offset_as_double (q)));
 		}
 	      else
 		{
diff --git a/gcc/tree-ssa-loop-niter.c b/gcc/tree-ssa-loop-niter.c
index fe4db37..27e2d71 100644
--- a/gcc/tree-ssa-loop-niter.c
+++ b/gcc/tree-ssa-loop-niter.c
@@ -3035,6 +3035,23 @@  max_loop_iterations (struct loop *loop, double_int *nit)
   return true;
 }
 
+/* Sets NIT to an upper bound for the maximum number of executions of the
+   latch of the LOOP.  If we have no reliable estimate, the function returns
+   false, otherwise returns true.  */
+
+bool
+max_loop_iterations (struct loop *loop, wide_int *nit,
+		     int bitsize, int precision)
+{
+  estimate_numbers_of_iterations_loop (loop);
+  if (!loop->any_upper_bound)
+    return false;
+
+  *nit = wide_int::from_double_int (loop->nb_iterations_upper_bound,
+				    bitsize, precision);
+  return true;
+}
+
 /* Similar to estimated_loop_iterations, but returns the estimate only
    if it fits to HOST_WIDE_INT.  If this is not the case, or the estimate
    on the number of iterations of LOOP could not be derived, returns -1.  */
diff --git a/gcc/tree-ssa-phiopt.c b/gcc/tree-ssa-phiopt.c
index e3e90d5..3055b1e 100644
--- a/gcc/tree-ssa-phiopt.c
+++ b/gcc/tree-ssa-phiopt.c
@@ -702,7 +702,7 @@  jump_function_from_stmt (tree *arg, gimple stmt)
 						&offset);
       if (tem
 	  && TREE_CODE (tem) == MEM_REF
-	  && (mem_ref_offset (tem) + double_int::from_shwi (offset)).is_zero ())
+	  && (mem_ref_offset_as_double (tem) + double_int::from_shwi (offset)).is_zero ())
 	{
 	  *arg = TREE_OPERAND (tem, 0);
 	  return true;
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index 837fe6d..91fc76b 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -1124,7 +1124,7 @@  vn_reference_maybe_forwprop_address (VEC (vn_reference_op_s, heap) **ops,
 	return;
 
       off += double_int::from_shwi (addr_offset);
-      off += mem_ref_offset (addr_base);
+      off += mem_ref_offset_as_double (addr_base);
       op->op0 = TREE_OPERAND (addr_base, 0);
     }
   else
diff --git a/gcc/tree-ssa.c b/gcc/tree-ssa.c
index 7ba11e1..1fee91e 100644
--- a/gcc/tree-ssa.c
+++ b/gcc/tree-ssa.c
@@ -1833,9 +1833,9 @@  non_rewritable_mem_ref_base (tree ref)
 	   || TREE_CODE (TREE_TYPE (decl)) == COMPLEX_TYPE)
 	  && useless_type_conversion_p (TREE_TYPE (base),
 					TREE_TYPE (TREE_TYPE (decl)))
-	  && mem_ref_offset (base).fits_uhwi ()
+	  && mem_ref_offset_as_double (base).fits_uhwi ()
 	  && tree_to_double_int (TYPE_SIZE_UNIT (TREE_TYPE (decl)))
-	     .ugt (mem_ref_offset (base))
+	     .ugt (mem_ref_offset_as_double (base))
 	  && multiple_of_p (sizetype, TREE_OPERAND (base, 1),
 			    TYPE_SIZE_UNIT (TREE_TYPE (base))))
 	return NULL_TREE;
diff --git a/gcc/tree-vect-data-refs.c b/gcc/tree-vect-data-refs.c
index d93c5b7..d16e345 100644
--- a/gcc/tree-vect-data-refs.c
+++ b/gcc/tree-vect-data-refs.c
@@ -2730,7 +2730,7 @@  vect_check_gather (gimple stmt, loop_vec_info loop_vinfo, tree *basep,
 	{
 	  if (off == NULL_TREE)
 	    {
-	      double_int moff = mem_ref_offset (base);
+	      double_int moff = mem_ref_offset_as_double (base);
 	      off = double_int_to_tree (sizetype, moff);
 	    }
 	  else
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index 9003ac5..85d6f6d 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -80,6 +80,12 @@  typedef struct value_range_d value_range_t;
    for still active basic-blocks.  */
 static sbitmap *live;
 
+/* The largest precision and bitsize used in any code in this
+   function.  We use these to determine the value of infinity for
+   doing infinite precision math.  */
+static int largest_precision;
+static int largest_bitsize;
+
 /* Return true if the SSA name NAME is live on the edge E.  */
 
 static bool
@@ -387,6 +393,15 @@  nonnull_arg_p (const_tree arg)
   return false;
 }
 
+/* Convert a cst to an infinitely sized wide_int.  Infinite is
+   defined to be the largest type seen in this function.  */
+static wide_int
+tree_to_infinite_wide_int (const_tree cst)
+{
+  wide_int result = wide_int::from_tree_as_infinite_precision
+    (cst, largest_bitsize, largest_precision);
+  return result;
+}
 
 /* Set value range VR to VR_UNDEFINED.  */
 
@@ -967,7 +982,7 @@  gimple_assign_nonnegative_warnv_p (gimple stmt, bool *strict_overflow_p)
     }
 }
 
-/* Return true if return value of call STMT is know to be non-negative.
+/* Return true if return value of call STMT is known to be non-negative.
    If the return value is based on the assumption that signed overflow is
    undefined, set *STRICT_OVERFLOW_P to true; otherwise, don't change
    *STRICT_OVERFLOW_P.*/
@@ -987,7 +1002,7 @@  gimple_call_nonnegative_warnv_p (gimple stmt, bool *strict_overflow_p)
 					strict_overflow_p);
 }
 
-/* Return true if STMT is know to to compute a non-negative value.
+/* Return true if STMT is known to compute a non-negative value.
    If the return value is based on the assumption that signed overflow is
    undefined, set *STRICT_OVERFLOW_P to true; otherwise, don't change
    *STRICT_OVERFLOW_P.*/
@@ -1006,7 +1021,7 @@  gimple_stmt_nonnegative_warnv_p (gimple stmt, bool *strict_overflow_p)
     }
 }
 
-/* Return true if the result of assignment STMT is know to be non-zero.
+/* Return true if the result of assignment STMT is known to be non-zero.
    If the return value is based on the assumption that signed overflow is
    undefined, set *STRICT_OVERFLOW_P to true; otherwise, don't change
    *STRICT_OVERFLOW_P.*/
@@ -1040,7 +1055,7 @@  gimple_assign_nonzero_warnv_p (gimple stmt, bool *strict_overflow_p)
     }
 }
 
-/* Return true if STMT is know to to compute a non-zero value.
+/* Return true if STMT is known to compute a non-zero value.
    If the return value is based on the assumption that signed overflow is
    undefined, set *STRICT_OVERFLOW_P to true; otherwise, don't change
    *STRICT_OVERFLOW_P.*/
@@ -1613,10 +1628,12 @@  extract_range_from_assert (value_range_t *vr_p, tree expr)
       /* Make sure to not set TREE_OVERFLOW on the final type
 	 conversion.  We are willingly interpreting large positive
 	 unsigned values as negative singed values here.  */
-      min = force_fit_type_double (TREE_TYPE (var), tree_to_double_int (min),
-				   0, false);
-      max = force_fit_type_double (TREE_TYPE (var), tree_to_double_int (max),
-				   0, false);
+      min = wide_int_to_infinite_tree (TREE_TYPE (var), 
+				       tree_to_infinite_wide_int (min),
+				       largest_precision);
+      max = wide_int_to_infinite_tree (TREE_TYPE (var), 
+				       tree_to_infinite_wide_int (max),
+				       largest_precision);
 
       /* We can transform a max, min range to an anti-range or
          vice-versa.  Use set_and_canonicalize_value_range which does
@@ -1962,7 +1979,7 @@  vrp_int_const_binop (enum tree_code code, tree val1, tree val2)
 }
 
 
-/* For range VR compute two double_int bitmasks.  In *MAY_BE_NONZERO
+/* For range VR compute two wide_int bitmasks.  In *MAY_BE_NONZERO
    bitmask if some bit is unset, it means for all numbers in the range
    the bit is 0, otherwise it might be 0 or 1.  In *MUST_BE_NONZERO
    bitmask if some bit is set, it means for all numbers in the range
@@ -1970,11 +1987,11 @@  vrp_int_const_binop (enum tree_code code, tree val1, tree val2)
 
 static bool
 zero_nonzero_bits_from_vr (value_range_t *vr,
-			   double_int *may_be_nonzero,
-			   double_int *must_be_nonzero)
+			   wide_int *may_be_nonzero,
+			   wide_int *must_be_nonzero)
 {
-  *may_be_nonzero = double_int_minus_one;
-  *must_be_nonzero = double_int_zero;
+  *may_be_nonzero = wide_int::minus_one (largest_bitsize, largest_precision);
+  *must_be_nonzero = wide_int::zero (largest_bitsize, largest_precision);
   if (!range_int_cst_p (vr)
       || TREE_OVERFLOW (vr->min)
       || TREE_OVERFLOW (vr->max))
@@ -1982,34 +1999,24 @@  zero_nonzero_bits_from_vr (value_range_t *vr,
 
   if (range_int_cst_singleton_p (vr))
     {
-      *may_be_nonzero = tree_to_double_int (vr->min);
+      *may_be_nonzero = tree_to_infinite_wide_int (vr->min);
       *must_be_nonzero = *may_be_nonzero;
     }
   else if (tree_int_cst_sgn (vr->min) >= 0
 	   || tree_int_cst_sgn (vr->max) < 0)
     {
-      double_int dmin = tree_to_double_int (vr->min);
-      double_int dmax = tree_to_double_int (vr->max);
-      double_int xor_mask = dmin ^ dmax;
+      wide_int dmin = tree_to_infinite_wide_int (vr->min);
+      wide_int dmax = tree_to_infinite_wide_int (vr->max);
+      wide_int xor_mask = dmin ^ dmax;
       *may_be_nonzero = dmin | dmax;
       *must_be_nonzero = dmin & dmax;
-      if (xor_mask.high != 0)
+      if (!xor_mask.zero_p ())
 	{
-	  unsigned HOST_WIDE_INT mask
-	      = ((unsigned HOST_WIDE_INT) 1
-		 << floor_log2 (xor_mask.high)) - 1;
-	  may_be_nonzero->low = ALL_ONES;
-	  may_be_nonzero->high |= mask;
-	  must_be_nonzero->low = 0;
-	  must_be_nonzero->high &= ~mask;
-	}
-      else if (xor_mask.low != 0)
-	{
-	  unsigned HOST_WIDE_INT mask
-	      = ((unsigned HOST_WIDE_INT) 1
-		 << floor_log2 (xor_mask.low)) - 1;
-	  may_be_nonzero->low |= mask;
-	  must_be_nonzero->low &= ~mask;
+	  wide_int mask = wide_int::mask (xor_mask.floor_log2 (), 
+					  false, largest_bitsize, 
+					  largest_precision);
+	  *may_be_nonzero = (*may_be_nonzero) | mask;
+	  *must_be_nonzero = (*must_be_nonzero).and_not (mask);
 	}
     }
 
@@ -2042,15 +2049,21 @@  ranges_from_anti_range (value_range_t *ar,
       vr0->type = VR_RANGE;
       vr0->min = vrp_val_min (type);
       vr0->max
-	= double_int_to_tree (type,
-			      tree_to_double_int (ar->min) - double_int_one);
+	= wide_int_to_infinite_tree (type,
+				     tree_to_infinite_wide_int (ar->min) 
+				     - wide_int::one (largest_bitsize, 
+						      largest_precision),
+				     largest_precision);
     }
   if (!vrp_val_is_max (ar->max))
     {
       vr1->type = VR_RANGE;
       vr1->min
-	= double_int_to_tree (type,
-			      tree_to_double_int (ar->max) + double_int_one);
+	= wide_int_to_infinite_tree (type,
+				     tree_to_infinite_wide_int (ar->max) 
+				     + wide_int::one (largest_bitsize, 
+						      largest_precision),
+				     largest_precision);
       vr1->max = vrp_val_max (type);
     }
   if (vr0->type == VR_UNDEFINED)
@@ -2216,28 +2229,6 @@  extract_range_from_multiplicative_op_1 (value_range_t *vr,
     set_value_range (vr, type, min, max, NULL);
 }
 
-/* Some quadruple precision helpers.  */
-static int
-quad_int_cmp (double_int l0, double_int h0,
-	      double_int l1, double_int h1, bool uns)
-{
-  int c = h0.cmp (h1, uns);
-  if (c != 0) return c;
-  return l0.ucmp (l1);
-}
-
-static void
-quad_int_pair_sort (double_int *l0, double_int *h0,
-		    double_int *l1, double_int *h1, bool uns)
-{
-  if (quad_int_cmp (*l0, *h0, *l1, *h1, uns) > 0)
-    {
-      double_int tmp;
-      tmp = *l0; *l0 = *l1; *l1 = tmp;
-      tmp = *h0; *h0 = *h1; *h1 = tmp;
-    }
-}
-
 /* Extract range information from a binary operation CODE based on
    the ranges of each of its operands, *VR0 and *VR1 with resulting
    type EXPR_TYPE.  The resulting range is stored in *VR.  */
@@ -2252,6 +2243,8 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
   enum value_range_type type;
   tree min = NULL_TREE, max = NULL_TREE;
   int cmp;
+  wide_int::SignOp sgn = TYPE_UNSIGNED (expr_type)
+    ? wide_int::UNSIGNED : wide_int::SIGNED;
 
   if (!INTEGRAL_TYPE_P (expr_type)
       && !POINTER_TYPE_P (expr_type))
@@ -2407,32 +2400,34 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
       /* If we have a PLUS_EXPR with two VR_RANGE integer constant
          ranges compute the precise range for such case if possible.  */
       if (range_int_cst_p (&vr0)
-	  && range_int_cst_p (&vr1)
-	  /* We need as many bits as the possibly unsigned inputs.  */
-	  && TYPE_PRECISION (expr_type) <= HOST_BITS_PER_DOUBLE_INT)
-	{
-	  double_int min0 = tree_to_double_int (vr0.min);
-	  double_int max0 = tree_to_double_int (vr0.max);
-	  double_int min1 = tree_to_double_int (vr1.min);
-	  double_int max1 = tree_to_double_int (vr1.max);
-	  bool uns = TYPE_UNSIGNED (expr_type);
-	  double_int type_min
-	    = double_int::min_value (TYPE_PRECISION (expr_type), uns);
-	  double_int type_max
-	    = double_int::max_value (TYPE_PRECISION (expr_type), uns);
-	  double_int dmin, dmax;
+	  && range_int_cst_p (&vr1))
+	{
+	  wide_int min0 = tree_to_infinite_wide_int (vr0.min);
+	  wide_int max0 = tree_to_infinite_wide_int (vr0.max);
+	  wide_int min1 = tree_to_infinite_wide_int (vr1.min);
+	  wide_int max1 = tree_to_infinite_wide_int (vr1.max);
+	  wide_int::SignOp uns = TYPE_UNSIGNED (expr_type) 
+	    ? wide_int::UNSIGNED : wide_int::SIGNED;
+	  wide_int type_min = wide_int::min_value (expr_type)
+	    .force_to_size (largest_bitsize, largest_precision, uns)
+	    .ext (TYPE_PRECISION (expr_type), uns);
+	  wide_int type_max = wide_int::max_value (expr_type)
+	    .force_to_size (largest_bitsize, largest_precision, uns)
+	    .ext (TYPE_PRECISION (expr_type), wide_int::UNSIGNED);
+	  wide_int dmin, dmax;
 	  int min_ovf = 0;
 	  int max_ovf = 0;
+	  wide_int zero = wide_int::zero (largest_bitsize, largest_precision);
 
 	  if (code == PLUS_EXPR)
 	    {
 	      dmin = min0 + min1;
 	      dmax = max0 + max1;
 
-	      /* Check for overflow in double_int.  */
-	      if (min1.cmp (double_int_zero, uns) != dmin.cmp (min0, uns))
+	      /* Check for overflow in wide_int.  */
+	      if (min1.cmp (zero, uns) != dmin.cmp (min0, uns))
 		min_ovf = min0.cmp (dmin, uns);
-	      if (max1.cmp (double_int_zero, uns) != dmax.cmp (max0, uns))
+	      if (max1.cmp (zero, uns) != dmax.cmp (max0, uns))
 		max_ovf = max0.cmp (dmax, uns);
 	    }
 	  else /* if (code == MINUS_EXPR) */
@@ -2440,9 +2435,9 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 	      dmin = min0 - max1;
 	      dmax = max0 - min1;
 
-	      if (double_int_zero.cmp (max1, uns) != dmin.cmp (min0, uns))
+	      if (zero.cmp (max1, uns) != dmin.cmp (min0, uns))
 		min_ovf = min0.cmp (max1, uns);
-	      if (double_int_zero.cmp (min1, uns) != dmax.cmp (max0, uns))
+	      if (zero.cmp (min1, uns) != dmax.cmp (max0, uns))
 		max_ovf = max0.cmp (min1, uns);
 	    }
 
@@ -2451,9 +2446,9 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 	  if (!TYPE_OVERFLOW_WRAPS (expr_type))
 	    {
 	      if (vrp_val_min (expr_type))
-		type_min = tree_to_double_int (vrp_val_min (expr_type));
+		type_min = tree_to_infinite_wide_int (vrp_val_min (expr_type));
 	      if (vrp_val_max (expr_type))
-		type_max = tree_to_double_int (vrp_val_max (expr_type));
+		type_max = tree_to_infinite_wide_int (vrp_val_max (expr_type));
 	    }
 
 	  /* Check for type overflow.  */
@@ -2476,19 +2471,16 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 	    {
 	      /* If overflow wraps, truncate the values and adjust the
 		 range kind and bounds appropriately.  */
-	      double_int tmin
-		= dmin.ext (TYPE_PRECISION (expr_type), uns);
-	      double_int tmax
-		= dmax.ext (TYPE_PRECISION (expr_type), uns);
+	      wide_int tmin = dmin.ext (TYPE_PRECISION (expr_type), uns);
+	      wide_int tmax = dmax.ext (TYPE_PRECISION (expr_type), uns);
 	      if (min_ovf == max_ovf)
 		{
 		  /* No overflow or both overflow or underflow.  The
 		     range kind stays VR_RANGE.  */
-		  min = double_int_to_tree (expr_type, tmin);
-		  max = double_int_to_tree (expr_type, tmax);
+		  min = wide_int_to_infinite_tree (expr_type, tmin, largest_precision);
+		  max = wide_int_to_infinite_tree (expr_type, tmax, largest_precision);
 		}
-	      else if (min_ovf == -1
-		       && max_ovf == 1)
+	      else if (min_ovf == -1 && max_ovf == 1)
 		{
 		  /* Underflow and overflow, drop to VR_VARYING.  */
 		  set_value_range_to_varying (vr);
@@ -2499,26 +2491,30 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 		  /* Min underflow or max overflow.  The range kind
 		     changes to VR_ANTI_RANGE.  */
 		  bool covers = false;
-		  double_int tem = tmin;
+		  wide_int tem = tmin;
 		  gcc_assert ((min_ovf == -1 && max_ovf == 0)
 			      || (max_ovf == 1 && min_ovf == 0));
 		  type = VR_ANTI_RANGE;
-		  tmin = tmax + double_int_one;
-		  if (tmin.cmp (tmax, uns) < 0)
+		  tmin = tmax + wide_int::one (largest_bitsize, 
+					       largest_precision);
+		  if (tmin.lt_p (tmax, uns))
 		    covers = true;
-		  tmax = tem + double_int_minus_one;
-		  if (tmax.cmp (tem, uns) > 0)
+		  tmax = tem + wide_int::minus_one (largest_bitsize, 
+						    largest_precision);
+		  if (tmax.gt_p (tem, uns))
 		    covers = true;
 		  /* If the anti-range would cover nothing, drop to varying.
 		     Likewise if the anti-range bounds are outside of the
 		     types values.  */
-		  if (covers || tmin.cmp (tmax, uns) > 0)
+		  if (covers || tmin.gt_p (tmax, uns))
 		    {
 		      set_value_range_to_varying (vr);
 		      return;
 		    }
-		  min = double_int_to_tree (expr_type, tmin);
-		  max = double_int_to_tree (expr_type, tmax);
+		  min = wide_int_to_infinite_tree (expr_type, tmin,
+						   largest_precision);
+		  max = wide_int_to_infinite_tree (expr_type, tmax, 
+						   largest_precision);
 		}
 	    }
 	  else
@@ -2531,7 +2527,8 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 		      && supports_overflow_infinity (expr_type))
 		    min = negative_overflow_infinity (expr_type);
 		  else
-		    min = double_int_to_tree (expr_type, type_min);
+		    min = wide_int_to_infinite_tree (expr_type, type_min, 
+						     largest_precision);
 		}
 	      else if (min_ovf == 1)
 		{
@@ -2539,10 +2536,12 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 		      && supports_overflow_infinity (expr_type))
 		    min = positive_overflow_infinity (expr_type);
 		  else
-		    min = double_int_to_tree (expr_type, type_max);
+		    min = wide_int_to_infinite_tree (expr_type, type_max,
+						     largest_precision);
 		}
 	      else
-		min = double_int_to_tree (expr_type, dmin);
+		min = wide_int_to_infinite_tree (expr_type, dmin,
+						 largest_precision);
 
 	      if (max_ovf == -1)
 		{
@@ -2550,7 +2549,8 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 		      && supports_overflow_infinity (expr_type))
 		    max = negative_overflow_infinity (expr_type);
 		  else
-		    max = double_int_to_tree (expr_type, type_min);
+		    max = wide_int_to_infinite_tree (expr_type, type_min,
+						     largest_precision);
 		}
 	      else if (max_ovf == 1)
 		{
@@ -2558,10 +2558,12 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 		      && supports_overflow_infinity (expr_type))
 		    max = positive_overflow_infinity (expr_type);
 		  else
-		    max = double_int_to_tree (expr_type, type_max);
+		    max = wide_int_to_infinite_tree (expr_type, type_max,
+						     largest_precision);
 		}
 	      else
-		max = double_int_to_tree (expr_type, dmax);
+		max = wide_int_to_infinite_tree (expr_type, dmax,
+						 largest_precision);
 	    }
 	  if (needs_overflow_infinity (expr_type)
 	      && supports_overflow_infinity (expr_type))
@@ -2624,92 +2626,133 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 	  && range_int_cst_p (&vr1)
 	  && TYPE_OVERFLOW_WRAPS (expr_type))
 	{
-	  double_int min0, max0, min1, max1, sizem1, size;
-	  double_int prod0l, prod0h, prod1l, prod1h,
-		     prod2l, prod2h, prod3l, prod3h;
-	  bool uns0, uns1, uns;
-
-	  sizem1 = double_int::max_value (TYPE_PRECISION (expr_type), true);
-	  size = sizem1 + double_int_one;
+	  wide_int min0, max0, min1, max1, sizem1, size;
+	  wide_int prod0, prod1, prod2, prod3;
+	  wide_int::SignOp uns0, uns1, uns;
+
+	  /* largest_bitsize and largest_precision are already a
+	     multiple of the size of the largest type seen in this
+	     function.  The full multiplies below produce results that
+	     are twice as wide as their operands, so the four corner
+	     products below are represented at twice the largest
+	     bitsize and precision.  */
+
+	  int bs2 = largest_bitsize * 2;
+	  int prec2 = largest_precision * 2;
+
+	  /* Use the mode here to force the unsigned max value.  */
+	  sizem1 = wide_int::max_value (TYPE_MODE (expr_type), wide_int::UNSIGNED)
+	    .force_to_size (largest_bitsize, largest_precision, wide_int::UNSIGNED)
+	    .zext (TYPE_PRECISION (expr_type));
+	  size = sizem1 + wide_int::one (largest_bitsize, largest_precision);
+
+	  uns0 = TYPE_UNSIGNED (expr_type) 
+	    ? wide_int::UNSIGNED : wide_int::SIGNED;
+
+	  min0 = tree_to_infinite_wide_int (vr0.min)
+	    .force_to_size (largest_bitsize, largest_precision, uns0);
+	  max0 = tree_to_infinite_wide_int (vr0.max)
+	    .force_to_size (largest_bitsize, largest_precision, uns0);
+	  min1 = tree_to_infinite_wide_int (vr1.min)
+	    .force_to_size (largest_bitsize, largest_precision, uns0);
+	  max1 = tree_to_infinite_wide_int (vr1.max)
+	    .force_to_size (largest_bitsize, largest_precision, uns0);
 
-	  min0 = tree_to_double_int (vr0.min);
-	  max0 = tree_to_double_int (vr0.max);
-	  min1 = tree_to_double_int (vr1.min);
-	  max1 = tree_to_double_int (vr1.max);
-
-	  uns0 = TYPE_UNSIGNED (expr_type);
 	  uns1 = uns0;
 
 	  /* Canonicalize the intervals.  */
 	  if (TYPE_UNSIGNED (expr_type))
 	    {
-	      double_int min2 = size - min0;
-	      if (min2.cmp (max0, true) < 0)
+	      wide_int min2 = size - min0;
+	      if (min2.ltu_p (max0))
 		{
-		  min0 = -min2;
-		  max0 -= size;
-		  uns0 = false;
+		  min0 = min2.neg ();
+		  max0 = max0 - size;
+		  uns0 = wide_int::SIGNED;
 		}
 
 	      min2 = size - min1;
-	      if (min2.cmp (max1, true) < 0)
+	      if (min2.ltu_p (max1))
 		{
-		  min1 = -min2;
-		  max1 -= size;
-		  uns1 = false;
+		  min1 = min2.neg ();
+		  max1 = max1 - size;
+		  uns1 = wide_int::SIGNED;
 		}
 	    }
-	  uns = uns0 & uns1;
-
-	  bool overflow;
-	  prod0l = min0.wide_mul_with_sign (min1, true, &prod0h, &overflow);
-	  if (!uns0 && min0.is_negative ())
-	    prod0h -= min1;
-	  if (!uns1 && min1.is_negative ())
-	    prod0h -= min0;
-
-	  prod1l = min0.wide_mul_with_sign (max1, true, &prod1h, &overflow);
-	  if (!uns0 && min0.is_negative ())
-	    prod1h -= max1;
-	  if (!uns1 && max1.is_negative ())
-	    prod1h -= min0;
-
-	  prod2l = max0.wide_mul_with_sign (min1, true, &prod2h, &overflow);
-	  if (!uns0 && max0.is_negative ())
-	    prod2h -= min1;
-	  if (!uns1 && min1.is_negative ())
-	    prod2h -= max0;
-
-	  prod3l = max0.wide_mul_with_sign (max1, true, &prod3h, &overflow);
-	  if (!uns0 && max0.is_negative ())
-	    prod3h -= max1;
-	  if (!uns1 && max1.is_negative ())
-	    prod3h -= max0;
-
-	  /* Sort the 4 products.  */
-	  quad_int_pair_sort (&prod0l, &prod0h, &prod3l, &prod3h, uns);
-	  quad_int_pair_sort (&prod1l, &prod1h, &prod2l, &prod2h, uns);
-	  quad_int_pair_sort (&prod0l, &prod0h, &prod1l, &prod1h, uns);
-	  quad_int_pair_sort (&prod2l, &prod2h, &prod3l, &prod3h, uns);
 
-	  /* Max - min.  */
-	  if (prod0l.is_zero ())
+	  uns = ((uns0 == wide_int::UNSIGNED) & (uns1 == wide_int::UNSIGNED))
+	    ? wide_int::UNSIGNED : wide_int::SIGNED;
+
+	  prod0 = min0.umul_full (min1);
+	  if (uns0 == wide_int::SIGNED && min0.neg_p ())
+	    prod0 = prod0 - min1.lshift (largest_precision, wide_int::NONE, 
+					 bs2, prec2);
+	  if (uns1 == wide_int::SIGNED && min1.neg_p ())
+	    prod0 = prod0 - min0.lshift (largest_precision, wide_int::NONE, 
+					 bs2, prec2);
+
+	  prod1 = min0.umul_full (max1);
+	  if (uns0 == wide_int::SIGNED && min0.neg_p ())
+	    prod1 = prod1 - max1.lshift (largest_precision, wide_int::NONE, 
+					 bs2, prec2);
+	  if (uns1 == wide_int::SIGNED && max1.neg_p ())
+	    prod1 = prod1 - min0.lshift (largest_precision, wide_int::NONE, 
+					 bs2, prec2);
+
+	  prod2 = max0.umul_full (min1);
+	  if (uns0 == wide_int::SIGNED && max0.neg_p ())
+	    prod2 = prod2 - min1.lshift (largest_precision, wide_int::NONE, 
+					 bs2, prec2);
+	  if (uns1 == wide_int::SIGNED && min1.neg_p ())
+	    prod2 = prod2 - max0.lshift (largest_precision, wide_int::NONE, 
+					 bs2, prec2);
+
+	  prod3 = max0.umul_full (max1);
+	  if (uns0 == wide_int::SIGNED && max0.neg_p ())
+	    prod3 = prod3 - max1.lshift (largest_precision, wide_int::NONE, 
+					 bs2, prec2);
+	  if (uns1 == wide_int::SIGNED && max1.neg_p ())
+	    prod3 = prod3 - max0.lshift (largest_precision, wide_int::NONE, 
+					 bs2, prec2);
+
+	  /* Sort the 4 products so that min is in prod0 and max is in
+	     prod3.  */
+
+	  /* min0min1 > max0max1 */
+	  if (!prod0.lt_p (prod3, uns))
 	    {
-	      prod1l = double_int_zero;
-	      prod1h = -prod0h;
+	      wide_int tmp = prod0;
+	      prod0 = prod3;
+	      prod3 = tmp;
 	    }
-	  else
+
+	  /* min0max1 > max0min1 */
+	  if (!prod1.lt_p (prod2, uns))
 	    {
-	      prod1l = -prod0l;
-	      prod1h = ~prod0h;
+	      wide_int tmp = prod1;
+	      prod1 = prod2;
+	      prod2 = tmp;
 	    }
-	  prod2l = prod3l + prod1l;
-	  prod2h = prod3h + prod1h;
-	  if (prod2l.ult (prod3l))
-	    prod2h += double_int_one; /* carry */
 
-	  if (!prod2h.is_zero ()
-	      || prod2l.cmp (sizem1, true) >= 0)
+	  if (!prod0.lt_p (prod1, uns))
+	    {
+	      wide_int tmp = prod0;
+	      prod0 = prod1;
+	      prod1 = tmp;
+	    }
+
+	  if (!prod2.lt_p (prod3, uns))
+	    {
+	      wide_int tmp = prod2;
+	      prod2 = prod3;
+	      prod3 = tmp;
+	    }
+
+	  /* Max - min.  */
+	  prod2 = prod3 - prod0;
+	  if (sizem1.force_to_size (bs2, prec2, 
+				    wide_int::UNSIGNED)
+	      .ltu_p (prod2))
 	    {
 	      /* the range covers all values.  */
 	      set_value_range_to_varying (vr);
@@ -2718,8 +2761,10 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 
 	  /* The following should handle the wrapping and selecting
 	     VR_ANTI_RANGE for us.  */
-	  min = double_int_to_tree (expr_type, prod0l);
-	  max = double_int_to_tree (expr_type, prod3l);
+	  min = wide_int_to_infinite_tree (expr_type, prod0,
+					   prec2);
+	  max = wide_int_to_infinite_tree (expr_type, prod3,
+					   prec2);
 	  set_and_canonicalize_value_range (vr, VR_RANGE, min, max, NULL);
 	  return;
 	}
@@ -2766,11 +2811,11 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 	      bool saved_flag_wrapv;
 	      value_range_t vr1p = VR_INITIALIZER;
 	      vr1p.type = VR_RANGE;
-	      vr1p.min
-		= double_int_to_tree (expr_type,
-				      double_int_one
-				      .llshift (tree_to_shwi (vr1.min),
-					        TYPE_PRECISION (expr_type)));
+	      vr1p.min = wide_int_to_infinite_tree 
+		(expr_type,
+		 wide_int::set_bit_in_zero (tree_to_uhwi (vr1.min),
+					    largest_bitsize, largest_precision),
+		 largest_precision);
 	      vr1p.max = vr1p.min;
 	      /* We have to use a wrapping multiply though as signed overflow
 		 on lshifts is implementation defined in C89.  */
@@ -2787,7 +2832,7 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 	      int prec = TYPE_PRECISION (expr_type);
 	      int overflow_pos = prec;
 	      int bound_shift;
-	      double_int bound, complement, low_bound, high_bound;
+	      wide_int bound, complement, low_bound, high_bound;
 	      bool uns = TYPE_UNSIGNED (expr_type);
 	      bool in_bounds = false;
 
@@ -2800,21 +2845,23 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 		 zero, which means vr1 is a singleton range of zero, which
 		 means it should be handled by the previous LSHIFT_EXPR
 		 if-clause.  */
-	      bound = double_int_one.llshift (bound_shift, prec);
-	      complement = ~(bound - double_int_one);
+	      bound = wide_int::set_bit_in_zero (bound_shift, 
+						 largest_bitsize, 
+						 largest_precision);
+	      complement = bound.neg ();
 
 	      if (uns)
 		{
 		  low_bound = bound;
 		  high_bound = complement.zext (prec);
-		  if (tree_to_double_int (vr0.max).ult (low_bound))
+		  if (tree_to_infinite_wide_int (vr0.max).ltu_p (low_bound))
 		    {
 		      /* [5, 6] << [1, 2] == [10, 24].  */
 		      /* We're shifting out only zeroes, the value increases
 			 monotonically.  */
 		      in_bounds = true;
 		    }
-		  else if (high_bound.ult (tree_to_double_int (vr0.min)))
+		  else if (high_bound.ltu_p (tree_to_infinite_wide_int (vr0.min)))
 		    {
 		      /* [0xffffff00, 0xffffffff] << [1, 2]
 		         == [0xfffffc00, 0xfffffffe].  */
@@ -2828,8 +2875,8 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 		  /* [-1, 1] << [1, 2] == [-4, 4].  */
 		  low_bound = complement.sext (prec);
 		  high_bound = bound;
-		  if (tree_to_double_int (vr0.max).slt (high_bound)
-		      && low_bound.slt (tree_to_double_int (vr0.min)))
+		  if (tree_to_infinite_wide_int (vr0.max).lts_p (high_bound)
+		      && low_bound.lts_p (tree_to_infinite_wide_int (vr0.min)))
 		    {
 		      /* For non-negative numbers, we're shifting out only
 			 zeroes, the value increases monotonically.
@@ -2965,8 +3012,8 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
   else if (code == BIT_AND_EXPR || code == BIT_IOR_EXPR || code == BIT_XOR_EXPR)
     {
       bool int_cst_range0, int_cst_range1;
-      double_int may_be_nonzero0, may_be_nonzero1;
-      double_int must_be_nonzero0, must_be_nonzero1;
+      wide_int may_be_nonzero0, may_be_nonzero1;
+      wide_int must_be_nonzero0, must_be_nonzero1;
 
       int_cst_range0 = zero_nonzero_bits_from_vr (&vr0, &may_be_nonzero0,
 						  &must_be_nonzero0);
@@ -2976,10 +3023,13 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
       type = VR_RANGE;
       if (code == BIT_AND_EXPR)
 	{
-	  double_int dmax;
-	  min = double_int_to_tree (expr_type,
-				    must_be_nonzero0 & must_be_nonzero1);
+	  wide_int dmax;
+
+	  min = wide_int_to_infinite_tree (expr_type,
+					   must_be_nonzero0 & must_be_nonzero1,
+					   largest_precision);
 	  dmax = may_be_nonzero0 & may_be_nonzero1;
+
 	  /* If both input ranges contain only negative values we can
 	     truncate the result range maximum to the minimum of the
 	     input range maxima.  */
@@ -2987,27 +3037,26 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 	      && tree_int_cst_sgn (vr0.max) < 0
 	      && tree_int_cst_sgn (vr1.max) < 0)
 	    {
-	      dmax = dmax.min (tree_to_double_int (vr0.max),
-				     TYPE_UNSIGNED (expr_type));
-	      dmax = dmax.min (tree_to_double_int (vr1.max),
-				     TYPE_UNSIGNED (expr_type));
+	      dmax = dmax.min (tree_to_infinite_wide_int (vr0.max), sgn);
+	      dmax = dmax.min (tree_to_infinite_wide_int (vr1.max), sgn);
 	    }
 	  /* If either input range contains only non-negative values
 	     we can truncate the result range maximum to the respective
 	     maximum of the input range.  */
 	  if (int_cst_range0 && tree_int_cst_sgn (vr0.min) >= 0)
-	    dmax = dmax.min (tree_to_double_int (vr0.max),
-				   TYPE_UNSIGNED (expr_type));
+	    dmax = dmax.min (tree_to_infinite_wide_int (vr0.max), sgn);
+
 	  if (int_cst_range1 && tree_int_cst_sgn (vr1.min) >= 0)
-	    dmax = dmax.min (tree_to_double_int (vr1.max),
-				   TYPE_UNSIGNED (expr_type));
-	  max = double_int_to_tree (expr_type, dmax);
+	    dmax = dmax.min (tree_to_infinite_wide_int (vr1.max), sgn);
+	  max = wide_int_to_infinite_tree (expr_type, dmax,
+					   largest_precision);
 	}
       else if (code == BIT_IOR_EXPR)
 	{
-	  double_int dmin;
-	  max = double_int_to_tree (expr_type,
-				    may_be_nonzero0 | may_be_nonzero1);
+	  wide_int dmin;
+	  max = wide_int_to_infinite_tree (expr_type, 
+					   may_be_nonzero0 | may_be_nonzero1,
+					   largest_precision);
 	  dmin = must_be_nonzero0 | must_be_nonzero1;
 	  /* If the input ranges contain only positive values we can
 	     truncate the minimum of the result range to the maximum
@@ -3016,35 +3065,35 @@  extract_range_from_binary_expr_1 (value_range_t *vr,
 	      && tree_int_cst_sgn (vr0.min) >= 0
 	      && tree_int_cst_sgn (vr1.min) >= 0)
 	    {
-	      dmin = dmin.max (tree_to_double_int (vr0.min),
-			       TYPE_UNSIGNED (expr_type));
-	      dmin = dmin.max (tree_to_double_int (vr1.min),
-			       TYPE_UNSIGNED (expr_type));
+	      dmin = dmin.max (tree_to_infinite_wide_int (vr0.min), sgn);
+	      dmin = dmin.max (tree_to_infinite_wide_int (vr1.min), sgn);
 	    }
 	  /* If either input range contains only negative values
 	     we can truncate the minimum of the result range to the
 	     respective minimum range.  */
 	  if (int_cst_range0 && tree_int_cst_sgn (vr0.max) < 0)
-	    dmin = dmin.max (tree_to_double_int (vr0.min),
-			     TYPE_UNSIGNED (expr_type));
+	    dmin = dmin.max (tree_to_infinite_wide_int (vr0.min), sgn);
 	  if (int_cst_range1 && tree_int_cst_sgn (vr1.max) < 0)
-	    dmin = dmin.max (tree_to_double_int (vr1.min),
-			     TYPE_UNSIGNED (expr_type));
-	  min = double_int_to_tree (expr_type, dmin);
+	    dmin = dmin.max (tree_to_infinite_wide_int (vr1.min), sgn);
+	  min = wide_int_to_infinite_tree (expr_type, dmin,
+					   largest_precision);
 	}
       else if (code == BIT_XOR_EXPR)
 	{
-	  double_int result_zero_bits, result_one_bits;
-	  result_zero_bits = (must_be_nonzero0 & must_be_nonzero1)
-			     | ~(may_be_nonzero0 | may_be_nonzero1);
-	  result_one_bits = must_be_nonzero0.and_not (may_be_nonzero1)
-			    | must_be_nonzero1.and_not (may_be_nonzero0);
-	  max = double_int_to_tree (expr_type, ~result_zero_bits);
-	  min = double_int_to_tree (expr_type, result_one_bits);
+	  wide_int result_zero_bits, result_one_bits;
+	  result_zero_bits
+	    = (must_be_nonzero0 & must_be_nonzero1).or_not
+	    (may_be_nonzero0 | may_be_nonzero1);
+	  result_one_bits
+	    = must_be_nonzero0.and_not (may_be_nonzero1)
+	    | must_be_nonzero1.and_not (may_be_nonzero0);
+	  max = wide_int_to_infinite_tree (expr_type, ~result_zero_bits,
+					   largest_precision);
+	  min = wide_int_to_infinite_tree (expr_type, result_one_bits,
+					   largest_precision);
 	  /* If the range has all positive or all negative values the
 	     result is better than VARYING.  */
-	  if (tree_int_cst_sgn (min) < 0
-	      || tree_int_cst_sgn (max) >= 0)
+	  if (tree_int_cst_sgn (min) < 0 || tree_int_cst_sgn (max) >= 0)
 	    ;
 	  else
 	    max = min = NULL_TREE;
@@ -3252,18 +3301,27 @@  extract_range_from_unary_expr_1 (value_range_t *vr,
 		         size_int (TYPE_PRECISION (outer_type)))))))
 	{
 	  tree new_min, new_max;
+	  wide_int::SignOp inner_type_sgn = 
+	    TYPE_UNSIGNED (inner_type) ? wide_int::UNSIGNED : wide_int::SIGNED;
+	  /* We want to extend these from the inner type to the outer
+	     type using the signedness of the inner type but the
+	     precision of the outer type.  */
 	  if (is_overflow_infinity (vr0.min))
 	    new_min = negative_overflow_infinity (outer_type);
 	  else
-	    new_min = force_fit_type_double (outer_type,
-					     tree_to_double_int (vr0.min),
-					     0, false);
+	    new_min = wide_int_to_infinite_tree 
+	      (outer_type,
+	       tree_to_infinite_wide_int (vr0.min)
+	       .ext (TYPE_PRECISION (outer_type), inner_type_sgn),
+	       largest_precision);
 	  if (is_overflow_infinity (vr0.max))
 	    new_max = positive_overflow_infinity (outer_type);
 	  else
-	    new_max = force_fit_type_double (outer_type,
-					     tree_to_double_int (vr0.max),
-					     0, false);
+	    new_max = wide_int_to_infinite_tree
+	      (outer_type,
+	       tree_to_infinite_wide_int (vr0.max)
+	       .ext (TYPE_PRECISION (outer_type), inner_type_sgn),
+	       largest_precision);
 	  set_and_canonicalize_value_range (vr, vr0.type,
 					    new_min, new_max, NULL);
 	  return;
@@ -3660,30 +3718,33 @@  adjust_range_with_scev (value_range_t *vr, struct loop *loop,
       && (TREE_CODE (init) != SSA_NAME
 	  || get_value_range (init)->type == VR_RANGE))
     {
-      double_int nit;
+      wide_int nit;
+      wide_int wstep = tree_to_infinite_wide_int (step); 
 
       /* We are only entering here for loop header PHI nodes, so using
 	 the number of latch executions is the correct thing to use.  */
-      if (max_loop_iterations (loop, &nit))
+      if (max_loop_iterations (loop, &nit,
+			       wstep.get_bitsize (), wstep.get_precision()))
 	{
 	  value_range_t maxvr = VR_INITIALIZER;
-	  double_int dtmp;
-	  bool unsigned_p = TYPE_UNSIGNED (TREE_TYPE (step));
+	  wide_int dtmp;
+	  wide_int::SignOp uns = TYPE_UNSIGNED (TREE_TYPE (step)) 
+	    ? wide_int::UNSIGNED : wide_int::SIGNED;
 	  bool overflow = false;
 
-	  dtmp = tree_to_double_int (step)
-		 .mul_with_sign (nit, unsigned_p, &overflow);
+	  dtmp = wstep.mul (nit, uns, &overflow);
 	  /* If the multiplication overflowed we can't do a meaningful
 	     adjustment.  Likewise if the result doesn't fit in the type
 	     of the induction variable.  For a signed type we have to
 	     check whether the result has the expected signedness which
 	     is that of the step as number of iterations is unsigned.  */
 	  if (!overflow
-	      && double_int_fits_to_tree_p (TREE_TYPE (init), dtmp)
-	      && (unsigned_p
-		  || ((dtmp.high ^ TREE_INT_CST_HIGH (step)) >= 0)))
+	      && dtmp.fits_to_tree_p (TREE_TYPE (init))
+	      && (TYPE_UNSIGNED (TREE_TYPE (step))
+		  || (dtmp.neg_p () == tree_int_cst_sign_bit (step))))
 	    {
-	      tem = double_int_to_tree (TREE_TYPE (init), dtmp);
+	      tem = wide_int_to_infinite_tree (TREE_TYPE (init), dtmp,
+					       largest_precision);
 	      extract_range_from_binary_expr (&maxvr, PLUS_EXPR,
 					      TREE_TYPE (init), init, tem);
 	      /* Likewise if the addition did.  */
@@ -4404,9 +4465,9 @@  register_new_assert_for (tree name, tree expr,
      machinery.  */
   if (TREE_CODE (val) == INTEGER_CST
       && TREE_OVERFLOW (val))
-    val = build_int_cst_wide (TREE_TYPE (val),
-			      TREE_INT_CST_LOW (val), TREE_INT_CST_HIGH (val));
-
+    val = wide_int_to_infinite_tree
+      (TREE_TYPE (val), tree_to_infinite_wide_int (val), largest_precision);
+
   /* The new assertion A will be inserted at BB or E.  We need to
      determine if the new location is dominated by a previously
      registered location for A.  If we are doing an edge insertion,
@@ -4558,23 +4619,27 @@  extract_code_and_val_from_cond_with_ops (tree name, enum tree_code cond_code,
    (to transform signed values into unsigned) and at the end xor
    SGNBIT back.  */
 
-static double_int
-masked_increment (double_int val, double_int mask, double_int sgnbit,
-		  unsigned int prec)
+static wide_int
+masked_increment (const wide_int &val_in, const wide_int &mask_in, 
+		  const wide_int &sgnbit, const_tree type)
 {
-  double_int bit = double_int_one, res;
+  wide_int val = val_in.zext (TYPE_PRECISION (type));
+  wide_int mask = mask_in.zext (TYPE_PRECISION (type));
+  wide_int one = wide_int::one (largest_bitsize, largest_precision);
+  wide_int bit = one, res;
+  unsigned int prec = TYPE_PRECISION (type);
   unsigned int i;
 
-  val ^= sgnbit;
-  for (i = 0; i < prec; i++, bit += bit)
+  val = val ^ sgnbit;
+  for (i = 0; i < prec; i++, bit = bit + bit)
     {
       res = mask;
-      if ((res & bit).is_zero ())
+      if ((res & bit).zero_p ())
 	continue;
-      res = bit - double_int_one;
+      res = bit - one;
       res = (val + bit).and_not (res);
-      res &= mask;
-      if (res.ugt (val))
+      res = res & mask;
+      if (res.gtu_p (val))
 	return res ^ sgnbit;
     }
   return val ^ sgnbit;
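(As an aside: masked_increment is unchanged in spirit, so it can be modeled one-for-one on host integers, finding the smallest X > VAL with (X & MASK) == X.  The model assumes prec <= 64 and enough headroom in uint64_t, which is what the widened wide_int guarantees in the real code:

#include <stdint.h>
#include <stdio.h>

/* Plain-uint64_t model of masked_increment: scan mask bits from the
   bottom; SGNBIT biases signed values into unsigned order.  */
static uint64_t
model_masked_increment (uint64_t val, uint64_t mask, uint64_t sgnbit,
			unsigned int prec)
{
  uint64_t bit = 1, res;
  unsigned int i;

  val ^= sgnbit;
  for (i = 0; i < prec; i++, bit += bit)
    {
      if ((mask & bit) == 0)
	continue;
      res = ((val + bit) & ~(bit - 1)) & mask;
      if (res > val)
	return res ^ sgnbit;
    }
  return val ^ sgnbit;
}

int
main (void)
{
  /* Smallest X > 0x13 that is a submask of 0xf0 is 0x20.  */
  printf ("%#llx\n",
	  (unsigned long long) model_masked_increment (0x13, 0xf0, 0, 8));
  return 0;
}
)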
@@ -4710,8 +4775,12 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
       gimple def_stmt = SSA_NAME_DEF_STMT (name);
       tree name2 = NULL_TREE, names[2], cst2 = NULL_TREE;
       tree val2 = NULL_TREE;
-      double_int mask = double_int_zero;
-      unsigned int prec = TYPE_PRECISION (TREE_TYPE (val));
+      tree ntype = TREE_TYPE (val);
+      unsigned int prec = TYPE_PRECISION (ntype);
+      wide_int mask = wide_int::zero (largest_bitsize, largest_precision);
+
+      /* Add asserts for NAME cmp CST and NAME being defined
+	 as NAME = (int) NAME2.  */
       unsigned int nprec = prec;
       enum tree_code rhs_code = ERROR_MARK;
 
@@ -4780,12 +4849,12 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 	      && tree_fits_uhwi_p (cst2)
 	      && INTEGRAL_TYPE_P (TREE_TYPE (name2))
 	      && IN_RANGE (tree_to_uhwi (cst2), 1, prec - 1)
-	      && prec <= HOST_BITS_PER_DOUBLE_INT
 	      && prec == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (val)))
 	      && live_on_edge (e, name2)
 	      && !has_single_use (name2))
 	    {
-	      mask = double_int::mask (tree_to_uhwi (cst2));
+	      mask = wide_int::mask (tree_to_uhwi (cst2), false, 
+				     largest_bitsize, largest_precision);
 	      val2 = fold_binary (LSHIFT_EXPR, TREE_TYPE (val), val, cst2);
 	    }
 	}
@@ -4808,20 +4877,26 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 		  val2 = fold_convert (type, val2);
 		}
 	      tmp = fold_build2 (MINUS_EXPR, TREE_TYPE (tmp), tmp, val2);
-	      new_val = double_int_to_tree (TREE_TYPE (tmp), mask);
+	      new_val = wide_int_to_infinite_tree (TREE_TYPE (tmp), mask,
+						   largest_precision);
 	      new_comp_code = comp_code == EQ_EXPR ? LE_EXPR : GT_EXPR;
 	    }
 	  else if (comp_code == LT_EXPR || comp_code == GE_EXPR)
 	    new_val = val2;
 	  else
 	    {
-	      double_int maxval
-		= double_int::max_value (prec, TYPE_UNSIGNED (TREE_TYPE (val)));
-	      mask |= tree_to_double_int (val2);
+	      wide_int::SignOp uns = TYPE_UNSIGNED (TREE_TYPE (val))
+		? wide_int::UNSIGNED : wide_int::SIGNED;
+
+	      wide_int maxval = wide_int::max_value (TREE_TYPE (val))
+		.force_to_size (largest_bitsize, largest_precision, uns)
+		.ext (TYPE_PRECISION (TREE_TYPE (val)), wide_int::UNSIGNED);
+	      mask = tree_to_infinite_wide_int (val2) | mask;
 	      if (mask == maxval)
 		new_val = NULL_TREE;
 	      else
-		new_val = double_int_to_tree (TREE_TYPE (val2), mask);
+		new_val = wide_int_to_infinite_tree (TREE_TYPE (val2), mask,
+						     largest_precision);
 	    }
 
 	  if (new_val)
@@ -4867,13 +4942,13 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 	  else
 	    {
 	      cst2 = TYPE_MAX_VALUE (TREE_TYPE (val));
-	      nprec = TYPE_PRECISION (TREE_TYPE (name2));
+	      ntype = TREE_TYPE (name2);
+	      nprec = TYPE_PRECISION (ntype);
 	    }
 	  if (TREE_CODE (name2) == SSA_NAME
 	      && INTEGRAL_TYPE_P (TREE_TYPE (name2))
 	      && TREE_CODE (cst2) == INTEGER_CST
 	      && !integer_zerop (cst2)
-	      && nprec <= HOST_BITS_PER_DOUBLE_INT
 	      && (nprec > 1
 		  || TYPE_UNSIGNED (TREE_TYPE (val))))
 	    {
@@ -4896,17 +4971,19 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 	}
       if (names[0] || names[1])
 	{
-	  double_int minv, maxv = double_int_zero, valv, cst2v;
-	  double_int tem, sgnbit;
+	  wide_int minv, valv, cst2v;
+	  wide_int zero = wide_int::zero (largest_bitsize, largest_precision);
+	  wide_int maxv = zero;
+	  wide_int tem, sgnbit;
 	  bool valid_p = false, valn = false, cst2n = false;
 	  enum tree_code ccode = comp_code;
 
-	  valv = tree_to_double_int (val).zext (nprec);
-	  cst2v = tree_to_double_int (cst2).zext (nprec);
+	  valv = tree_to_infinite_wide_int (val).zext (nprec);
+	  cst2v = tree_to_infinite_wide_int (cst2).zext (nprec);
 	  if (!TYPE_UNSIGNED (TREE_TYPE (val)))
 	    {
-	      valn = valv.sext (nprec).is_negative ();
-	      cst2n = cst2v.sext (nprec).is_negative ();
+	      valn = valv.sext (nprec).neg_p ();
+	      cst2n = cst2v.sext (nprec).neg_p ();
 	    }
 	  /* If CST2 doesn't have most significant bit set,
 	     but VAL is negative, we have comparison like
@@ -4914,9 +4991,11 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 	  if (!cst2n && valn)
 	    ccode = ERROR_MARK;
 	  if (cst2n)
-	    sgnbit = double_int_one.llshift (nprec - 1, nprec).zext (nprec);
+	    sgnbit = wide_int::set_bit_in_zero (nprec - 1, 
+						largest_bitsize, 
+						largest_precision);
 	  else
-	    sgnbit = double_int_zero;
+	    sgnbit = zero;
 	  minv = valv & cst2v;
 	  switch (ccode)
 	    {
@@ -4925,34 +5004,35 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 		 (should be equal to VAL, otherwise we probably should
 		 have folded the comparison into false) and
 		 maximum unsigned value is VAL | ~CST2.  */
-	      maxv = valv | ~cst2v;
+	      maxv = valv.or_not (cst2v);
 	      maxv = maxv.zext (nprec);
 	      valid_p = true;
 	      break;
 	    case NE_EXPR:
-	      tem = valv | ~cst2v;
+	      tem = valv.or_not (cst2v);
 	      tem = tem.zext (nprec);
 	      /* If VAL is 0, handle (X & CST2) != 0 as (X & CST2) > 0U.  */
-	      if (valv.is_zero ())
+	      if (valv.zero_p ())
 		{
 		  cst2n = false;
-		  sgnbit = double_int_zero;
+		  sgnbit = zero;
 		  goto gt_expr;
 		}
 	      /* If (VAL | ~CST2) is all ones, handle it as
 		 (X & CST2) < VAL.  */
-	      if (tem == double_int::mask (nprec))
+	      if (tem == wide_int::mask (nprec, false, 
+					 largest_bitsize, largest_precision))
 		{
 		  cst2n = false;
 		  valn = false;
-		  sgnbit = double_int_zero;
+		  sgnbit = zero;
 		  goto lt_expr;
 		}
-	      if (!cst2n
-		  && cst2v.sext (nprec).is_negative ())
-		sgnbit
-		  = double_int_one.llshift (nprec - 1, nprec).zext (nprec);
-	      if (!sgnbit.is_zero ())
+	      if (!cst2n && cst2v.sext (nprec).neg_p ())
+		sgnbit = wide_int::set_bit_in_zero (nprec - 1, 
+						    largest_bitsize, 
+						    largest_precision);
+	      if (!sgnbit.zero_p ())
 		{
 		  if (valv == sgnbit)
 		    {
@@ -4960,13 +5040,15 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 		      valn = true;
 		      goto gt_expr;
 		    }
-		  if (tem == double_int::mask (nprec - 1))
+		  if (tem == wide_int::mask (nprec - 1, false, 
+					     largest_bitsize, 
+					     largest_precision))
 		    {
 		      cst2n = true;
 		      goto lt_expr;
 		    }
 		  if (!cst2n)
-		    sgnbit = double_int_zero;
+		    sgnbit = zero;
 		}
 	      break;
 	    case GE_EXPR:
@@ -4979,11 +5061,12 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 		{
 		  /* If (VAL & CST2) != VAL, X & CST2 can't be equal to
 		     VAL.  */
-		  minv = masked_increment (valv, cst2v, sgnbit, nprec);
+		  minv = masked_increment (valv, cst2v, sgnbit, ntype);
 		  if (minv == valv)
 		    break;
 		}
-	      maxv = double_int::mask (nprec - (cst2n ? 1 : 0));
+	      maxv = wide_int::mask (nprec - (cst2n ? 1 : 0), false,
+				     largest_bitsize, largest_precision);
 	      valid_p = true;
 	      break;
 	    case GT_EXPR:
@@ -4991,10 +5074,11 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 	      /* Find out smallest MINV where MINV > VAL
 		 && (MINV & CST2) == MINV, if any.  If VAL is signed and
 		 CST2 has MSB set, compute it biased by 1 << (nprec - 1).  */
-	      minv = masked_increment (valv, cst2v, sgnbit, nprec);
+	      minv = masked_increment (valv, cst2v, sgnbit, ntype);
 	      if (minv == valv)
 		break;
-	      maxv = double_int::mask (nprec - (cst2n ? 1 : 0));
+	      maxv = wide_int::mask (nprec - (cst2n ? 1 : 0), false,
+				     largest_bitsize, largest_precision);
 	      valid_p = true;
 	      break;
 	    case LE_EXPR:
@@ -5010,12 +5094,13 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 		maxv = valv;
 	      else
 		{
-		  maxv = masked_increment (valv, cst2v, sgnbit, nprec);
+		  maxv = masked_increment (valv, cst2v, sgnbit, ntype);
 		  if (maxv == valv)
 		    break;
-		  maxv -= double_int_one;
+		  maxv = maxv - wide_int::one (largest_bitsize, 
+					       largest_precision);
 		}
-	      maxv |= ~cst2v;
+	      maxv = maxv.or_not (cst2v);
 	      maxv = maxv.zext (nprec);
 	      minv = sgnbit;
 	      valid_p = true;
@@ -5038,12 +5123,13 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 		}
 	      else
 		{
-		  maxv = masked_increment (valv, cst2v, sgnbit, nprec);
+		  maxv = masked_increment (valv, cst2v, sgnbit, ntype);
 		  if (maxv == valv)
 		    break;
 		}
-	      maxv -= double_int_one;
-	      maxv |= ~cst2v;
+	      maxv = maxv - wide_int::one (largest_bitsize, 
+					   largest_precision);
+	      maxv = maxv.or_not (cst2v);
 	      maxv = maxv.zext (nprec);
 	      minv = sgnbit;
 	      valid_p = true;
@@ -5052,7 +5138,9 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 	      break;
 	    }
 	  if (valid_p
-	      && (maxv - minv).zext (nprec) != double_int::mask (nprec))
+	      && (maxv - minv).zext (nprec)
+		 != wide_int::mask (nprec, false,
+				    largest_bitsize, largest_precision))
 	    {
 	      tree tmp, new_val, type;
 	      int i;
@@ -5060,7 +5148,7 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 	      for (i = 0; i < 2; i++)
 		if (names[i])
 		  {
-		    double_int maxv2 = maxv;
+		    wide_int maxv2 = maxv;
 		    tmp = names[i];
 		    type = TREE_TYPE (names[i]);
 		    if (!TYPE_UNSIGNED (type))
@@ -5068,13 +5156,15 @@  register_edge_assert_for_2 (tree name, edge e, gimple_stmt_iterator bsi,
 			type = build_nonstandard_integer_type (nprec, 1);
 			tmp = build1 (NOP_EXPR, type, names[i]);
 		      }
-		    if (!minv.is_zero ())
+		    if (!minv.zero_p ())
 		      {
 			tmp = build2 (PLUS_EXPR, type, tmp,
-				      double_int_to_tree (type, -minv));
+				      wide_int_to_infinite_tree 
+				      (type, minv.neg (), largest_precision));
 			maxv2 = maxv - minv;
 		      }
-		    new_val = double_int_to_tree (type, maxv2);
+		    new_val = wide_int_to_infinite_tree (type, maxv2,
+							 largest_precision);
 
 		    if (dump_file)
 		      {
@@ -5994,7 +6084,7 @@  search_for_addr_array (tree t, location_t location)
     {
       tree tem = TREE_OPERAND (TREE_OPERAND (t, 0), 0);
       tree low_bound, up_bound, el_sz;
-      double_int idx;
+      wide_int idx;
       if (TREE_CODE (TREE_TYPE (tem)) != ARRAY_TYPE
 	  || TREE_CODE (TREE_TYPE (TREE_TYPE (tem))) == ARRAY_TYPE
 	  || !TYPE_DOMAIN (TREE_TYPE (tem)))
@@ -6011,17 +6101,19 @@  search_for_addr_array (tree t, location_t location)
 	  || TREE_CODE (el_sz) != INTEGER_CST)
 	return;
 
-      idx = mem_ref_offset (t);
-      idx = idx.sdiv (tree_to_double_int (el_sz), TRUNC_DIV_EXPR);
-      if (idx.slt (double_int_zero))
+      idx = mem_ref_offset (t).force_to_size (largest_bitsize, 
+					      largest_precision,
+					      wide_int::SIGNED);
+      idx = idx.sdiv_trunc (tree_to_infinite_wide_int (el_sz));
+      if (idx.lts_p (0))
 	{
 	  warning_at (location, OPT_Warray_bounds,
 		      "array subscript is below array bounds");
 	  TREE_NO_WARNING (t) = 1;
 	}
-      else if (idx.sgt (tree_to_double_int (up_bound)
-			- tree_to_double_int (low_bound)
-			+ double_int_one))
+      else if (idx.gts_p (tree_to_infinite_wide_int (up_bound)
+			  - tree_to_infinite_wide_int (low_bound)
+			  + wide_int::one (largest_bitsize, largest_precision)))
 	{
 	  warning_at (location, OPT_Warray_bounds,
 		      "array subscript is above array bounds");
@@ -6188,6 +6280,78 @@  remove_range_assertions (void)
 }
 
 
+/* Examine STMT and, if it is interesting for VRP, update
+   largest_bitsize and largest_precision from the type it defines.  */
+
+static void
+look_for_largest (gimple stmt)
+{
+  if (gimple_code (stmt) == GIMPLE_PHI)
+    {
+      tree res = gimple_phi_result (stmt);
+      tree type = TREE_TYPE (res);
+
+      if (!virtual_operand_p (res)
+	  && (INTEGRAL_TYPE_P (type) || POINTER_TYPE_P (type)))
+	{
+	  largest_bitsize = MAX (largest_bitsize,
+				 GET_MODE_BITSIZE (TYPE_MODE (type)));
+	  largest_precision = MAX (largest_precision, TYPE_PRECISION (type));
+	}
+    }
+  else if (is_gimple_assign (stmt) || is_gimple_call (stmt))
+    {
+      tree lhs = gimple_get_lhs (stmt);
+
+      /* In general, assignments with virtual operands are not useful
+	 for deriving ranges, with the obvious exception of calls to
+	 builtin functions.  */
+      if (lhs && TREE_CODE (lhs) == SSA_NAME
+	  && (INTEGRAL_TYPE_P (TREE_TYPE (lhs))
+	      || POINTER_TYPE_P (TREE_TYPE (lhs)))
+	  && ((is_gimple_call (stmt)
+	       && gimple_call_fndecl (stmt) != NULL_TREE
+	       && DECL_BUILT_IN (gimple_call_fndecl (stmt)))
+	      || !gimple_vuse (stmt)))
+	{
+	  tree type = TREE_TYPE (lhs);
+	  largest_bitsize = MAX (largest_bitsize, 
+				 GET_MODE_BITSIZE (TYPE_MODE (type)));
+	  largest_precision = MAX (largest_precision, TYPE_PRECISION (type));
+	}
+    }
+}
+
+
+/* Find the largest integer type used in the current function.  */
+
+static void
+largest_initialize (void)
+{
+  basic_block bb;
+
+  largest_bitsize = GET_MODE_BITSIZE (word_mode);
+  largest_precision = GET_MODE_PRECISION (word_mode);
+
+  FOR_EACH_BB (bb)
+    {
+      gimple_stmt_iterator si;
+
+      for (si = gsi_start_phis (bb); !gsi_end_p (si); gsi_next (&si))
+        look_for_largest (gsi_stmt (si));
+
+      for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
+        look_for_largest (gsi_stmt (si));
+    }
+
+  /* Leave enough head room above the largest precision that
+     intermediate computations cannot wrap.  */
+  largest_bitsize *= 2;
+  largest_bitsize = MIN (largest_bitsize, MAX_BITSIZE_MODE_ANY_INT);
+  largest_precision *= 2;
+  largest_precision = MIN (largest_precision, MAX_BITSIZE_MODE_ANY_INT);
+}
+
 /* Return true if STMT is interesting for VRP.  */
 
 static bool
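(As an aside: the doubling at the end of largest_initialize is what makes a single range multiply exact, since the product of two P-bit values always fits in 2*P bits.  On the host, __int128 can stand in for the doubled-precision wide_int:

#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint64_t a = UINT64_MAX, b = UINT64_MAX;
  unsigned __int128 p = (unsigned __int128) a * b;

  /* The full 128-bit product, with nothing wrapped away.  */
  printf ("high %#llx low %#llx\n",
	  (unsigned long long) (p >> 64), (unsigned long long) p);
  return 0;
}
)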
@@ -8256,9 +8420,9 @@  simplify_bit_ops_using_ranges (gimple_stmt_iterator *gsi, gimple stmt)
   tree op = NULL_TREE;
   value_range_t vr0 = VR_INITIALIZER;
   value_range_t vr1 = VR_INITIALIZER;
-  double_int may_be_nonzero0, may_be_nonzero1;
-  double_int must_be_nonzero0, must_be_nonzero1;
-  double_int mask;
+  wide_int may_be_nonzero0, may_be_nonzero1;
+  wide_int must_be_nonzero0, must_be_nonzero1;
+  wide_int mask;
 
   if (TREE_CODE (op0) == SSA_NAME)
     vr0 = *(get_value_range (op0));
@@ -8283,13 +8447,13 @@  simplify_bit_ops_using_ranges (gimple_stmt_iterator *gsi, gimple stmt)
     {
     case BIT_AND_EXPR:
       mask = may_be_nonzero0.and_not (must_be_nonzero1);
-      if (mask.is_zero ())
+      if (mask.zero_p ())
 	{
 	  op = op0;
 	  break;
 	}
       mask = may_be_nonzero1.and_not (must_be_nonzero0);
-      if (mask.is_zero ())
+      if (mask.zero_p ())
 	{
 	  op = op1;
 	  break;
@@ -8297,13 +8461,13 @@  simplify_bit_ops_using_ranges (gimple_stmt_iterator *gsi, gimple stmt)
       break;
     case BIT_IOR_EXPR:
       mask = may_be_nonzero0.and_not (must_be_nonzero1);
-      if (mask.is_zero ())
+      if (mask.zero_p ())
 	{
 	  op = op1;
 	  break;
 	}
       mask = may_be_nonzero1.and_not (must_be_nonzero0);
-      if (mask.is_zero ())
+      if (mask.zero_p ())
 	{
 	  op = op0;
 	  break;
@@ -8573,15 +8737,18 @@  simplify_switch_using_ranges (gimple stmt)
 static bool
 simplify_conversion_using_ranges (gimple stmt)
 {
-  tree innerop, middleop, finaltype;
+  tree innerop, middleop;
+  tree middle_type, final_type;
   gimple def_stmt;
   value_range_t *innervr;
-  bool inner_unsigned_p, middle_unsigned_p, final_unsigned_p;
+  bool inner_unsigned_p;
   unsigned inner_prec, middle_prec, final_prec;
-  double_int innermin, innermed, innermax, middlemin, middlemed, middlemax;
+  wide_int innermin, innermed, innermax, middlemin, middlemed, middlemax;
+  tree inner_type;
+  wide_int::SignOp inner_uns;
 
-  finaltype = TREE_TYPE (gimple_assign_lhs (stmt));
-  if (!INTEGRAL_TYPE_P (finaltype))
+  final_type = TREE_TYPE (gimple_assign_lhs (stmt));
+  if (!INTEGRAL_TYPE_P (final_type))
     return false;
   middleop = gimple_assign_rhs1 (stmt);
   def_stmt = SSA_NAME_DEF_STMT (middleop);
@@ -8601,43 +8768,48 @@  simplify_conversion_using_ranges (gimple stmt)
 
   /* Simulate the conversion chain to check if the result is equal if
      the middle conversion is removed.  */
-  innermin = tree_to_double_int (innervr->min);
-  innermax = tree_to_double_int (innervr->max);
-
-  inner_prec = TYPE_PRECISION (TREE_TYPE (innerop));
-  middle_prec = TYPE_PRECISION (TREE_TYPE (middleop));
-  final_prec = TYPE_PRECISION (finaltype);
-
+  innermin = tree_to_infinite_wide_int (innervr->min);
+  innermax = tree_to_infinite_wide_int (innervr->max);
+
+  inner_type = TREE_TYPE (innerop);
+  inner_prec = TYPE_PRECISION (inner_type);
+  middle_type = TREE_TYPE (middleop);
+  middle_prec = TYPE_PRECISION (middle_type);
+  final_prec = TYPE_PRECISION (final_type);
+
   /* If the first conversion is not injective, the second must not
      be widening.  */
-  if ((innermax - innermin).ugt (double_int::mask (middle_prec))
+  if ((innermax - innermin).gtu_p (wide_int::mask (middle_prec, false, 
+						   largest_bitsize,
+						   largest_precision))
       && middle_prec < final_prec)
     return false;
   /* We also want a medium value so that we can track the effect that
      narrowing conversions with sign change have.  */
   inner_unsigned_p = TYPE_UNSIGNED (TREE_TYPE (innerop));
+  inner_uns = inner_unsigned_p ? wide_int::UNSIGNED : wide_int::SIGNED;
   if (inner_unsigned_p)
-    innermed = double_int::mask (inner_prec).lrshift (1, inner_prec);
+    innermed = wide_int::mask (inner_prec - 1, false, 
+			       largest_bitsize, largest_precision);
   else
-    innermed = double_int_zero;
-  if (innermin.cmp (innermed, inner_unsigned_p) >= 0
-      || innermed.cmp (innermax, inner_unsigned_p) >= 0)
+    innermed = wide_int::zero (largest_bitsize, largest_precision);
+
+  if (!innermin.lt_p (innermed, inner_uns)
+      || !innermed.lt_p (innermax, inner_uns))
     innermed = innermin;
 
-  middle_unsigned_p = TYPE_UNSIGNED (TREE_TYPE (middleop));
-  middlemin = innermin.ext (middle_prec, middle_unsigned_p);
-  middlemed = innermed.ext (middle_prec, middle_unsigned_p);
-  middlemax = innermax.ext (middle_prec, middle_unsigned_p);
+  middlemin = innermin.ext (middle_type);
+  middlemed = innermed.ext (middle_type);
+  middlemax = innermax.ext (middle_type);
 
   /* Require that the final conversion applied to both the original
      and the intermediate range produces the same result.  */
-  final_unsigned_p = TYPE_UNSIGNED (finaltype);
-  if (middlemin.ext (final_prec, final_unsigned_p)
-	 != innermin.ext (final_prec, final_unsigned_p)
-      || middlemed.ext (final_prec, final_unsigned_p)
-	 != innermed.ext (final_prec, final_unsigned_p)
-      || middlemax.ext (final_prec, final_unsigned_p)
-	 != innermax.ext (final_prec, final_unsigned_p))
+  if (middlemin.ext (final_type) != innermin.ext (final_type)
+      || middlemed.ext (final_type) != innermed.ext (final_type)
+      || middlemax.ext (final_type) != innermax.ext (final_type))
     return false;
 
   gimple_assign_set_rhs1 (stmt, innerop);
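(As an aside: the min/med/max simulation can be reproduced directly in C.  With a hypothetical chain (int32_t)(int8_t)x over an inner range of 0..300, it is the midpoint probe that catches the sign change; this relies on GCC's wrapping conversion to int8_t:

#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint16_t probe[3] = { 0, 150, 300 };  /* min, med, max of inner range */
  int i;

  for (i = 0; i < 3; i++)
    {
      int32_t with_mid = (int32_t) (int8_t) probe[i];
      int32_t direct = (int32_t) probe[i];
      if (with_mid != direct)
	{
	  printf ("middle cast matters at %u\n", (unsigned) probe[i]);
	  return 0;
	}
    }
  printf ("middle cast removable\n");
  return 0;
}
)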
@@ -8646,14 +8818,14 @@  simplify_conversion_using_ranges (gimple stmt)
 }
 
 /* Return whether the value range *VR fits in an integer type specified
-   by PRECISION and UNSIGNED_P.  */
+   by PRECISION.  The target type is assumed to be signed.  */
 
 static bool
-range_fits_type_p (value_range_t *vr, unsigned precision, bool unsigned_p)
+range_fits_type_p (value_range_t *vr, unsigned precision)
 {
   tree src_type;
   unsigned src_precision;
-  double_int tem;
+  wide_int tem;
 
   /* We can only handle integral and pointer types.  */
   src_type = TREE_TYPE (vr->min);
@@ -8665,7 +8837,7 @@  range_fits_type_p (value_range_t *vr, unsigned precision, bool unsigned_p)
   src_precision = TYPE_PRECISION (TREE_TYPE (vr->min));
   if (src_precision < precision
       || (src_precision == precision
-	  && TYPE_UNSIGNED (src_type) == unsigned_p))
+	  && !TYPE_UNSIGNED (src_type)))
     return true;
 
   /* Now we can only handle ranges with constant bounds.  */
@@ -8674,19 +8846,13 @@  range_fits_type_p (value_range_t *vr, unsigned precision, bool unsigned_p)
       || TREE_CODE (vr->max) != INTEGER_CST)
     return false;
 
-  /* For precision-preserving sign-changes the MSB of the double-int
-     has to be clear.  */
-  if (src_precision == precision
-      && (TREE_INT_CST_HIGH (vr->min) | TREE_INT_CST_HIGH (vr->max)) < 0)
-    return false;
-
   /* Then we can perform the conversion on both ends and compare
      the result for equality.  */
-  tem = tree_to_double_int (vr->min).ext (precision, unsigned_p);
-  if (tree_to_double_int (vr->min) != tem)
+  tem = tree_to_infinite_wide_int (vr->min).sext (precision);
+  if (tree_to_infinite_wide_int (vr->min) != tem)
     return false;
-  tem = tree_to_double_int (vr->max).ext (precision, unsigned_p);
-  if (tree_to_double_int (vr->max) != tem)
+  tem = tree_to_infinite_wide_int (vr->max).sext (precision);
+  if (tree_to_infinite_wide_int (vr->max) != tem)
     return false;
 
   return true;
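(As an aside: with the unsigned case gone, the check reduces to "does sign-extending the low PRECISION bits reproduce the value".  A host-integer model, relying on GCC's arithmetic right shift of signed values:

#include <stdint.h>
#include <stdio.h>

static int
fits_signed (int64_t v, unsigned int precision)
{
  uint64_t u = (uint64_t) v << (64 - precision);
  int64_t ext = (int64_t) u >> (64 - precision);  /* sign-extend */
  return ext == v;
}

int
main (void)
{
  printf ("%d %d\n", fits_signed (127, 8), fits_signed (128, 8));  /* 1 0 */
  return 0;
}
)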
@@ -8715,7 +8881,7 @@  simplify_float_conversion_using_ranges (gimple_stmt_iterator *gsi, gimple stmt)
       && (can_float_p (fltmode, TYPE_MODE (TREE_TYPE (rhs1)), 0)
 	  != CODE_FOR_nothing)
       && range_fits_type_p (vr, GET_MODE_PRECISION
-			          (TYPE_MODE (TREE_TYPE (rhs1))), 0))
+			    (TYPE_MODE (TREE_TYPE (rhs1)))))
     mode = TYPE_MODE (TREE_TYPE (rhs1));
   /* If we can do the conversion in the current input mode do nothing.  */
   else if (can_float_p (fltmode, TYPE_MODE (TREE_TYPE (rhs1)),
@@ -8732,7 +8898,7 @@  simplify_float_conversion_using_ranges (gimple_stmt_iterator *gsi, gimple stmt)
 	     or if the value-range does not fit in the signed type
 	     try with a wider mode.  */
 	  if (can_float_p (fltmode, mode, 0) != CODE_FOR_nothing
-	      && range_fits_type_p (vr, GET_MODE_PRECISION (mode), 0))
+	      && range_fits_type_p (vr, GET_MODE_PRECISION (mode)))
 	    break;
 
 	  mode = GET_MODE_WIDER_MODE (mode);
@@ -9152,6 +9318,7 @@  execute_vrp (void)
   loop_optimizer_init (LOOPS_NORMAL | LOOPS_HAVE_RECORDED_EXITS);
   rewrite_into_loop_closed_ssa (NULL, TODO_update_ssa);
   scev_initialize ();
+  largest_initialize ();
 
   insert_range_assertions ();
 
diff --git a/gcc/tree.c b/gcc/tree.c
index 84f2cbc..1b11044 100644
--- a/gcc/tree.c
+++ b/gcc/tree.c
@@ -1074,15 +1074,27 @@  double_int_to_tree (tree type, double_int cst)
 tree
 wide_int_to_tree (tree type, const wide_int &cst)
 {
-  wide_int v;
+  wide_int v = cst.force_to_size ((const_tree) type);
+
+  return build_int_cst_wide (type, v.elt (0), v.elt (1));
+}
 
-  gcc_assert (cst.get_len () <= 2);
-  if (TYPE_UNSIGNED (type))
-    v = cst.zext (TYPE_PRECISION (type));
-  else
-    v = cst.sext (TYPE_PRECISION (type));
+/* Construct a tree of type TYPE with the value given by CST.  The
+   signedness of CST is assumed to be the same as the signedness of
+   TYPE.  The number is represented in an infinite field of PREC bits;
+   PREC should be twice the size of the largest integer mode in the
+   function being compiled.  */
 
-  return build_int_cst_wide (type, v.elt (0), v.elt (1));
+tree
+wide_int_to_infinite_tree (tree type, const wide_int &cst, unsigned int prec)
+{
+  wide_int::SignOp sgn = TYPE_UNSIGNED (type) 
+    ? wide_int::UNSIGNED : wide_int::SIGNED;
+  wide_int v = cst.ext (TYPE_PRECISION (type), sgn);
+  HOST_WIDE_INT e1 = (prec <= HOST_BITS_PER_WIDE_INT) 
+    ? v.elt (0) >> (HOST_BITS_PER_WIDE_INT - 1) : v.elt (1);
+
+  return build_int_cst_wide (type, v.elt (0), e1);
 }
 
 /* Return 0 or -1 depending on the sign of the cst.  */ 
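(As an aside: the e1 computation in wide_int_to_infinite_tree just replicates the sign of the low host word into the high word when the value fits in a single word.  On host integers, again relying on GCC's arithmetic right shift:

#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  int64_t low = -5;
  int64_t high = low >> 63;     /* 0 or -1: the replicated sign */
  printf ("low %lld high %lld\n", (long long) low, (long long) high);
  return 0;
}
)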
@@ -1265,6 +1277,61 @@  force_fit_type_double (tree type, double_int cst, int overflowable,
   return double_int_to_tree (type, cst);
 }
 
+/* We force the wide_int CST to the range of the type TYPE by sign or
+   zero extending it.  OVERFLOWABLE indicates if we are interested in
+   overflow of the value, when >0 we are only interested in signed
+   overflow, for <0 we are interested in any overflow.  OVERFLOWED
+   indicates whether overflow has already occurred.  CONST_OVERFLOWED
+   indicates whether constant overflow has already occurred.  We force
+   T's value to be within range of T's type (by setting to 0 or 1 all
+   the bits outside the type's range).  We set TREE_OVERFLOWED if,
+        OVERFLOWED is nonzero,
+        or OVERFLOWABLE is >0 and signed overflow occurs
+        or OVERFLOWABLE is <0 and any overflow occurs
+   We return a new tree node for the extended wide_int.  The node
+   is shared if no overflow flags are set.  */
+
+tree
+force_fit_type_wide (tree type, const wide_int &cst, int overflowable,
+		     bool overflowed)
+{
+  /* Size types *are* sign extended.  */
+  bool sign_extended_type = !TYPE_UNSIGNED (type);
+
+  /* If we need to set overflow flags, return a new unshared node.  */
+  if (overflowed || !cst.fits_to_tree_p (type))
+    {
+      if (overflowed
+	  || overflowable < 0
+	  || (overflowable > 0 && sign_extended_type))
+	{
+#ifdef NEW_REP_FOR_INT_CST
+	  wide_int::SignOp uns = TYPE_UNSIGNED (type) 
+	    ? wide_int::UNSIGNED : wide_int::SIGNED;
+	  const wide_int &r = cst.ext (TYPE_PRECISION (type), uns);
+	  tree t = make_int_cst (r.get_len ());
+	  int i;
+
+	  for (i = 0; i < r.get_len (); ++i)
+	    TREE_INT_CST_ELT (t, i) = r.elt (i);
+#else
+	  tree t = make_node (INTEGER_CST);
+	  double_int d = double_int::from_pair (cst.elt (1), cst.elt (0));
+
+	  TREE_INT_CST (t) = d.ext (TYPE_PRECISION (type),
+				    !sign_extended_type);
+#endif
+	  TREE_TYPE (t) = type;
+	  TREE_OVERFLOW (t) = 1;
+	  return t;
+	}
+    }
+
+  /* Else build a shared node.  */
+  return wide_int_to_tree (type, cst);
+}
+
 /* These are the hash table functions for the hash table of INTEGER_CST
    nodes of a sizetype.  */
 
@@ -1293,6 +1360,7 @@  int_cst_hash_eq (const void *x, const void *y)
 	  && TREE_INT_CST_LOW (xt) == TREE_INT_CST_LOW (yt));
 }
 
+
 /* Create an INT_CST node of TYPE and value HI:LOW.
    The returned node is always shared.  For small integers we use a
    per-type vector cache, for larger ones we use a single hash table.  */
@@ -4130,12 +4198,21 @@  build_simple_mem_ref_loc (location_t loc, tree ptr)
 /* Return the constant offset of a MEM_REF or TARGET_MEM_REF tree T.  */
 
 double_int
-mem_ref_offset (const_tree t)
+mem_ref_offset_as_double (const_tree t)
 {
   tree toff = TREE_OPERAND (t, 1);
   return tree_to_double_int (toff).sext (TYPE_PRECISION (TREE_TYPE (toff)));
 }
 
+/* Return the constant offset of a MEM_REF or TARGET_MEM_REF tree T
+   as a wide_int.  */
+
+wide_int
+mem_ref_offset (const_tree t)
+{
+  tree toff = TREE_OPERAND (t, 1);
+  return wide_int::from_tree (toff);
+}
+
 /* Return the pointer-type relevant for TBAA purposes from the
    gimple memory reference tree T.  This is the type to be used for
    the offset operand of MEM_REF or TARGET_MEM_REF replacements of T.  */
diff --git a/gcc/tree.h b/gcc/tree.h
index f5497cd..74348d1 100644
--- a/gcc/tree.h
+++ b/gcc/tree.h
@@ -5835,7 +5835,7 @@  extern tree fold_indirect_ref_loc (location_t, tree);
 extern tree build_simple_mem_ref_loc (location_t, tree);
 #define build_simple_mem_ref(T)\
 	build_simple_mem_ref_loc (UNKNOWN_LOCATION, T)
-extern double_int mem_ref_offset (const_tree);
+extern double_int mem_ref_offset_as_double (const_tree);
 extern tree reference_alias_ptr_type (const_tree);
 extern tree build_invariant_address (tree, tree, HOST_WIDE_INT);
 extern tree constant_boolean_node (bool, tree);
diff --git a/gcc/varasm.c b/gcc/varasm.c
index 0666fcb..ee477f0 100644
--- a/gcc/varasm.c
+++ b/gcc/varasm.c
@@ -2583,7 +2583,7 @@  decode_addr_const (tree exp, struct addr_const *value)
       else if (TREE_CODE (target) == MEM_REF
 	       && TREE_CODE (TREE_OPERAND (target, 0)) == ADDR_EXPR)
 	{
-	  offset += mem_ref_offset (target).low;
+	  offset += mem_ref_offset_as_double (target).low;
 	  target = TREE_OPERAND (TREE_OPERAND (target, 0), 0);
 	}
       else if (TREE_CODE (target) == INDIRECT_REF