
wide-int, tree-ssa

Message ID 87siufs17e.fsf@talisman.default
State New

Commit Message

Richard Sandiford Nov. 29, 2013, 2:59 p.m. UTC
Only a partial reply.  I'll leave Kenny and Mike to answer the VARYING
question.

Richard Biener <richard.guenther@gmail.com> writes:
> On Sat, Nov 23, 2013 at 8:23 PM, Mike Stump <mikestump@comcast.net> wrote:
>> Richi has asked that we break up the wide-int patch so that the individual
>> port and front-end maintainers can review their parts without having to
>> go through the entire patch.  This patch covers the tree-ssa code.
>>
>> Ok?
>
> Same comments as to tree-affine.c apply to tree-ssa-address.c.
>
> @@ -887,8 +886,8 @@ copy_ref_info (tree new_ref, tree old_ref)
>                            && (TREE_INT_CST_LOW (TMR_STEP (new_ref))
>                                < align)))))
>             {
> -             unsigned int inc = (mem_ref_offset (old_ref)
> -                                 - mem_ref_offset (new_ref)).low;
> +             unsigned int inc = (mem_ref_offset (old_ref).to_uhwi ()
> +                                 - mem_ref_offset (new_ref).to_uhwi ());
>               adjust_ptr_info_misalignment (new_pi, inc);
>             }
>           else
>
> other patches used .to_short_addr ().  Also, your conversion isn't
> correct - previously the subtraction was done in infinite precision and
> only the result was (possibly) truncated.  Now the subtraction operands
> are truncated - that looks wrong.

Sorry, forgot about .to_short_addr () when doing the merge.
The conversion should be OK with that fixed though, since truncation
distributes through subtraction (and truncating first is cheaper).
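
For the record, here's a standalone check of that identity in plain C++
(not wide-int code): truncating both operands to N bits before subtracting
gives the same low N bits as subtracting in the wider precision first,
because unsigned arithmetic is modular.

  #include <assert.h>
  #include <stdint.h>

  int main ()
  {
    /* Two "wide" values whose difference we want modulo 2^32.  */
    uint64_t a = 0x123456789abcdef0ULL;
    uint64_t b = 0x0fedcba987654321ULL;

    /* Subtract in the wider precision, then truncate the result...  */
    uint32_t wide_then_truncate = (uint32_t) (a - b);

    /* ...versus truncating each operand first.  These always agree.  */
    uint32_t truncate_then_subtract = (uint32_t) a - (uint32_t) b;

    assert (wide_then_truncate == truncate_then_subtract);
    return 0;
  }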

> +/* Return a widest_int that can be used for bitwise simplifications
>     from VAL.  */
>
> -static double_int
> -value_to_double_int (prop_value_t val)
> +static widest_int
> +value_to_wide_int (prop_value_t val)
>  {
>    if (val.value
>        && TREE_CODE (val.value) == INTEGER_CST)
> -    return tree_to_double_int (val.value);
> -  else
> -    return double_int_zero;
> +    return wi::to_widest (val.value);
> +
> +  return 0;
>  }
>
> you also get to hope that we optimize all the widest_int return-by-value
> code to elide the implied copying ... (not sure if you can do that by
> adding an unimplemented copy constructor / assignment operator - C++ folks?)

Are you worried about a copy from one widest_int to another?
Won't the return-value optimisation stop that?

Or are you worried about the copy from the tree to the widest_int?
In this particular case I suppose we could use:

  wi::to_widest (integer_zero_node)

instead of 0 and return a:

  generic_wide_int <extended_tree <MAX_BITSIZE_MODE_ANY_INT> >

(typedefed :-)).  Is the function really hot enough to justify the
uglification though?  We're thinking about these kinds of tweaks now
because it's flavour of the month, but I bet post-merge patches will
use the more obvious implementation instead.
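
For concreteness, the uglified version would look something like this
(a sketch only, reusing the names above; widest_tree_ref is an invented
typedef, not something in the patch):

  /* Invented name for the tree-backed view that wi::to_widest returns.  */
  typedef generic_wide_int <extended_tree <MAX_BITSIZE_MODE_ANY_INT> >
    widest_tree_ref;

  static widest_tree_ref
  value_to_wide_int (prop_value_t val)
  {
    if (val.value
        && TREE_CODE (val.value) == INTEGER_CST)
      return wi::to_widest (val.value);

    return wi::to_widest (integer_zero_node);
  }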

OTOH maybe this is a natural candidate for C++11 auto...

>      case LT_EXPR:
>      case LE_EXPR:
>        {
> +       widest_int o1val, o2val, o1mask, o2mask;
>         int minmax, maxmin;
> +
> +       if ((code == GE_EXPR) || (code == GT_EXPR))
> +         {
> +           o1val = r2val;
> +           o1mask = r2mask;
> +           o2val = r1val;
> +           o2mask = r1mask;
> +           code = swap_tree_comparison (code);
> +         }
> +       else
> +         {
> +           o1val = r1val;
> +           o1mask = r1mask;
> +           o2val = r2val;
> +           o2mask = r2mask;
> +         }
>
> that are pretty expensive copies, no?  Consider using
>
>   const widest_int &o1val = swap ? r2val : r1val;
>
> and so on.  (C++ folks?  Any idea how to avoid the repeated
> conditional init in favor of something that more resembles the original?)

Not a C++ person, but I can't think of one either.  It probably
wouldn't be as readable as the four separate statements though.
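
For reference, the reason Richi's suggestion avoids the copies: a
conditional expression whose arms are lvalues of the same type is itself
an lvalue, so the const reference binds directly to the chosen operand
rather than to a temporary.  A standalone illustration with an invented
type:

  #include <assert.h>

  struct wide { int v; };

  int main ()
  {
    wide r1 = { 1 }, r2 = { 2 };
    bool swap_p = true;

    /* o1 aliases r2 itself; no copy is made.  */
    const wide &o1 = swap_p ? r2 : r1;
    assert (&o1 == &r2);
    return 0;
  }

The patch below ends up using exactly this pattern, driven by a swap_p flag.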

> diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c
> index 6ea634c..c975a97 100644
> --- a/gcc/tree-ssa-loop-im.c
> +++ b/gcc/tree-ssa-loop-im.c
> @@ -1643,7 +1643,7 @@ mem_refs_may_alias_p (mem_ref_p mem1, mem_ref_p mem2,
>    /* Perform BASE + OFFSET analysis -- if MEM1 and MEM2 are based on the same
>       object and their offset differ in such a way that the locations cannot
>       overlap, then they cannot alias.  */
> -  double_int size1, size2;
> +  widest_int size1, size2;
>    aff_tree off1, off2;
>
> from the names you can know this is offset_int.

I agree that's true in the overlap and get_inner_reference_aff cases,
but most of tree-affine seems to be more general than that.  Is it really
worth bunging in conversions between offset_int and widest_int to save a
few HWIs of stack space?

> @@ -529,15 +526,15 @@ end:
>     difference of two values in TYPE.  */
>
>  static void
> -bounds_add (bounds *bnds, double_int delta, tree type)
> +bounds_add (bounds *bnds, widest_int delta, tree type)
>  {
>    mpz_t mdelta, max;
>
> const widest_int &delta please (please double-check the patches for
> widest-int-by-value passing).
>
> @@ -2592,7 +2587,7 @@ derive_constant_upper_bound_ops (tree type, tree op0,
>
>  static void
>  do_warn_aggressive_loop_optimizations (struct loop *loop,
> -                                      double_int i_bound, gimple stmt)
> +                                      widest_int i_bound, gimple stmt)
>  {
>    /* Don't warn if the loop doesn't have known constant bound.  */
>    if (!loop->nb_iterations
>
> Likewise.

Fixed, along with a few other cases.

Testing still in progress, but does this look OK?

Thanks,
Richard

Comments

Richard Biener Dec. 2, 2013, 10:45 a.m. UTC | #1
On Fri, Nov 29, 2013 at 3:59 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
> Only a partial reply.  I'll leave Kenny and Mike to answer the VARYING
> question.
>
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Sat, Nov 23, 2013 at 8:23 PM, Mike Stump <mikestump@comcast.net> wrote:
>>> Richi has asked that we break up the wide-int patch so that the individual
>>> port and front-end maintainers can review their parts without having to
>>> go through the entire patch.  This patch covers the tree-ssa code.
>>>
>>> Ok?
>>
>> Same comments as to tree-affine.c apply to tree-ssa-address.c.
>>
>> @@ -887,8 +886,8 @@ copy_ref_info (tree new_ref, tree old_ref)
>>                            && (TREE_INT_CST_LOW (TMR_STEP (new_ref))
>>                                < align)))))
>>             {
>> -             unsigned int inc = (mem_ref_offset (old_ref)
>> -                                 - mem_ref_offset (new_ref)).low;
>> +             unsigned int inc = (mem_ref_offset (old_ref).to_uhwi ()
>> +                                 - mem_ref_offset (new_ref).to_uhwi ());
>>               adjust_ptr_info_misalignment (new_pi, inc);
>>             }
>>           else
>>
>> other patches used .to_short_addr ().  Also, your conversion isn't
>> correct - previously the subtraction was done in infinite precision and
>> only the result was (possibly) truncated.  Now the subtraction operands
>> are truncated - that looks wrong.
>
> Sorry, forgot about .to_short_addr () when doing the merge.
> The conversion should be OK with that fixed though, since truncation
> distributes through subtraction (and truncating first is cheaper).
>
>> +/* Return a widest_int that can be used for bitwise simplifications
>>     from VAL.  */
>>
>> -static double_int
>> -value_to_double_int (prop_value_t val)
>> +static widest_int
>> +value_to_wide_int (prop_value_t val)
>>  {
>>    if (val.value
>>        && TREE_CODE (val.value) == INTEGER_CST)
>> -    return tree_to_double_int (val.value);
>> -  else
>> -    return double_int_zero;
>> +    return wi::to_widest (val.value);
>> +
>> +  return 0;
>>  }
>>
>> you also get to hope that we optimize all the widest_int return-by-value
>> code to elide the implied copying ... (not sure if you can do that by
>> adding an unimplemented copy constructor / assignment operator - C++ folks?)
>
> Are you worried about a copy from one widest_int to another?
> Won't the return-value optimisation stop that?

If it works and applies - as far as I know this is not guaranteed.  It would
be interesting to know whether any non-NRV cases are left.
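
For illustration, one classic non-NRV case is returning one of several
named objects; the compiler then has no single candidate to construct in
the return slot.  A standalone example with an invented type (the standard
never guarantees the elision, hence "typically"):

  #include <stdio.h>

  struct big
  {
    int data[8];
    big () { puts ("ctor"); }
    big (const big &) { puts ("copy"); }
  };

  /* One named object returned on every path: NRVO typically elides
     the copy, so only "ctor" is printed.  */
  static big
  nrv_friendly ()
  {
    big r;
    return r;
  }

  /* Two candidate objects: NRVO cannot apply, so whichever object is
     chosen gets copied into the return value.  */
  static big
  nrv_hostile (bool p)
  {
    big a, b;
    return p ? a : b;
  }

  int main ()
  {
    big x = nrv_friendly ();       /* typically: ctor */
    big y = nrv_hostile (true);    /* ctor, ctor, copy */
    return 0;
  }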

> Or are you worried about the copy from the tree to the widest_int?
> In this particular case I suppose we could use:
>
>   wi::to_widest (integer_zero_node)
>
> instead of 0 and return a:
>
>   generic_wide_int <extended_tree <MAX_BITSIZE_MODE_ANY_INT> >
>
> (typedefed :-)).  Is the function really hot enough to justify the
> uglification though?  We're thinking about these kinds of tweaks now
> because it's flavour of the month, but I bet post-merge patches will
> use the more obvious implementation instead.

I wasn't worried about this specifically.

> OTOH maybe this is a natural candidate for C++11 auto...
>
>>      case LT_EXPR:
>>      case LE_EXPR:
>>        {
>> +       widest_int o1val, o2val, o1mask, o2mask;
>>         int minmax, maxmin;
>> +
>> +       if ((code == GE_EXPR) || (code == GT_EXPR))
>> +         {
>> +           o1val = r2val;
>> +           o1mask = r2mask;
>> +           o2val = r1val;
>> +           o2mask = r1mask;
>> +           code = swap_tree_comparison (code);
>> +         }
>> +       else
>> +         {
>> +           o1val = r1val;
>> +           o1mask = r1mask;
>> +           o2val = r2val;
>> +           o2mask = r2mask;
>> +         }
>>
>> that are pretty expensive copies, no?  Consider using
>>
>>   const widest_int &o1val = swap ? r2val : r1val;
>>
>> and so on.  (C++ folks?  Any idea how to avoid the repeated
>> conditional init in favor of something that more resembles the original?)
>
> Not a C++ person, but I can't think of one either.  It probably
> wouldn't be as readable as the four separate statements though.
>
>> diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c
>> index 6ea634c..c975a97 100644
>> --- a/gcc/tree-ssa-loop-im.c
>> +++ b/gcc/tree-ssa-loop-im.c
>> @@ -1643,7 +1643,7 @@ mem_refs_may_alias_p (mem_ref_p mem1, mem_ref_p mem2,
>>    /* Perform BASE + OFFSET analysis -- if MEM1 and MEM2 are based on the same
>>       object and their offset differ in such a way that the locations cannot
>>       overlap, then they cannot alias.  */
>> -  double_int size1, size2;
>> +  widest_int size1, size2;
>>    aff_tree off1, off2;
>>
>> from the names you can know this is offset_int.
>
> I agree that's true in the overlap and get_inner_reference_aff cases,
> but most of tree-affine seems to be more general than that.  Is it really
> worth bunging in conversions between offset_int and widest_int to save a
> few HWIs of stack space?

I think so.

>> @@ -529,15 +526,15 @@ end:
>>     difference of two values in TYPE.  */
>>
>>  static void
>> -bounds_add (bounds *bnds, double_int delta, tree type)
>> +bounds_add (bounds *bnds, widest_int delta, tree type)
>>  {
>>    mpz_t mdelta, max;
>>
>> const widest_int &delta please (please double-check the patches for
>> widest-int-by-value passing).
>>
>> @@ -2592,7 +2587,7 @@ derive_constant_upper_bound_ops (tree type, tree op0,
>>
>>  static void
>>  do_warn_aggressive_loop_optimizations (struct loop *loop,
>> -                                      double_int i_bound, gimple stmt)
>> +                                      widest_int i_bound, gimple stmt)
>>  {
>>    /* Don't warn if the loop doesn't have known constant bound.  */
>>    if (!loop->nb_iterations
>>
>> Likewise.
>
> Fixed, along with a few other cases.
>
> Testing still in progress, but does this look OK?

Yes.

Thanks,
Richard.

> Thanks,
> Richard
>
>
> Index: gcc/dwarf2out.c
> ===================================================================
> --- gcc/dwarf2out.c     2013-11-29 14:52:01.820750709 +0000
> +++ gcc/dwarf2out.c     2013-11-29 14:52:03.464763467 +0000
> @@ -14793,12 +14793,9 @@ simple_decl_align_in_bits (const_tree de
>  /* Return the result of rounding T up to ALIGN.  */
>
>  static inline offset_int
> -round_up_to_align (offset_int t, unsigned int align)
> +round_up_to_align (const offset_int &t, unsigned int align)
>  {
> -  t += align - 1;
> -  t = wi::udiv_trunc (t, align);
> -  t *= align;
> -  return t;
> +  return wi::udiv_trunc (t + align - 1, align) * align;
>  }
>
>  /* Given a pointer to a FIELD_DECL, compute and return the byte offset of the
> Index: gcc/gimple-ssa-strength-reduction.c
> ===================================================================
> --- gcc/gimple-ssa-strength-reduction.c 2013-11-29 14:52:01.820750709 +0000
> +++ gcc/gimple-ssa-strength-reduction.c 2013-11-29 14:52:03.465763474 +0000
> @@ -2043,7 +2043,7 @@ replace_unconditional_candidate (slsr_ca
>     MAX_INCR_VEC_LEN increments have been found.  */
>
>  static inline int
> -incr_vec_index (widest_int increment)
> +incr_vec_index (const widest_int &increment)
>  {
>    unsigned i;
>
> Index: gcc/tree-ssa-address.c
> ===================================================================
> --- gcc/tree-ssa-address.c      2013-11-29 14:52:01.820750709 +0000
> +++ gcc/tree-ssa-address.c      2013-11-29 14:52:03.459763428 +0000
> @@ -886,8 +886,8 @@ copy_ref_info (tree new_ref, tree old_re
>                            && (TREE_INT_CST_LOW (TMR_STEP (new_ref))
>                                < align)))))
>             {
> -             unsigned int inc = (mem_ref_offset (old_ref).to_uhwi ()
> -                                 - mem_ref_offset (new_ref).to_uhwi ());
> +             unsigned int inc = (mem_ref_offset (old_ref).to_short_addr ()
> +                                 - mem_ref_offset (new_ref).to_short_addr ());
>               adjust_ptr_info_misalignment (new_pi, inc);
>             }
>           else
> Index: gcc/tree-ssa-ccp.c
> ===================================================================
> --- gcc/tree-ssa-ccp.c  2013-11-29 14:52:01.820750709 +0000
> +++ gcc/tree-ssa-ccp.c  2013-11-29 14:52:03.460763435 +0000
> @@ -529,8 +529,8 @@ set_lattice_value (tree var, prop_value_
>  static prop_value_t get_value_for_expr (tree, bool);
>  static prop_value_t bit_value_binop (enum tree_code, tree, tree, tree);
>  static void bit_value_binop_1 (enum tree_code, tree, widest_int *, widest_int *,
> -                              tree, widest_int, widest_int,
> -                              tree, widest_int, widest_int);
> +                              tree, const widest_int &, const widest_int &,
> +                              tree, const widest_int &, const widest_int &);
>
>  /* Return a widest_int that can be used for bitwise simplifications
>     from VAL.  */
> @@ -1199,11 +1199,13 @@ bit_value_unop_1 (enum tree_code code, t
>  static void
>  bit_value_binop_1 (enum tree_code code, tree type,
>                    widest_int *val, widest_int *mask,
> -                  tree r1type, widest_int r1val, widest_int r1mask,
> -                  tree r2type, widest_int r2val, widest_int r2mask)
> +                  tree r1type, const widest_int &r1val,
> +                  const widest_int &r1mask, tree r2type,
> +                  const widest_int &r2val, const widest_int &r2mask)
>  {
>    signop sgn = TYPE_SIGN (type);
>    int width = TYPE_PRECISION (type);
> +  bool swap_p = false;
>
>    /* Assume we'll get a constant result.  Use an initial non varying
>       value, we fall back to varying in the end if necessary.  */
> @@ -1376,27 +1378,19 @@ bit_value_binop_1 (enum tree_code code,
>
>      case GE_EXPR:
>      case GT_EXPR:
> +      swap_p = true;
> +      code = swap_tree_comparison (code);
> +      /* Fall through.  */
>      case LT_EXPR:
>      case LE_EXPR:
>        {
> -       widest_int o1val, o2val, o1mask, o2mask;
>         int minmax, maxmin;
>
> -       if ((code == GE_EXPR) || (code == GT_EXPR))
> -         {
> -           o1val = r2val;
> -           o1mask = r2mask;
> -           o2val = r1val;
> -           o2mask = r1mask;
> -           code = swap_tree_comparison (code);
> -         }
> -       else
> -         {
> -           o1val = r1val;
> -           o1mask = r1mask;
> -           o2val = r2val;
> -           o2mask = r2mask;
> -         }
> +       const widest_int &o1val = swap_p ? r2val : r1val;
> +       const widest_int &o1mask = swap_p ? r2mask : r1mask;
> +       const widest_int &o2val = swap_p ? r1val : r2val;
> +       const widest_int &o2mask = swap_p ? r1mask : r2mask;
> +
>         /* If the most significant bits are not known we know nothing.  */
>         if (wi::neg_p (o1mask) || wi::neg_p (o2mask))
>           break;
> Index: gcc/tree-ssa-loop-niter.c
> ===================================================================
> --- gcc/tree-ssa-loop-niter.c   2013-11-29 14:52:01.820750709 +0000
> +++ gcc/tree-ssa-loop-niter.c   2013-11-29 14:52:34.655006442 +0000
> @@ -527,7 +527,7 @@ bound_difference (struct loop *loop, tre
>     difference of two values in TYPE.  */
>
>  static void
> -bounds_add (bounds *bnds, widest_int delta, tree type)
> +bounds_add (bounds *bnds, const widest_int &delta, tree type)
>  {
>    mpz_t mdelta, max;
>
> @@ -2624,10 +2624,10 @@ do_warn_aggressive_loop_optimizations (s
>     is taken at last when the STMT is executed BOUND + 1 times.
>     REALISTIC is true if BOUND is expected to be close to the real number
>     of iterations.  UPPER is true if we are sure the loop iterates at most
> -   BOUND times.  I_BOUND is an unsigned wide_int upper estimate on BOUND.  */
> +   BOUND times.  I_BOUND is a widest_int upper estimate on BOUND.  */
>
>  static void
> -record_estimate (struct loop *loop, tree bound, widest_int i_bound,
> +record_estimate (struct loop *loop, tree bound, const widest_int &i_bound,
>                  gimple at_stmt, bool is_exit, bool realistic, bool upper)
>  {
>    widest_int delta;
> @@ -2683,15 +2683,15 @@ record_estimate (struct loop *loop, tree
>      delta = 0;
>    else
>      delta = 1;
> -  i_bound += delta;
> +  widest_int new_i_bound = i_bound + delta;
>
>    /* If an overflow occurred, ignore the result.  */
> -  if (wi::ltu_p (i_bound, delta))
> +  if (wi::ltu_p (new_i_bound, delta))
>      return;
>
>    if (upper && !is_exit)
> -    do_warn_aggressive_loop_optimizations (loop, i_bound, at_stmt);
> -  record_niter_bound (loop, i_bound, realistic, upper);
> +    do_warn_aggressive_loop_optimizations (loop, new_i_bound, at_stmt);
> +  record_niter_bound (loop, new_i_bound, realistic, upper);
>  }
>
>  /* Record the estimate on number of iterations of LOOP based on the fact that
> Index: gcc/tree-vrp.c
> ===================================================================
> --- gcc/tree-vrp.c      2013-11-29 14:52:01.820750709 +0000
> +++ gcc/tree-vrp.c      2013-11-29 14:52:03.466763482 +0000
> @@ -4683,13 +4683,13 @@ extract_code_and_val_from_cond_with_ops
>     SGNBIT back.  */
>
>  static wide_int
> -masked_increment (wide_int val, wide_int mask, wide_int sgnbit,
> -                 unsigned int prec)
> +masked_increment (const wide_int &val_in, const wide_int &mask,
> +                 const wide_int &sgnbit, unsigned int prec)
>  {
>    wide_int bit = wi::one (prec), res;
>    unsigned int i;
>
> -  val ^= sgnbit;
> +  wide_int val = val_in ^ sgnbit;
>    for (i = 0; i < prec; i++, bit += bit)
>      {
>        res = mask;

Patch

Index: gcc/dwarf2out.c
===================================================================
--- gcc/dwarf2out.c	2013-11-29 14:52:01.820750709 +0000
+++ gcc/dwarf2out.c	2013-11-29 14:52:03.464763467 +0000
@@ -14793,12 +14793,9 @@  simple_decl_align_in_bits (const_tree de
 /* Return the result of rounding T up to ALIGN.  */
 
 static inline offset_int
-round_up_to_align (offset_int t, unsigned int align)
+round_up_to_align (const offset_int &t, unsigned int align)
 {
-  t += align - 1;
-  t = wi::udiv_trunc (t, align);
-  t *= align;
-  return t;
+  return wi::udiv_trunc (t + align - 1, align) * align;
 }
 
 /* Given a pointer to a FIELD_DECL, compute and return the byte offset of the
Index: gcc/gimple-ssa-strength-reduction.c
===================================================================
--- gcc/gimple-ssa-strength-reduction.c	2013-11-29 14:52:01.820750709 +0000
+++ gcc/gimple-ssa-strength-reduction.c	2013-11-29 14:52:03.465763474 +0000
@@ -2043,7 +2043,7 @@  replace_unconditional_candidate (slsr_ca
    MAX_INCR_VEC_LEN increments have been found.  */
 
 static inline int
-incr_vec_index (widest_int increment)
+incr_vec_index (const widest_int &increment)
 {
   unsigned i;
   
Index: gcc/tree-ssa-address.c
===================================================================
--- gcc/tree-ssa-address.c	2013-11-29 14:52:01.820750709 +0000
+++ gcc/tree-ssa-address.c	2013-11-29 14:52:03.459763428 +0000
@@ -886,8 +886,8 @@  copy_ref_info (tree new_ref, tree old_re
 			   && (TREE_INT_CST_LOW (TMR_STEP (new_ref))
 			       < align)))))
 	    {
-	      unsigned int inc = (mem_ref_offset (old_ref).to_uhwi ()
-				  - mem_ref_offset (new_ref).to_uhwi ());
+	      unsigned int inc = (mem_ref_offset (old_ref).to_short_addr ()
+				  - mem_ref_offset (new_ref).to_short_addr ());
 	      adjust_ptr_info_misalignment (new_pi, inc);
 	    }
 	  else
Index: gcc/tree-ssa-ccp.c
===================================================================
--- gcc/tree-ssa-ccp.c	2013-11-29 14:52:01.820750709 +0000
+++ gcc/tree-ssa-ccp.c	2013-11-29 14:52:03.460763435 +0000
@@ -529,8 +529,8 @@  set_lattice_value (tree var, prop_value_
 static prop_value_t get_value_for_expr (tree, bool);
 static prop_value_t bit_value_binop (enum tree_code, tree, tree, tree);
 static void bit_value_binop_1 (enum tree_code, tree, widest_int *, widest_int *,
-			       tree, widest_int, widest_int,
-			       tree, widest_int, widest_int);
+			       tree, const widest_int &, const widest_int &,
+			       tree, const widest_int &, const widest_int &);
 
 /* Return a widest_int that can be used for bitwise simplifications
    from VAL.  */
@@ -1199,11 +1199,13 @@  bit_value_unop_1 (enum tree_code code, t
 static void
 bit_value_binop_1 (enum tree_code code, tree type,
 		   widest_int *val, widest_int *mask,
-		   tree r1type, widest_int r1val, widest_int r1mask,
-		   tree r2type, widest_int r2val, widest_int r2mask)
+		   tree r1type, const widest_int &r1val,
+		   const widest_int &r1mask, tree r2type,
+		   const widest_int &r2val, const widest_int &r2mask)
 {
   signop sgn = TYPE_SIGN (type);
   int width = TYPE_PRECISION (type);
+  bool swap_p = false;
 
   /* Assume we'll get a constant result.  Use an initial non varying
      value, we fall back to varying in the end if necessary.  */
@@ -1376,27 +1378,19 @@  bit_value_binop_1 (enum tree_code code,
 
     case GE_EXPR:
     case GT_EXPR:
+      swap_p = true;
+      code = swap_tree_comparison (code);
+      /* Fall through.  */
     case LT_EXPR:
     case LE_EXPR:
       {
-	widest_int o1val, o2val, o1mask, o2mask;
 	int minmax, maxmin;
 
-	if ((code == GE_EXPR) || (code == GT_EXPR))
-	  {
-	    o1val = r2val;
-	    o1mask = r2mask;
-	    o2val = r1val;
-	    o2mask = r1mask;
-	    code = swap_tree_comparison (code);
-	  }
-	else
-	  {
-	    o1val = r1val;
-	    o1mask = r1mask;
-	    o2val = r2val;
-	    o2mask = r2mask;
-	  }
+	const widest_int &o1val = swap_p ? r2val : r1val;
+	const widest_int &o1mask = swap_p ? r2mask : r1mask;
+	const widest_int &o2val = swap_p ? r1val : r2val;
+	const widest_int &o2mask = swap_p ? r1mask : r2mask;
+
 	/* If the most significant bits are not known we know nothing.  */
 	if (wi::neg_p (o1mask) || wi::neg_p (o2mask))
 	  break;
Index: gcc/tree-ssa-loop-niter.c
===================================================================
--- gcc/tree-ssa-loop-niter.c	2013-11-29 14:52:01.820750709 +0000
+++ gcc/tree-ssa-loop-niter.c	2013-11-29 14:52:34.655006442 +0000
@@ -527,7 +527,7 @@  bound_difference (struct loop *loop, tre
    difference of two values in TYPE.  */
 
 static void
-bounds_add (bounds *bnds, widest_int delta, tree type)
+bounds_add (bounds *bnds, const widest_int &delta, tree type)
 {
   mpz_t mdelta, max;
 
@@ -2624,10 +2624,10 @@  do_warn_aggressive_loop_optimizations (s
    is taken at last when the STMT is executed BOUND + 1 times.
    REALISTIC is true if BOUND is expected to be close to the real number
    of iterations.  UPPER is true if we are sure the loop iterates at most
-   BOUND times.  I_BOUND is an unsigned wide_int upper estimate on BOUND.  */
+   BOUND times.  I_BOUND is a widest_int upper estimate on BOUND.  */
 
 static void
-record_estimate (struct loop *loop, tree bound, widest_int i_bound,
+record_estimate (struct loop *loop, tree bound, const widest_int &i_bound,
 		 gimple at_stmt, bool is_exit, bool realistic, bool upper)
 {
   widest_int delta;
@@ -2683,15 +2683,15 @@  record_estimate (struct loop *loop, tree
     delta = 0;
   else
     delta = 1;
-  i_bound += delta;
+  widest_int new_i_bound = i_bound + delta;
 
   /* If an overflow occurred, ignore the result.  */
-  if (wi::ltu_p (i_bound, delta))
+  if (wi::ltu_p (new_i_bound, delta))
     return;
 
   if (upper && !is_exit)
-    do_warn_aggressive_loop_optimizations (loop, i_bound, at_stmt);
-  record_niter_bound (loop, i_bound, realistic, upper);
+    do_warn_aggressive_loop_optimizations (loop, new_i_bound, at_stmt);
+  record_niter_bound (loop, new_i_bound, realistic, upper);
 }
 
 /* Record the estimate on number of iterations of LOOP based on the fact that
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	2013-11-29 14:52:01.820750709 +0000
+++ gcc/tree-vrp.c	2013-11-29 14:52:03.466763482 +0000
@@ -4683,13 +4683,13 @@  extract_code_and_val_from_cond_with_ops
    SGNBIT back.  */
 
 static wide_int
-masked_increment (wide_int val, wide_int mask, wide_int sgnbit,
-		  unsigned int prec)
+masked_increment (const wide_int &val_in, const wide_int &mask,
+		  const wide_int &sgnbit, unsigned int prec)
 {
   wide_int bit = wi::one (prec), res;
   unsigned int i;
 
-  val ^= sgnbit;
+  wide_int val = val_in ^ sgnbit;
   for (i = 0; i < prec; i++, bit += bit)
     {
       res = mask;