
wide-int, tree-ssa

Message ID CAFiYyc0bsnnepKc0ci4bz44XppsomCu4FgSk_tt17JnDL3UK0A@mail.gmail.com

Commit Message

Richard Biener Nov. 26, 2013, 12:34 p.m. UTC
On Sat, Nov 23, 2013 at 8:23 PM, Mike Stump <mikestump@comcast.net> wrote:
> Richi has asked that we break up the wide-int patch so that the individual port and front end maintainers can review their parts without having to go through the entire patch.  This patch covers the tree-ssa code.
>
> Ok?

Same comments as to tree-affine.c apply to tree-ssa-address.c.

@@ -887,8 +886,8 @@ copy_ref_info (tree new_ref, tree old_ref)
                           && (TREE_INT_CST_LOW (TMR_STEP (new_ref))
                               < align)))))
            {
-             unsigned int inc = (mem_ref_offset (old_ref)
-                                 - mem_ref_offset (new_ref)).low;
+             unsigned int inc = (mem_ref_offset (old_ref).to_uhwi ()
+                                 - mem_ref_offset (new_ref).to_uhwi ());
              adjust_ptr_info_misalignment (new_pi, inc);
            }
          else

Other patches used .to_short_addr ().  Also, your conversion isn't
correct: previously the subtraction was done in infinite precision and
only the result (possibly) truncated.  Now the subtraction operands are
truncated before subtracting - that looks wrong.
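
Something like the following would keep the subtraction in the wider
precision and truncate only the result (a sketch - whether .to_uhwi ()
or .to_short_addr () is the right accessor here depends on the rest of
the series):

	      unsigned int inc = (mem_ref_offset (old_ref)
				  - mem_ref_offset (new_ref)).to_uhwi ();
	      adjust_ptr_info_misalignment (new_pi, inc);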

I've raised this in the other mail - these should remain double_ints for now
as we may have _tons_ of them around (well, I think only the after-IPA
pass pipeline will create them).

The rest of the patch looks ok to me (modulo what I said in the other
mails and what overlaps with code in this patch).

Thanks,
Richard.

Comments

Kenneth Zadeck Jan. 3, 2014, 4:04 p.m. UTC | #1
On 11/26/2013 07:34 AM, Richard Biener wrote:
> --- a/gcc/tree-ssa-ccp.c
> +++ b/gcc/tree-ssa-ccp.c
> @@ -98,6 +98,15 @@ along with GCC; see the file COPYING3.  If not see
>      array CONST_VAL[i].VALUE.  That is fed into substitute_and_fold for
>      final substitution and folding.
>
> +   This algorithm uses wide-ints at the max precision of the target.
> +   This means that, with one uninteresting exception, variables with
> +   UNSIGNED types never go to VARYING because the bits above the
> +   precision of the type of the variable are always zero.  The
> +   uninteresting case is a variable of UNSIGNED type that has the
> +   maximum precision of the target.  Such variables can go to VARYING,
> +   but this causes no loss of information since these variables will
> +   never be extended.
> +
>
> I don't understand this.  In CCP a SSA name drops to varying when
> it's not constant.  How is that related to signedness or precision?!

Richi,

This is a bogosity that is inherited from the double-int code.  Consider
an unsigned int in double-int (or, for that matter, in widest-int): it
has a rep of a bunch of leading zeros followed by 32 bits of real
stuff.  Even if you know nothing about the value, you still know that
the upper bits are constant zeros.  Of course this is not true for
signed numbers, because the upper bits are smeared with the sign bit.
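
To make that concrete (a sketch in the pass's (value, mask) encoding,
where a set mask bit means "this bit is unknown", showing only 64 bits
of the widest_int): a 32-bit unsigned variable about which nothing is
known still looks like

   value = 0x0000000000000000   /* no bit known to be 1 */
   mask  = 0x00000000ffffffff   /* only the low 32 bits are unknown */

so the mask never reaches -1 and the mask == -1 test for VARYING never
fires.  A signed variable sign-extends the mask and so can reach
mask == -1.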

This is what I thought you were talking about when you argued, many
months ago, that infinite precision was more natural and that I would
have trouble with a fixed precision based on the size of the type or
mode; it is the reason that tree-ssa-ccp uses widest_int rather than
wide-int.  I do actually believe that wide-int is more natural here.
However, this bogosity does buy you a few constants.

My first cut at this pass used wide-int rather than widest_int, but
the pass in fact finds constants that depend on its being done this
way.  So, to satisfy the bit-for-bit-same-code rule for values smaller
than TImode, I left this the way it was but decided to document it.

The proper way to do this is in fact to use wide-int, not widest_int,
but that would require major surgery on the code that converts values
from one type to another.  I plan to do this during the next stage 1.
At that time, I will also enhance the code to sign-extend the value
when the sign bit is known for signed types.

Kenny

Patch

--- a/gcc/tree-ssa-ccp.c
+++ b/gcc/tree-ssa-ccp.c
@@ -98,6 +98,15 @@  along with GCC; see the file COPYING3.  If not see
    array CONST_VAL[i].VALUE.  That is fed into substitute_and_fold for
    final substitution and folding.

+   This algorithm uses wide-ints at the max precision of the target.
+   This means that, with one uninteresting exception, variables with
+   UNSIGNED types never go to VARYING because the bits above the
+   precision of the type of the variable are always zero.  The
+   uninteresting case is a variable of UNSIGNED type that has the
+   maximum precision of the target.  Such variables can go to VARYING,
+   but this causes no loss of information since these variables will
+   never be extended.
+

I don't understand this.  In CCP a SSA name drops to varying when
it's not constant.  How is that related to signedness or precision?!

@@ -511,21 +523,21 @@  set_lattice_value (tree var, prop_value_t new_val)

 static prop_value_t get_value_for_expr (tree, bool);
 static prop_value_t bit_value_binop (enum tree_code, tree, tree, tree);
-static void bit_value_binop_1 (enum tree_code, tree, double_int *, double_int *,
-                              tree, double_int, double_int,
-                              tree, double_int, double_int);
+static void bit_value_binop_1 (enum tree_code, tree, widest_int *, widest_int *,
+                              tree, widest_int, widest_int,
+                              tree, widest_int, widest_int);

please don't pass widest_int by value but instead pass it by const reference.
compared to double_int it is really large (most targets passed double_int
in two registers).
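
That is, something like:

 static void bit_value_binop_1 (enum tree_code, tree, widest_int *, widest_int *,
				tree, const widest_int &, const widest_int &,
				tree, const widest_int &, const widest_int &);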

-/* Return a double_int that can be used for bitwise simplifications
+/* Return a widest_int that can be used for bitwise simplifications
    from VAL.  */

-static double_int
-value_to_double_int (prop_value_t val)
+static widest_int
+value_to_wide_int (prop_value_t val)
 {
   if (val.value
       && TREE_CODE (val.value) == INTEGER_CST)
-    return tree_to_double_int (val.value);
-  else
-    return double_int_zero;
+    return wi::to_widest (val.value);
+
+  return 0;
 }

you also get to hope that we optimize all the widest_int return-by-value
code to elide the implied copying ... (not sure if you can do that by
adding a not implemented copy / assignment constructor - C++ folks?)
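
For reference, the idiom being asked about looks roughly like this
(a hypothetical class, not the actual wide-int implementation):

  class noncopyable_value
  {
  public:
    noncopyable_value () : val (0) {}
  private:
    /* Declared but intentionally never defined: any attempted copy
       fails to compile (private) or to link (from within the class).
       With C++11 one would write "= delete" instead.  */
    noncopyable_value (const noncopyable_value &);
    noncopyable_value &operator= (const noncopyable_value &);
    long val;
  };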

+  val.lattice_val = val.mask == -1 ? VARYING : CONSTANT;
   if (val.lattice_val == CONSTANT)

you mean this check wrt the toplevel VARYING comment?  It would
be bad if that no longer ends up VARYING.  OTOH I don't know whether
the current check is correct either - we extend the mask according to
the sign of the lattice element.  Thus the wide-int variant looks equivalent.
Note that for unsigned values we have knowledge about the bits
outside of the precision - they are zero, so technically unsigneds
never go VARYING.

     case LT_EXPR:
     case LE_EXPR:
       {
+       widest_int o1val, o2val, o1mask, o2mask;
        int minmax, maxmin;
+
+       if ((code == GE_EXPR) || (code == GT_EXPR))
+         {
+           o1val = r2val;
+           o1mask = r2mask;
+           o2val = r1val;
+           o2mask = r1mask;
+           code = swap_tree_comparison (code);
+         }
+       else
+         {
+           o1val = r1val;
+           o1mask = r1mask;
+           o2val = r2val;
+           o2mask = r2mask;
+         }

Those are pretty expensive copies, no?  Consider using

  const widest_int &o1val = swap ? r2val : r1val;

and so on.  (C++ folks?  Any idea how to avoid the repeated
conditional init in favor of sth that more resembles the original?)
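
Spelled out, something like (sketch, computing the swap flag first):

       bool swap = (code == GE_EXPR || code == GT_EXPR);
       const widest_int &o1val = swap ? r2val : r1val;
       const widest_int &o1mask = swap ? r2mask : r1mask;
       const widest_int &o2val = swap ? r1val : r2val;
       const widest_int &o2mask = swap ? r1mask : r2mask;
       if (swap)
	 code = swap_tree_comparison (code);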

diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c
index 6ea634c..c975a97 100644
--- a/gcc/tree-ssa-loop-im.c
+++ b/gcc/tree-ssa-loop-im.c
@@ -1643,7 +1643,7 @@  mem_refs_may_alias_p (mem_ref_p mem1, mem_ref_p mem2,
   /* Perform BASE + OFFSET analysis -- if MEM1 and MEM2 are based on the same
      object and their offset differ in such a way that the locations cannot
      overlap, then they cannot alias.  */
-  double_int size1, size2;
+  widest_int size1, size2;
   aff_tree off1, off2;

From the names you can tell these should be offset_int.

diff --git a/gcc/tree-ssa-loop-ivopts.c b/gcc/tree-ssa-loop-ivopts.c
index 476f3a1..aa94400 100644
--- a/gcc/tree-ssa-loop-ivopts.c
+++ b/gcc/tree-ssa-loop-ivopts.c
@@ -944,7 +944,7 @@  alloc_iv (tree base, tree step)
       && !DECL_P (TREE_OPERAND (base_object, 0)))
     {
       aff_tree comb;
-      double_int size;
+      widest_int size;
       base_object = get_inner_reference_aff (TREE_OPERAND (base_object, 0),
                                             &comb, &size);
       gcc_assert (base_object != NULL_TREE);

likewise.

@@ -529,15 +526,15 @@  end:
    difference of two values in TYPE.  */

 static void
-bounds_add (bounds *bnds, double_int delta, tree type)
+bounds_add (bounds *bnds, widest_int delta, tree type)
 {
   mpz_t mdelta, max;

const widest_int &delta, please (and please double-check the patches
for other widest_int-by-value passing).
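
That is (sketch of the adjusted signature):

 static void
 bounds_add (bounds *bnds, const widest_int &delta, tree type)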

@@ -2592,7 +2587,7 @@  derive_constant_upper_bound_ops (tree type, tree op0,

 static void
 do_warn_aggressive_loop_optimizations (struct loop *loop,
-                                      double_int i_bound, gimple stmt)
+                                      widest_int i_bound, gimple stmt)
 {
   /* Don't warn if the loop doesn't have known constant bound.  */
   if (!loop->nb_iterations

Likewise.

@@ -2628,13 +2623,13 @@  do_warn_aggressive_loop_optimizations (struct loop *loop,
    is taken at last when the STMT is executed BOUND + 1 times.
    REALISTIC is true if BOUND is expected to be close to the real number
    of iterations.  UPPER is true if we are sure the loop iterates at most
-   BOUND times.  I_BOUND is an unsigned double_int upper estimate on BOUND.  */
+   BOUND times.  I_BOUND is an unsigned wide_int upper estimate on BOUND.  */

 static void
-record_estimate (struct loop *loop, tree bound, double_int i_bound,
+record_estimate (struct loop *loop, tree bound, widest_int i_bound,
                 gimple at_stmt, bool is_exit, bool realistic, bool upper)
 {
-  double_int delta;
+  widest_int delta;

   if (dump_file && (dump_flags & TDF_DETAILS))
     {

Likewise.  (this is also about stack-usage in deep call frames, even if
the actual copies might end up being cheap)

 /* Return index of BOUND in BOUNDS array sorted in increasing order.
    Lookup by binary search.  */

 static int
-bound_index (vec<double_int> bounds, double_int bound)
+bound_index (vec<widest_int> bounds, const widest_int &bound)
 {
   unsigned int end = bounds.length ();
   unsigned int begin = 0;

eh... a vec of widest_int ... you know this is of the order of O (# statements)?
Oh well.  At least it's temporary.

diff --git a/gcc/tree-ssanames.h b/gcc/tree-ssanames.h
index d0a6542..2ac7744 100644
--- a/gcc/tree-ssanames.h
+++ b/gcc/tree-ssanames.h
@@ -49,11 +49,11 @@  struct GTY(()) ptr_info_def

 struct GTY (()) range_info_def {
   /* Minimum for value range.  */
-  double_int min;
+  widest_int min;
   /* Maximum for value range.  */
-  double_int max;
+  widest_int max;
   /* Non-zero bits - bits not set are guaranteed to be always zero.  */
-  double_int nonzero_bits;
+  widest_int nonzero_bits;
 };