[PR69052] Check if loop inv can be propagated into mem ref with additional addr expr canonicalization

Message ID HE1PR08MB050748A21B9A8BA2CEBA3D21E7D60@HE1PR08MB0507.eurprd08.prod.outlook.com
State New

Commit Message

Bin Cheng Feb. 9, 2016, 11:08 a.m. UTC
Hi,
When computing the cost of a loop invariant, GCC checks whether the invariant can be propagated into its use site (a memory reference).  If it cannot be propagated, we increase its cost so that it is expensive enough to be hoisted out of the loop.  Currently we simply replace the loop-invariant register in the use site with its definition expression, then call validate_changes to check whether the resulting insn is valid.  This is weak because validate_changes doesn't take canonicalization into consideration.  Consider the example below:

  Loop inv def:  
   69: r149:SI=r87:SI+const(unspec[`xxxx'] 1)
      REG_DEAD r87:SI
  Loop inv use:
   70: r150:SI=[r90:SI*0x4+r149:SI]
      REG_DEAD r149:SI

The address expression after propagation is "r90 * 0x4 + (r87 + const(unspec[`xxxx']))".  Function validate_changes simply returns false for it.  In fact, the propagation is feasible if we canonicalize the address expression into a form like "(r90 * 0x4 + r87) + const(unspec[`xxxx'])".

This patch fixes the problem by canonicalizing the address expression and verifying whether the new address is valid.  The canonicalization follows GCC's insn canonicalization rules.  The test case from the bugzilla PR is also included.
As for the canonicalize_address interface: there is another canonicalize_address in fwprop.c which only changes shifts into mults.  I think it would be good to factor out a common RTL interface in GCC, but that's stage1 work.
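
To make the reassociation concrete, here is a small standalone sketch (toy C++, not GCC's rtx API -- the Term type and canonicalize helper are invented for illustration) that orders the terms MULT-first and constant-last and chains the PLUS operators to the left:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// A toy flat view of an address: a sum of terms, each with a rank
// matching the canonical operand order (MULT < REG < constant).
struct Term {
  std::string text;  // e.g. "r90*0x4", "r87", "const(unspec[`xxxx'])"
  int rank;          // 0 = MULT term, 1 = REG term, 2 = constant term
};

// Reassociate the sum: sort the terms into canonical order, then
// chain the additions to the left, keeping the constant outermost.
static std::string canonicalize(std::vector<Term> terms) {
  std::stable_sort(terms.begin(), terms.end(),
                   [](const Term &a, const Term &b) { return a.rank < b.rank; });
  std::string out = terms[0].text;
  for (size_t i = 1; i + 1 < terms.size(); ++i)
    out = "(" + out + " + " + terms[i].text + ")";
  if (terms.size() > 1)
    out += " + " + terms.back().text;
  return out;
}
```

With the terms from the example above, this turns "r90*0x4 + (r87 + const(...))" into the left-chained form "(r90*0x4 + r87) + const(...)", which is the shape the insn patterns can recognize.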

Bootstrapped and tested on x86_64 and AArch64.  Is it OK?

Thanks,
bin

2016-02-09  Bin Cheng  <bin.cheng@arm.com>

	PR tree-optimization/69052
	* loop-invariant.c (canonicalize_address): New function.
	(inv_can_prop_to_addr_use): Check validity of address expression
	which is canonicalized by above function.

gcc/testsuite/ChangeLog
2016-02-09  Bin Cheng  <bin.cheng@arm.com>

	PR tree-optimization/69052
	* gcc.target/i386/pr69052.c: New test.

Comments

Jeff Law Feb. 11, 2016, 7:14 a.m. UTC | #1
On 02/09/2016 04:08 AM, Bin Cheng wrote:
> Hi,
> When counting cost for loop inv, GCC checks if a loop inv can be propagated into its use site (a memory reference).  If it cannot be propagated, we increase its cost so that it's expensive enough to be hoisted out of loop.  Currently we simply replace loop inv register in the use site with its definition expression, then call validate_changes to check if the result insn is valid.  This is weak because validate_changes doesn't take canonicalization into consideration.  Given below example:
>
>    Loop inv def:
>     69: r149:SI=r87:SI+const(unspec[`xxxx'] 1)
>        REG_DEAD r87:SI
>    Loop inv use:
>     70: r150:SI=[r90:SI*0x4+r149:SI]
>        REG_DEAD r149:SI
>
> The address expression after propagation is "r90 * 0x4 + (r87 + const(unspec[`xxxx']))".  Function validate_changes simply returns false to it.  As a matter of fact, the propagation is feasible if we canonicalize address expression into the form like "(r90 * 0x4 + r87) + const(unspec[`xxxx'])".
>
> This patch fixes the problem by canonicalizing address expression and verifying if the new addr is valid.  The canonicalization follows GCC insn canonicalization rules.  The test case from bugzilla PR is also included.
> As for the canonicalize_address interface, there is another canonicalize_address in fwprop.c which only changes shift into mult.  I think it would be good to factor out a common RTL interface in GCC, but that's stage1 work.

Also note there are bits in combine that will canonicalize appropriate 
shifts into mults.  Clearly there's a need for some generalized routines 
to take a fairly generic address and perform canonicalizations and 
simplifications on it.

> Bootstrap and test on x86_64 and AArch64.  Is it OK?
>
> Thanks,
> bin
>
> 2016-02-09  Bin Cheng <bin.cheng@arm.com>
>
> 	PR tree-optimization/69052
> 	* loop-invariant.c (canonicalize_address): New function.
> 	(inv_can_prop_to_addr_use): Check validity of address expression
> 	which is canonicalized by above function.
>
> gcc/testsuite/ChangeLog
> 2016-02-09  Bin Cheng <bin.cheng@arm.com>
>
> 	PR tree-optimization/69052
> 	* gcc.target/i386/pr69052.c: New test.
>
>
> pr69052-20160204.txt
>
>
> diff --git a/gcc/loop-invariant.c b/gcc/loop-invariant.c
> index 707f044..157e273 100644
> --- a/gcc/loop-invariant.c
> +++ b/gcc/loop-invariant.c
> @@ -754,6 +754,74 @@ create_new_invariant (struct def *def, rtx_insn *insn, bitmap depends_on,
>     return inv;
>   }
>
> +/* Returns a canonical version address for X.  It identifies
> +   addr expr in the form of A + B + C.  Following instruction
> +   canonicalization rules, MULT operand is moved to the front,
> +   CONST operand is moved to the end; also PLUS operators are
> +   chained to the left.  */
> +
> +static rtx
> +canonicalize_address (rtx x)
> +{
> +  rtx op0, op1, op2;
> +  machine_mode mode = GET_MODE (x);
> +  enum rtx_code code = GET_CODE (x);
> +
> +  if (code != PLUS)
> +    return x;
> +
> +  /* Extract operands from A + B (+ C).  */
> +  if (GET_CODE (XEXP (x, 0)) == PLUS)
> +    {
> +      op0 = XEXP (XEXP (x, 0), 0);
> +      op1 = XEXP (XEXP (x, 0), 1);
> +      op2 = XEXP (x, 1);
> +    }
> +  else if (GET_CODE (XEXP (x, 1)) == PLUS)
> +    {
> +      op0 = XEXP (x, 0);
> +      op1 = XEXP (XEXP (x, 1), 0);
> +      op2 = XEXP (XEXP (x, 1), 1);
> +    }
> +  else
> +    {
> +      op0 = XEXP (x, 0);
> +      op1 = XEXP (x, 1);
> +      op2 = NULL_RTX;
> +    }
> +
> +  /* Move MULT operand to the front.  */
> +  if (!REG_P (op1) && !CONST_INT_P (op1))
> +    std::swap (op0, op1);
This feels a bit hack-ish in the sense that you already know the form of 
the RTL you're expecting and just assume that you'll be given something 
of that form, but no more complex.

ISTM you're better off walking the whole rtx, recording the tidbits as 
you go into a vec.  If you see something unexpected during that walk, 
you punt canonicalization of the whole expression.

You then sort the vec.  You want to move things like MULT to the start 
and all the constants to the end I think.

You then do simplifications, particularly on the constants, but there 
may be something useful to do with MULT terms as well.  You could also 
arrange to rewrite ASHIFTs into MULTs at this stage.

Then you generate a new equivalent expression from the simplified 
operands in the vec.

You might look at tree-ssa-reassoc for ideas on implementation details.
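
The collect/sort/fold scheme described above could look roughly like this (a toy model with an invented Node type standing in for rtx; a sketch of the idea, not a real implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Toy stand-in for rtx: a leaf (register, mult term, or integer
// constant) or an inner PLUS node.  Invented for illustration only.
struct Node {
  std::string leaf;                       // "r87", "r90*4", or "3" when is_const
  bool is_const = false;
  const Node *l = nullptr, *r = nullptr;  // both non-null => PLUS
};

struct Op {
  enum Kind { MULT, REG, CST } kind;  // sort key: MULT first, CST last
  std::string text;
  long cst = 0;
};

// Walk the whole tree, recording leaves into the vec; punt (return
// false) on anything unexpected instead of guessing at the shape.
static bool collect(const Node *n, std::vector<Op> &ops) {
  if (n->l && n->r)
    return collect(n->l, ops) && collect(n->r, ops);
  if (n->is_const) {
    ops.push_back({Op::CST, "", std::stol(n->leaf)});
    return true;
  }
  if (n->leaf.empty())
    return false;  // unrecognized operand: punt canonicalization
  Op::Kind k = n->leaf.find('*') != std::string::npos ? Op::MULT : Op::REG;
  ops.push_back({k, n->leaf, 0});
  return true;
}

// Sort, fold all constants into one, and rebuild a left-chained sum.
static std::string rebuild(std::vector<Op> ops) {
  std::stable_sort(ops.begin(), ops.end(),
                   [](const Op &a, const Op &b) { return a.kind < b.kind; });
  long folded = 0;
  bool have_cst = false;
  while (!ops.empty() && ops.back().kind == Op::CST) {
    folded += ops.back().cst;
    have_cst = true;
    ops.pop_back();
  }
  std::string out = ops.empty() ? "" : ops[0].text;
  for (size_t i = 1; i < ops.size(); ++i)
    out = "(" + out + " + " + ops[i].text + ")";
  if (have_cst)
    out = out.empty() ? std::to_string(folded)
                      : out + " + " + std::to_string(folded);
  return out;
}
```

The punt-on-anything-unexpected walk avoids the fixed A + B (+ C) assumption of the patch, and an ASHIFT-to-MULT rewrite would slot naturally into the collect step.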

Initially just use it in the LICM code, but I think given that kind of 
structure it'd be generally useful to replace bits of combine and fwprop.

If your contention is that only a few forms really matter, then I'd like 
to see those forms spelled out better in the comment and some kind of 
checking that we have reasonable incoming RTL.


> +
> +  /* Move CONST operand to the end.  */
> +  if (CONST_INT_P (op0))
> +    std::swap (op0, op1);
You might want to check CONSTANT_P here.  Maybe it doesn't matter in 
practice, but things like (plus (plus (symbol_ref) (const_int)) (const_int)).

That also gives you a fighting chance at extending this to handle 
HIGH/LO_SUM which are going to appear on the RISCy targets.
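
As a toy illustration of why the broader predicate matters (the helper names and string encoding below are invented, not GCC's real macros): a symbol_ref is not a CONST_INT, but it is still a constant for ordering purposes, so a CONST_INT_P-only test would never move it toward the end with the other constants:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Toy stand-ins for GCC's predicates; names and encoding are invented.
static bool const_int_p(const std::string &op) {
  return !op.empty() && (std::isdigit((unsigned char)op[0]) || op[0] == '-');
}
static bool constant_p(const std::string &op) {
  // Symbol references are link-time constants too, even though they
  // are not CONST_INTs -- exactly the case a CONST_INT_P test misses.
  return const_int_p(op) || op.rfind("sym:", 0) == 0;
}
```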

So I think the concept of making sure we're passing canonical RTL to the 
verification step is good, I'm a bit concerned about the implementation 
of the canonicalization step.

Jeff
Bin.Cheng Feb. 11, 2016, 5:59 p.m. UTC | #2
On Thu, Feb 11, 2016 at 7:14 AM, Jeff Law <law@redhat.com> wrote:
> On 02/09/2016 04:08 AM, Bin Cheng wrote:
> [...]
>
> So I think the concept of making sure we're passing canonical RTL to the
> verification step is good, I'm a bit concerned about the implementation of
> the canonicalization step.
Hi Jeff,
Thanks for the detailed review.  I also think a generic canonicalization
interface for RTL is much better.  I will give it a try, but most
likely it's next-stage1 material.

Thanks,
bin
>
> Jeff
Jeff Law Feb. 11, 2016, 11:26 p.m. UTC | #3
On 02/11/2016 10:59 AM, Bin.Cheng wrote:

> Hi Jeff,
> Thanks for the detailed review.  I also think a generic canonicalization
> interface for RTL is much better.  I will give it a try, but most
> likely it's next-stage1 material.
That is, of course, fine.  However, if you do get something ready, I'd 
support using it within LICM for gcc-6, then using it in other places 
for gcc-7.

Jeff

Patch

diff --git a/gcc/loop-invariant.c b/gcc/loop-invariant.c
index 707f044..157e273 100644
--- a/gcc/loop-invariant.c
+++ b/gcc/loop-invariant.c
@@ -754,6 +754,74 @@  create_new_invariant (struct def *def, rtx_insn *insn, bitmap depends_on,
   return inv;
 }
 
+/* Return a canonicalized version of address X.  It handles
+   address expressions of the form A + B + C.  Following insn
+   canonicalization rules, the MULT operand is moved to the front
+   and the CONST operand to the end; PLUS operators are chained
+   to the left.  */
+
+static rtx
+canonicalize_address (rtx x)
+{
+  rtx op0, op1, op2;
+  machine_mode mode = GET_MODE (x);
+  enum rtx_code code = GET_CODE (x);
+
+  if (code != PLUS)
+    return x;
+
+  /* Extract operands from A + B (+ C).  */
+  if (GET_CODE (XEXP (x, 0)) == PLUS)
+    {
+      op0 = XEXP (XEXP (x, 0), 0);
+      op1 = XEXP (XEXP (x, 0), 1);
+      op2 = XEXP (x, 1);
+    }
+  else if (GET_CODE (XEXP (x, 1)) == PLUS)
+    {
+      op0 = XEXP (x, 0);
+      op1 = XEXP (XEXP (x, 1), 0);
+      op2 = XEXP (XEXP (x, 1), 1);
+    }
+  else
+    {
+      op0 = XEXP (x, 0);
+      op1 = XEXP (x, 1);
+      op2 = NULL_RTX;
+    }
+
+  /* Move MULT operand to the front.  */
+  if (!REG_P (op1) && !CONST_INT_P (op1))
+    std::swap (op0, op1);
+
+  /* Move CONST operand to the end.  */
+  if (CONST_INT_P (op0))
+    std::swap (op0, op1);
+
+  if (op2 != NULL && CONST_INT_P (op1))
+    {
+      /* Try to simplify CONST1 + CONST2 into one operand.  */
+      if (CONST_INT_P (op2))
+	{
+	  rtx x = simplify_binary_operation (PLUS, mode, op1, op2);
+
+	  if (x != NULL_RTX && CONST_INT_P (x))
+	    {
+	      op1 = x;
+	      op2 = NULL_RTX;
+	    }
+	}
+      else
+	std::swap (op1, op2);
+    }
+  /* Chain PLUS operators to the left.  */
+  op0 = simplify_gen_binary (PLUS, mode, op0, op1);
+  if (op2 == NULL_RTX)
+    return op0;
+  else
+    return simplify_gen_binary (PLUS, mode, op0, op2);
+}
+
 /* Given invariant DEF and its address USE, check if the corresponding
    invariant expr can be propagated into the use or not.  */
 
@@ -761,7 +829,7 @@  static bool
 inv_can_prop_to_addr_use (struct def *def, df_ref use)
 {
   struct invariant *inv;
-  rtx *pos = DF_REF_REAL_LOC (use), def_set;
+  rtx *pos = DF_REF_REAL_LOC (use), def_set, use_set;
   rtx_insn *use_insn = DF_REF_INSN (use);
   rtx_insn *def_insn;
   bool ok;
@@ -778,6 +846,29 @@  inv_can_prop_to_addr_use (struct def *def, df_ref use)
 
   validate_unshare_change (use_insn, pos, SET_SRC (def_set), true);
   ok = verify_changes (0);
+  /* Try harder with canonicalization in address expression.  */
+  if (!ok && (use_set = single_set (use_insn)) != NULL_RTX)
+    {
+      rtx src, dest, mem = NULL_RTX;
+
+      src = SET_SRC (use_set);
+      dest = SET_DEST (use_set);
+      if (MEM_P (src))
+	mem = src;
+      else if (MEM_P (dest))
+	mem = dest;
+
+      if (mem != NULL_RTX
+	  && !memory_address_addr_space_p (GET_MODE (mem),
+					   XEXP (mem, 0),
+					   MEM_ADDR_SPACE (mem)))
+	{
+	  rtx addr = canonicalize_address (copy_rtx (XEXP (mem, 0)));
+	  if (memory_address_addr_space_p (GET_MODE (mem),
+					   addr, MEM_ADDR_SPACE (mem)))
+	    ok = true;
+	}
+    }
   cancel_changes (0);
   return ok;
 }
diff --git a/gcc/testsuite/gcc.target/i386/pr69052.c b/gcc/testsuite/gcc.target/i386/pr69052.c
new file mode 100644
index 0000000..6f491e9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr69052.c
@@ -0,0 +1,54 @@ 
+/* { dg-do compile } */
+/* { dg-require-effective-target pie } */
+/* { dg-options "-O2 -fPIE -pie" } */
+
+int look_nbits[256], loop_sym[256];
+const int ind[] = {
+  0,  1,  8, 16,  9,  2,  3, 10, 17, 24, 32, 25, 18, 11,  4,  5,
+ 12, 19, 26, 33, 40, 48, 41, 34, 27, 20, 13,  6,  7, 14, 21, 28,
+ 35, 42, 49, 56, 57, 50, 43, 36, 29, 22, 15, 23, 30, 37, 44, 51,
+ 58, 59, 52, 45, 38, 31, 39, 46, 53, 60, 61, 54, 47, 55, 62, 63
+};
+int out[256];
+extern void bar (int *, int *);
+void foo (int *l1, int *l2, int *v, int *v1, int *m1, int i)
+{
+  int L = i + 1, b = 20;
+  int result, k;
+
+  for (k = 1; k < 64; k++)
+    {
+      int look = (((L >> (b - 8))) & ((1 << 8) - 1));
+      int nb = l1[look];
+      int code;
+      int r;
+
+      if (nb)
+	{
+	  b -= nb;
+	  result = l2[look];
+	}
+      else
+	{
+	  nb = 9;
+	  code = (((L >> (b -= nb))) & ((1 << nb) - 1));
+	  result = v[(code + v1[nb])];
+	}
+      r = result >> 4;
+      result &= 15;
+      if (result)
+	{
+	  k += r;
+	  r = (((L >> (b -= result))) & ((1 << result) - 1));
+	  if (r < (1 << (result - 1)))
+	    result = r + (((-1) << result) + 1);
+	  else
+	    result = r;
+
+	  out[ind[k]] = result;
+	}
+      bar (&L, &b);
+    }
+}
+
+/* { dg-final { scan-assembler-not "leal\[ \t\]ind@GOTOFF\\(%\[^,\]*\\), %" { target ia32 } } } */