
[ARM] PR target/71436: Restrict *load_multiple pattern till after LRA

Message ID 5852C60C.7010100@foss.arm.com
State New

Commit Message

Kyrill Tkachov Dec. 15, 2016, 4:34 p.m. UTC
On 15/12/16 09:55, Richard Earnshaw (lists) wrote:
> On 30/11/16 16:47, Kyrill Tkachov wrote:
>> Hi all,
>>
>> In this awkward ICE we have a *load_multiple pattern that is being
>> transformed in reload from:
>> (insn 55 67 151 3 (parallel [
>>              (set (reg:SI 0 r0)
>>                  (mem/u/c:SI (reg/f:SI 147) [2 c+0 S4 A32]))
>>              (set (reg:SI 158 [ c+4 ])
>>                  (mem/u/c:SI (plus:SI (reg/f:SI 147)
>>                          (const_int 4 [0x4])) [2 c+4 S4 A32]))
>>          ]) arm-crash.c:25 393 {*load_multiple}
>>       (expr_list:REG_UNUSED (reg:SI 0 r0)
>>          (nil)))
>>
>>
>> into the invalid:
>> (insn 55 67 70 3 (parallel [
>>              (set (reg:SI 0 r0)
>>                  (mem/u/c:SI (reg/f:SI 5 r5 [147]) [2 c+0 S4 A32]))
>>              (set (mem/c:SI (plus:SI (reg/f:SI 102 sfp)
>>                          (const_int -4 [0xfffffffffffffffc])) [4 %sfp+-12
>> S4 A32])
>>                  (mem/u/c:SI (plus:SI (reg/f:SI 5 r5 [147])
>>                          (const_int 4 [0x4])) [2 c+4 S4 A32]))
>>          ]) arm-crash.c:25 393 {*load_multiple}
>>       (nil))
>>
>> The operands of *load_multiple are not validated through the constraints
>> that LRA expects, but rather through a match_parallel predicate that
>> ends up calling ldm_stm_operation_p to validate the multiple sets.
>> This means that LRA cannot reason about the constraints properly.
>> This two-register load should not have used *load_multiple anyway; it
>> should have used *ldm2_ from ldmstm.md, and indeed it did until the
>> loop2_invariant pass copied the *ldm2_ pattern:
>> (insn 27 23 28 4 (parallel [
>>              (set (reg:SI 0 r0)
>>                  (mem/u/c:SI (reg/f:SI 147) [2 c+0 S4 A32]))
>>              (set (reg:SI 1 r1)
>>                  (mem/u/c:SI (plus:SI (reg/f:SI 147)
>>                          (const_int 4 [0x4])) [2 c+4 S4 A32]))
>>          ]) "ldm.c":25 385 {*ldm2_}
>>       (nil))
>>
>> into:
>> (insn 55 19 67 3 (parallel [
>>              (set (reg:SI 0 r0)
>>                  (mem/u/c:SI (reg/f:SI 147) [2 c+0 S4 A32]))
>>              (set (reg:SI 158)
>>                  (mem/u/c:SI (plus:SI (reg/f:SI 147)
>>                          (const_int 4 [0x4])) [2 c+4 S4 A32]))
>>          ]) "ldm.c":25 404 {*load_multiple}
>>       (expr_list:REG_UNUSED (reg:SI 0 r0)
>>          (nil)))
>>
>> Note that it now got recognised as *load_multiple because the second
>> register is not a hard register but the pseudo 158.
>> In any case, the solution suggested in the PR (and I agree with it) is
>> to restrict *load_multiple to after reload.
>> The related pattern *load_multiple_with_writeback already has such a
>> condition, and the comment above *load_multiple says that it's used to
>> generate epilogues, which are emitted after reload anyway. For
>> pre-reload load-multiples the patterns in ldmstm.md should do just
>> fine.
>>
>> Bootstrapped and tested on arm-none-linux-gnueabihf.
>>
>> Ok for trunk?
>>
> I don't think this is right.  Firstly, these patterns look to me like
> the ones used for memcpy expansion, so not recognizing them could lead
> to compiler aborts.

I think the other patterns in ldmstm.md would catch these anyway but...


> Secondly, the bug is when we generate
>
>   (insn 55 67 70 3 (parallel [
>               (set (reg:SI 0 r0)
>                   (mem/u/c:SI (reg/f:SI 5 r5 [147]) [2 c+0 S4 A32]))
>               (set (mem/c:SI (plus:SI (reg/f:SI 102 sfp)
>                           (const_int -4 [0xfffffffffffffffc])) [4 %sfp+-12
>   S4 A32])
>                   (mem/u/c:SI (plus:SI (reg/f:SI 5 r5 [147])
>                           (const_int 4 [0x4])) [2 c+4 S4 A32]))
>           ]) arm-crash.c:25 393 {*load_multiple}
>        (nil))
>
> These patterns are supposed to enforce that the load (store) target
> register is a hard register that is higher numbered than the hard
> register in the memory slot that precedes it (thus satisfying the
> constraints for a normal ldm/stm instruction).
>
> The real question is why did ldm_stm_operation_p permit this modified
> pattern through, or was the pattern validation bypassed incorrectly by
> the loop2 invariant code when it copied the insn and made changes?

... ldm_stm_operation_p doesn't check that the registers are hard registers.
That is an implicit requirement of the ascending-order check it performs,
since register-number ordering is only meaningful for hard registers.
So the safe, easy fix is to require hard registers.

This patch does that.
Bootstrapped and tested on arm-none-linux-gnueabihf.
Ok for trunk?

Thanks,
Kyrill

2016-12-15  Kyrylo Tkachov  <kyrylo.tkachov@arm.com>

     PR target/71436
     * config/arm/arm.c (ldm_stm_operation_p): Check that last register
     in the list is a hard reg.

2016-12-15  Kyrylo Tkachov  <kyrylo.tkachov@arm.com>

     PR target/71436
     * gcc.c-torture/compile/pr71436.c: New test.

Patch

diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index 437da6fe3d34978e7a3a72f7ec39dc76a54d6408..b6c5fdf42ee359a0c0b0fab8de4d374ff39cd35b 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -12709,6 +12709,13 @@  ldm_stm_operation_p (rtx op, bool load, machine_mode mode,
         addr_reg_in_reglist = true;
     }
 
+  /* The ascending register number requirement only makes sense when dealing
+     with hard registers.  The code above already validated the ascending
+     requirement so we only need to validate the "hardness" of the last
+     register in the list.  */
+  if (regno >= FIRST_PSEUDO_REGISTER)
+    return false;
+
   if (load)
     {
       if (update && addr_reg_in_reglist)
diff --git a/gcc/testsuite/gcc.c-torture/compile/pr71436.c b/gcc/testsuite/gcc.c-torture/compile/pr71436.c
new file mode 100644
index 0000000000000000000000000000000000000000..ab08d5d369c769fc9e411238fcf1379705e576f2
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/compile/pr71436.c
@@ -0,0 +1,35 @@ 
+/* PR target/71436.  */
+
+#pragma pack(1)
+struct S0
+{
+  volatile int f0;
+  short f2;
+};
+
+void foo (struct S0 *);
+int a, d;
+static struct S0 b[5];
+static struct S0 c;
+void fn1 ();
+void
+main ()
+{
+  {
+    struct S0 e;
+    for (; d; fn1 ())
+      {
+        {
+          a = 3;
+          for (; a >= 0; a -= 1)
+            {
+              {
+                e = c;
+              }
+              b[a] = e;
+            }
+        }
+      }
+  }
+  foo (b);
+}