Patchwork [ARM] Fix failing gcc.dg/atomics/atomic-exec-2.c - movmisalign<mode> expanders.

Submitter Ramana Radhakrishnan
Date Feb. 27, 2014, 11:53 a.m.
Message ID <530F274C.8060608@arm.com>
Permalink /patch/324781/
State New

Comments

Ramana Radhakrishnan - Feb. 27, 2014, 11:53 a.m.
Hi,

	This is a case where we end up with an ICE in movmisalign<mode> with 
mode DImode: the generated address is only aligned to 32 bits, as a 
result of a VIEW_CONVERT_EXPR from an _Atomic _Complex float temporary 
to a DImode temporary. The problem also shows up only with -mfpu=neon, 
because these expanders are enabled only on hardware that has NEON and 
not elsewhere.

The address we end up with in this case is:

unspec:DI [
                 (mem/c:DI (plus:SI (reg/f:SI 1073)
                         (const_int -8 [0xfffffffffffffff8])) [0  S8 A32])
             ] UNSPEC_MISALIGNED_ACCESS)) ../gcc/besttry.c:38 -1
      (nil))


The problem is that our predicates don't allow anything but 
plus (virtual*-reg) (offset) style addresses, yet the expander allows 
the address to be generated in the form above. So forcing the memory 
address into the right form in movmisalign<mode> is, unfortunately, the 
right thing to do here. Given that we have no other way of controlling 
illegitimate misaligned addresses, this is the only place to fix it at 
this point in time. Longer term we should perhaps look at relaxing the 
predicates and letting LRA / reload deal with this, but that's not 
suitable for stage 4.

Tested with a bootstrap on arm-linux-gnueabihf with NEON: no 
regressions, and the testcase passes.
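For reference, a minimal sketch of the kind of code that exercises this path: a DImode view of an _Atomic _Complex float value. This is illustrative only and is not the actual gcc.dg/atomics/atomic-exec-2.c source; the names are made up.

```c
/* Hypothetical reproducer sketch: load an _Atomic _Complex float and
   reinterpret its 8 bytes as a 64-bit integer.  The temporary holding
   the atomic value may only be 32-bit aligned, while the DImode
   access wants 8-byte alignment.  Not the actual testcase.  */
#include <stdint.h>
#include <string.h>

_Atomic _Complex float ac;   /* zero-initialized global */

uint64_t load_bits (void)
{
  _Complex float tmp = ac;             /* atomic load into a temporary */
  uint64_t bits;
  memcpy (&bits, &tmp, sizeof bits);   /* DImode view of the value */
  return bits;
}
```

On ARM with -mfpu=neon, a store/load like this is what reaches the movmisalign<mode> expander when the compiler knows the access may be under-aligned.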

Will apply if RMs don't object in 24 hours.

regards
Ramana


* config/arm/neon.md (movmisalign<mode>): Legitimize addresses not 
allowed by recognizers.


P.S. I had my share of fun with the C11 atomics and realized that the 
standard allows the size and alignment of atomic types to differ from 
those of the equivalent base types (n1570, 6.2.5 p27). Keeping the two 
out of sync is a bit unfortunate but required for atomic access: in 
this case we can only have atomic access to 8-byte objects at 8-byte 
aligned addresses.

Patch

diff --git a/gcc/config/arm/neon.md b/gcc/config/arm/neon.md
index 2f06e42..aad420c 100644
--- a/gcc/config/arm/neon.md
+++ b/gcc/config/arm/neon.md
@@ -245,12 +245,23 @@ 
 		     UNSPEC_MISALIGNED_ACCESS))]
   "TARGET_NEON && !BYTES_BIG_ENDIAN && unaligned_access"
 {
+  rtx adjust_mem;
   /* This pattern is not permitted to fail during expansion: if both arguments
      are non-registers (e.g. memory := constant, which can be created by the
      auto-vectorizer), force operand 1 into a register.  */
   if (!s_register_operand (operands[0], <MODE>mode)
       && !s_register_operand (operands[1], <MODE>mode))
     operands[1] = force_reg (<MODE>mode, operands[1]);
+
+  if (s_register_operand (operands[0], <MODE>mode))
+    adjust_mem = operands[1];
+  else
+    adjust_mem = operands[0];
+
+  /* Legitimize address.  */
+  if (!neon_vector_mem_operand (adjust_mem, 2, true))
+    XEXP (adjust_mem, 0) = force_reg (Pmode, XEXP (adjust_mem, 0));
+
 })
 
 (define_insn "*movmisalign<mode>_neon_store"