LRA: handle memory constraints that accept more than "m"

Message ID mptv9ruloyx.fsf@arm.com

Commit Message

Richard Sandiford Nov. 8, 2019, 9:03 a.m. UTC
LRA allows address constraints that are more relaxed than "p":

  /* Target hooks sometimes don't treat extra-constraint addresses as
     legitimate address_operands, so handle them specially.  */
  if (insn_extra_address_constraint (cn)
      && satisfies_address_constraint_p (&ad, cn))
    return change_p;

For SVE it's useful to allow the same thing for memory constraints.
The particular use case is LD1RQ, an SVE instruction whose memory
operand has an Advanced SIMD vector mode and which accepts some
addresses that normal Advanced SIMD moves don't.
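
For example, the ACLE route to LD1RQ is the svld1rq family of
intrinsics.  A minimal sketch (the function name is illustrative,
modelled on the ld1rq_f32 index test changed below):

  #include <arm_sve.h>

  /* Load 128 bits from base + i and replicate them across the SVE
     result vector.  With this patch, GCC can match the indexed
     address and emit
       ld1rqw  z0.s, p0/z, [x0, x1, lsl 2]
     instead of first computing base + i in a scratch register.  */
  svfloat32_t
  load_replicate_quad (svbool_t pg, const float32_t *base, int64_t i)
  {
    return svld1rq (pg, base + i);
  }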

Normally we require every memory operand to satisfy at least "m",
which is
defined to be a memory "with any kind of address that the machine
supports in general".  However, LD1RQ is very much special-purpose:
it doesn't really have any relation to normal operations on these
modes.  Adding its addressing modes to "m" would lead to bad Advanced
SIMD optimisation decisions in passes like ivopts.  LD1RQ therefore
has a memory constraint that accepts things "m" doesn't.
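
On the aarch64 side this takes the form of a dedicated memory
constraint in the machine description; a sketch along these lines
(the constraint name and predicate are illustrative, not quoted
from the port):

  (define_memory_constraint "UtQ"
    "@internal
     An address valid for SVE LD1RQ instructions."
    (and (match_code "mem")
         (match_test "aarch64_sve_ld1rq_operand_p (op)")))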

Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?

Richard


2019-11-08  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* lra-constraints.c (valid_address_p): Take the operand and a
	constraint as arguments.  If the operand is a MEM and the constraint
	is a memory constraint, check whether the eliminated form of the
	MEM already satisfies the constraint.
	(process_address_1): Update calls accordingly.

gcc/testsuite/
	* gcc.target/aarch64/sve/acle/asm/ld1rq_f16.c: Remove XFAIL.
	* gcc.target/aarch64/sve/acle/asm/ld1rq_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/ld1rq_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/ld1rq_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/ld1rq_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/ld1rq_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/ld1rq_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/ld1rq_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/ld1rq_u64.c: Likewise.

Comments

Jeff Law Nov. 17, 2019, 9:02 p.m. UTC | #1
On 11/8/19 2:03 AM, Richard Sandiford wrote:
> [...]
OK.  Obviously I'll be on the lookout for any fallout on other targets.

jeff

Patch

Index: gcc/lra-constraints.c
===================================================================
--- gcc/lra-constraints.c	2019-09-30 17:20:57.366608014 +0100
+++ gcc/lra-constraints.c	2019-11-08 09:00:58.517228517 +0000
@@ -389,11 +389,24 @@  address_eliminator::~address_eliminator
     *m_index_loc = m_index_reg;
 }
 
-/* Return true if the eliminated form of AD is a legitimate target address.  */
+/* Return true if the eliminated form of AD is a legitimate target address.
+   If OP is a MEM, AD is the address within OP, otherwise OP should be
+   ignored.  CONSTRAINT is one constraint that the operand may need
+   to meet.  */
 static bool
-valid_address_p (struct address_info *ad)
+valid_address_p (rtx op, struct address_info *ad,
+		 enum constraint_num constraint)
 {
   address_eliminator eliminator (ad);
+
+  /* Allow a memory OP if it matches CONSTRAINT, even if CONSTRAINT is more
+     forgiving than "m".  */
+  if (MEM_P (op)
+      && (insn_extra_memory_constraint (constraint)
+	  || insn_extra_special_memory_constraint (constraint))
+      && constraint_satisfied_p (op, constraint))
+    return true;
+
   return valid_address_p (ad->mode, *ad->outer, ad->as);
 }
 
@@ -3398,7 +3411,7 @@  process_address_1 (int nop, bool check_o
 
      All these cases involve a non-autoinc address, so there is no
      point revalidating other types.  */
-  if (ad.autoinc_p || valid_address_p (&ad))
+  if (ad.autoinc_p || valid_address_p (op, &ad, cn))
     return change_p;
 
   /* Any index existed before LRA started, so we can assume that the
@@ -3427,7 +3440,7 @@  process_address_1 (int nop, bool check_o
 	      if (code >= 0)
 		{
 		  *ad.inner = gen_rtx_LO_SUM (Pmode, new_reg, addr);
-		  if (! valid_address_p (ad.mode, *ad.outer, ad.as))
+		  if (!valid_address_p (op, &ad, cn))
 		    {
 		      /* Try to put lo_sum into register.  */
 		      insn = emit_insn (gen_rtx_SET
@@ -3437,7 +3450,7 @@  process_address_1 (int nop, bool check_o
 		      if (code >= 0)
 			{
 			  *ad.inner = new_reg;
-			  if (! valid_address_p (ad.mode, *ad.outer, ad.as))
+			  if (!valid_address_p (op, &ad, cn))
 			    {
 			      *ad.inner = addr;
 			      code = -1;
@@ -3532,7 +3545,7 @@  process_address_1 (int nop, bool check_o
 	  && CONSTANT_P (XEXP (SET_SRC (set), 1)))
 	{
 	  *ad.inner = SET_SRC (set);
-	  if (valid_address_p (ad.mode, *ad.outer, ad.as))
+	  if (valid_address_p (op, &ad, cn))
 	    {
 	      *ad.base_term = XEXP (SET_SRC (set), 0);
 	      *ad.disp_term = XEXP (SET_SRC (set), 1);
Index: gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_f16.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_f16.c	2019-10-29 09:13:26.137442273 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_f16.c	2019-11-08 09:00:58.525228460 +0000
@@ -12,7 +12,7 @@  TEST_LOAD (ld1rq_f16_base, svfloat16_t,
 	   z0 = svld1rq (p0, x0))
 
 /*
-** ld1rq_f16_index: { xfail *-*-* }
+** ld1rq_f16_index:
 **	ld1rqh	z0\.h, p0/z, \[x0, x1, lsl 1\]
 **	ret
 */
Index: gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_f32.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_f32.c	2019-10-29 09:13:26.137442273 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_f32.c	2019-11-08 09:00:58.525228460 +0000
@@ -12,7 +12,7 @@  TEST_LOAD (ld1rq_f32_base, svfloat32_t,
 	   z0 = svld1rq (p0, x0))
 
 /*
-** ld1rq_f32_index: { xfail *-*-* }
+** ld1rq_f32_index:
 **	ld1rqw	z0\.s, p0/z, \[x0, x1, lsl 2\]
 **	ret
 */
Index: gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_f64.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_f64.c	2019-10-29 09:13:26.137442273 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_f64.c	2019-11-08 09:00:58.525228460 +0000
@@ -12,7 +12,7 @@  TEST_LOAD (ld1rq_f64_base, svfloat64_t,
 	   z0 = svld1rq (p0, x0))
 
 /*
-** ld1rq_f64_index: { xfail *-*-* }
+** ld1rq_f64_index:
 **	ld1rqd	z0\.d, p0/z, \[x0, x1, lsl 3\]
 **	ret
 */
Index: gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_s16.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_s16.c	2019-10-29 09:13:26.137442273 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_s16.c	2019-11-08 09:00:58.525228460 +0000
@@ -12,7 +12,7 @@  TEST_LOAD (ld1rq_s16_base, svint16_t, in
 	   z0 = svld1rq (p0, x0))
 
 /*
-** ld1rq_s16_index: { xfail *-*-* }
+** ld1rq_s16_index:
 **	ld1rqh	z0\.h, p0/z, \[x0, x1, lsl 1\]
 **	ret
 */
Index: gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_s32.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_s32.c	2019-10-29 09:13:26.137442273 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_s32.c	2019-11-08 09:00:58.525228460 +0000
@@ -12,7 +12,7 @@  TEST_LOAD (ld1rq_s32_base, svint32_t, in
 	   z0 = svld1rq (p0, x0))
 
 /*
-** ld1rq_s32_index: { xfail *-*-* }
+** ld1rq_s32_index:
 **	ld1rqw	z0\.s, p0/z, \[x0, x1, lsl 2\]
 **	ret
 */
Index: gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_s64.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_s64.c	2019-10-29 09:13:26.137442273 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_s64.c	2019-11-08 09:00:58.525228460 +0000
@@ -12,7 +12,7 @@  TEST_LOAD (ld1rq_s64_base, svint64_t, in
 	   z0 = svld1rq (p0, x0))
 
 /*
-** ld1rq_s64_index: { xfail *-*-* }
+** ld1rq_s64_index:
 **	ld1rqd	z0\.d, p0/z, \[x0, x1, lsl 3\]
 **	ret
 */
Index: gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_u16.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_u16.c	2019-10-29 09:13:26.137442273 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_u16.c	2019-11-08 09:00:58.525228460 +0000
@@ -12,7 +12,7 @@  TEST_LOAD (ld1rq_u16_base, svuint16_t, u
 	   z0 = svld1rq (p0, x0))
 
 /*
-** ld1rq_u16_index: { xfail *-*-* }
+** ld1rq_u16_index:
 **	ld1rqh	z0\.h, p0/z, \[x0, x1, lsl 1\]
 **	ret
 */
Index: gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_u32.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_u32.c	2019-10-29 09:13:26.137442273 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_u32.c	2019-11-08 09:00:58.525228460 +0000
@@ -12,7 +12,7 @@  TEST_LOAD (ld1rq_u32_base, svuint32_t, u
 	   z0 = svld1rq (p0, x0))
 
 /*
-** ld1rq_u32_index: { xfail *-*-* }
+** ld1rq_u32_index:
 **	ld1rqw	z0\.s, p0/z, \[x0, x1, lsl 2\]
 **	ret
 */
Index: gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_u64.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_u64.c	2019-10-29 09:13:26.137442273 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/acle/asm/ld1rq_u64.c	2019-11-08 09:00:58.525228460 +0000
@@ -12,7 +12,7 @@  TEST_LOAD (ld1rq_u64_base, svuint64_t, u
 	   z0 = svld1rq (p0, x0))
 
 /*
-** ld1rq_u64_index: { xfail *-*-* }
+** ld1rq_u64_index:
 **	ld1rqd	z0\.d, p0/z, \[x0, x1, lsl 3\]
 **	ret
 */