
[4.9,PR69082] Backport "[PATCH][ARM]Tighten the conditions for arm_movw, arm_movt"

Message ID 56951C40.3080100@foss.arm.com
State New

Commit Message

Renlin Li Jan. 12, 2016, 3:31 p.m. UTC
Hi all,

Here I backport r227129 to branch 4.9 to fix exactly the same issue reported in PR69082.
It has already been committed on trunk and backported to branch 5.


I have quoted the original message for the explanation.
The patch applies to branch 4.9 without any modifications.
A test case is not added, as the one provided in the bugzilla ticket is too big and complex.

arm-none-linux-gnueabihf regression tested without any issues.

Is it OK to backport to branch 4.9?

Renlin Li


gcc/ChangeLog

2016-01-08  Renlin Li  <renlin.li@arm.com>

         PR target/69082
         Backport from mainline:
         2015-08-24  Renlin Li  <renlin.li@arm.com>

	* config/arm/arm-protos.h (arm_valid_symbolic_address_p): Declare.
	* config/arm/arm.c (arm_valid_symbolic_address_p): Define.
	* config/arm/arm.md (arm_movt): Use arm_valid_symbolic_address_p.
	* config/arm/constraints.md ("j"): Add check for high code.


On 19/08/15 15:37, Renlin Li wrote:
>
>> On 19/08/15 12:49, Renlin Li wrote:
>>> Hi all,
>>>
>>> This simple patch tightens the conditions when matching the movw and
>>> arm_movt rtx patterns.
>>> These two patterns generate the following assembly:
>>>
>>> movw w1, #:lower16: dummy + addend
>>> movt w1, #:upper16: dummy + addend
>>>
>>> The addend here is optional. However, it must be a 16-bit signed
>>> value within the range -32768 <= A < 32768.
>>>
>>> Imposing this restriction explicitly prevents the LRA/reload code
>>> from generating invalid high/lo_sum code for the arm target.
>>> In process_address_1(), if the address is not legitimate, it will try to
>>> generate a high/lo_sum pair to put the address into a register. It will
>>> check whether the target supports those newly generated reload
>>> instructions. With those two patterns defined, arm rejects them when the
>>> conditions are not met.
>>>
>>> Otherwise, it might generate movw/movt instructions with an addend larger
>>> than 32767, which causes a GAS error. GAS produces an "offset out of
>>> range" error message when the addend for a MOVW/MOVT REL relocation is
>>> too large.
>>>
>>>
>>> arm-none-eabi regression tests are OK. OK to commit to trunk and
>>> backport to 5.0?
>>>
>>> Regards,
>>> Renlin
>>>
>>> gcc/ChangeLog:
>>>
>>> 2015-08-19  Renlin Li  <renlin.li@arm.com>
>>>
>>>        * config/arm/arm-protos.h (arm_valid_symbolic_address_p): Declare.
>>>        * config/arm/arm.c (arm_valid_symbolic_address_p): Define.
>>>        * config/arm/arm.md (arm_movt): Use arm_valid_symbolic_address_p.
>>>        * config/arm/constraints.md ("j"): Add check for high code.
>
> Thank you,
> Renlin
>
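
As a rough illustration of the failure mode described in the quoted message
above (this is not the reproducer from PR69082; the symbol name, array size,
offset and register below are made up, and whether the offset is folded into
the relocation depends on the exact addresses that reload/LRA materialises),
code that reads a global at a large constant offset can end up with that
offset as a MOVW/MOVT addend:

    /* Hypothetical example -- the array name and index are illustrative only.  */
    extern int big_table[100000];

    int
    read_far_element (void)
    {
      return big_table[20000];   /* byte offset 80000 from the symbol */
    }

If reload materialises the address as a high/lo_sum pair with the offset
folded in, the assembly takes the form

    movw    r3, #:lower16:big_table+80000
    movt    r3, #:upper16:big_table+80000

and GAS rejects it, because 80000 does not fit in the signed 16-bit addend of
a MOVW/MOVT REL relocation.  With the patch, the arm_movt pattern and the "j"
constraint refuse such operands, so reload has to materialise the address some
other way (for example by adding the offset separately).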

Comments

Richard Earnshaw (lists) Jan. 12, 2016, 4:28 p.m. UTC | #1
On 12/01/16 15:31, Renlin Li wrote:
> Hi all,
> 
> Here I backport r227129 to branch 4.9 to fix exactly the same issue
> reported in PR69082.
> It has already been committed on trunk and backported to branch 5.
> 
> 
> I have quoted the original message for the explanation.
> The patch applies to branch 4.9 without any modifications.
> A test case is not added, as the one provided in the bugzilla ticket is
> too big and complex.
> 
> arm-none-linux-gnueabihf regression tested without any issues.
> 
> Is it OK to backport to branch 4.9?
> 
> Renlin Li
> 
> 
> gcc/ChangeLog
> 
> 2016-01-08  Renlin Li  <renlin.li@arm.com>
> 
>         PR target/69082
>         Backport from mainline:
>         2015-08-24  Renlin Li  <renlin.li@arm.com>
> 
>     * config/arm/arm-protos.h (arm_valid_symbolic_address_p): Declare.
>     * config/arm/arm.c (arm_valid_symbolic_address_p): Define.
>     * config/arm/arm.md (arm_movt): Use arm_valid_symbolic_address_p.
>     * config/arm/constraints.md ("j"): Add check for high code.
> 
> 

OK.

R.

> On 19/08/15 15:37, Renlin Li wrote:
>>
>>> On 19/08/15 12:49, Renlin Li wrote:
>>>> Hi all,
>>>>
>>>> This simple patch tightens the conditions when matching the movw and
>>>> arm_movt rtx patterns.
>>>> These two patterns generate the following assembly:
>>>>
>>>> movw w1, #:lower16: dummy + addend
>>>> movt w1, #:upper16: dummy + addend
>>>>
>>>> The addend here is optional. However, it must be a 16-bit signed
>>>> value within the range -32768 <= A < 32768.
>>>>
>>>> Imposing this restriction explicitly prevents the LRA/reload code
>>>> from generating invalid high/lo_sum code for the arm target.
>>>> In process_address_1(), if the address is not legitimate, it will try to
>>>> generate a high/lo_sum pair to put the address into a register. It will
>>>> check whether the target supports those newly generated reload
>>>> instructions. With those two patterns defined, arm rejects them when the
>>>> conditions are not met.
>>>>
>>>> Otherwise, it might generate movw/movt instructions with an addend larger
>>>> than 32767, which causes a GAS error. GAS produces an "offset out of
>>>> range" error message when the addend for a MOVW/MOVT REL relocation is
>>>> too large.
>>>>
>>>>
>>>> arm-none-eabi regression tests are OK. OK to commit to trunk and
>>>> backport to 5.0?
>>>>
>>>> Regards,
>>>> Renlin
>>>>
>>>> gcc/ChangeLog:
>>>>
>>>> 2015-08-19  Renlin Li  <renlin.li@arm.com>
>>>>
>>>>        * config/arm/arm-protos.h (arm_valid_symbolic_address_p): Declare.
>>>>        * config/arm/arm.c (arm_valid_symbolic_address_p): Define.
>>>>        * config/arm/arm.md (arm_movt): Use arm_valid_symbolic_address_p.
>>>>        * config/arm/constraints.md ("j"): Add check for high code.
>>
>> Thank you,
>> Renlin
>>
> 
> 
> backport.diff
> 
> 
> diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
> index cef9eec..ff168bf 100644
> --- a/gcc/config/arm/arm-protos.h
> +++ b/gcc/config/arm/arm-protos.h
> @@ -319,6 +319,7 @@ extern int vfp3_const_double_for_bits (rtx);
>  
>  extern void arm_emit_coreregs_64bit_shift (enum rtx_code, rtx, rtx, rtx, rtx,
>  					   rtx);
> +extern bool arm_valid_symbolic_address_p (rtx);
>  extern bool arm_validize_comparison (rtx *, rtx *, rtx *);
>  #endif /* RTX_CODE */
>  
> diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
> index c2095a3..7cc4d93 100644
> --- a/gcc/config/arm/arm.c
> +++ b/gcc/config/arm/arm.c
> @@ -28664,6 +28664,38 @@ arm_emit_coreregs_64bit_shift (enum rtx_code code, rtx out, rtx in,
>    #undef BRANCH
>  }
>  
> +/* Returns true if the pattern is a valid symbolic address, which is either a
> +   symbol_ref or (symbol_ref + addend).
> +
> +   According to the ARM ELF ABI, the initial addend of REL-type relocations
> +   processing MOVW and MOVT instructions is formed by interpreting the 16-bit
> +   literal field of the instruction as a 16-bit signed value in the range
> +   -32768 <= A < 32768.  */
> +
> +bool
> +arm_valid_symbolic_address_p (rtx addr)
> +{
> +  rtx xop0, xop1 = NULL_RTX;
> +  rtx tmp = addr;
> +
> +  if (GET_CODE (tmp) == SYMBOL_REF || GET_CODE (tmp) == LABEL_REF)
> +    return true;
> +
> +  /* (const (plus: symbol_ref const_int))  */
> +  if (GET_CODE (addr) == CONST)
> +    tmp = XEXP (addr, 0);
> +
> +  if (GET_CODE (tmp) == PLUS)
> +    {
> +      xop0 = XEXP (tmp, 0);
> +      xop1 = XEXP (tmp, 1);
> +
> +      if (GET_CODE (xop0) == SYMBOL_REF && CONST_INT_P (xop1))
> +	  return IN_RANGE (INTVAL (xop1), -0x8000, 0x7fff);
> +    }
> +
> +  return false;
> +}
>  
>  /* Returns true if a valid comparison operation and makes
>     the operands in a form that is valid.  */
> diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
> index 288bbb9..eefb7fa 100644
> --- a/gcc/config/arm/arm.md
> +++ b/gcc/config/arm/arm.md
> @@ -5774,7 +5774,7 @@
>    [(set (match_operand:SI 0 "nonimmediate_operand" "=r")
>  	(lo_sum:SI (match_operand:SI 1 "nonimmediate_operand" "0")
>  		   (match_operand:SI 2 "general_operand"      "i")))]
> -  "arm_arch_thumb2"
> +  "arm_arch_thumb2 && arm_valid_symbolic_address_p (operands[2])"
>    "movt%?\t%0, #:upper16:%c2"
>    [(set_attr "predicable" "yes")
>     (set_attr "predicable_short_it" "no")
> diff --git a/gcc/config/arm/constraints.md b/gcc/config/arm/constraints.md
> index 42935a4..f9e11e0 100644
> --- a/gcc/config/arm/constraints.md
> +++ b/gcc/config/arm/constraints.md
> @@ -67,7 +67,8 @@
>  (define_constraint "j"
>   "A constant suitable for a MOVW instruction. (ARM/Thumb-2)"
>   (and (match_test "TARGET_32BIT && arm_arch_thumb2")
> -      (ior (match_code "high")
> +      (ior (and (match_code "high")
> +		(match_test "arm_valid_symbolic_address_p (XEXP (op, 0))"))
>  	   (and (match_code "const_int")
>                  (match_test "(ival & 0xffff0000) == 0")))))
>  
>

Patch

diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
index cef9eec..ff168bf 100644
--- a/gcc/config/arm/arm-protos.h
+++ b/gcc/config/arm/arm-protos.h
@@ -319,6 +319,7 @@  extern int vfp3_const_double_for_bits (rtx);
 
 extern void arm_emit_coreregs_64bit_shift (enum rtx_code, rtx, rtx, rtx, rtx,
 					   rtx);
+extern bool arm_valid_symbolic_address_p (rtx);
 extern bool arm_validize_comparison (rtx *, rtx *, rtx *);
 #endif /* RTX_CODE */
 
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index c2095a3..7cc4d93 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -28664,6 +28664,38 @@  arm_emit_coreregs_64bit_shift (enum rtx_code code, rtx out, rtx in,
   #undef BRANCH
 }
 
+/* Returns true if the pattern is a valid symbolic address, which is either a
+   symbol_ref or (symbol_ref + addend).
+
+   According to the ARM ELF ABI, the initial addend of REL-type relocations
+   processing MOVW and MOVT instructions is formed by interpreting the 16-bit
+   literal field of the instruction as a 16-bit signed value in the range
+   -32768 <= A < 32768.  */
+
+bool
+arm_valid_symbolic_address_p (rtx addr)
+{
+  rtx xop0, xop1 = NULL_RTX;
+  rtx tmp = addr;
+
+  if (GET_CODE (tmp) == SYMBOL_REF || GET_CODE (tmp) == LABEL_REF)
+    return true;
+
+  /* (const (plus: symbol_ref const_int))  */
+  if (GET_CODE (addr) == CONST)
+    tmp = XEXP (addr, 0);
+
+  if (GET_CODE (tmp) == PLUS)
+    {
+      xop0 = XEXP (tmp, 0);
+      xop1 = XEXP (tmp, 1);
+
+      if (GET_CODE (xop0) == SYMBOL_REF && CONST_INT_P (xop1))
+	  return IN_RANGE (INTVAL (xop1), -0x8000, 0x7fff);
+    }
+
+  return false;
+}
 
 /* Returns true if a valid comparison operation and makes
    the operands in a form that is valid.  */
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 288bbb9..eefb7fa 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -5774,7 +5774,7 @@ 
   [(set (match_operand:SI 0 "nonimmediate_operand" "=r")
 	(lo_sum:SI (match_operand:SI 1 "nonimmediate_operand" "0")
 		   (match_operand:SI 2 "general_operand"      "i")))]
-  "arm_arch_thumb2"
+  "arm_arch_thumb2 && arm_valid_symbolic_address_p (operands[2])"
   "movt%?\t%0, #:upper16:%c2"
   [(set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")
diff --git a/gcc/config/arm/constraints.md b/gcc/config/arm/constraints.md
index 42935a4..f9e11e0 100644
--- a/gcc/config/arm/constraints.md
+++ b/gcc/config/arm/constraints.md
@@ -67,7 +67,8 @@ 
 (define_constraint "j"
  "A constant suitable for a MOVW instruction. (ARM/Thumb-2)"
  (and (match_test "TARGET_32BIT && arm_arch_thumb2")
-      (ior (match_code "high")
+      (ior (and (match_code "high")
+		(match_test "arm_valid_symbolic_address_p (XEXP (op, 0))"))
 	   (and (match_code "const_int")
                 (match_test "(ival & 0xffff0000) == 0")))))