
[AArch64,PR65375] Fix RTX cost for vector SET

Message ID 5506D77B.5060909@linaro.org
State New

Commit Message

Kugan Vivekanandarajah March 16, 2015, 1:15 p.m. UTC
On 16/03/15 23:32, Kugan wrote:
>>> lower-subreg.c:compute_costs() only cares about the cost of a (set (reg)
>>> (const_int )) move but I think the intention, at least for now, is to
>>> return extra_cost->vect.alu for all the vector operations.
>>
>> Almost, what we want at the moment is COSTS_N_INSNS (1) +
>> extra_cost->vect.alu
> 
> Thanks Kyrill for the review.
> 
>>> Regression tested on aarch64-linux-gnu with no new regressions. Is this
>>> OK for trunk?
>>
>> Are you sure it's a (set (reg) (const_int)) that's being costed here? I
>> thought for moves into vector registers it would be a (set (reg)
>> (const_vector)) which we don't handle in our rtx costs currently. I
>> think the correct approach would be to extend the aarch64_rtx_costs
>> switch statement to handle the CONST_VECTOR case. I believe you can use
>> aarch64_simd_valid_immediate to check whether x is a valid immediate for
>> a simd instruction and give it a cost of extra_cost->vect.alu. The logic
>> should be similar to the CONST_INT case.
> 
> Sorry about the (set (reg) (const_int)) above. But the actual RTL that
> is being split at 220r.subreg2 is
> 
> (insn 11 10 12 2 (set (subreg:V4SF (reg/v:OI 77 [ __o ]) 0)
>          (subreg:V4SF (reg/v:OI 73 [ __o ]) 0))
> /home/kugan/work/builds/gcc-fsf-gcc/tools/lib/gcc/aarch64-none-linux-gnu/5.0.0/include/arm_neon.h:22625
> 800 {*aarch64_simd_movv4sf}
>       (nil))
> 
> And also, if we return an RTX cost above COSTS_N_INSNS (1), the move will
> be split and it doesn't recover from there. Therefore we need something
> like the patch below to prevent that from happening.
> 

Hi Kyrill,

How about the attached patch? It is similar to what is currently done
for scalar register move.

Thanks,
Kugan
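
For reference, the CONST_VECTOR handling Kyrill suggests above might look
roughly like the sketch below. This is illustrative only: the
aarch64_simd_valid_immediate signature shown follows the GCC 5-era
interface, and the surrounding aarch64_rtx_costs switch is elided.

  /* Sketch only, mirroring the CONST_INT logic referred to above.
     Assumes the GCC 5-era aarch64_simd_valid_immediate (rtx,
     machine_mode, bool, struct simd_immediate_info *) interface.  */
  case CONST_VECTOR:
    /* Valid SIMD immediates can be materialised with a single
       movi/mvni, so cost them as one vector ALU operation.  */
    if (aarch64_simd_valid_immediate (x, mode, false, NULL))
      {
        *cost = COSTS_N_INSNS (1);
        if (speed)
          *cost += extra_cost->vect.alu;
        return true;
      }
    break;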

Comments

Jim Wilson March 16, 2015, 4:42 p.m. UTC | #1
Resending, now that I've figured out how to make Gmail send plain-text
email instead of HTML.

> >> Almost, what we want at the moment is COSTS_N_INSNS (1) +
> >> extra_cost->vect.alu

This won't work, because extra_cost->vect.alu is itself COSTS_N_INSNS (1),
which makes the total COSTS_N_INSNS (2).

The lower-subreg pass decides whether to split based on
cost >= word_move_cost * (size / word_mode_size).  A 128-bit vector is
twice the size of the 64-bit word mode, and a word move costs
COSTS_N_INSNS (1).  Setting the vector move cost to COSTS_N_INSNS (2)
therefore gives COSTS_N_INSNS (2) >= COSTS_N_INSNS (2), so vector moves
are split, which is bad for vector register allocation.  This calculation
happens in compute_costs in lower-subreg.c.
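
Concretely, the decision reduces to a simple threshold; the sketch below
spells out the arithmetic (the names are illustrative, not
lower-subreg.c's own code):

  /* Sketch of the compute_costs splitting decision in lower-subreg.c,
     reduced to its arithmetic.  GCC defines COSTS_N_INSNS (n) as n * 4.  */
  #define COSTS_N_INSNS(n) ((n) * 4)

  static int
  should_split (int set_cost, int mode_size, int word_size)
  {
    int word_move_cost = COSTS_N_INSNS (1);
    int n_words = mode_size / word_size;      /* 16 / 8 = 2 for V4SF.  */
    /* Splitting looks profitable once the whole-register move costs at
       least as much as moving the value word by word.  */
    return set_cost >= word_move_cost * n_words;
  }

  /* should_split (COSTS_N_INSNS (2), 16, 8) is true: a 128-bit vector
     move costed at COSTS_N_INSNS (2) meets the threshold and is split.  */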

> How about the attached patch? It is similar to what is currently done
> for scalar register move.

I like this approach of using the vector register size instead of word
size when we have a vector mode.

Jim
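
For reference, the formula in Kugan's patch (shown in full at the bottom
of the page) counts one instruction per 128-bit register copied. A worked
example of that formula as a standalone sketch, with the COSTS_N_INSNS
scaling written out:

  /* Worked example of the patch's cost formula; 16 bytes is
     GET_MODE_SIZE (V4SImode), i.e. one Q register.  */
  static int
  vector_move_cost (int mode_size_bytes)
  {
    int n_minus_1 = (mode_size_bytes - 1) / 16;
    return 4 * (n_minus_1 + 1);   /* COSTS_N_INSNS (n_minus_1 + 1).  */
  }

  /* V4SF (16 bytes)  -> COSTS_N_INSNS (1): below the 2-word threshold
     above, so lower-subreg no longer splits the move.
     OImode (32 bytes) -> COSTS_N_INSNS (2): two Q-register copies.  */
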
Kyrylo Tkachov March 16, 2015, 4:48 p.m. UTC | #2
On 16/03/15 13:15, Kugan wrote:
> On 16/03/15 23:32, Kugan wrote:
>>>> lower-subreg.c:compute_costs() only cares about the cost of a (set (reg)
>>>> (const_int )) move but I think the intention, at least for now, is to
>>>> return extra_cost->vect.alu for all the vector operations.
>>> Almost, what we want at the moment is COSTS_N_INSNS (1) +
>>> extra_cost->vect.alu
>> Thanks Kyrill for the review.
>>
>>>> Regression tested on aarch64-linux-gnu with no new regressions. Is this
>>>> OK for trunk?
>>> Are you sure it's a (set (reg) (const_int)) that's being costed here? I
>>> thought for moves into vector registers it would be a (set (reg)
>>> (const_vector)) which we don't handle in our rtx costs currently. I
>>> think the correct approach would be to extend the aarch64_rtx_costs
>>> switch statement to handle the CONST_VECTOR case. I believe you can use
>>> aarch64_simd_valid_immediate to check whether x is a valid immediate for
>>> a simd instruction and give it a cost of extra_cost->vect.alu. The logic
>>> should be similar to the CONST_INT case.
>> Sorry about the (set (reg) (const_int)) above. But the actual RTL that
>> is being split at 220r.subreg2 is
>>
>> (insn 11 10 12 2 (set (subreg:V4SF (reg/v:OI 77 [ __o ]) 0)
>>           (subreg:V4SF (reg/v:OI 73 [ __o ]) 0))
>> /home/kugan/work/builds/gcc-fsf-gcc/tools/lib/gcc/aarch64-none-linux-gnu/5.0.0/include/arm_neon.h:22625
>> 800 {*aarch64_simd_movv4sf}
>>        (nil))
>>
>> And also, if we return an RTX cost above COSTS_N_INSNS (1), the move will
>> be split and it doesn't recover from there. Therefore we need something
>> like the patch below to prevent that from happening.
>>
> Hi Kyrill,
>
> How about the attached patch? It is similar to what is currently done
> for scalar register move.

Hi Kugan,
yeah, I think this is a better approach, though I can't approve.

Kyrill

>
> Thanks,
> Kugan

Patch

diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index cba3c1a..b9db3ac 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -5544,10 +5544,17 @@  aarch64_rtx_costs (rtx x, int code, int outer ATTRIBUTE_UNUSED,
 
 	  /* Fall through.  */
 	case REG:
+	  if (VECTOR_MODE_P (GET_MODE (op0)) && REG_P (op1))
+	    {
+              /* The cost is 1 per register copied.  */
+              int n_minus_1 = (GET_MODE_SIZE (GET_MODE (op0)) - 1)
+			      / GET_MODE_SIZE (V4SImode);
+              *cost = COSTS_N_INSNS (n_minus_1 + 1);
+	    }
 	  /* const0_rtx is in general free, but we will use an
 	     instruction to set a register to 0.  */
-          if (REG_P (op1) || op1 == const0_rtx)
-            {
+	  else if (REG_P (op1) || op1 == const0_rtx)
+	    {
               /* The cost is 1 per register copied.  */
               int n_minus_1 = (GET_MODE_SIZE (GET_MODE (op0)) - 1)
 			      / UNITS_PER_WORD;