Patchwork [AARCH64] Fix unrecognizable insn issue

Submitter Zhenqiang Chen
Date April 10, 2013, 8:02 a.m.
Message ID <CACgzC7ADK2m2u8P9SOvjVB+=wsV2UwRHsRWvAWDtxhyao55AAg@mail.gmail.com>
Permalink /patch/235326/
State New

Comments

Zhenqiang Chen - April 10, 2013, 8:02 a.m.
Hi,

During expand, the function aarch64_vcond_internal inverts some comparisons, e.g.

  a LE b -> b GE a

But if "b" is "CONST0_RTX", "b GE a" is an unrecognizable insn.

Refer to https://bugs.launchpad.net/linaro-toolchain-binaries/+bug/1163942
for details about the issue.

The patch makes "b" a register when inverting LE.

Is it OK for trunk, 4.8 and arm/aarch64-4.7-branch?

Thanks!
-Zhenqiang

ChangeLog:
2013-04-10  Zhenqiang Chen <zhenqiang.chen@linaro.org>

	* config/aarch64/aarch64-simd.md (aarch64_vcond_internal): Set
	operands[5] to a register when inverting LE.
Andrew Pinski - April 10, 2013, 8:05 a.m.
On Wed, Apr 10, 2013 at 1:02 AM, Zhenqiang Chen
<zhenqiang.chen@linaro.org> wrote:
> Hi,
>
> During expand, the function aarch64_vcond_internal inverts some comparisons, e.g.
>
>   a LE b -> b GE a
>
> But if "b" is "CONST0_RTX", "b GE a" is an unrecognizable insn.
>
> Refer to https://bugs.launchpad.net/linaro-toolchain-binaries/+bug/1163942
> for details about the issue.
>
> The patch makes "b" a register when inverting LE.
>
> Is it OK for trunk, 4.8 and arm/aarch64-4.7-branch?


Can you add a testcase also?  It would be best to add one that is a
reduced testcase.
Also can you expand on what is going on here?  There is not enough
information in either this email or the bug report to figure out if
this is the correct patch.

Thanks,
Andrew Pinski

>
> Thanks!
> -Zhenqiang
>
> ChangeLog:
> 2013-04-10  Zhenqiang Chen <zhenqiang.chen@linaro.org>
>
>         * config/aarch64/aarch64-simd.md (aarch64_vcond_internal): Set
>         operands[5] to a register when inverting LE.
>
> diff --git a/gcc/config/aarch64/aarch64-simd.md
> b/gcc/config/aarch64/aarch64-simd.md
> index 92dcfc0..d08d23a 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -1657,6 +1657,8 @@
>        complimentary_comparison = gen_aarch64_cmgt<mode>;
>        break;
>      case LE:
> +      if (!REG_P (operands[5]))
> +       operands[5] = force_reg (<MODE>mode, operands[5]);
>      case UNLE:
>        inverse = 1;
>        /* Fall through.  */


Zhenqiang Chen - April 10, 2013, 9:02 a.m.
On 10 April 2013 16:05, Andrew Pinski <pinskia@gmail.com> wrote:
> On Wed, Apr 10, 2013 at 1:02 AM, Zhenqiang Chen
> <zhenqiang.chen@linaro.org> wrote:
>> Hi,
>>
>> During expand, the function aarch64_vcond_internal inverts some comparisons, e.g.
>>
>>   a LE b -> b GE a
>>
>> But if "b" is "CONST0_RTX", "b GE a" is an unrecognizable insn.
>>
>> Refer to https://bugs.launchpad.net/linaro-toolchain-binaries/+bug/1163942
>> for details about the issue.
>>
>> The patch makes "b" a register when inverting LE.
>>
>> Is it OK for trunk, 4.8 and arm/aarch64-4.7-branch?
>
>
> Can you add a testcase also?  It would be best to add one that is a
> reduced testcase.

Sorry. Due to the licence, I cannot post the code from SPEC 2006, and
it is hard for me to rewrite a program to reproduce it.

> Also can you expand on what is going on here?  There is not enough
> information in either this email or the bug report to figure out if
> this is the correct patch.

Here is more detail about the issue:

A VEC_COND_EXPR is generated during tree-level optimization, like

vect_var_.21092_16406 = VEC_COND_EXPR <vect_var_.21091_16404 <= { 0.0,
0.0, 0.0, 0.0 }, vect_var_.21086_16391, vect_var_.21091_16404>;

During expand, the function aarch64_vcond_internal inverts "LE" to "GE", i.e.
rewrites "vect_var_.21091_16404 <= { 0.0, 0.0, 0.0, 0.0 }" as "{ 0.0,
0.0, 0.0, 0.0 } >= vect_var_.21091_16404".

The insn after expand looks like

(insn 2909 2908 2910 165 (set (reg:V4SI 16777)
        (unspec:V4SI [
                (const_vector:V4SF [
                        (const_double:SF 0.0 [0x0.0p+0])
                        (const_double:SF 0.0 [0x0.0p+0])
                        (const_double:SF 0.0 [0x0.0p+0])
                        (const_double:SF 0.0 [0x0.0p+0])
                    ])
                (reg:V4SF 9594 [ vect_var_.21059 ])
            ] UNSPEC_CMGE))

But this insn is unrecognizable. FCMGE accepts only two operand forms:

FCMGE <v>d, <v>n, <v>m
FCMGE <v>d, <v>n, #0

Both require "<v>n" to be a register; "#0" can only be the last operand.
So when inverting LE to GE, the function aarch64_vcond_internal must
make sure operands[5] is a register.

Thanks!
-Zhenqiang

>>
>> Thanks!
>> -Zhenqiang
>>
>> ChangeLog:
>> 2013-04-10  Zhenqiang Chen <zhenqiang.chen@linaro.org>
>>
>>         * config/aarch64/aarch64-simd.md (aarch64_vcond_internal): Set
>>         operands[5] to a register when inverting LE.
>>
>> diff --git a/gcc/config/aarch64/aarch64-simd.md
>> b/gcc/config/aarch64/aarch64-simd.md
>> index 92dcfc0..d08d23a 100644
>> --- a/gcc/config/aarch64/aarch64-simd.md
>> +++ b/gcc/config/aarch64/aarch64-simd.md
>> @@ -1657,6 +1657,8 @@
>>        complimentary_comparison = gen_aarch64_cmgt<mode>;
>>        break;
>>      case LE:
>> +      if (!REG_P (operands[5]))
>> +       operands[5] = force_reg (<MODE>mode, operands[5]);
>>      case UNLE:
>>        inverse = 1;
>>        /* Fall through.  */
Andrew Pinski - April 10, 2013, 9:18 a.m.
On Wed, Apr 10, 2013 at 2:02 AM, Zhenqiang Chen
<zhenqiang.chen@linaro.org> wrote:
> On 10 April 2013 16:05, Andrew Pinski <pinskia@gmail.com> wrote:
>> On Wed, Apr 10, 2013 at 1:02 AM, Zhenqiang Chen
>> <zhenqiang.chen@linaro.org> wrote:
>>> Hi,
>>>
>>> During expand, the function aarch64_vcond_internal inverts some comparisons, e.g.
>>>
>>>   a LE b -> b GE a
>>>
>>> But if "b" is "CONST0_RTX", "b GE a" is an unrecognizable insn.
>>>
>>> Refer to https://bugs.launchpad.net/linaro-toolchain-binaries/+bug/1163942
>>> for details about the issue.
>>>
>>> The patch makes "b" a register when inverting LE.
>>>
>>> Is it OK for trunk, 4.8 and arm/aarch64-4.7-branch?
>>
>>
>> Can you add a testcase also?  It would be best to add one that is a
>> reduced testcase.
>
> Sorry. Due to the licence, I cannot post the code from SPEC 2006, and
> it is hard for me to rewrite a program to reproduce it.

Actually, most of the code inside SPEC 2006 is free/open-source
software; I think that is true of wrf also.  What is not free is the
data used for benchmarking (except maybe the GCC input, which is IIRC
preprocessed source from the other benchmarks).
There are also ways to get a reduced testcase without the full source:
http://gcc.gnu.org/wiki/A_guide_to_testcase_reduction
Again, a testcase is still highly recommended here.

Thanks,
Andrew Pinski


>
>> Also can you expand on what is going on here?  There is not enough
>> information in either this email or the bug report to figure out if
>> this is the correct patch.
>
> Here is more detail about the issue:
>
> A VEC_COND_EXPR is generated during tree-level optimization, like
>
> vect_var_.21092_16406 = VEC_COND_EXPR <vect_var_.21091_16404 <= { 0.0,
> 0.0, 0.0, 0.0 }, vect_var_.21086_16391, vect_var_.21091_16404>;
>
> During expand, the function aarch64_vcond_internal inverts "LE" to "GE", i.e.
> rewrites "vect_var_.21091_16404 <= { 0.0, 0.0, 0.0, 0.0 }" as "{ 0.0,
> 0.0, 0.0, 0.0 } >= vect_var_.21091_16404".
>
> The insn after expand is like
>
> (insn 2909 2908 2910 165 (set (reg:V4SI 16777)
>         (unspec:V4SI [
>                 (const_vector:V4SF [
>                         (const_double:SF 0.0 [0x0.0p+0])
>                         (const_double:SF 0.0 [0x0.0p+0])
>                         (const_double:SF 0.0 [0x0.0p+0])
>                         (const_double:SF 0.0 [0x0.0p+0])
>                     ])
>                 (reg:V4SF 9594 [ vect_var_.21059 ])
>             ] UNSPEC_CMGE))
>
> But this is illegal. FCMGE has two formats:
>
> FCMGE <v>d, <v>n, <v>m
> FCMGE <v>d, <v>n, #0
>
> Both require "<v>n" to be a register; "#0" can only be the last operand.
> So when inverting LE to GE, the function aarch64_vcond_internal must
> make sure operands[5] is a register.
>
> Thanks!
> -Zhenqiang
>
>>>
>>> Thanks!
>>> -Zhenqiang
>>>
>>> ChangeLog:
>>> 2013-04-10  Zhenqiang Chen <zhenqiang.chen@linaro.org>
>>>
>>>         * config/aarch64/aarch64-simd.md (aarch64_vcond_internal): Set
>>>         operands[5] to a register when inverting LE.
>>>
>>> diff --git a/gcc/config/aarch64/aarch64-simd.md
>>> b/gcc/config/aarch64/aarch64-simd.md
>>> index 92dcfc0..d08d23a 100644
>>> --- a/gcc/config/aarch64/aarch64-simd.md
>>> +++ b/gcc/config/aarch64/aarch64-simd.md
>>> @@ -1657,6 +1657,8 @@
>>>        complimentary_comparison = gen_aarch64_cmgt<mode>;
>>>        break;
>>>      case LE:
>>> +      if (!REG_P (operands[5]))
>>> +       operands[5] = force_reg (<MODE>mode, operands[5]);
>>>      case UNLE:
>>>        inverse = 1;
>>>        /* Fall through.  */
Zhenqiang Chen - April 10, 2013, 9:46 a.m.
On 10 April 2013 17:18, Andrew Pinski <pinskia@gmail.com> wrote:
> On Wed, Apr 10, 2013 at 2:02 AM, Zhenqiang Chen
> <zhenqiang.chen@linaro.org> wrote:
>> On 10 April 2013 16:05, Andrew Pinski <pinskia@gmail.com> wrote:
>>> On Wed, Apr 10, 2013 at 1:02 AM, Zhenqiang Chen
>>> <zhenqiang.chen@linaro.org> wrote:
>>>> Hi,
>>>>
>>>> During expand, the function aarch64_vcond_internal inverts some comparisons, e.g.
>>>>
>>>>   a LE b -> b GE a
>>>>
>>>> But if "b" is "CONST0_RTX", "b GE a" is an unrecognizable insn.
>>>>
>>>> Refer to https://bugs.launchpad.net/linaro-toolchain-binaries/+bug/1163942
>>>> for details about the issue.
>>>>
>>>> The patch makes "b" a register when inverting LE.
>>>>
>>>> Is it OK for trunk, 4.8 and arm/aarch64-4.7-branch?
>>>
>>>
>>> Can you add a testcase also?  It would be best to add one that is a
>>> reduced testcase.
>>
>> Sorry. Due to the licence, I cannot post the code from SPEC 2006, and
>> it is hard for me to rewrite a program to reproduce it.
>
> Actually, most of the code inside SPEC 2006 is free/open-source
> software; I think that is true of wrf also.  What is not free is the
> data used for benchmarking (except maybe the GCC input, which is IIRC
> preprocessed source from the other benchmarks).
> There are also ways to get a reduced testcase without the full source:
> http://gcc.gnu.org/wiki/A_guide_to_testcase_reduction
> Again, a testcase is still highly recommended here.

Thank you for the comment. I will check the licence issue.

-Zhenqiang

>>
>>> Also can you expand on what is going on here?  There is not enough
>>> information in either this email or the bug report to figure out if
>>> this is the correct patch.
>>
>> Here is more detail about the issue:
>>
>> A VEC_COND_EXPR is generated during tree-level optimization, like
>>
>> vect_var_.21092_16406 = VEC_COND_EXPR <vect_var_.21091_16404 <= { 0.0,
>> 0.0, 0.0, 0.0 }, vect_var_.21086_16391, vect_var_.21091_16404>;
>>
>> During expand, the function aarch64_vcond_internal inverts "LE" to "GE", i.e.
>> rewrites "vect_var_.21091_16404 <= { 0.0, 0.0, 0.0, 0.0 }" as "{ 0.0,
>> 0.0, 0.0, 0.0 } >= vect_var_.21091_16404".
>>
>> The insn after expand is like
>>
>> (insn 2909 2908 2910 165 (set (reg:V4SI 16777)
>>         (unspec:V4SI [
>>                 (const_vector:V4SF [
>>                         (const_double:SF 0.0 [0x0.0p+0])
>>                         (const_double:SF 0.0 [0x0.0p+0])
>>                         (const_double:SF 0.0 [0x0.0p+0])
>>                         (const_double:SF 0.0 [0x0.0p+0])
>>                     ])
>>                 (reg:V4SF 9594 [ vect_var_.21059 ])
>>             ] UNSPEC_CMGE))
>>
>> But this is illegal. FCMGE has two formats:
>>
>> FCMGE <v>d, <v>n, <v>m
>> FCMGE <v>d, <v>n, #0
>>
>> Both require "<v>n" to be a register; "#0" can only be the last operand.
>> So when inverting LE to GE, the function aarch64_vcond_internal must
>> make sure operands[5] is a register.
>>
>> Thanks!
>> -Zhenqiang
>>
>>>>
>>>> Thanks!
>>>> -Zhenqiang
>>>>
>>>> ChangeLog:
>>>> 2013-04-10  Zhenqiang Chen <zhenqiang.chen@linaro.org>
>>>>
>>>>         * config/aarch64/aarch64-simd.md (aarch64_vcond_internal): Set
>>>>         operands[5] to a register when inverting LE.
>>>>
>>>> diff --git a/gcc/config/aarch64/aarch64-simd.md
>>>> b/gcc/config/aarch64/aarch64-simd.md
>>>> index 92dcfc0..d08d23a 100644
>>>> --- a/gcc/config/aarch64/aarch64-simd.md
>>>> +++ b/gcc/config/aarch64/aarch64-simd.md
>>>> @@ -1657,6 +1657,8 @@
>>>>        complimentary_comparison = gen_aarch64_cmgt<mode>;
>>>>        break;
>>>>      case LE:
>>>> +      if (!REG_P (operands[5]))
>>>> +       operands[5] = force_reg (<MODE>mode, operands[5]);
>>>>      case UNLE:
>>>>        inverse = 1;
>>>>        /* Fall through.  */

Patch

diff --git a/gcc/config/aarch64/aarch64-simd.md
b/gcc/config/aarch64/aarch64-simd.md
index 92dcfc0..d08d23a 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1657,6 +1657,8 @@ 
       complimentary_comparison = gen_aarch64_cmgt<mode>;
       break;
     case LE:
+      if (!REG_P (operands[5]))
+	operands[5] = force_reg (<MODE>mode, operands[5]);
     case UNLE:
       inverse = 1;
       /* Fall through.  */