
[v1] VECT: Bugfix ICE for vectorizable_store when both len and mask

Message ID 20240308000401.2766685-1-pan2.li@intel.com
State New

Commit Message

Li, Pan2 March 8, 2024, 12:04 a.m. UTC
From: Pan Li <pan2.li@intel.com>

This patch fixes an ICE in vectorizable_store when both the loop_masks and
the loop_lens are involved.  The ICE looks like the following with "-march=rv64gcv -O3".

during GIMPLE pass: vect
test.c: In function ‘d’:
test.c:6:6: internal compiler error: in vectorizable_store, at
tree-vect-stmts.cc:8691
    6 | void d() {
      |      ^
0x37a6f2f vectorizable_store
        .../__RISC-V_BUILD__/../gcc/tree-vect-stmts.cc:8691
0x37b861c vect_analyze_stmt(vec_info*, _stmt_vec_info*, bool*,
_slp_tree*, _slp_instance*, vec<stmt_info_for_cost, va_heap, vl_ptr>*)
        .../__RISC-V_BUILD__/../gcc/tree-vect-stmts.cc:13242
0x1db5dca vect_analyze_loop_operations
        .../__RISC-V_BUILD__/../gcc/tree-vect-loop.cc:2208
0x1db885b vect_analyze_loop_2
        .../__RISC-V_BUILD__/../gcc/tree-vect-loop.cc:3041
0x1dba029 vect_analyze_loop_1
        .../__RISC-V_BUILD__/../gcc/tree-vect-loop.cc:3481
0x1dbabad vect_analyze_loop(loop*, vec_info_shared*)
        .../__RISC-V_BUILD__/../gcc/tree-vect-loop.cc:3639
0x1e389d1 try_vectorize_loop_1
        .../__RISC-V_BUILD__/../gcc/tree-vectorizer.cc:1066
0x1e38f3d try_vectorize_loop
        .../__RISC-V_BUILD__/../gcc/tree-vectorizer.cc:1182
0x1e39230 execute
        .../__RISC-V_BUILD__/../gcc/tree-vectorizer.cc:1298

The masks and the lens cannot be enabled simultaneously when the loop is
using partial vectors.  Thus, we need to ensure that one is disabled when we
record the other in check_load_store_for_partial_vectors.  For example, when
we try to record a loop len, we need to check whether the loop mask is
disabled.

The following test suites pass for this patch:
* The x86 bootstrap tests.
* The x86 full regression tests.
* The aarch64 full regression tests.
* The riscv full regression tests.

	PR target/114195

gcc/ChangeLog:

	* tree-vect-stmts.cc (check_load_store_for_partial_vectors): Add
	loop mask/len checks before recording, as they are mutually exclusive.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pr114195-1.c: New test.

Signed-off-by: Pan Li <pan2.li@intel.com>
---
 .../gcc.target/riscv/rvv/base/pr114195-1.c    | 15 +++++++++++
 gcc/tree-vect-stmts.cc                        | 26 ++++++++++++++-----
 2 files changed, 35 insertions(+), 6 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr114195-1.c

Comments

Richard Biener March 8, 2024, 1:59 p.m. UTC | #1
On Fri, Mar 8, 2024 at 1:04 AM <pan2.li@intel.com> wrote:
>
> From: Pan Li <pan2.li@intel.com>
>
> This patch would like to fix one ICE in vectorizable_store for both the
> loop_masks and loop_lens.  The ICE looks like below with "-march=rv64gcv -O3".
>
> during GIMPLE pass: vect
> test.c: In function ‘d’:
> test.c:6:6: internal compiler error: in vectorizable_store, at
> tree-vect-stmts.cc:8691
>     6 | void d() {
>       |      ^
> 0x37a6f2f vectorizable_store
>         .../__RISC-V_BUILD__/../gcc/tree-vect-stmts.cc:8691
> 0x37b861c vect_analyze_stmt(vec_info*, _stmt_vec_info*, bool*,
> _slp_tree*, _slp_instance*, vec<stmt_info_for_cost, va_heap, vl_ptr>*)
>         .../__RISC-V_BUILD__/../gcc/tree-vect-stmts.cc:13242
> 0x1db5dca vect_analyze_loop_operations
>         .../__RISC-V_BUILD__/../gcc/tree-vect-loop.cc:2208
> 0x1db885b vect_analyze_loop_2
>         .../__RISC-V_BUILD__/../gcc/tree-vect-loop.cc:3041
> 0x1dba029 vect_analyze_loop_1
>         .../__RISC-V_BUILD__/../gcc/tree-vect-loop.cc:3481
> 0x1dbabad vect_analyze_loop(loop*, vec_info_shared*)
>         .../__RISC-V_BUILD__/../gcc/tree-vect-loop.cc:3639
> 0x1e389d1 try_vectorize_loop_1
>         .../__RISC-V_BUILD__/../gcc/tree-vectorizer.cc:1066
> 0x1e38f3d try_vectorize_loop
>         .../__RISC-V_BUILD__/../gcc/tree-vectorizer.cc:1182
> 0x1e39230 execute
>         .../__RISC-V_BUILD__/../gcc/tree-vectorizer.cc:1298
>
> The masks and the lens cannot be enabled simultaneously when the loop is
> using partial vectors.  Thus, we need to ensure that one is disabled when we
> record the other in check_load_store_for_partial_vectors.  For example, when
> we try to record a loop len, we need to check whether the loop mask is
> disabled.

I don't think you can rely on LOOP_VINFO_FULLY_WITH_LENGTH_P during
analysis.  Instead, the way we tried to set things up is that we never even
try both, and there is (was?) code to reject partial vector usage when we
end up recording both lens and masks.

That said, the assert you run into should only be asserted during transform,
not during analysis.  It possibly was before Robin's costing reorg?

Richard.

Richard Biener March 8, 2024, 2:02 p.m. UTC | #2
On Fri, Mar 8, 2024 at 2:59 PM Richard Biener
<richard.guenther@gmail.com> wrote:
>
> On Fri, Mar 8, 2024 at 1:04 AM <pan2.li@intel.com> wrote:
> >
> > [patch description and ICE backtrace snipped]
>
> I don't think you can rely on LOOP_VINFO_FULLY_WITH_LENGTH_P during
> analysis.  Instead how we tried to set up things is that we never even try
> both and there is (was?) code to reject partial vector usage when we end
> up recording both lens and masks.

That is, a fix along the lines of what you do would have been to split
LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P into
_WITH_LENGTH and _WITH_MASKS variants, and make sure to record both when
a stmt can handle both, so in the end we would have a choice.  Currently,
if we end up with both masks and lens, we don't know whether all stmts
support lens, or masks, or just some.

But I guess we simply assumed that on RISC-V you'd never end up with an
unsupported len but a supported mask.

Richard.

> That said, the assert you run into should only be asserted during transform,
> not during analysis.  It possibly was before Robin's costing reorg?
>
> Richard.
Li, Pan2 March 9, 2024, 10:57 a.m. UTC | #3
Thanks Richard for the comments.

> That said, the assert you run into should be only asserted during transform,
> not during analysis.
Good to learn that the assertion is only valid during transform.  I guess we may have
almost the same case in vectorizable_load.  I will try restricting the assertion to the
transform phase, check for regressions, and then send the v2.

> It possibly was before Robin's costing reorg?
Sorry, I am not sure which commit from Robin that refers to.

Pan

-----Original Message-----
From: Richard Biener <richard.guenther@gmail.com> 
Sent: Friday, March 8, 2024 10:03 PM
To: Li, Pan2 <pan2.li@intel.com>
Cc: gcc-patches@gcc.gnu.org; juzhe.zhong@rivai.ai; kito.cheng@gmail.com; Wang, Yanzhang <yanzhang.wang@intel.com>; rdapp.gcc@gmail.com; jeffreyalaw@gmail.com
Subject: Re: [PATCH v1] VECT: Bugfix ICE for vectorizable_store when both len and mask

[quoted thread snipped; see above]

Patch

diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr114195-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr114195-1.c
new file mode 100644
index 00000000000..b0c9d5b81b8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr114195-1.c
@@ -0,0 +1,15 @@ 
+/* Test that we do not ICE at compile time.  */
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -ftree-vectorize" } */
+
+long a, b;
+extern short c[];
+
+void d() {
+  for (int e = 0; e < 35; e += 2) {
+    a = ({ a < 0 ? a : 0; });
+    b = ({ b < 0 ? b : 0; });
+
+    c[e] = 0;
+  }
+}
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 14a3ffb5f02..624947ed271 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -1502,6 +1502,8 @@  check_load_store_for_partial_vectors (loop_vec_info loop_vinfo, tree vectype,
 				      gather_scatter_info *gs_info,
 				      tree scalar_mask)
 {
+  gcc_assert (LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo));
+
   /* Invariant loads need no special support.  */
   if (memory_access_type == VMAT_INVARIANT)
     return;
@@ -1521,9 +1523,17 @@  check_load_store_for_partial_vectors (loop_vec_info loop_vinfo, tree vectype,
       internal_fn ifn
 	= (is_load ? vect_load_lanes_supported (vectype, group_size, true)
 		   : vect_store_lanes_supported (vectype, group_size, true));
-      if (ifn == IFN_MASK_LEN_LOAD_LANES || ifn == IFN_MASK_LEN_STORE_LANES)
+
+      /* When the loop_vinfo is using partial vectors, we cannot enable
+	 both the full masks and the lengths simultaneously.  Thus, make
+	 sure that one of them is disabled when we record the other.
+	 The same applies to the other places that call
+	 vect_record_loop_len and vect_record_loop_mask.  */
+      if ((ifn == IFN_MASK_LEN_LOAD_LANES || ifn == IFN_MASK_LEN_STORE_LANES)
+	&& !LOOP_VINFO_FULLY_MASKED_P (loop_vinfo))
 	vect_record_loop_len (loop_vinfo, lens, nvectors, vectype, 1);
-      else if (ifn == IFN_MASK_LOAD_LANES || ifn == IFN_MASK_STORE_LANES)
+      else if ((ifn == IFN_MASK_LOAD_LANES || ifn == IFN_MASK_STORE_LANES)
+	&& !LOOP_VINFO_FULLY_WITH_LENGTH_P (loop_vinfo))
 	vect_record_loop_mask (loop_vinfo, masks, nvectors, vectype,
 			       scalar_mask);
       else
@@ -1549,12 +1559,14 @@  check_load_store_for_partial_vectors (loop_vec_info loop_vinfo, tree vectype,
       if (internal_gather_scatter_fn_supported_p (len_ifn, vectype,
 						  gs_info->memory_type,
 						  gs_info->offset_vectype,
-						  gs_info->scale))
+						  gs_info->scale)
+	&& !LOOP_VINFO_FULLY_MASKED_P (loop_vinfo))
 	vect_record_loop_len (loop_vinfo, lens, nvectors, vectype, 1);
       else if (internal_gather_scatter_fn_supported_p (ifn, vectype,
 						       gs_info->memory_type,
 						       gs_info->offset_vectype,
-						       gs_info->scale))
+						       gs_info->scale)
+	&& !LOOP_VINFO_FULLY_WITH_LENGTH_P (loop_vinfo))
 	vect_record_loop_mask (loop_vinfo, masks, nvectors, vectype,
 			       scalar_mask);
       else
@@ -1608,7 +1620,8 @@  check_load_store_for_partial_vectors (loop_vec_info loop_vinfo, tree vectype,
   machine_mode mask_mode;
   machine_mode vmode;
   bool using_partial_vectors_p = false;
-  if (get_len_load_store_mode (vecmode, is_load).exists (&vmode))
+  if (get_len_load_store_mode (vecmode, is_load).exists (&vmode)
+    && !LOOP_VINFO_FULLY_MASKED_P (loop_vinfo))
     {
       nvectors = group_memory_nvectors (group_size * vf, nunits);
       unsigned factor = (vecmode == vmode) ? 1 : GET_MODE_UNIT_SIZE (vecmode);
@@ -1616,7 +1629,8 @@  check_load_store_for_partial_vectors (loop_vec_info loop_vinfo, tree vectype,
       using_partial_vectors_p = true;
     }
   else if (targetm.vectorize.get_mask_mode (vecmode).exists (&mask_mode)
-	   && can_vec_mask_load_store_p (vecmode, mask_mode, is_load))
+	   && can_vec_mask_load_store_p (vecmode, mask_mode, is_load)
+	   && !LOOP_VINFO_FULLY_WITH_LENGTH_P (loop_vinfo))
     {
       nvectors = group_memory_nvectors (group_size * vf, nunits);
       vect_record_loop_mask (loop_vinfo, masks, nvectors, vectype, scalar_mask);