diff mbox series

cleanup: Change condition order

Message ID 20230721022343.314043-1-juzhe.zhong@rivai.ai
State New
Series cleanup: Change condition order

Commit Message

juzhe.zhong@rivai.ai July 21, 2023, 2:23 a.m. UTC
Hi, Richard and Richi.

I have double-checked the recent code for len && mask support again.

Some places use this code structure:

if (len_mask_fn)
...
else if (mask_fn)
...

while other places use:

if (mask_len_fn)
...
else if (mask)

Based on a previous review comment from Richi:
https://gcc.gnu.org/pipermail/gcc-patches/2023-July/625067.html

the LEN MASK variant should be checked before the MASK variant.

So this patch reorders all such conditions to check the LEN MASK variant before the MASK variant.

This is the last cleanup patch.

Bootstrap and regression testing are in progress.

gcc/ChangeLog:

	* tree-vect-stmts.cc (check_load_store_for_partial_vectors): Change condition order.
	(vectorizable_operation): Ditto.

---
 gcc/tree-vect-stmts.cc | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

Comments

Richard Biener July 21, 2023, 6:20 a.m. UTC | #1
On Fri, 21 Jul 2023, Juzhe-Zhong wrote:

> Hi, Richard and Richi.
> 
> I have double-checked the recent code for len && mask support again.
> 
> Some places use this code structure:
> 
> if (len_mask_fn)
> ...
> else if (mask_fn)
> ...
> 
> while other places use:
> 
> if (mask_len_fn)
> ...
> else if (mask)
> 
> Based on a previous review comment from Richi:
> https://gcc.gnu.org/pipermail/gcc-patches/2023-July/625067.html
> 
> the LEN MASK variant should be checked before the MASK variant.
> 
> So this patch reorders all such conditions to check the LEN MASK variant before the MASK variant.
> 
> This is the last cleanup patch.
> 
> Bootstrap and regression testing are in progress.

OK.

> gcc/ChangeLog:
> 
> 	* tree-vect-stmts.cc (check_load_store_for_partial_vectors): Change condition order.
> 	(vectorizable_operation): Ditto.
> 
> ---
>  gcc/tree-vect-stmts.cc | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
> index d5b4f020332..2fe856db9ab 100644
> --- a/gcc/tree-vect-stmts.cc
> +++ b/gcc/tree-vect-stmts.cc
> @@ -1635,17 +1635,17 @@ check_load_store_for_partial_vectors (loop_vec_info loop_vinfo, tree vectype,
>        internal_fn len_ifn = (is_load
>  			     ? IFN_MASK_LEN_GATHER_LOAD
>  			     : IFN_MASK_LEN_SCATTER_STORE);
> -      if (internal_gather_scatter_fn_supported_p (ifn, vectype,
> +      if (internal_gather_scatter_fn_supported_p (len_ifn, vectype,
>  						  gs_info->memory_type,
>  						  gs_info->offset_vectype,
>  						  gs_info->scale))
> -	vect_record_loop_mask (loop_vinfo, masks, nvectors, vectype,
> -			       scalar_mask);
> -      else if (internal_gather_scatter_fn_supported_p (len_ifn, vectype,
> +	vect_record_loop_len (loop_vinfo, lens, nvectors, vectype, 1);
> +      else if (internal_gather_scatter_fn_supported_p (ifn, vectype,
>  						       gs_info->memory_type,
>  						       gs_info->offset_vectype,
>  						       gs_info->scale))
> -	vect_record_loop_len (loop_vinfo, lens, nvectors, vectype, 1);
> +	vect_record_loop_mask (loop_vinfo, masks, nvectors, vectype,
> +			       scalar_mask);
>        else
>  	{
>  	  if (dump_enabled_p ())
> @@ -6596,16 +6596,16 @@ vectorizable_operation (vec_info *vinfo,
>  	  && LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo)
>  	  && mask_out_inactive)
>  	{
> -	  if (cond_fn != IFN_LAST
> -	      && direct_internal_fn_supported_p (cond_fn, vectype,
> +	  if (cond_len_fn != IFN_LAST
> +	      && direct_internal_fn_supported_p (cond_len_fn, vectype,
>  						 OPTIMIZE_FOR_SPEED))
> -	    vect_record_loop_mask (loop_vinfo, masks, ncopies * vec_num,
> -				   vectype, NULL);
> -	  else if (cond_len_fn != IFN_LAST
> -		   && direct_internal_fn_supported_p (cond_len_fn, vectype,
> -						      OPTIMIZE_FOR_SPEED))
>  	    vect_record_loop_len (loop_vinfo, lens, ncopies * vec_num, vectype,
>  				  1);
> +	  else if (cond_fn != IFN_LAST
> +		   && direct_internal_fn_supported_p (cond_fn, vectype,
> +						      OPTIMIZE_FOR_SPEED))
> +	    vect_record_loop_mask (loop_vinfo, masks, ncopies * vec_num,
> +				   vectype, NULL);
>  	  else
>  	    {
>  	      if (dump_enabled_p ())
>
Lehua Ding July 21, 2023, 8:38 a.m. UTC | #2
Committed, thanks Richard.


Bootstrap and regression passed.

Patch

diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index d5b4f020332..2fe856db9ab 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -1635,17 +1635,17 @@  check_load_store_for_partial_vectors (loop_vec_info loop_vinfo, tree vectype,
       internal_fn len_ifn = (is_load
 			     ? IFN_MASK_LEN_GATHER_LOAD
 			     : IFN_MASK_LEN_SCATTER_STORE);
-      if (internal_gather_scatter_fn_supported_p (ifn, vectype,
+      if (internal_gather_scatter_fn_supported_p (len_ifn, vectype,
 						  gs_info->memory_type,
 						  gs_info->offset_vectype,
 						  gs_info->scale))
-	vect_record_loop_mask (loop_vinfo, masks, nvectors, vectype,
-			       scalar_mask);
-      else if (internal_gather_scatter_fn_supported_p (len_ifn, vectype,
+	vect_record_loop_len (loop_vinfo, lens, nvectors, vectype, 1);
+      else if (internal_gather_scatter_fn_supported_p (ifn, vectype,
 						       gs_info->memory_type,
 						       gs_info->offset_vectype,
 						       gs_info->scale))
-	vect_record_loop_len (loop_vinfo, lens, nvectors, vectype, 1);
+	vect_record_loop_mask (loop_vinfo, masks, nvectors, vectype,
+			       scalar_mask);
       else
 	{
 	  if (dump_enabled_p ())
@@ -6596,16 +6596,16 @@  vectorizable_operation (vec_info *vinfo,
 	  && LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo)
 	  && mask_out_inactive)
 	{
-	  if (cond_fn != IFN_LAST
-	      && direct_internal_fn_supported_p (cond_fn, vectype,
+	  if (cond_len_fn != IFN_LAST
+	      && direct_internal_fn_supported_p (cond_len_fn, vectype,
 						 OPTIMIZE_FOR_SPEED))
-	    vect_record_loop_mask (loop_vinfo, masks, ncopies * vec_num,
-				   vectype, NULL);
-	  else if (cond_len_fn != IFN_LAST
-		   && direct_internal_fn_supported_p (cond_len_fn, vectype,
-						      OPTIMIZE_FOR_SPEED))
 	    vect_record_loop_len (loop_vinfo, lens, ncopies * vec_num, vectype,
 				  1);
+	  else if (cond_fn != IFN_LAST
+		   && direct_internal_fn_supported_p (cond_fn, vectype,
+						      OPTIMIZE_FOR_SPEED))
+	    vect_record_loop_mask (loop_vinfo, masks, ncopies * vec_num,
+				   vectype, NULL);
 	  else
 	    {
 	      if (dump_enabled_p ())