
Avoid synth multiply if target doesn't have left shift insn for a vector mode (PR middle-end/59261)

Message ID: 20140204085918.GC12671@tucnak.redhat.com
State: New

Commit Message

Jakub Jelinek Feb. 4, 2014, 8:59 a.m. UTC
Hi!

Apparently ia64 has mulv8qi3 and addv8qi3 insns, but neither ashlv8qi3
nor vashlv8qi3, thus the vectorizer can create, or generic vector lowering
can keep, a V8QImode multiplication by a constant.  But when the multiplier
is a scalar (a CONST_VECTOR with the same CONST_INT in every element),
expand_mult attempts to expand it as a left shift (if the constant is a
power of 2), or compares various costs and attempts to emit it as a
combination of left shifts and adds etc.
I think there are no targets that have vector multiply but not vector add
for the same mode, and similarly it doesn't make sense to test for scalar
left shifts; those are pretty much assumed everywhere in the compiler.
So this patch just ensures we emit a normal vector multiply if there is no
vector left shift (be it with a scalar or a vector shift count).
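
For illustration only (not part of the patch), the sketch below shows at
the source level roughly what the two expansion strategies correspond to
for a multiplication by 3: the shift-and-add form that expand_mult would
like to synthesize, which needs a working V8QImode left shift insn, and
the plain multiply the patch now falls back to.  The function names are
made up for this example.

/* Hypothetical illustration only -- not part of the patch.  */
typedef signed char V __attribute__((vector_size (8)));

/* Roughly what the synthesized expansion of x * 3 looks like: a
   shift-and-add sequence, which needs an ashlv8qi3/vashlv8qi3 insn.  */
V
mul3_shift_add (V x)
{
  return (x << 1) + x;
}

/* The plain multiply (mulv8qi3) used instead when no vector left
   shift insn exists for the mode.  */
V
mul3_direct (V x)
{
  return x * 3;
}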

Bootstrapped/regtested on i686-linux and Andreas has bootstrapped/regtested
it on ia64-linux.  Ok for trunk?

2014-02-04  Jakub Jelinek  <jakub@redhat.com>

	PR middle-end/59261
	* expmed.c (expand_mult): For MODE_VECTOR_INT multiplication
	if there is no vashl<mode>3 or ashl<mode>3 insn, skip_synth.

	* gcc.dg/pr59261.c: New test.


	Jakub

Comments

Richard Biener Feb. 4, 2014, 9:27 a.m. UTC | #1
On Tue, 4 Feb 2014, Jakub Jelinek wrote:

> Hi!
> 
> Apparently ia64 has mulv8qi3 and addv8qi3 insns, but neither ashlv8qi3
> nor vashlv8qi3, thus the vectorizer can create, or generic vector lowering
> can keep, a V8QImode multiplication by a constant.  But when the multiplier
> is a scalar (a CONST_VECTOR with the same CONST_INT in every element),
> expand_mult attempts to expand it as a left shift (if the constant is a
> power of 2), or compares various costs and attempts to emit it as a
> combination of left shifts and adds etc.
> I think there are no targets that have vector multiply but not vector add
> for the same mode, and similarly it doesn't make sense to test for scalar
> left shifts; those are pretty much assumed everywhere in the compiler.
> So this patch just ensures we emit a normal vector multiply if there is no
> vector left shift (be it with a scalar or a vector shift count).
> 
> Bootstrapped/regtested on i686-linux and Andreas has bootstrapped/regtested
> it on ia64-linux.  Ok for trunk?

Ok.

Thanks,
Richard.

> 2014-02-04  Jakub Jelinek  <jakub@redhat.com>
> 
> 	PR middle-end/59261
> 	* expmed.c (expand_mult): For MODE_VECTOR_INT multiplication
> 	if there is no vashl<mode>3 or ashl<mode>3 insn, skip_synth.
> 
> 	* gcc.dg/pr59261.c: New test.
> 
> --- gcc/expmed.c.jj	2014-01-03 11:40:57.000000000 +0100
> +++ gcc/expmed.c	2014-02-03 19:06:30.459304401 +0100
> @@ -3136,6 +3136,14 @@ expand_mult (enum machine_mode mode, rtx
>        if (do_trapv)
>  	goto skip_synth;
>  
> +      /* If mode is integer vector mode, check if the backend supports
> +	 vector lshift (by scalar or vector) at all.  If not, we can't use
> +	 synthetized multiply.  */
> +      if (GET_MODE_CLASS (mode) == MODE_VECTOR_INT
> +	  && optab_handler (vashl_optab, mode) == CODE_FOR_nothing
> +	  && optab_handler (ashl_optab, mode) == CODE_FOR_nothing)
> +	goto skip_synth;
> +
>        /* These are the operations that are potentially turned into
>  	 a sequence of shifts and additions.  */
>        mode_bitsize = GET_MODE_UNIT_BITSIZE (mode);
> --- gcc/testsuite/gcc.dg/pr59261.c.jj	2014-02-03 19:14:39.457797016 +0100
> +++ gcc/testsuite/gcc.dg/pr59261.c	2014-02-03 19:14:20.000000000 +0100
> @@ -0,0 +1,17 @@
> +/* PR middle-end/59261 */
> +/* { dg-do compile } */
> +/* { dg-options "-O2" } */
> +
> +typedef signed char V __attribute__((vector_size (8)));
> +
> +void
> +foo (V *a, V *b)
> +{
> +  *a = *b * 3;
> +}
> +
> +void
> +bar (V *a, V *b)
> +{
> +  *a = *b * 4;
> +}
> 
> 	Jakub
> 
>

Patch

--- gcc/expmed.c.jj	2014-01-03 11:40:57.000000000 +0100
+++ gcc/expmed.c	2014-02-03 19:06:30.459304401 +0100
@@ -3136,6 +3136,14 @@  expand_mult (enum machine_mode mode, rtx
       if (do_trapv)
 	goto skip_synth;
 
+      /* If mode is integer vector mode, check if the backend supports
+	 vector lshift (by scalar or vector) at all.  If not, we can't use
+	 synthetized multiply.  */
+      if (GET_MODE_CLASS (mode) == MODE_VECTOR_INT
+	  && optab_handler (vashl_optab, mode) == CODE_FOR_nothing
+	  && optab_handler (ashl_optab, mode) == CODE_FOR_nothing)
+	goto skip_synth;
+
       /* These are the operations that are potentially turned into
 	 a sequence of shifts and additions.  */
       mode_bitsize = GET_MODE_UNIT_BITSIZE (mode);
--- gcc/testsuite/gcc.dg/pr59261.c.jj	2014-02-03 19:14:39.457797016 +0100
+++ gcc/testsuite/gcc.dg/pr59261.c	2014-02-03 19:14:20.000000000 +0100
@@ -0,0 +1,17 @@ 
+/* PR middle-end/59261 */
+/* { dg-do compile } */
+/* { dg-options "-O2" } */
+
+typedef signed char V __attribute__((vector_size (8)));
+
+void
+foo (V *a, V *b)
+{
+  *a = *b * 3;
+}
+
+void
+bar (V *a, V *b)
+{
+  *a = *b * 4;
+}