From patchwork Wed Oct 12 22:32:46 2011
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 119315
Message-ID: <4E96158E.5030106@redhat.com>
Date: Wed, 12 Oct 2011 15:32:46 -0700
From: Richard Henderson
To: dje.gcc@gmail.com
CC: GCC Patches
Subject: [rs6000] Enable scalar shifts of vectors

I suppose technically the middle-end could be improved to
implement ashl as vashl by broadcasting the scalar, but Altivec is the
only extant SIMD ISA that would make use of this.  All of the others can
arrange for constant shifts to be encoded into the insn, and so
implement the ashl named pattern.

Tested on ppc64-linux, --with-cpu=G5.  Ok?


r~

	* config/rs6000/rs6000.c (rs6000_expand_vector_broadcast): New.
	* config/rs6000/rs6000-protos.h: Update.
	* config/rs6000/vector.md (ashl<mode>3): New.
	(lshr<mode>3, ashr<mode>3): New.

commit 63a6b475bcde403cc4e220827370e6ecea9aad33
Author: Richard Henderson
Date:   Mon Oct 10 12:34:59 2011 -0700

    rs6000: Implement scalar shifts of vectors.

diff --git a/gcc/config/rs6000/rs6000-protos.h b/gcc/config/rs6000/rs6000-protos.h
index 73da0f6..4dee23f 100644
--- a/gcc/config/rs6000/rs6000-protos.h
+++ b/gcc/config/rs6000/rs6000-protos.h
@@ -55,6 +55,7 @@ extern void rs6000_expand_vector_init (rtx, rtx);
 extern void paired_expand_vector_init (rtx, rtx);
 extern void rs6000_expand_vector_set (rtx, rtx, int);
 extern void rs6000_expand_vector_extract (rtx, rtx, int);
+extern rtx rs6000_expand_vector_broadcast (enum machine_mode, rtx);
 extern void build_mask64_2_operands (rtx, rtx *);
 extern int expand_block_clear (rtx[]);
 extern int expand_block_move (rtx[]);
diff --git a/gcc/config/rs6000/rs6000.c b/gcc/config/rs6000/rs6000.c
index 63c0f0c..786736d 100644
--- a/gcc/config/rs6000/rs6000.c
+++ b/gcc/config/rs6000/rs6000.c
@@ -4890,6 +4890,35 @@ rs6000_expand_vector_extract (rtx target, rtx vec, int elt)
   emit_move_insn (target, adjust_address_nv (mem, inner_mode, 0));
 }
 
+/* Broadcast an element to all parts of a vector, loaded into a register.
+   Used to turn vector shifts by a scalar into vector shifts by a vector.  */
+
+rtx
+rs6000_expand_vector_broadcast (enum machine_mode mode, rtx elt)
+{
+  rtx repl, vec[16];
+  int i, n;
+
+  n = GET_MODE_NUNITS (mode);
+  for (i = 0; i < n; ++i)
+    vec[i] = elt;
+
+  if (CONSTANT_P (elt))
+    {
+      repl = gen_rtx_CONST_VECTOR (mode, gen_rtvec_v (n, vec));
+      repl = force_reg (mode, repl);
+    }
+  else
+    {
+      rtx par = gen_rtx_PARALLEL (VOIDmode, gen_rtvec_v (n, vec));
+      repl = gen_reg_rtx (mode);
+      rs6000_expand_vector_init (repl, par);
+    }
+
+  return repl;
+}
+
+
 /* Generates shifts and masks for a pair of rldicl or rldicr insns to
    implement ANDing by the mask IN.  */
 void
diff --git a/gcc/config/rs6000/vector.md b/gcc/config/rs6000/vector.md
index 0179cd9..24b473e 100644
--- a/gcc/config/rs6000/vector.md
+++ b/gcc/config/rs6000/vector.md
@@ -987,6 +987,16 @@
   "TARGET_ALTIVEC"
   "")
 
+(define_expand "ashl<mode>3"
+  [(set (match_operand:VEC_I 0 "vint_operand" "")
+	(ashift:VEC_I
+	 (match_operand:VEC_I 1 "vint_operand" "")
+	 (match_operand:<VEC_base> 2 "nonmemory_operand" "")))]
+  "TARGET_ALTIVEC"
+{
+  operands[2] = rs6000_expand_vector_broadcast (<MODE>mode, operands[2]);
+})
+
 ;; Expanders for logical shift right on each vector element
 (define_expand "vlshr<mode>3"
   [(set (match_operand:VEC_I 0 "vint_operand" "")
@@ -995,6 +1005,16 @@
   "TARGET_ALTIVEC"
   "")
 
+(define_expand "lshr<mode>3"
+  [(set (match_operand:VEC_I 0 "vint_operand" "")
+	(lshiftrt:VEC_I
+	 (match_operand:VEC_I 1 "vint_operand" "")
+	 (match_operand:<VEC_base> 2 "nonmemory_operand" "")))]
+  "TARGET_ALTIVEC"
+{
+  operands[2] = rs6000_expand_vector_broadcast (<MODE>mode, operands[2]);
+})
+
 ;; Expanders for arithmetic shift right on each vector element
 (define_expand "vashr<mode>3"
   [(set (match_operand:VEC_I 0 "vint_operand" "")
@@ -1002,6 +1022,16 @@
 		      (match_operand:VEC_I 2 "vint_operand" "")))]
   "TARGET_ALTIVEC"
   "")
+
+(define_expand "ashr<mode>3"
+  [(set (match_operand:VEC_I 0 "vint_operand" "")
+	(ashiftrt:VEC_I
+	 (match_operand:VEC_I 1 "vint_operand" "")
+	 (match_operand:<VEC_base> 2 "nonmemory_operand" "")))]
+  "TARGET_ALTIVEC"
+{
+  operands[2] = rs6000_expand_vector_broadcast (<MODE>mode, operands[2]);
+})
 
 ;; Vector reduction expanders for VSX