From patchwork Sun Mar 11 15:23:02 2018
X-Patchwork-Submitter: Peter Bergner
X-Patchwork-Id: 884259
To: GCC Patches <gcc-patches@gcc.gnu.org>
Cc: Segher Boessenkool, Kaushik Phatak, Bill Schmidt
From: Peter Bergner
Subject: [PATCH, rs6000] Fix PR83789: __builtin_altivec_lvx fails for powerpc for altivec-4.c
Date: Sun, 11 Mar 2018 10:23:02 -0500
Message-Id: <06a0a953-b8ec-f235-01b5-c0de8f4bb9e6@vnet.ibm.com>

PR83789 shows a problem in the builtin expansion code not calling the
correct define_insn, given the correct mode (32-bit versus
64-bit).  One could add tests in this code to call the correct pattern,
but it's easier to create a common define_expand which everyone can call
that does the right thing.  This allows us to clean up all the callers,
making for much simpler code.  This also fixes the issue Segher mentioned,
namely that this needs fixing for multiple other vector modes and not just
the one mentioned in the bugzilla.

This passed bootstrap and regtesting on powerpc64le-linux and
powerpc64-linux (running the testsuite in both 32-bit and 64-bit modes)
with no regressions.  Ok for trunk?

I was not able to reproduce the failure reported in the bugzilla, but
Kaushik confirmed that this patch fixes the ICE.

P.S. I will be away on vacation for the next week, so if this is ok,
I won't be able to commit the patch until I return.  Unless you want to
commit it, Segher, and watch for fallout.  It's up to you.

Peter

	PR target/83789
	* config/rs6000/altivec.md (altivec_lvx_<mode>_2op): Delete
	define_insn.
	(altivec_lvx_<mode>_1op): Likewise.
	(altivec_stvx_<mode>_2op): Likewise.
	(altivec_stvx_<mode>_1op): Likewise.
	(altivec_lvx_<mode>): New define_expand.
	(altivec_stvx_<mode>): Likewise.
	(altivec_lvx_<VM2:mode>_2op_<P:mode>): New define_insn.
	(altivec_lvx_<VM2:mode>_1op_<P:mode>): Likewise.
	(altivec_stvx_<VM2:mode>_2op_<P:mode>): Likewise.
	(altivec_stvx_<VM2:mode>_1op_<P:mode>): Likewise.
	* config/rs6000/rs6000-p8swap.c (rs6000_gen_stvx): Use new expanders.
	(rs6000_gen_lvx): Likewise.
	* config/rs6000/rs6000.c (altivec_expand_lv_builtin): Likewise.
	(altivec_expand_stv_builtin): Likewise.
	(altivec_expand_builtin): Likewise.
	* config/rs6000/vector.md: Likewise.

Index: gcc/config/rs6000/altivec.md
===================================================================
--- gcc/config/rs6000/altivec.md	(revision 258348)
+++ gcc/config/rs6000/altivec.md	(working copy)
@@ -2747,39 +2747,47 @@ (define_insn "altivec_lvx_<mode>_interna
   "lvx %0,%y1"
   [(set_attr "type" "vecload")])
 
-; The next two patterns embody what lvx should usually look like.
-(define_insn "altivec_lvx_<mode>_2op"
-  [(set (match_operand:VM2 0 "register_operand" "=v")
-        (mem:VM2 (and:DI (plus:DI (match_operand:DI 1 "register_operand" "b")
-                                  (match_operand:DI 2 "register_operand" "r"))
-                         (const_int -16))))]
-  "TARGET_ALTIVEC && TARGET_64BIT"
-  "lvx %0,%1,%2"
-  [(set_attr "type" "vecload")])
-
-(define_insn "altivec_lvx_<mode>_1op"
-  [(set (match_operand:VM2 0 "register_operand" "=v")
-        (mem:VM2 (and:DI (match_operand:DI 1 "register_operand" "r")
-                         (const_int -16))))]
-  "TARGET_ALTIVEC && TARGET_64BIT"
-  "lvx %0,0,%1"
-  [(set_attr "type" "vecload")])
+; The following patterns embody what lvx should usually look like.
+(define_expand "altivec_lvx_<mode>"
+  [(set (match_operand:VM2 0 "register_operand" "")
+        (match_operand:VM2 1 "altivec_indexed_or_indirect_operand" ""))]
+  "TARGET_ALTIVEC"
+{
+  rtx addr = XEXP (operand1, 0);
+  if (rs6000_sum_of_two_registers_p (addr))
+    {
+      rtx op1 = XEXP (addr, 0);
+      rtx op2 = XEXP (addr, 1);
+      if (TARGET_64BIT)
+        emit_insn (gen_altivec_lvx_<mode>_2op_di (operand0, op1, op2));
+      else
+        emit_insn (gen_altivec_lvx_<mode>_2op_si (operand0, op1, op2));
+    }
+  else
+    {
+      if (TARGET_64BIT)
+        emit_insn (gen_altivec_lvx_<mode>_1op_di (operand0, addr));
+      else
+        emit_insn (gen_altivec_lvx_<mode>_1op_si (operand0, addr));
+    }
+  DONE;
+})
 
-; 32-bit versions of the above.
-(define_insn "altivec_lvx_<mode>_2op_si"
+; The next two patterns embody what lvx should usually look like.
+(define_insn "altivec_lvx_<VM2:mode>_2op_<P:mode>"
   [(set (match_operand:VM2 0 "register_operand" "=v")
-        (mem:VM2 (and:SI (plus:SI (match_operand:SI 1 "register_operand" "b")
-                                  (match_operand:SI 2 "register_operand" "r"))
-                         (const_int -16))))]
-  "TARGET_ALTIVEC && TARGET_32BIT"
+        (mem:VM2 (and:P (plus:P (match_operand:P 1 "register_operand" "b")
+                                (match_operand:P 2 "register_operand" "r"))
+                        (const_int -16))))]
+  "TARGET_ALTIVEC"
   "lvx %0,%1,%2"
   [(set_attr "type" "vecload")])
 
-(define_insn "altivec_lvx_<mode>_1op_si"
+(define_insn "altivec_lvx_<VM2:mode>_1op_<P:mode>"
   [(set (match_operand:VM2 0 "register_operand" "=v")
-        (mem:VM2 (and:SI (match_operand:SI 1 "register_operand" "r")
-                         (const_int -16))))]
-  "TARGET_ALTIVEC && TARGET_32BIT"
+        (mem:VM2 (and:P (match_operand:P 1 "register_operand" "r")
+                        (const_int -16))))]
+  "TARGET_ALTIVEC"
   "lvx %0,0,%1"
   [(set_attr "type" "vecload")])
 
@@ -2795,39 +2803,47 @@ (define_insn "altivec_stvx_<mode>_intern
   "stvx %1,%y0"
   [(set_attr "type" "vecstore")])
 
-; The next two patterns embody what stvx should usually look like.
-(define_insn "altivec_stvx_<mode>_2op"
-  [(set (mem:VM2 (and:DI (plus:DI (match_operand:DI 1 "register_operand" "b")
-                                  (match_operand:DI 2 "register_operand" "r"))
-                         (const_int -16)))
-        (match_operand:VM2 0 "register_operand" "v"))]
-  "TARGET_ALTIVEC && TARGET_64BIT"
-  "stvx %0,%1,%2"
-  [(set_attr "type" "vecstore")])
-
-(define_insn "altivec_stvx_<mode>_1op"
-  [(set (mem:VM2 (and:DI (match_operand:DI 1 "register_operand" "r")
-                         (const_int -16)))
-        (match_operand:VM2 0 "register_operand" "v"))]
-  "TARGET_ALTIVEC && TARGET_64BIT"
-  "stvx %0,0,%1"
-  [(set_attr "type" "vecstore")])
+; The following patterns embody what stvx should usually look like.
+(define_expand "altivec_stvx_<mode>"
+  [(set (match_operand:VM2 1 "altivec_indexed_or_indirect_operand" "")
+        (match_operand:VM2 0 "register_operand" ""))]
+  "TARGET_ALTIVEC"
+{
+  rtx addr = XEXP (operand1, 0);
+  if (rs6000_sum_of_two_registers_p (addr))
+    {
+      rtx op1 = XEXP (addr, 0);
+      rtx op2 = XEXP (addr, 1);
+      if (TARGET_64BIT)
+        emit_insn (gen_altivec_stvx_<mode>_2op_di (operand0, op1, op2));
+      else
+        emit_insn (gen_altivec_stvx_<mode>_2op_si (operand0, op1, op2));
+    }
+  else
+    {
+      if (TARGET_64BIT)
+        emit_insn (gen_altivec_stvx_<mode>_1op_di (operand0, addr));
+      else
+        emit_insn (gen_altivec_stvx_<mode>_1op_si (operand0, addr));
+    }
+  DONE;
+})
 
-; 32-bit versions of the above.
-(define_insn "altivec_stvx_<mode>_2op_si"
-  [(set (mem:VM2 (and:SI (plus:SI (match_operand:SI 1 "register_operand" "b")
-                                  (match_operand:SI 2 "register_operand" "r"))
-                         (const_int -16)))
-        (match_operand:VM2 0 "register_operand" "v"))]
-  "TARGET_ALTIVEC && TARGET_32BIT"
+; The next two patterns embody what stvx should usually look like.
+(define_insn "altivec_stvx_<VM2:mode>_2op_<P:mode>"
+  [(set (mem:VM2 (and:P (plus:P (match_operand:P 1 "register_operand" "b")
+                                (match_operand:P 2 "register_operand" "r"))
+                        (const_int -16)))
+        (match_operand:VM2 0 "register_operand" "v"))]
+  "TARGET_ALTIVEC"
   "stvx %0,%1,%2"
   [(set_attr "type" "vecstore")])
 
-(define_insn "altivec_stvx_<mode>_1op_si"
-  [(set (mem:VM2 (and:SI (match_operand:SI 1 "register_operand" "r")
-                         (const_int -16)))
-        (match_operand:VM2 0 "register_operand" "v"))]
-  "TARGET_ALTIVEC && TARGET_32BIT"
+(define_insn "altivec_stvx_<VM2:mode>_1op_<P:mode>"
+  [(set (mem:VM2 (and:P (match_operand:P 1 "register_operand" "r")
+                        (const_int -16)))
+        (match_operand:VM2 0 "register_operand" "v"))]
+  "TARGET_ALTIVEC"
   "stvx %0,0,%1"
   [(set_attr "type" "vecstore")])
Index: gcc/config/rs6000/rs6000-p8swap.c
===================================================================
--- gcc/config/rs6000/rs6000-p8swap.c	(revision 258348)
+++ gcc/config/rs6000/rs6000-p8swap.c	(working copy)
@@ -1547,94 +1547,31 @@ mimic_memory_attributes_and_flags (rtx n
 rtx
 rs6000_gen_stvx (enum machine_mode mode, rtx dest_exp, rtx src_exp)
 {
-  rtx memory_address = XEXP (dest_exp, 0);
   rtx stvx;
 
-  if (rs6000_sum_of_two_registers_p (memory_address))
-    {
-      rtx op1, op2;
-      op1 = XEXP (memory_address, 0);
-      op2 = XEXP (memory_address, 1);
-      if (mode == V16QImode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v16qi_2op (src_exp, op1, op2)
-	  : gen_altivec_stvx_v16qi_2op_si (src_exp, op1, op2);
-      else if (mode == V8HImode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v8hi_2op (src_exp, op1, op2)
-	  : gen_altivec_stvx_v8hi_2op_si (src_exp, op1, op2);
-#ifdef HAVE_V8HFmode
-      else if (mode == V8HFmode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v8hf_2op (src_exp, op1, op2)
-	  : gen_altivec_stvx_v8hf_2op_si (src_exp, op1, op2);
-#endif
-      else if (mode == V4SImode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v4si_2op (src_exp, op1, op2)
-	  : gen_altivec_stvx_v4si_2op_si (src_exp, op1, op2);
-      else if (mode == V4SFmode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v4sf_2op (src_exp, op1, op2)
-	  : gen_altivec_stvx_v4sf_2op_si (src_exp, op1, op2);
-      else if (mode == V2DImode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v2di_2op (src_exp, op1, op2)
-	  : gen_altivec_stvx_v2di_2op_si (src_exp, op1, op2);
-      else if (mode == V2DFmode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v2df_2op (src_exp, op1, op2)
-	  : gen_altivec_stvx_v2df_2op_si (src_exp, op1, op2);
-      else if (mode == V1TImode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v1ti_2op (src_exp, op1, op2)
-	  : gen_altivec_stvx_v1ti_2op_si (src_exp, op1, op2);
-      else
-	/* KFmode, TFmode, other modes not expected in this context.  */
-	gcc_unreachable ();
-    }
-  else /* REG_P (memory_address) */
-    {
-      if (mode == V16QImode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v16qi_1op (src_exp, memory_address)
-	  : gen_altivec_stvx_v16qi_1op_si (src_exp, memory_address);
-      else if (mode == V8HImode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v8hi_1op (src_exp, memory_address)
-	  : gen_altivec_stvx_v8hi_1op_si (src_exp, memory_address);
+  if (mode == V16QImode)
+    stvx = gen_altivec_stvx_v16qi (src_exp, dest_exp);
+  else if (mode == V8HImode)
+    stvx = gen_altivec_stvx_v8hi (src_exp, dest_exp);
 #ifdef HAVE_V8HFmode
-      else if (mode == V8HFmode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v8hf_1op (src_exp, memory_address)
-	  : gen_altivec_stvx_v8hf_1op_si (src_exp, memory_address);
+  else if (mode == V8HFmode)
+    stvx = gen_altivec_stvx_v8hf (src_exp, dest_exp);
 #endif
-      else if (mode == V4SImode)
-	stvx =TARGET_64BIT
-	  ? gen_altivec_stvx_v4si_1op (src_exp, memory_address)
-	  : gen_altivec_stvx_v4si_1op_si (src_exp, memory_address);
-      else if (mode == V4SFmode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v4sf_1op (src_exp, memory_address)
-	  : gen_altivec_stvx_v4sf_1op_si (src_exp, memory_address);
-      else if (mode == V2DImode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v2di_1op (src_exp, memory_address)
-	  : gen_altivec_stvx_v2di_1op_si (src_exp, memory_address);
-      else if (mode == V2DFmode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v2df_1op (src_exp, memory_address)
-	  : gen_altivec_stvx_v2df_1op_si (src_exp, memory_address);
-      else if (mode == V1TImode)
-	stvx = TARGET_64BIT
-	  ? gen_altivec_stvx_v1ti_1op (src_exp, memory_address)
-	  : gen_altivec_stvx_v1ti_1op_si (src_exp, memory_address);
-      else
-	/* KFmode, TFmode, other modes not expected in this context.  */
-	gcc_unreachable ();
-    }
+  else if (mode == V4SImode)
+    stvx = gen_altivec_stvx_v4si (src_exp, dest_exp);
+  else if (mode == V4SFmode)
+    stvx = gen_altivec_stvx_v4sf (src_exp, dest_exp);
+  else if (mode == V2DImode)
+    stvx = gen_altivec_stvx_v2di (src_exp, dest_exp);
+  else if (mode == V2DFmode)
+    stvx = gen_altivec_stvx_v2df (src_exp, dest_exp);
+  else if (mode == V1TImode)
+    stvx = gen_altivec_stvx_v1ti (src_exp, dest_exp);
+  else
+    /* KFmode, TFmode, other modes not expected in this context.  */
+    gcc_unreachable ();
 
-  rtx new_mem_exp = SET_DEST (stvx);
+  rtx new_mem_exp = SET_DEST (PATTERN (stvx));
   mimic_memory_attributes_and_flags (new_mem_exp, dest_exp);
 
   return stvx;
 }
 
@@ -1726,95 +1663,31 @@ replace_swapped_aligned_store (swap_web_
 rtx
 rs6000_gen_lvx (enum machine_mode mode, rtx dest_exp, rtx src_exp)
 {
-  rtx memory_address = XEXP (src_exp, 0);
   rtx lvx;
 
-  if (rs6000_sum_of_two_registers_p (memory_address))
-    {
-      rtx op1, op2;
-      op1 = XEXP (memory_address, 0);
-      op2 = XEXP (memory_address, 1);
-
-      if (mode == V16QImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v16qi_2op (dest_exp, op1, op2)
-	  : gen_altivec_lvx_v16qi_2op_si (dest_exp, op1, op2);
-      else if (mode == V8HImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v8hi_2op (dest_exp, op1, op2)
-	  : gen_altivec_lvx_v8hi_2op_si (dest_exp, op1, op2);
-#ifdef HAVE_V8HFmode
-      else if (mode == V8HFmode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v8hf_2op (dest_exp, op1, op2)
-	  : gen_altivec_lvx_v8hf_2op_si (dest_exp, op1, op2);
-#endif
-      else if (mode == V4SImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v4si_2op (dest_exp, op1, op2)
-	  : gen_altivec_lvx_v4si_2op_si (dest_exp, op1, op2);
-      else if (mode == V4SFmode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v4sf_2op (dest_exp, op1, op2)
-	  : gen_altivec_lvx_v4sf_2op_si (dest_exp, op1, op2);
-      else if (mode == V2DImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v2di_2op (dest_exp, op1, op2)
-	  : gen_altivec_lvx_v2di_2op_si (dest_exp, op1, op2);
-      else if (mode == V2DFmode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v2df_2op (dest_exp, op1, op2)
-	  : gen_altivec_lvx_v2df_2op_si (dest_exp, op1, op2);
-      else if (mode == V1TImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v1ti_2op (dest_exp, op1, op2)
-	  : gen_altivec_lvx_v1ti_2op_si (dest_exp, op1, op2);
-      else
-	/* KFmode, TFmode, other modes not expected in this context.  */
-	gcc_unreachable ();
-    }
-  else /* REG_P (memory_address) */
-    {
-      if (mode == V16QImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v16qi_1op (dest_exp, memory_address)
-	  : gen_altivec_lvx_v16qi_1op_si (dest_exp, memory_address);
-      else if (mode == V8HImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v8hi_1op (dest_exp, memory_address)
-	  : gen_altivec_lvx_v8hi_1op_si (dest_exp, memory_address);
+  if (mode == V16QImode)
+    lvx = gen_altivec_lvx_v16qi (dest_exp, src_exp);
+  else if (mode == V8HImode)
+    lvx = gen_altivec_lvx_v8hi (dest_exp, src_exp);
 #ifdef HAVE_V8HFmode
-      else if (mode == V8HFmode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v8hf_1op (dest_exp, memory_address)
-	  : gen_altivec_lvx_v8hf_1op_si (dest_exp, memory_address);
+  else if (mode == V8HFmode)
+    lvx = gen_altivec_lvx_v8hf (dest_exp, src_exp);
 #endif
-      else if (mode == V4SImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v4si_1op (dest_exp, memory_address)
-	  : gen_altivec_lvx_v4si_1op_si (dest_exp, memory_address);
-      else if (mode == V4SFmode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v4sf_1op (dest_exp, memory_address)
-	  : gen_altivec_lvx_v4sf_1op_si (dest_exp, memory_address);
-      else if (mode == V2DImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v2di_1op (dest_exp, memory_address)
-	  : gen_altivec_lvx_v2di_1op_si (dest_exp, memory_address);
-      else if (mode == V2DFmode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v2df_1op (dest_exp, memory_address)
-	  : gen_altivec_lvx_v2df_1op_si (dest_exp, memory_address);
-      else if (mode == V1TImode)
-	lvx = TARGET_64BIT
-	  ? gen_altivec_lvx_v1ti_1op (dest_exp, memory_address)
-	  : gen_altivec_lvx_v1ti_1op_si (dest_exp, memory_address);
-      else
-	/* KFmode, TFmode, other modes not expected in this context.  */
-	gcc_unreachable ();
-    }
+  else if (mode == V4SImode)
+    lvx = gen_altivec_lvx_v4si (dest_exp, src_exp);
+  else if (mode == V4SFmode)
+    lvx = gen_altivec_lvx_v4sf (dest_exp, src_exp);
+  else if (mode == V2DImode)
+    lvx = gen_altivec_lvx_v2di (dest_exp, src_exp);
+  else if (mode == V2DFmode)
+    lvx = gen_altivec_lvx_v2df (dest_exp, src_exp);
+  else if (mode == V1TImode)
+    lvx = gen_altivec_lvx_v1ti (dest_exp, src_exp);
+  else
+    /* KFmode, TFmode, other modes not expected in this context.  */
+    gcc_unreachable ();
 
-  rtx new_mem_exp = SET_SRC (lvx);
+  rtx new_mem_exp = SET_SRC (PATTERN (lvx));
   mimic_memory_attributes_and_flags (new_mem_exp, src_exp);
 
   return lvx;
Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c	(revision 258348)
+++ gcc/config/rs6000/rs6000.c	(working copy)
@@ -14451,12 +14451,12 @@ altivec_expand_lv_builtin (enum insn_cod
   /* For LVX, express the RTL accurately by ANDing the address with -16.
      LVXL and LVE*X expand to use UNSPECs to hide their special behavior,
      so the raw address is fine.  */
-  if (icode == CODE_FOR_altivec_lvx_v2df_2op
-      || icode == CODE_FOR_altivec_lvx_v2di_2op
-      || icode == CODE_FOR_altivec_lvx_v4sf_2op
-      || icode == CODE_FOR_altivec_lvx_v4si_2op
-      || icode == CODE_FOR_altivec_lvx_v8hi_2op
-      || icode == CODE_FOR_altivec_lvx_v16qi_2op)
+  if (icode == CODE_FOR_altivec_lvx_v2df
+      || icode == CODE_FOR_altivec_lvx_v2di
+      || icode == CODE_FOR_altivec_lvx_v4sf
+      || icode == CODE_FOR_altivec_lvx_v4si
+      || icode == CODE_FOR_altivec_lvx_v8hi
+      || icode == CODE_FOR_altivec_lvx_v16qi)
     {
       rtx rawaddr;
       if (op0 == const0_rtx)
@@ -14609,12 +14609,12 @@ altivec_expand_stv_builtin (enum insn_co
   /* For STVX, express the RTL accurately by ANDing the address with -16.
     STVXL and STVE*X expand to use UNSPECs to hide their special behavior,
     so the raw address is fine.  */
-  if (icode == CODE_FOR_altivec_stvx_v2df_2op
-      || icode == CODE_FOR_altivec_stvx_v2di_2op
-      || icode == CODE_FOR_altivec_stvx_v4sf_2op
-      || icode == CODE_FOR_altivec_stvx_v4si_2op
-      || icode == CODE_FOR_altivec_stvx_v8hi_2op
-      || icode == CODE_FOR_altivec_stvx_v16qi_2op)
+  if (icode == CODE_FOR_altivec_stvx_v2df
+      || icode == CODE_FOR_altivec_stvx_v2di
+      || icode == CODE_FOR_altivec_stvx_v4sf
+      || icode == CODE_FOR_altivec_stvx_v4si
+      || icode == CODE_FOR_altivec_stvx_v8hi
+      || icode == CODE_FOR_altivec_stvx_v16qi)
     {
      if (op1 == const0_rtx)
	rawaddr = op2;
@@ -15524,18 +15524,18 @@ altivec_expand_builtin (tree exp, rtx ta
   switch (fcode)
     {
     case ALTIVEC_BUILTIN_STVX_V2DF:
-      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v2df_2op, exp);
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v2df, exp);
     case ALTIVEC_BUILTIN_STVX_V2DI:
-      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v2di_2op, exp);
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v2di, exp);
     case ALTIVEC_BUILTIN_STVX_V4SF:
-      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v4sf_2op, exp);
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v4sf, exp);
     case ALTIVEC_BUILTIN_STVX:
     case ALTIVEC_BUILTIN_STVX_V4SI:
-      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v4si_2op, exp);
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v4si, exp);
     case ALTIVEC_BUILTIN_STVX_V8HI:
-      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v8hi_2op, exp);
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v8hi, exp);
     case ALTIVEC_BUILTIN_STVX_V16QI:
-      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v16qi_2op, exp);
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvx_v16qi, exp);
     case ALTIVEC_BUILTIN_STVEBX:
       return altivec_expand_stv_builtin (CODE_FOR_altivec_stvebx, exp);
     case ALTIVEC_BUILTIN_STVEHX:
@@ -15806,23 +15806,23 @@ altivec_expand_builtin (tree exp, rtx ta
       return altivec_expand_lv_builtin (CODE_FOR_altivec_lvxl_v16qi,
					exp, target, false);
     case ALTIVEC_BUILTIN_LVX_V2DF:
-      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v2df_2op,
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v2df,
					exp, target, false);
     case ALTIVEC_BUILTIN_LVX_V2DI:
-      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v2di_2op,
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v2di,
					exp, target, false);
     case ALTIVEC_BUILTIN_LVX_V4SF:
-      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v4sf_2op,
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v4sf,
					exp, target, false);
     case ALTIVEC_BUILTIN_LVX:
     case ALTIVEC_BUILTIN_LVX_V4SI:
-      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v4si_2op,
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v4si,
					exp, target, false);
     case ALTIVEC_BUILTIN_LVX_V8HI:
-      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v8hi_2op,
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v8hi,
					exp, target, false);
     case ALTIVEC_BUILTIN_LVX_V16QI:
-      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v16qi_2op,
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvx_v16qi,
					exp, target, false);
     case ALTIVEC_BUILTIN_LVLX:
       return altivec_expand_lv_builtin (CODE_FOR_altivec_lvlx,
Index: gcc/config/rs6000/vector.md
===================================================================
--- gcc/config/rs6000/vector.md	(revision 258348)
+++ gcc/config/rs6000/vector.md	(working copy)
@@ -196,12 +196,7 @@ (define_expand "vector_altivec_load_<mod
-      if (rs6000_sum_of_two_registers_p (addr))
-	emit_insn (gen_altivec_lvx_<mode>_2op (operands[0], XEXP (addr, 0),
-					       XEXP (addr, 1)));
-      else
-	emit_insn (gen_altivec_lvx_<mode>_1op (operands[0], operands[1]));
+      emit_insn (gen_altivec_lvx_<mode> (operands[0], operands[1]));
       DONE;
     }
 })
@@ -218,12 +213,7 @@ (define_expand "vector_altivec_store_<mo
-      if (rs6000_sum_of_two_registers_p (addr))
-	emit_insn (gen_altivec_stvx_<mode>_2op (operands[1], XEXP (addr, 0),
-						XEXP (addr, 1)));
-      else
-	emit_insn (gen_altivec_stvx_<mode>_1op (operands[1], operands[0]));
+      emit_insn (gen_altivec_stvx_<mode> (operands[1], operands[0]));
       DONE;
     }
 })