From patchwork Tue Oct  9 11:08:11 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Greenhalgh
X-Patchwork-Id: 190278
From: James Greenhalgh <james.greenhalgh@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] [AArch64] Add vcond, vcondu support.
Date: Tue, 9 Oct 2012 12:08:11 +0100
Message-Id: <1349780891-13329-1-git-send-email-james.greenhalgh@arm.com>

Hi,

This patch adds support for the vcond and vcondu patterns to the
AArch64 backend.

Tested with no regressions on aarch64-none-elf.

OK for aarch64-branch?
(If so, someone will have to commit for me, as I do not have commit
rights.)

Thanks,
James Greenhalgh

---
2012-09-11  James Greenhalgh  <james.greenhalgh@arm.com>
	    Tejas Belagod  <tejas.belagod@arm.com>

	* config/aarch64/aarch64-simd.md
	(aarch64_simd_bsl<mode>_internal): New pattern.
	(aarch64_simd_bsl<mode>): Likewise.
	(aarch64_vcond_internal<mode>): Likewise.
	(vcondu<mode><mode>): Likewise.
	(vcond<mode><mode>): Likewise.
	* config/aarch64/iterators.md (UNSPEC_BSL): Add to define_constants.

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index a7ddfb1..c9b5e17 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1467,6 +1467,150 @@
    (set_attr "simd_mode" "V2SI")]
 )
 
+;; vbsl_* intrinsics may compile to any of vbsl/vbif/vbit depending on register
+;; allocation.  For an intrinsic of form:
+;;   vD = bsl_* (vS, vN, vM)
+;; We can use any of:
+;;   bsl vS, vN, vM  (if D = S)
+;;   bit vD, vN, vS  (if D = M, so 1-bits in vS choose bits from vN, else vM)
+;;   bif vD, vM, vS  (if D = N, so 0-bits in vS choose bits from vM, else vN)
+
+(define_insn "aarch64_simd_bsl<mode>_internal"
+  [(set (match_operand:VDQ 0 "register_operand"		 "=w,w,w")
+	(unspec:VDQ [(match_operand:VDQ 1 "register_operand" " 0,w,w")
+		     (match_operand:VDQ 2 "register_operand" " w,w,0")
+		     (match_operand:VDQ 3 "register_operand" " w,0,w")]
+		    UNSPEC_BSL))]
+  "TARGET_SIMD"
+  "@
+  bsl\\t%0.<Vbtype>, %2.<Vbtype>, %3.<Vbtype>
+  bit\\t%0.<Vbtype>, %2.<Vbtype>, %1.<Vbtype>
+  bif\\t%0.<Vbtype>, %3.<Vbtype>, %1.<Vbtype>"
+)
+
+(define_expand "aarch64_simd_bsl<mode>"
+  [(set (match_operand:VDQ 0 "register_operand")
+	(unspec:VDQ [(match_operand:<V_cmp_result> 1 "register_operand")
+		     (match_operand:VDQ 2 "register_operand")
+		     (match_operand:VDQ 3 "register_operand")]
+		    UNSPEC_BSL))]
+  "TARGET_SIMD"
+{
+  /* We can't alias operands together if they have different modes.  */
+  operands[1] = gen_lowpart (<MODE>mode, operands[1]);
+})
+
+(define_expand "aarch64_vcond_internal<mode>"
+  [(set (match_operand:VDQ 0 "register_operand")
+	(if_then_else:VDQ
+	  (match_operator 3 "comparison_operator"
+	    [(match_operand:VDQ 4 "register_operand")
+	     (match_operand:VDQ 5 "nonmemory_operand")])
+	  (match_operand:VDQ 1 "register_operand")
+	  (match_operand:VDQ 2 "register_operand")))]
+  "TARGET_SIMD"
+{
+  int inverse = 0, has_zero_imm_form = 0;
+  rtx mask = gen_reg_rtx (<MODE>mode);
+
+  switch (GET_CODE (operands[3]))
+    {
+    case LE:
+    case LT:
+    case NE:
+      inverse = 1;
+      /* Fall through.  */
+    case GE:
+    case GT:
+    case EQ:
+      has_zero_imm_form = 1;
+      break;
+    case LEU:
+    case LTU:
+      inverse = 1;
+      break;
+    default:
+      break;
+    }
+
+  if (!REG_P (operands[5])
+      && (operands[5] != CONST0_RTX (<MODE>mode) || !has_zero_imm_form))
+    operands[5] = force_reg (<MODE>mode, operands[5]);
+
+  switch (GET_CODE (operands[3]))
+    {
+    case LT:
+    case GE:
+      emit_insn (gen_aarch64_cmge<mode> (mask, operands[4], operands[5]));
+      break;
+
+    case LE:
+    case GT:
+      emit_insn (gen_aarch64_cmgt<mode> (mask, operands[4], operands[5]));
+      break;
+
+    case LTU:
+    case GEU:
+      emit_insn (gen_aarch64_cmhs<mode> (mask, operands[4], operands[5]));
+      break;
+
+    case LEU:
+    case GTU:
+      emit_insn (gen_aarch64_cmhi<mode> (mask, operands[4], operands[5]));
+      break;
+
+    case NE:
+    case EQ:
+      emit_insn (gen_aarch64_cmeq<mode> (mask, operands[4], operands[5]));
+      break;
+
+    default:
+      gcc_unreachable ();
+    }
+
+  if (inverse)
+    emit_insn (gen_aarch64_simd_bsl<mode> (operands[0], mask, operands[2],
+					   operands[1]));
+  else
+    emit_insn (gen_aarch64_simd_bsl<mode> (operands[0], mask, operands[1],
+					   operands[2]));
+
+  DONE;
+})
+
+(define_expand "vcond<mode><mode>"
+  [(set (match_operand:VDQ 0 "register_operand")
+	(if_then_else:VDQ
+	  (match_operator 3 "comparison_operator"
+	    [(match_operand:VDQ 4 "register_operand")
+	     (match_operand:VDQ 5 "nonmemory_operand")])
+	  (match_operand:VDQ 1 "register_operand")
+	  (match_operand:VDQ 2 "register_operand")))]
+  "TARGET_SIMD"
+{
+  emit_insn (gen_aarch64_vcond_internal<mode> (operands[0], operands[1],
+					       operands[2], operands[3],
+					       operands[4], operands[5]));
+  DONE;
+})
+
+
+(define_expand "vcondu<mode><mode>"
+  [(set (match_operand:VDQ 0 "register_operand")
+	(if_then_else:VDQ
+	  (match_operator 3 "comparison_operator"
+	    [(match_operand:VDQ 4 "register_operand")
+	     (match_operand:VDQ 5 "nonmemory_operand")])
+	  (match_operand:VDQ 1 "register_operand")
+	  (match_operand:VDQ 2 "register_operand")))]
+  "TARGET_SIMD"
+{
+  emit_insn (gen_aarch64_vcond_internal<mode> (operands[0], operands[1],
+					       operands[2], operands[3],
+					       operands[4], operands[5]));
+  DONE;
+})
+
 ;; Patterns for AArch64 SIMD Intrinsics.
 
 (define_expand "aarch64_create"
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index bf2041e..8d5d4b0 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -227,6 +227,7 @@
     UNSPEC_CMTST	; Used in aarch64-simd.md.
     UNSPEC_FMAX		; Used in aarch64-simd.md.
     UNSPEC_FMIN		; Used in aarch64-simd.md.
+    UNSPEC_BSL		; Used in aarch64-simd.md.
 ])
 
 ;; -------------------------------------------------------------------