From patchwork Thu Jan 20 11:27:19 2022
From: Richard Earnshaw <rearnsha@arm.com>
To: GCC patches <gcc-patches@gcc.gnu.org>
Cc: Richard Earnshaw <rearnsha@arm.com>
Subject: [PATCH 2/7] arm: Consistently use crypto_mode attribute in crypto patterns
Date: Thu, 20 Jan 2022 11:27:19 +0000
Message-Id: <20220120112724.830872-3-rearnsha@arm.com>
In-Reply-To: <20220120112724.830872-1-rearnsha@arm.com>
References: <20220120112724.830872-1-rearnsha@arm.com>

A couple of patterns in the crypto support code were hard-coding the
mode rather than using the iterators.
While not incorrect, it was slightly confusing, so adapt those
patterns to the style of the rest of the file.  Also fix some white
space issues.

gcc/ChangeLog:

	* config/arm/crypto.md (crypto_<CRYPTO_AES:crypto_pattern>): Use
	<crypto_mode> rather than hard-coding the mode.
	(crypto_<CRYPTO_AESMC:crypto_pattern>): Fix white space.
	(crypto_<CRYPTO_AES:crypto_pattern>): Likewise.
	(*aarch32_crypto_aese_fused): Likewise.
	(*aarch32_crypto_aesd_fused): Likewise.
	(crypto_<CRYPTO_BINARY:crypto_pattern>): Likewise.
	(crypto_<CRYPTO_TERNARY:crypto_pattern>): Likewise.
	(crypto_sha1h_lb): Likewise.
	(crypto_vmullp64): Likewise.
	(crypto_<CRYPTO_SELECTING:crypto_pattern>): Likewise.
	(crypto_<CRYPTO_SELECTING:crypto_pattern>_lb): Likewise.
---
 gcc/config/arm/crypto.md | 94 ++++++++++++++++++++--------------------
 1 file changed, 47 insertions(+), 47 deletions(-)

diff --git a/gcc/config/arm/crypto.md b/gcc/config/arm/crypto.md
index 6071ea17eac..020dfba7dcf 100644
--- a/gcc/config/arm/crypto.md
+++ b/gcc/config/arm/crypto.md
@@ -22,7 +22,7 @@ (define_insn "crypto_<CRYPTO_AESMC:crypto_pattern>"
   [(set (match_operand:<crypto_mode> 0 "register_operand" "=w")
 	(unspec:<crypto_mode>
-	[(match_operand:<crypto_mode> 1 "register_operand" "w")]
+	 [(match_operand:<crypto_mode> 1 "register_operand" "w")]
 	 CRYPTO_AESMC))]
   "TARGET_CRYPTO"
   "<crypto_pattern>.<crypto_size_sfx>\\t%q0, %q1"
   [(set_attr "type" "<crypto_type>")]
@@ -30,12 +30,12 @@ (define_insn "crypto_<CRYPTO_AESMC:crypto_pattern>"
 )
 
 (define_insn "crypto_<CRYPTO_AES:crypto_pattern>"
-  [(set (match_operand:V16QI 0 "register_operand" "=w")
-	(unspec:V16QI
-	 [(xor:V16QI
-	  (match_operand:V16QI 1 "register_operand" "%0")
-	  (match_operand:V16QI 2 "register_operand" "w"))]
-	 CRYPTO_AES))]
+  [(set (match_operand:<crypto_mode> 0 "register_operand" "=w")
+	(unspec:<crypto_mode>
+	 [(xor:<crypto_mode>
+	   (match_operand:<crypto_mode> 1 "register_operand" "%0")
+	   (match_operand:<crypto_mode> 2 "register_operand" "w"))]
+	 CRYPTO_AES))]
   "TARGET_CRYPTO"
   "<crypto_pattern>.<crypto_size_sfx>\\t%q0, %q2"
   [(set_attr "type" "<crypto_type>")]
@@ -44,17 +44,16 @@ (define_insn "crypto_<CRYPTO_AES:crypto_pattern>"
 ;; When AESE/AESMC fusion is enabled we really want to keep the two together
 ;; and enforce the register dependency without scheduling or register
 ;; allocation messing up the order or introducing moves inbetween.
-;; Mash the two together during combine. 
+;; Mash the two together during combine.
 
 (define_insn "*aarch32_crypto_aese_fused"
   [(set (match_operand:V16QI 0 "register_operand" "=w")
 	(unspec:V16QI
-	 [(unspec:V16QI
-	   [(xor:V16QI
-	     (match_operand:V16QI 1 "register_operand" "%0")
-	     (match_operand:V16QI 2 "register_operand" "w"))]
-	   UNSPEC_AESE)]
-	 UNSPEC_AESMC))]
+	 [(unspec:V16QI [(xor:V16QI
+			  (match_operand:V16QI 1 "register_operand" "%0")
+			  (match_operand:V16QI 2 "register_operand" "w"))]
+	   UNSPEC_AESE)]
+	 UNSPEC_AESMC))]
   "TARGET_CRYPTO
    && arm_fusion_enabled_p (tune_params::FUSE_AES_AESMC)"
   "aese.8\\t%q0, %q2\;aesmc.8\\t%q0, %q0"
@@ -65,17 +64,16 @@ (define_insn "*aarch32_crypto_aese_fused"
 ;; When AESD/AESIMC fusion is enabled we really want to keep the two together
 ;; and enforce the register dependency without scheduling or register
 ;; allocation messing up the order or introducing moves inbetween.
-;; Mash the two together during combine. 
+;; Mash the two together during combine.
(define_insn "*aarch32_crypto_aesd_fused" [(set (match_operand:V16QI 0 "register_operand" "=w") (unspec:V16QI - [(unspec:V16QI - [(xor:V16QI - (match_operand:V16QI 1 "register_operand" "%0") - (match_operand:V16QI 2 "register_operand" "w"))] - UNSPEC_AESD)] - UNSPEC_AESIMC))] + [(unspec:V16QI [(xor:V16QI + (match_operand:V16QI 1 "register_operand" "%0") + (match_operand:V16QI 2 "register_operand" "w"))] + UNSPEC_AESD)] + UNSPEC_AESIMC))] "TARGET_CRYPTO && arm_fusion_enabled_p (tune_params::FUSE_AES_AESMC)" "aesd.8\\t%q0, %q2\;aesimc.8\\t%q0, %q0" @@ -86,9 +84,9 @@ (define_insn "*aarch32_crypto_aesd_fused" (define_insn "crypto_" [(set (match_operand: 0 "register_operand" "=w") (unspec: - [(match_operand: 1 "register_operand" "0") - (match_operand: 2 "register_operand" "w")] - CRYPTO_BINARY))] + [(match_operand: 1 "register_operand" "0") + (match_operand: 2 "register_operand" "w")] + CRYPTO_BINARY))] "TARGET_CRYPTO" ".\\t%q0, %q2" [(set_attr "type" "")] @@ -96,18 +94,20 @@ (define_insn "crypto_" (define_insn "crypto_" [(set (match_operand: 0 "register_operand" "=w") - (unspec: [(match_operand: 1 "register_operand" "0") - (match_operand: 2 "register_operand" "w") - (match_operand: 3 "register_operand" "w")] - CRYPTO_TERNARY))] + (unspec: + [(match_operand: 1 "register_operand" "0") + (match_operand: 2 "register_operand" "w") + (match_operand: 3 "register_operand" "w")] + CRYPTO_TERNARY))] "TARGET_CRYPTO" ".\\t%q0, %q2, %q3" [(set_attr "type" "")] ) -/* The vec_select operation always selects index 0 from the lower V2SI subreg - of the V4SI, adjusted for endianness. Required due to neon_vget_lane and - neon_set_lane that change the element ordering in memory for big-endian. */ +;; The vec_select operation always selects index 0 from the lower V2SI +;; subreg of the V4SI, adjusted for endianness. Required due to +;; neon_vget_lane and neon_set_lane that change the element ordering +;; in memory for big-endian. 
(define_expand "crypto_sha1h" [(set (match_operand:V4SI 0 "register_operand") @@ -122,10 +122,10 @@ (define_expand "crypto_sha1h" (define_insn "crypto_sha1h_lb" [(set (match_operand:V4SI 0 "register_operand" "=w") (unspec:V4SI - [(vec_select:SI + [(vec_select:SI (match_operand:V4SI 1 "register_operand" "w") (parallel [(match_operand:SI 2 "immediate_operand" "i")]))] - UNSPEC_SHA1H))] + UNSPEC_SHA1H))] "TARGET_CRYPTO && INTVAL (operands[2]) == NEON_ENDIAN_LANE_N (V2SImode, 0)" "sha1h.32\\t%q0, %q1" [(set_attr "type" "crypto_sha1_fast")] @@ -133,9 +133,9 @@ (define_insn "crypto_sha1h_lb" (define_insn "crypto_vmullp64" [(set (match_operand:TI 0 "register_operand" "=w") - (unspec:TI [(match_operand:DI 1 "register_operand" "w") - (match_operand:DI 2 "register_operand" "w")] - UNSPEC_VMULLP64))] + (unspec:TI [(match_operand:DI 1 "register_operand" "w") + (match_operand:DI 2 "register_operand" "w")] + UNSPEC_VMULLP64))] "TARGET_CRYPTO" "vmull.p64\\t%q0, %P1, %P2" [(set_attr "type" "crypto_pmull")] @@ -148,10 +148,10 @@ (define_insn "crypto_vmullp64" (define_expand "crypto_" [(set (match_operand:V4SI 0 "register_operand") (unspec: - [(match_operand: 1 "register_operand") - (match_operand: 2 "register_operand") - (match_operand: 3 "register_operand")] - CRYPTO_SELECTING))] + [(match_operand: 1 "register_operand") + (match_operand: 2 "register_operand") + (match_operand: 3 "register_operand")] + CRYPTO_SELECTING))] "TARGET_CRYPTO" { rtx op4 = GEN_INT (NEON_ENDIAN_LANE_N (V2SImode, 0)); @@ -162,13 +162,13 @@ (define_expand "crypto_" (define_insn "crypto__lb" [(set (match_operand:V4SI 0 "register_operand" "=w") - (unspec: - [(match_operand: 1 "register_operand" "0") - (vec_select:SI - (match_operand: 2 "register_operand" "w") - (parallel [(match_operand:SI 4 "immediate_operand" "i")])) - (match_operand: 3 "register_operand" "w")] - CRYPTO_SELECTING))] + (unspec: + [(match_operand: 1 "register_operand" "0") + (vec_select:SI + (match_operand: 2 "register_operand" "w") + (parallel [(match_operand:SI 4 "immediate_operand" "i")])) + (match_operand: 3 "register_operand" "w")] + CRYPTO_SELECTING))] "TARGET_CRYPTO && INTVAL (operands[4]) == NEON_ENDIAN_LANE_N (V2SImode, 0)" ".\\t%q0, %q2, %q3" [(set_attr "type" "")]