From patchwork Thu Dec  7 02:15:13 2023
X-Patchwork-Id: 1873006
From: Feng Wang <wangfeng@eswincomputing.com>
To: gcc-patches@gcc.gnu.org
Cc: kito.cheng@gmail.com, jeffreyalaw@gmail.com, juzhe.zhong@rivai.ai,
	zhusonghe@eswincomputing.com, panciyan@eswincomputing.com,
	Feng Wang <wangfeng@eswincomputing.com>
Subject: [PATCH 3/4][v2] RISC-V: Add crypto machine descriptions
Date: Thu, 7 Dec 2023 02:15:13 +0000
Message-Id: <20231207021514.10248-3-wangfeng@eswincomputing.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20231207021514.10248-1-wangfeng@eswincomputing.com>
References: <20231207021514.10248-1-wangfeng@eswincomputing.com>

Patch v2: Add crypto vector insns into the RATIO attr and use vr as the
destination register.

This patch adds the crypto machine descriptions (vector-crypto.md) and some
new iterators which are used by the crypto vector extensions.

Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-authored-by: Ciyan Pan <panciyan@eswincomputing.com>

gcc/ChangeLog:

	* config/riscv/iterators.md: Add rotate insn names.
	* config/riscv/riscv.md: Add new insn names for crypto vector.
	* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
	* config/riscv/vector.md: Add the corresponding attrs for crypto vector.
	* config/riscv/vector-crypto.md: New file.  The machine descriptions
	for crypto vector.
---
 gcc/config/riscv/iterators.md        |   4 +-
 gcc/config/riscv/riscv.md            |  33 +-
 gcc/config/riscv/vector-crypto.md    | 500 +++++++++++++++++++++++++++
 gcc/config/riscv/vector-iterators.md |  41 +++
 gcc/config/riscv/vector.md           |  55 ++-
 5 files changed, 612 insertions(+), 21 deletions(-)
 create mode 100755 gcc/config/riscv/vector-crypto.md

diff --git a/gcc/config/riscv/iterators.md b/gcc/config/riscv/iterators.md
index ecf033f2fa7..f332fba7031 100644
--- a/gcc/config/riscv/iterators.md
+++ b/gcc/config/riscv/iterators.md
@@ -304,7 +304,9 @@
 	     (umax "maxu")
 	     (clz "clz")
 	     (ctz "ctz")
-	     (popcount "cpop")])
+	     (popcount "cpop")
+	     (rotate "rol")
+	     (rotatert "ror")])
 
 ;; -------------------------------------------------------------------
 ;; Int Iterators.
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 935eeb7fd8e..a887f3cd412 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -428,6 +428,34 @@
 ;; vcompress    vector compress instruction
 ;; vmov         whole vector register move
 ;; vector       unknown vector instruction
+;; 17. Crypto Vector instructions
+;; vandn        crypto vector bitwise and-not instructions
+;; vbrev        crypto vector reverse bits in elements instructions
+;; vbrev8       crypto vector reverse bits in bytes instructions
+;; vrev8        crypto vector reverse bytes instructions
+;; vclz         crypto vector count leading zeros instructions
+;; vctz         crypto vector count trailing zeros instructions
+;; vrol         crypto vector rotate left instructions
+;; vror         crypto vector rotate right instructions
+;; vwsll        crypto vector widening shift left logical instructions
+;; vclmul       crypto vector carry-less multiply - return low half instructions
+;; vclmulh      crypto vector carry-less multiply - return high half instructions
+;; vghsh        crypto vector add-multiply over GHASH Galois-Field instructions
+;; vgmul        crypto vector multiply over GHASH Galois-Field instructions
+;; vaesef       crypto vector AES final-round encryption instructions
+;; vaesem       crypto vector AES middle-round encryption instructions
+;; vaesdf       crypto vector AES final-round decryption instructions
+;; vaesdm       crypto vector AES middle-round decryption instructions
+;; vaeskf1      crypto vector AES-128 Forward KeySchedule generation instructions
+;; vaeskf2      crypto vector AES-256 Forward KeySchedule generation instructions
+;; vaesz        crypto vector AES round zero encryption/decryption instructions
+;; vsha2ms      crypto vector SHA-2 message schedule instructions
+;; vsha2ch      crypto vector SHA-2 two rounds of compression instructions
+;; vsha2cl      crypto vector SHA-2 two rounds of compression instructions
+;; vsm4k        crypto vector SM4 KeyExpansion instructions
+;; vsm4r        crypto vector SM4 Rounds instructions
+;; vsm3me       crypto vector SM3 Message Expansion instructions
+;; vsm3c        crypto vector SM3 Compression instructions
 (define_attr "type"
   "unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -447,7 +475,9 @@
    vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
    vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
    vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
-   vgather,vcompress,vmov,vector"
+   vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vcpop,vrol,vror,vwsll,
+   vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
+   vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
   (cond [(eq_attr "got" "load") (const_string "load")
 
 	 ;; If a doubleword move uses these expensive instructions,
@@ -3747,6 +3777,7 @@
 (include "thead.md")
 (include "generic-ooo.md")
 (include "vector.md")
+(include "vector-crypto.md")
 (include "zicond.md")
 (include "zc.md")
 (include "corev.md")
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
new file mode 100755
index 00000000000..5ad3e59a70f
--- /dev/null
+++ b/gcc/config/riscv/vector-crypto.md
@@ -0,0 +1,500 @@
+(define_c_enum "unspec" [
+    ;; Zvbb unspecs
+    UNSPEC_VBREV
+    UNSPEC_VBREV8
+    UNSPEC_VREV8
+    UNSPEC_VCLMUL
+    UNSPEC_VCLMULH
+    UNSPEC_VGHSH
+    UNSPEC_VGMUL
+    UNSPEC_VAESEF
+    UNSPEC_VAESEFVV
+    UNSPEC_VAESEFVS
+    UNSPEC_VAESEM
+    UNSPEC_VAESEMVV
+    UNSPEC_VAESEMVS
+    UNSPEC_VAESDF
+    UNSPEC_VAESDFVV
+    UNSPEC_VAESDFVS
+    UNSPEC_VAESDM
+    UNSPEC_VAESDMVV
+    UNSPEC_VAESDMVS
+    UNSPEC_VAESZ
+    UNSPEC_VAESZVVNULL
+    UNSPEC_VAESZVS
+    UNSPEC_VAESKF1
+    UNSPEC_VAESKF2
+    UNSPEC_VSHA2MS
+    UNSPEC_VSHA2CH
+    UNSPEC_VSHA2CL
+    UNSPEC_VSM4K
+    UNSPEC_VSM4R
+    UNSPEC_VSM4RVV
+    UNSPEC_VSM4RVS
+    UNSPEC_VSM3ME
+    UNSPEC_VSM3C
+])
+
+(define_int_attr rev  [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+
+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
+
+(define_int_attr vv_ins_name [(UNSPEC_VGMUL    "gmul" ) (UNSPEC_VAESEFVV "aesef")
+                              (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
+                              (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
+                              (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
+                              (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS  "aesz" )
+                              (UNSPEC_VSM4RVV  "sm4r" ) (UNSPEC_VSM4RVS  "sm4r" )])
+
+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh")     (UNSPEC_VSHA2MS "sha2ms")
+                               (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
+
+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
+
+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
+
+(define_int_attr ins_type [(UNSPEC_VGMUL    "vv") (UNSPEC_VAESEFVV "vv")
+                           (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
+                           (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
+                           (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
+                           (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS  "vs")
+                           (UNSPEC_VSM4RVV  "vv") (UNSPEC_VSM4RVS  "vs")])
+
+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+
+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
+
+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL    UNSPEC_VAESEFVV UNSPEC_VAESEMVV
+                                       UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
+                                       UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
+                                       UNSPEC_VAESZVS  UNSPEC_VSM4RVV  UNSPEC_VSM4RVS])
+
+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
+
+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
+
+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
+
+;; zvbb instructions patterns.
+;; vandn.vv vandn.vx vrol.vv vrol.vx
+;; vror.vv vror.vx vror.vi
+;; vwsll.vv vwsll.vx vwsll.vi
+
+(define_insn "@pred_vandn<mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vr,vr")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK,rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (and:VI
+            (match_operand:VI 3 "register_operand" "vr,vr")
+            (not:VI (match_operand:VI 4 "register_operand" "vr,vr")))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vandn<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"       "=vr,vr")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK,rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (and:VI
+            (match_operand:VI 3 "register_operand" "vr,vr")
+            (not:<VEL>
+              (match_operand:<VEL> 4 "register_operand" "r,r")))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vr,vr")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK,rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (bitmanip_rotate:VI
+            (match_operand:VI 3 "register_operand" "vr,vr")
+            (match_operand:VI 4 "register_operand" "vr,vr"))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<bitmanip_insn>.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "v<bitmanip_insn>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"        "=vr, vr")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK, rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (bitmanip_rotate:VI
+            (match_operand:VI 3 "register_operand" "vr, vr")
+            (match_operand 4 "pmode_register_operand" "r, r"))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<bitmanip_insn>.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "v<bitmanip_insn>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vror<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand"        "=vr, vr")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+             (match_operand 5 "vector_length_operand"    "rK, rK")
+             (match_operand 6 "const_int_operand"        "i, i")
+             (match_operand 7 "const_int_operand"        "i, i")
+             (match_operand 8 "const_int_operand"        "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (rotatert:VI
+            (match_operand:VI 3 "register_operand" "vr, vr")
+            (match_operand 4 "const_csr_operand" "K, K"))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vror.vi\t%0,%3,%4%p1"
+  [(set_attr "type" "vror")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vwsll<mode>"
+  [(set (match_operand:VWEXTI 0 "register_operand" "=&vr")
+        (if_then_else:VWEXTI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+             (match_operand 5 "vector_length_operand"    " rK")
+             (match_operand 6 "const_int_operand"        "  i")
+             (match_operand 7 "const_int_operand"        "  i")
+             (match_operand 8 "const_int_operand"        "  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (ashift:VWEXTI
+            (zero_extend:VWEXTI
+              (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+            (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "vr"))
+          (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_vwsll<mode>_scalar"
+  [(set (match_operand:VWEXTI 0 "register_operand" "=&vr")
+        (if_then_else:VWEXTI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+             (match_operand 5 "vector_length_operand"    " rK")
+             (match_operand 6 "const_int_operand"        "  i")
+             (match_operand 7 "const_int_operand"        "  i")
+             (match_operand 8 "const_int_operand"        "  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (ashift:VWEXTI
+            (zero_extend:VWEXTI
+              (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+            (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" "rK"))
+          (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; vbrev.v vbrev8.v vrev8.v
+
+(define_insn "@pred_v<rev><mode>"
+  [(set (match_operand:VI 0 "register_operand"       "=vr,vr")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 4 "vector_length_operand"    "rK,rK")
"const_int_operand" "i, i") + (match_operand 6 "const_int_operand" "i, i") + (match_operand 7 "const_int_operand" "i, i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) + (unspec:VI + [(match_operand:VI 3 "register_operand" "vr,vr")]UNSPEC_VRBB8) + (match_operand:VI 2 "vector_merge_operand" "vu, 0")))] + "TARGET_ZVBB || TARGET_ZVKB" + "v.v\t%0,%3%p1" + [(set_attr "type" "v") + (set_attr "mode" "")]) + +;; vclz.v vctz.v + +(define_insn "@pred_v" + [(set (match_operand:VI 0 "register_operand" "=vr") + (clz_ctz_pcnt:VI + (parallel + [(match_operand:VI 2 "register_operand" "vr") + (unspec: + [(match_operand: 1 "vector_mask_operand" "vmWc1") + (match_operand 3 "vector_length_operand" " rK") + (match_operand 4 "const_int_operand" " i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)])))] + "TARGET_ZVBB" + "v.v\t%0,%2%p1" + [(set_attr "type" "v") + (set_attr "mode" "")]) + +;; zvbc instructions patterns. +;; vclmul.vv vclmul.vx +;; vclmulh.vv vclmulh.vx + +(define_insn "@pred_vclmul" + [(set (match_operand:VDI 0 "register_operand" "=vr,vr") + (if_then_else:VDI + (unspec: + [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1") + (match_operand 5 "vector_length_operand" "rK,rK") + (match_operand 6 "const_int_operand" "i, i") + (match_operand 7 "const_int_operand" "i, i") + (match_operand 8 "const_int_operand" "i, i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) + (unspec:VDI + [(match_operand:VDI 3 "register_operand" "vr,vr") + (match_operand:VDI 4 "register_operand" "vr,vr")]UNSPEC_CLMUL) + (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))] + "TARGET_ZVBC && TARGET_64BIT" + "vclmul.vv\t%0,%3,%4%p1" + [(set_attr "type" "vclmul") + (set_attr "mode" "")]) + +(define_insn "@pred_vclmul_scalar" + [(set (match_operand:VDI 0 "register_operand" "=vr,vr") + (if_then_else:VDI + (unspec: + [(match_operand: 1 "vector_mask_operand" "vmWc1,vmWc1") + (match_operand 5 "vector_length_operand" "rK,rK") + (match_operand 6 "const_int_operand" "i, i") + (match_operand 7 "const_int_operand" "i, i") + (match_operand 8 "const_int_operand" "i, i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) + (unspec:VDI + [(match_operand:VDI 3 "register_operand" "vr,vr") + (match_operand: 4 "register_operand" "r,r")]UNSPEC_CLMUL) + (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))] + "TARGET_ZVBC && TARGET_64BIT" + "vclmul.vx\t%0,%3,%4%p1" + [(set_attr "type" "vclmul") + (set_attr "mode" "")]) + +;; zvknh[ab] and zvkg instructions patterns. +;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv + +(define_insn "@pred_v" + [(set (match_operand:VQEXTI 0 "register_operand" "=vr") + (if_then_else:VQEXTI + (unspec: + [(match_operand 4 "vector_length_operand" "rK") + (match_operand 5 "const_int_operand" " i") + (match_operand 6 "const_int_operand" " i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) + (unspec:VQEXTI + [(match_operand:VQEXTI 1 "register_operand" " 0") + (match_operand:VQEXTI 2 "register_operand" "vr") + (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB) + (match_dup 1)))] + "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG" + "v.vv\t%0,%2,%3" + [(set_attr "type" "v") + (set_attr "mode" "")]) + +;; zvkned instructions patterns. 
+;; vgmul.vv vaesz.vs
+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
+  [(set (match_operand:VSI 0 "register_operand" "=vr")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 1 "register_operand" " 0")
+             (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKG || TARGET_ZVKNED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand" "=&vr")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 1 "register_operand" " 0")
+             (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
+  [(set (match_operand:<VSIX2> 0 "register_operand" "=&vr")
+        (if_then_else:<VSIX2>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX2>
+            [(match_operand:<VSIX2> 1 "register_operand" " 0")
+             (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
+  [(set (match_operand:<VSIX4> 0 "register_operand" "=&vr")
+        (if_then_else:<VSIX4>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX4>
+            [(match_operand:<VSIX4> 1 "register_operand" " 0")
+             (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
+  [(set (match_operand:<VSIX8> 0 "register_operand" "=&vr")
+        (if_then_else:<VSIX8>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX8>
+            [(match_operand:<VSIX8> 1 "register_operand" " 0")
+             (match_operand:VLMULX8_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
+  [(set (match_operand:<VSIX16> 0 "register_operand" "=&vr")
+        (if_then_else:<VSIX16>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX16>
+            [(match_operand:<VSIX16> 1 "register_operand" " 0")
+             (match_operand:VLMULX16_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+;; vaeskf1.vi vsm4k.vi
+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand" "=vr, vr")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand" "rK, rK")
+             (match_operand 5 "const_int_operand" " i, i")
+             (match_operand 6 "const_int_operand" " i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 2 "register_operand" "vr, vr")
+             (match_operand:<VEL> 3 "const_int_operand" " i, i")] UNSPEC_CRYPTO_VI)
+          (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vi_ins_name>.vi\t%0,%2,%3"
+  [(set_attr "type" "v<vi_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+;; vaeskf2.vi vsm3c.vi
+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
+  [(set (match_operand:VSI 0 "register_operand" "=vr")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand" "rK")
+             (match_operand 5 "const_int_operand" " i")
+             (match_operand 6 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 1 "register_operand" "0")
+             (match_operand:VSI 2 "register_operand" "vr")
+             (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSH"
+  "v<vi_ins1_name>.vi\t%0,%2,%3"
+  [(set_attr "type" "v<vi_ins1_name>")
+   (set_attr "mode" "<MODE>")])
+
+;; zvksh instructions patterns.
+;; vsm3me.vv
+
+(define_insn "@pred_vsm3me<mode>"
+  [(set (match_operand:VSI 0 "register_operand" "=vr, vr")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand" "rK, rK")
+             (match_operand 5 "const_int_operand" " i, i")
+             (match_operand 6 "const_int_operand" " i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 2 "register_operand" "vr, vr")
+             (match_operand:VSI 3 "register_operand" "vr, vr")] UNSPEC_VSM3ME)
+          (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVKSH"
+  "vsm3me.vv\t%0,%2,%3"
+  [(set_attr "type" "vsm3me")
+   (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 56080ed1f5f..1b16b476035 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3916,3 +3916,44 @@
    (V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
    (V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
    (V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
+
+(define_mode_iterator VSI [
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX2_SI [
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX4_SI [
+  RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX8_SI [
+  RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX16_SI [
+  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_attr VSIX2 [
+  (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
+])
+
+(define_mode_attr VSIX4 [
+  (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
+])
+
+(define_mode_attr VSIX8 [
+  (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
+])
+
+(define_mode_attr VSIX16 [
+  (RVVMF2SI "RVVM8SI")
+])
+
+(define_mode_iterator VDI [
+  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index ba9c9e5a9b6..04630eba386 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -52,7 +52,9 @@
 	  vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
 	  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 	  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-	  vssegtux,vssegtox,vlsegdff")
+	  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+	  vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+	  vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -74,7 +76,9 @@
 	  vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
 	  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
 	  vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-	  vssegtux,vssegtox,vlsegdff")
+	  vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+	  vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+	  vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
 	 (const_string "true")]
 	(const_string "false")))
 
@@ -426,7 +430,11 @@
 		       viwred,vfredu,vfredo,vfwredu,vfwredo,vimovvx,\
 		       vimovxv,vfmovvf,vfmovfv,vslideup,vslidedown,\
 		       vislide1up,vislide1down,vfslide1up,vfslide1down,\
-		       vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
+		       vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox,\
+		       vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,\
+		       vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+		       vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,\
+		       vsm3me,vsm3c")
 	   (const_int INVALID_ATTRIBUTE)
 	 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
@@ -698,10 +706,12 @@
 			  vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
 			  vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-			  vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
+			  vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
+			  vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh")
 	   (const_int 2)
 
-	 (eq_attr "type" "vimerge,vfmerge,vcompress")
+	 (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+			  vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
 	   (const_int 1)
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
@@ -740,7 +750,8 @@
 			  vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
 			  vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
 			  vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
-			  vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
+			  vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
+			  vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
 	   (const_int 4)
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -755,13 +766,15 @@
 			  vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
 			  vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
 			  vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-			  vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+			  vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+			  vror,vwsll,vclmul,vclmulh")
 	   (const_int 5)
 
 	 (eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
 	   (const_int 6)
 
-	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
+	 (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+			  vaesz,vsm4r")
 	   (const_int 3)]
   (const_int INVALID_ATTRIBUTE)))
@@ -770,7 +783,8 @@
   (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
 			  vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
 			  vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
-			  vcompress,vldff,vlsegde,vlsegdff")
+			  vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
+			  vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
 	   (symbol_ref "riscv_vector::get_ta(operands[5])")
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -786,13 +800,13 @@
 			  vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
 			  vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
 			  vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-			  vlsegds,vlsegdux,vlsegdox")
+			  vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
 	   (symbol_ref "riscv_vector::get_ta(operands[6])")
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
 	   (symbol_ref "riscv_vector::get_ta(operands[7])")
 
-	 (eq_attr "type" "vmidx")
+	 (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
 	   (symbol_ref "riscv_vector::get_ta(operands[4])")]
 	(const_int INVALID_ATTRIBUTE)))
 
@@ -800,7 +814,7 @@
 (define_attr "ma" ""
   (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
 			  vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
-			  vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
+			  vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (symbol_ref "riscv_vector::get_ma(operands[6])")
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -815,7 +829,8 @@
 			  vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
 			  vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
-			  viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+			  viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+			  vror,vwsll,vclmul,vclmulh")
 	   (symbol_ref "riscv_vector::get_ma(operands[7])")
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
@@ -831,9 +846,10 @@
 			  vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
 			  vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
 			  vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
-			  vimovxv,vfmovfv,vlsegde,vlsegdff")
+			  vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
 	   (const_int 7)
-	 (eq_attr "type" "vldm,vstm,vmalu,vmalu")
+	 (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
+			  vsm4r")
 	   (const_int 5)
 
 	 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -848,18 +864,19 @@
 			  vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
 			  vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
 			  vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-			  vlsegds,vlsegdux,vlsegdox")
+			  vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
 	   (const_int 8)
-	 (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
+	 (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
 	   (const_int 5)
 
 	 (eq_attr "type" "vimuladd,vfmuladd")
 	   (const_int 9)
 
-	 (eq_attr "type" "vmsfs,vmidx,vcompress")
+	 (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
+			  vsm4k,vsm3me,vsm3c")
 	   (const_int 6)
-	 (eq_attr "type" "vmpop,vmffs,vssegte")
+	 (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
 	   (const_int 4)]
 	(const_int INVALID_ATTRIBUTE)))
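
For reviewers who want to exercise the new patterns: below is a small,
illustrative C snippet (not part of the patch) showing the kind of code these
descriptions are meant to back once the intrinsic plumbing from the rest of
this series is in place.  The intrinsic names follow the RVV vector-crypto
intrinsic convention (e.g. __riscv_vandn_vv_u32m1), which this patch itself
does not add; built with something like -march=rv64gcv_zvbb, the and-not
below should match @pred_vandn<mode> and emit a single vandn.vv.

/* Illustration only -- assumes the Zvbb intrinsics from elsewhere in this
   series.  Computes dst[i] = src[i] & ~mask[i] with one vandn.vv per
   stripmined iteration instead of vnot.v + vand.vv.  */
#include <stdint.h>
#include <stddef.h>
#include <riscv_vector.h>

void
clear_bits (uint32_t *dst, const uint32_t *src, const uint32_t *mask, size_t n)
{
  for (size_t vl; n > 0; n -= vl, src += vl, mask += vl, dst += vl)
    {
      vl = __riscv_vsetvl_e32m1 (n);
      vuint32m1_t a = __riscv_vle32_v_u32m1 (src, vl);
      vuint32m1_t b = __riscv_vle32_v_u32m1 (mask, vl);
      /* a & ~b: the bitwise and-not backed by @pred_vandn<mode>.  */
      vuint32m1_t c = __riscv_vandn_vv_u32m1 (a, b, vl);
      __riscv_vse32_v_u32m1 (dst, c, vl);
    }
}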