{"id":2237925,"url":"http://patchwork.ozlabs.org/api/1.1/patches/2237925/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.4@forge-stage.sourceware.org/","project":{"id":17,"url":"http://patchwork.ozlabs.org/api/1.1/projects/17/?format=json","name":"GNU Compiler Collection","link_name":"gcc","list_id":"gcc-patches.gcc.gnu.org","list_email":"gcc-patches@gcc.gnu.org","web_url":null,"scm_url":null,"webscm_url":null},"msgid":"<bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.4@forge-stage.sourceware.org>","date":"2026-05-13T16:04:14","name":"[v1,4/6] aarch64: Port NEON bit manipulation intrinsics to pragma-based framework","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"67277109a5d1e9a2b7a5d275659e5d68358a273e","submitter":{"id":92188,"url":"http://patchwork.ozlabs.org/api/1.1/people/92188/?format=json","name":"Karl Meakin via Sourceware Forge","email":"forge-bot+karmea01@forge-stage.sourceware.org"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.4@forge-stage.sourceware.org/mbox/","series":[{"id":504183,"url":"http://patchwork.ozlabs.org/api/1.1/series/504183/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/list/?series=504183","date":"2026-05-13T16:04:10","name":"aarch64: port NEON intrinsics to pragma-based framework","version":1,"mbox":"http://patchwork.ozlabs.org/series/504183/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2237925/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2237925/checks/","tags":{},"headers":{"Return-Path":"<gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org>","X-Original-To":["incoming@patchwork.ozlabs.org","gcc-patches@gcc.gnu.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","gcc-patches@gcc.gnu.org"],"Authentication-Results":["legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=gcc.gnu.org\n (client-ip=2620:52:6:3111::32; helo=vm01.sourceware.org;\n envelope-from=gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org;\n receiver=patchwork.ozlabs.org)","sourceware.org; dmarc=none (p=none dis=none)\n header.from=forge-stage.sourceware.org","sourceware.org;\n spf=pass smtp.mailfrom=forge-stage.sourceware.org","sourceware.org; arc=none smtp.remote-ip=38.145.34.39"],"Received":["from vm01.sourceware.org (vm01.sourceware.org\n [IPv6:2620:52:6:3111::32])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4gFyxv4ShTz1y5L\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 14 May 2026 02:06:27 +1000 (AEST)","from vm01.sourceware.org (localhost [IPv6:::1])\n\tby sourceware.org (Postfix) with ESMTP id C37BF4BBC0E8\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 13 May 2026 16:06:25 +0000 (GMT)","from forge-stage.sourceware.org (vm08.sourceware.org [38.145.34.39])\n by sourceware.org (Postfix) with ESMTPS id C16F24BBC0A7\n for <gcc-patches@gcc.gnu.org>; Wed, 13 May 2026 16:05:10 +0000 (GMT)","from forge-stage.sourceware.org (localhost [IPv6:::1])\n (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n key-exchange x25519 server-signature ECDSA (prime256v1) server-digest SHA256)\n (No client certificate requested)\n by forge-stage.sourceware.org (Postfix) with ESMTPS id 8716942D18;\n Wed, 13 May 2026 16:05:10 +0000 
(UTC)"],"DKIM-Filter":["OpenDKIM Filter v2.11.0 sourceware.org C37BF4BBC0E8","OpenDKIM Filter v2.11.0 sourceware.org C16F24BBC0A7"],"DMARC-Filter":"OpenDMARC Filter v1.4.2 sourceware.org C16F24BBC0A7","ARC-Filter":"OpenARC Filter v1.0.0 sourceware.org C16F24BBC0A7","ARC-Seal":"i=1; a=rsa-sha256; d=sourceware.org; s=key; t=1778688310; cv=none;\n b=hFd83KsbkZ4vKjOyOaNmKscq0xE0g3r17ZfeFRcT6SXwGDI7ydYKMvKZrv9rLNyg+GC7MQsGZvzc9jBtdNHlci6l2JSugbFqHzMdPA/5CyH4IWgRshB+fQMGXbwISB7yZH23HR2KxxviSH1DN5P1o78+jJZdfdZHF0CvwlIVWFg=","ARC-Message-Signature":"i=1; a=rsa-sha256; d=sourceware.org; s=key;\n t=1778688310; c=relaxed/simple;\n bh=erW/aWAYsD8cRniOhp1WG1Qz8jA6l0cZSYq15P0IVdg=;\n h=From:Date:Subject:To:Message-ID;\n b=wLfHiDyT89qB53HbV5fsN4bXyqGVnq5PB0pslwGMXxEkvWC4iI7OXpN8WTFYpom7nziSrM6ByXt12/jgteSNZ7pGxOqlubRfU1kaJ4otyna9xuoUppQMzvIiz7JDmuO+b9TSICVpbiNH2A/JK/hebGgaNLKHwuKODgdn87T3YLo=","ARC-Authentication-Results":"i=1; sourceware.org","From":"Karl Meakin via Sourceware Forge\n <forge-bot+karmea01@forge-stage.sourceware.org>","Date":"Wed, 13 May 2026 16:04:14 +0000","Subject":"[PATCH v1 4/6] aarch64: Port NEON bit manipulation intrinsics to\n pragma-based framework","To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>","Cc":"ktkachov@nvidia.com, richard.earnshaw@arm.com, tamar.christina@arm.com,\n karl.meakin@arm.com","Message-ID":"\n <bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.4@forge-stage.sourceware.org>","X-Mailer":"batrachomyomachia","X-Pull-Request-Organization":"gcc","X-Pull-Request-Repository":"gcc-TEST","X-Pull-Request":"https://forge.sourceware.org/gcc/gcc-TEST/pulls/158","References":"\n <bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.0@forge-stage.sourceware.org>","In-Reply-To":"\n <bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.0@forge-stage.sourceware.org>","X-Patch-URL":"\n https://forge.sourceware.org/karmea01/gcc-TEST/commit/46e80833e2921bf33863c2e66373294410d8019b","X-BeenThere":"gcc-patches@gcc.gnu.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Gcc-patches mailing list <gcc-patches.gcc.gnu.org>","List-Unsubscribe":"<https://gcc.gnu.org/mailman/options/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=unsubscribe>","List-Archive":"<https://gcc.gnu.org/pipermail/gcc-patches/>","List-Post":"<mailto:gcc-patches@gcc.gnu.org>","List-Help":"<mailto:gcc-patches-request@gcc.gnu.org?subject=help>","List-Subscribe":"<https://gcc.gnu.org/mailman/listinfo/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=subscribe>","Reply-To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>,\n ktkachov@nvidia.com, richard.earnshaw@arm.com, tamar.christina@arm.com,\n karl.meakin@arm.com, karmea01@sourceware.org","Errors-To":"gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org"},"content":"From: Karl Meakin <karl.meakin@arm.com>\n\nPort the following intrinsics to the pragma-based framework:\n* vand\n* vbcax\n* vbic\n* vbsl\n* vcls\n* vclz\n* vcnt\n* veor\n* veor3\n* vmvn\n* vorn\n* vorr\n* vrax1\n* vrbit\n* vxar\n\ngcc/ChangeLog:\n\n\t* config/aarch64/aarch64.md (UNSPEC_BSL): Delete unspec.\n\t* config/aarch64/aarch64-simd-pragma-builtins.def (vbsl_mf8, vbslq_mf8): Delete functions.\n\t* config/aarch64/aarch64-neon-builtins-base.cc (build_cast): New function.\n\t(class gimple_not_rhs, class gimple_bsl, class gimple_rbit, class gimple_eor3, class\n\tgimple_bcax, class gimple_rax1, class gimple_xar, class gimple_ifn): New classes.\n\t(vand, vandq, vbic, vbicq, vbsl, vbslq, veor, veorq, vmvn, vmvnq, vorn, vornq, vorr, vorrq,\n\tvrbit, vrbitq, vbcaxq, 
veor3q, vrax1q, vxarq, vcls, vclsq, vclz, vclzq, vcnt, vcntq): New\n\tfunction bases.\n\t* config/aarch64/aarch64-neon-builtins-shapes.cc (shift): New function.\n\t* config/aarch64/aarch64-builtins.cc (aarch64_types_bsl_p_qualifiers,\n\taarch64_types_bsl_s_qualifiers, aarch64_types_bsl_u_qualifiers): Delete unused qualifiers.\n\t* config/aarch64/aarch64-simd.md (@aarch64_rbit<mode><vczle><vczbe>): Add `@` modifier so\n\tthat it is callable from `aarch64-neon-builtins-base.cc`.\n\t* config/aarch64/aarch64-acle-builtins.h (TYPES_b_neon, TYPES_b_poly): New type lists.\n\t* config/aarch64/aarch64-neon-builtins-base.def (vand, vandq, vbic, vbicq, vbsl, vbslq,\n\tveor, veorq, vmvn, vmvnq, vorn, vornq, vorr, vorrq, vrbit, vrbitq, vbcaxq, veor3q, vrax1q,\n\tvxarq, vcls, vclsq, vclz, vclzq, vcnt, vcntq): New function groups.\n\t* config/aarch64/aarch64-simd-builtins.def (clrsb, clz, ctz, popcount, rbit, simd_bsl):\n\tDelete builtin functions.\n\t* config/aarch64/arm_neon.h (vbsl_f16, vbsl_f32, vbsl_f64, vbsl_p8, vbsl_p16, vbsl_p64,\n\tvbsl_s8, vbsl_s16, vbsl_s32, vbsl_s64, vbsl_u8, vbsl_u16, vbsl_u32, vbsl_u64, vbslq_f16,\n\tvbslq_f32, vbslq_f64, vbslq_p8, vbslq_p16, vbslq_s8, vbslq_s16, vbslq_p64, vbslq_s32,\n\tvbslq_s64, vbslq_u8, vbslq_u16, vbslq_u32, vbslq_u64, vcls_s8, vcls_s16, vcls_s32,\n\tvclsq_s8, vclsq_s16, vclsq_s32, vcls_u8, vcls_u16, vcls_u32, vclsq_u8, vclsq_u16,\n\tvclsq_u32, vclz_s8, vclz_s16, vclz_s32, vclz_u8, vclz_u16, vclz_u32, vclzq_s8, vclzq_s16,\n\tvclzq_s32, vclzq_u8, vclzq_u16, vclzq_u32, vcnt_p8, vcnt_s8, vcnt_u8, vcntq_p8, vcntq_s8,\n\tvcntq_u8, vrbit_p8, vrbit_s8, vrbit_u8, vrbitq_p8, vrbitq_s8, vrbitq_u8, veor3q_u8,\n\tveor3q_u16, veor3q_u32, veor3q_u64, veor3q_s8, veor3q_s16, veor3q_s32, veor3q_s64,\n\tvrax1q_u64, vxarq_u64, vbcaxq_u8, vbcaxq_u16, vbcaxq_u32, vbcaxq_u64, vbcaxq_s8,\n\tvbcaxq_s16, vbcaxq_s32, vbcaxq_s64): Delete functions.\n\ngcc/testsuite/ChangeLog:\n\n\t* gcc.target/aarch64/neon/vand.c: New test.\n\t* gcc.target/aarch64/neon/vbcax.c: New test.\n\t* gcc.target/aarch64/neon/vbic.c: New test.\n\t* gcc.target/aarch64/neon/vbsl.c: New test.\n\t* gcc.target/aarch64/neon/vcls.c: New test.\n\t* gcc.target/aarch64/neon/vclz.c: New test.\n\t* gcc.target/aarch64/neon/vcnt.c: New test.\n\t* gcc.target/aarch64/neon/veor.c: New test.\n\t* gcc.target/aarch64/neon/veor3.c: New test.\n\t* gcc.target/aarch64/neon/vmvn.c: New test.\n\t* gcc.target/aarch64/neon/vorn.c: New test.\n\t* gcc.target/aarch64/neon/vorr.c: New test.\n\t* gcc.target/aarch64/neon/vrax1.c: New test.\n\t* gcc.target/aarch64/neon/vrbit.c: New test.\n\t* gcc.target/aarch64/neon/vxar.c: New test.\n\t* gcc.target/aarch64/sme/inlining_10.c: Delete `call_vbsl` since the intrinsic is no longer\n\timplemented as an `always_inline` function.\n\t* gcc.target/aarch64/sme/inlining_11.c: Likewise.\n\t* gcc.target/aarch64/sha3_1.c, gcc.target/aarch64/sha3_2.c, gcc.target/aarch64/sha3_3.c: Add\n\t`-O1` flag to ensure expected optimized assembly is emitted.\n\t* gcc.target/aarch64/target_attr_10.c: Fix expected error message.\n---\n gcc/config/aarch64/aarch64-acle-builtins.h    |    9 +\n gcc/config/aarch64/aarch64-builtins.cc        |   20 -\n .../aarch64/aarch64-neon-builtins-base.cc     |  204 +++\n .../aarch64/aarch64-neon-builtins-base.def    |   40 +\n .../aarch64/aarch64-neon-builtins-shapes.cc   |    8 +\n gcc/config/aarch64/aarch64-simd-builtins.def  |   24 -\n .../aarch64/aarch64-simd-pragma-builtins.def  |    6 -\n gcc/config/aarch64/aarch64-simd.md            |   13 +-\n gcc/config/aarch64/aarch64.md    
             |    1 -\n gcc/config/aarch64/arm_neon.h                 | 1452 ++---------------\n gcc/testsuite/gcc.target/aarch64/neon/vand.c  |  116 ++\n gcc/testsuite/gcc.target/aarch64/neon/vbcax.c |   60 +\n gcc/testsuite/gcc.target/aarch64/neon/vbic.c  |  116 ++\n gcc/testsuite/gcc.target/aarch64/neon/vbsl.c  |  214 +++\n gcc/testsuite/gcc.target/aarch64/neon/vcls.c  |   88 +\n gcc/testsuite/gcc.target/aarch64/neon/vclz.c  |   88 +\n gcc/testsuite/gcc.target/aarch64/neon/vcnt.c  |   25 +\n gcc/testsuite/gcc.target/aarch64/neon/veor.c  |  116 ++\n gcc/testsuite/gcc.target/aarch64/neon/veor3.c |   60 +\n gcc/testsuite/gcc.target/aarch64/neon/vmvn.c  |  102 ++\n gcc/testsuite/gcc.target/aarch64/neon/vorn.c  |  116 ++\n gcc/testsuite/gcc.target/aarch64/neon/vorr.c  |  116 ++\n gcc/testsuite/gcc.target/aarch64/neon/vrax1.c |   11 +\n gcc/testsuite/gcc.target/aarch64/neon/vrbit.c |   46 +\n gcc/testsuite/gcc.target/aarch64/neon/vxar.c  |   25 +\n gcc/testsuite/gcc.target/aarch64/sha3_1.c     |    2 +-\n gcc/testsuite/gcc.target/aarch64/sha3_2.c     |    2 +-\n gcc/testsuite/gcc.target/aarch64/sha3_3.c     |    2 +-\n .../gcc.target/aarch64/sme/inlining_10.c      |    7 -\n .../gcc.target/aarch64/sme/inlining_11.c      |    7 -\n .../gcc.target/aarch64/target_attr_10.c       |    4 +-\n 31 files changed, 1673 insertions(+), 1427 deletions(-)\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vand.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vbcax.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vbic.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vbsl.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vcls.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vclz.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vcnt.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/veor.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/veor3.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vmvn.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vorn.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vorr.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vrax1.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vrbit.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vxar.c","diff":"diff --git a/gcc/config/aarch64/aarch64-acle-builtins.h b/gcc/config/aarch64/aarch64-acle-builtins.h\nindex 76b565adba5d..f570434810f5 100644\n--- a/gcc/config/aarch64/aarch64-acle-builtins.h\n+++ b/gcc/config/aarch64/aarch64-acle-builtins.h\n@@ -1784,6 +1784,14 @@ void build_all (function_builder &b, const char *signature,\n   TYPES_bhsd_neon (S, D, T), \\\n   TYPES_h_bfloat (S, D, T)\n \n+/* _p8 _s8 _u8.  */\n+#define TYPES_b_neon(S, D, T) \\\n+  S (p8), S (s8), S (u8)\n+\n+/* _p8.  */\n+#define TYPES_b_poly(S, D, T) \\\n+  S (p8)\n+\n /* _p8 _p16 _p64.  
*/\n #define TYPES_bhd_poly(S, D, T) \\\n   S (p8), S (p16), S (p64)\n@@ -1951,6 +1959,7 @@ DEF_SVE_TYPES_ARRAY (b_float);\n DEF_SVE_TYPES_ARRAY (all_neon);\n DEF_SVE_TYPES_ARRAY (b_neon);\n DEF_SVE_TYPES_ARRAY (h_neon);\n+DEF_SVE_TYPES_ARRAY (b_poly);\n DEF_SVE_TYPES_ARRAY (bhd_poly);\n DEF_SVE_TYPES_ARRAY (bhdq_poly);\n DEF_SVE_TYPES_ARRAY (bhsd_neon);\ndiff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc\nindex 3f74203c3c35..fdf5468f93af 100644\n--- a/gcc/config/aarch64/aarch64-builtins.cc\n+++ b/gcc/config/aarch64/aarch64-builtins.cc\n@@ -417,22 +417,6 @@ aarch64_types_loadstruct_lane_p_qualifiers[SIMD_MAX_BUILTIN_ARGS]\n       qualifier_poly, qualifier_struct_load_store_lane_index };\n #define TYPES_LOADSTRUCT_LANE_P (aarch64_types_loadstruct_lane_p_qualifiers)\n \n-static enum aarch64_type_qualifiers\n-aarch64_types_bsl_p_qualifiers[SIMD_MAX_BUILTIN_ARGS]\n-  = { qualifier_poly, qualifier_unsigned,\n-      qualifier_poly, qualifier_poly };\n-#define TYPES_BSL_P (aarch64_types_bsl_p_qualifiers)\n-static enum aarch64_type_qualifiers\n-aarch64_types_bsl_s_qualifiers[SIMD_MAX_BUILTIN_ARGS]\n-  = { qualifier_none, qualifier_unsigned,\n-      qualifier_none, qualifier_none };\n-#define TYPES_BSL_S (aarch64_types_bsl_s_qualifiers)\n-static enum aarch64_type_qualifiers\n-aarch64_types_bsl_u_qualifiers[SIMD_MAX_BUILTIN_ARGS]\n-  = { qualifier_unsigned, qualifier_unsigned,\n-      qualifier_unsigned, qualifier_unsigned };\n-#define TYPES_BSL_U (aarch64_types_bsl_u_qualifiers)\n-\n /* The first argument (return type) of a store should be void type,\n    which we represent with qualifier_void.  Their first operand will be\n    a DImode pointer to the location to store to, so we must use\n@@ -4091,10 +4075,6 @@ aarch64_expand_pragma_builtin (tree exp, rtx target,\n   insn_code icode;\n   switch (builtin_data.unspec)\n     {\n-    case UNSPEC_BSL:\n-      icode = code_for_aarch64_simd_bsl (ops[0].mode);\n-      break;\n-\n     case UNSPEC_DUP:\n       if (builtin_data.signature == aarch64_builtin_signatures::load)\n \taarch64_dereference_pointer (&ops[1], GET_MODE_INNER (ops[0].mode));\ndiff --git a/gcc/config/aarch64/aarch64-neon-builtins-base.cc b/gcc/config/aarch64/aarch64-neon-builtins-base.cc\nindex 6ae24be6ac81..63274487ef0f 100644\n--- a/gcc/config/aarch64/aarch64-neon-builtins-base.cc\n+++ b/gcc/config/aarch64/aarch64-neon-builtins-base.cc\n@@ -47,6 +47,15 @@\n \n using namespace aarch64_acle;\n \n+/* Build a cast expression, `(TYPE)EXPR`, if necessary to make an expression\n+   with type TYPE.  */\n+tree\n+build_cast (tree type, tree expr)\n+{\n+  return TREE_TYPE (expr) != type ? fold_build1 (VIEW_CONVERT_EXPR, type, expr)\n+\t\t\t\t  : expr;\n+}\n+\n /* Build a `VEC[INDEX]` expression.  */\n tree\n build_lane_get (tree vec, tree index)\n@@ -249,6 +258,169 @@ struct gimple_dup_lane : public gimple_function_base\n   }\n };\n \n+/* For intrinsics that map to a GIMPLE expression with a `BIT_NOT` applied to\n+   the second argument.  
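As an illustrative sketch (not code from this\n+   patch): with m_code == BIT_AND_EXPR, `vbic (a, b)` folds to the\n+   two-statement GIMPLE sequence\n+\n+     tmp1 = ~b;\n+     lhs = a & tmp1;\n+\n+   and vorn/vornq produce the same shape with BIT_IOR_EXPR.  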
*/\n+class gimple_not_rhs : public gimple_function_base\n+{\n+  tree_code m_code;\n+\n+public:\n+  constexpr gimple_not_rhs (tree_code code) : m_code (code) {}\n+\n+  gimple *fold (gimple_folder &f) const override\n+  {\n+    auto lhs = gimple_call_arg (f.call, 0);\n+    auto rhs = gimple_call_arg (f.call, 1);\n+    auto type = TREE_TYPE (lhs);\n+\n+    // tmp1 = ~rhs\n+    auto tmp1 = f.force_val (fold_build1 (BIT_NOT_EXPR, type, rhs));\n+    return gimple_build_assign (f.lhs, this->m_code, lhs, tmp1);\n+  }\n+};\n+\n+/* BSL (a, b, c) == (a & (b ^ c)) ^ c.  */\n+class gimple_bsl : public gimple_function_base\n+{\n+public:\n+  gimple *fold (gimple_folder &f) const override\n+  {\n+    auto a = gimple_call_arg (f.call, 0);\n+    auto b = gimple_call_arg (f.call, 1);\n+    auto c = gimple_call_arg (f.call, 2);\n+\n+    auto uint_type = TREE_TYPE (a);\n+    auto ret_type = TREE_TYPE (f.lhs);\n+\n+    b = f.force_val (build_cast (uint_type, b));\n+    c = f.force_val (build_cast (uint_type, c));\n+\n+    // tmp1 = b ^ c\n+    auto tmp1 = f.force_val (fold_build2 (BIT_XOR_EXPR, uint_type, b, c));\n+\n+    // tmp2 = a & (b ^ c)\n+    auto tmp2 = f.force_val (fold_build2 (BIT_AND_EXPR, uint_type, a, tmp1));\n+\n+    // tmp3 = (a & (b ^ c)) ^ c\n+    auto tmp3 = f.force_val (fold_build2 (BIT_XOR_EXPR, uint_type, tmp2, c));\n+\n+    return gimple_build_assign (f.lhs, build_cast (ret_type, tmp3));\n+  }\n+};\n+\n+/* FIXME: how to express this in GIMPLE?  */\n+class gimple_rbit : public gimple_function_base\n+{\n+  rtx expand (function_expander &e) const override\n+  {\n+    return e.use_exact_insn (code_for_aarch64_rbit (e.args[0]->mode));\n+  }\n+\n+  gimple *fold (gimple_folder &) const override { return nullptr; }\n+};\n+\n+/* EOR3 (a, b, c) = (a ^ b) ^ c.  */\n+class gimple_eor3 : public gimple_function_base\n+{\n+public:\n+  gimple *fold (gimple_folder &f) const override\n+  {\n+    auto a = gimple_call_arg (f.call, 0);\n+    auto b = gimple_call_arg (f.call, 1);\n+    auto c = gimple_call_arg (f.call, 2);\n+    auto type = TREE_TYPE (f.lhs);\n+\n+    // tmp1 = a ^ b\n+    auto tmp1 = f.force_val (fold_build2 (BIT_XOR_EXPR, type, a, b));\n+\n+    // lhs = (a ^ b) ^ c\n+    return gimple_build_assign (f.lhs, BIT_XOR_EXPR, tmp1, c);\n+  }\n+};\n+\n+/* BCAX (a, b, c) = a ^ (b & ~c).  */\n+class gimple_bcax : public gimple_function_base\n+{\n+public:\n+  gimple *fold (gimple_folder &f) const override\n+  {\n+    auto a = gimple_call_arg (f.call, 0);\n+    auto b = gimple_call_arg (f.call, 1);\n+    auto c = gimple_call_arg (f.call, 2);\n+    auto arg_type = TREE_TYPE (a);\n+\n+    // tmp1 = ~c\n+    auto tmp1 = f.force_val (fold_build1 (BIT_NOT_EXPR, arg_type, c));\n+\n+    // tmp2 = b & ~c\n+    auto tmp2 = f.force_val (fold_build2 (BIT_AND_EXPR, arg_type, b, tmp1));\n+\n+    // lhs = a ^ (b & ~c)\n+    return gimple_build_assign (f.lhs, BIT_XOR_EXPR, a, tmp2);\n+  }\n+};\n+\n+/* RAX1 (a, b) = rotl (a, 1) ^ b.  */\n+class gimple_rax1 : public gimple_function_base\n+{\n+public:\n+  gimple *fold (gimple_folder &f) const override\n+  {\n+    auto a = gimple_call_arg (f.call, 0);\n+    auto b = gimple_call_arg (f.call, 1);\n+    auto arg_type = TREE_TYPE (a);\n+\n+    // tmp1 = rotl (a, 1)\n+    auto tmp1 = f.force_val (\n+      fold_build2 (LROTATE_EXPR, arg_type, a, build_one_cst (arg_type)));\n+\n+    // lhs = rotl (a, 1) ^ b\n+    return gimple_build_assign (f.lhs, BIT_XOR_EXPR, tmp1, b);\n+  }\n+};\n+\n+/* XAR (a, b, c) = rotr (a ^ b, c).  
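Here c is a constant rotate amount; per\n+   64-bit lane the operation is equivalent to the scalar sketch below\n+   (illustrative, not code from this patch)\n+\n+     uint64_t x = a ^ b;\n+     uint64_t r = (x >> c) | (x << (64 - c));  // valid for 0 < c < 64\n+\n+   with c == 0 leaving x unrotated.  The shift<2> checker added to\n+   aarch64-neon-builtins-shapes.cc by this patch restricts the immediate\n+   to [0, element_bits - 1], i.e. [0, 63] for vxarq_u64.  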
*/\n+class gimple_xar : public gimple_function_base\n+{\n+public:\n+  gimple *fold (gimple_folder &f) const override\n+  {\n+    auto a = gimple_call_arg (f.call, 0);\n+    auto b = gimple_call_arg (f.call, 1);\n+    auto c = gimple_call_arg (f.call, 2);\n+    auto type = TREE_TYPE (f.lhs);\n+\n+    // tmp1 = a ^ b\n+    auto tmp1 = f.force_val (fold_build2 (BIT_XOR_EXPR, type, a, b));\n+\n+    // lhs = rotr (a ^ b, c)\n+    return gimple_build_assign (f.lhs, RROTATE_EXPR, tmp1, c);\n+  }\n+};\n+\n+/* For intrinsics that map to a single GIMPLE IFN with no argument\n+   preparation necessary.  */\n+class gimple_ifn : public gimple_function_base\n+{\n+  internal_fn m_ifn;\n+\n+public:\n+  constexpr gimple_ifn (internal_fn fn)\n+    : m_ifn (fn)\n+      {}\n+\n+  gimple *fold (gimple_folder &f) const override\n+  {\n+    auto_vec<tree> args;\n+    for (unsigned i = 0; i < gimple_call_num_args (f.call); i++)\n+      args.safe_push (gimple_call_arg (f.call, i));\n+\n+    auto call = gimple_build_call_internal_vec (this->m_ifn, args);\n+    gimple_call_set_lhs (call, f.lhs);\n+    return call;\n+  }\n+};\n+\n // Lane get/set\n NEON_FUNCTION (vcreate,      gimple_create,)\n NEON_FUNCTION (vcombine,     gimple_combine,)\n@@ -283,3 +455,35 @@ NEON_FUNCTION (vdupd_laneq,  gimple_get_lane,)\n NEON_FUNCTION (vaddd, gimple_expr, (PLUS_EXPR))\n NEON_FUNCTION (vadd,  gimple_expr, (PLUS_EXPR, PLUS_EXPR, BIT_XOR_EXPR))\n NEON_FUNCTION (vaddq, gimple_expr, (PLUS_EXPR, PLUS_EXPR, BIT_XOR_EXPR))\n+\n+// Bitwise operations\n+NEON_FUNCTION (vand,   gimple_expr,    (BIT_AND_EXPR))\n+NEON_FUNCTION (vandq,  gimple_expr,    (BIT_AND_EXPR))\n+NEON_FUNCTION (vbic,   gimple_not_rhs, (BIT_AND_EXPR))\n+NEON_FUNCTION (vbicq,  gimple_not_rhs, (BIT_AND_EXPR))\n+NEON_FUNCTION (vbsl,   gimple_bsl,)\n+NEON_FUNCTION (vbslq,  gimple_bsl,)\n+NEON_FUNCTION (veor,   gimple_expr,    (BIT_XOR_EXPR))\n+NEON_FUNCTION (veorq,  gimple_expr,    (BIT_XOR_EXPR))\n+NEON_FUNCTION (vmvn,   gimple_expr,    (BIT_NOT_EXPR))\n+NEON_FUNCTION (vmvnq,  gimple_expr,    (BIT_NOT_EXPR))\n+NEON_FUNCTION (vorn,   gimple_not_rhs, (BIT_IOR_EXPR))\n+NEON_FUNCTION (vornq,  gimple_not_rhs, (BIT_IOR_EXPR))\n+NEON_FUNCTION (vorr,   gimple_expr,    (BIT_IOR_EXPR))\n+NEON_FUNCTION (vorrq,  gimple_expr,    (BIT_IOR_EXPR))\n+NEON_FUNCTION (vrbit,  gimple_rbit,)\n+NEON_FUNCTION (vrbitq, gimple_rbit,)\n+\n+// Bitwise operations (SHA3)\n+NEON_FUNCTION (vbcaxq, gimple_bcax,)\n+NEON_FUNCTION (veor3q, gimple_eor3,)\n+NEON_FUNCTION (vrax1q, gimple_rax1,)\n+NEON_FUNCTION (vxarq,  gimple_xar,)\n+\n+// Bit counting operations\n+NEON_FUNCTION (vcls,  gimple_ifn, (IFN_CLRSB))\n+NEON_FUNCTION (vclsq, gimple_ifn, (IFN_CLRSB))\n+NEON_FUNCTION (vclz,  gimple_ifn, (IFN_CLZ))\n+NEON_FUNCTION (vclzq, gimple_ifn, (IFN_CLZ))\n+NEON_FUNCTION (vcnt,  gimple_ifn, (IFN_POPCOUNT))\n+NEON_FUNCTION (vcntq, gimple_ifn, (IFN_POPCOUNT))\ndiff --git a/gcc/config/aarch64/aarch64-neon-builtins-base.def b/gcc/config/aarch64/aarch64-neon-builtins-base.def\nindex 5f61d8f6634f..e963e506571c 100644\n--- a/gcc/config/aarch64/aarch64-neon-builtins-base.def\n+++ b/gcc/config/aarch64/aarch64-neon-builtins-base.def\n@@ -69,3 +69,43 @@ DEF_NEON_FUNCTION (vaddq, bhdq_poly,\t     (\"Q0,Q0,Q0\"))\n DEF_NEON_FUNCTION (vadd,  h_float, (\"D0,D0,D0\"))\n DEF_NEON_FUNCTION (vaddq, h_float, (\"Q0,Q0,Q0\"))\n #undef REQUIRED_EXTENSIONS\n+\n+// Bitwise operations\n+#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n+DEF_NEON_FUNCTION (vand,   all_integer, (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vandq,  
all_integer, (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vbic,   all_integer, (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vbicq,  all_integer, (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vbsl,   bhsd_neon,   (\"D0,Du0,D0,D0\"))\n+DEF_NEON_FUNCTION (vbslq,  bhsd_neon,   (\"Q0,Qu0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (veor,   all_integer, (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (veorq,  all_integer, (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vmvn,   b_poly,      (\"D0,D0\"))\n+DEF_NEON_FUNCTION (vmvnq,  b_poly,      (\"Q0,Q0\"))\n+DEF_NEON_FUNCTION (vmvn,   bhs_integer, (\"D0,D0\"))\n+DEF_NEON_FUNCTION (vmvnq,  bhs_integer, (\"Q0,Q0\"))\n+DEF_NEON_FUNCTION (vorn,   all_integer, (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vornq,  all_integer, (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vorr,   all_integer, (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vorrq,  all_integer, (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vrbit,  b_neon,      (\"D0,D0\"))\n+DEF_NEON_FUNCTION (vrbitq, b_neon,      (\"Q0,Q0\"))\n+#undef REQUIRED_EXTENSIONS\n+\n+// Bitwise operations (FEAT_SHA3)\n+#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SHA3)\n+DEF_NEON_FUNCTION (vbcaxq, all_integer, (\"Q0,Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (veor3q, all_integer, (\"Q0,Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vrax1q, d_unsigned,  (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vxarq,  d_unsigned,  (\"Q0,Q0,Q0,ss32\", shift<2>))\n+#undef REQUIRED_EXTENSIONS\n+\n+// Bit counting operations\n+#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n+DEF_NEON_FUNCTION (vcls,  bhs_integer, (\"Ds0,D0\"))\n+DEF_NEON_FUNCTION (vclsq, bhs_integer, (\"Qs0,Q0\"))\n+DEF_NEON_FUNCTION (vclz,  bhs_integer, (\"D0,D0\"))\n+DEF_NEON_FUNCTION (vclzq, bhs_integer, (\"Q0,Q0\"))\n+DEF_NEON_FUNCTION (vcnt,  b_neon,      (\"D0,D0\"))\n+DEF_NEON_FUNCTION (vcntq, b_neon,      (\"Q0,Q0\"))\n+#undef REQUIRED_EXTENSIONS\ndiff --git a/gcc/config/aarch64/aarch64-neon-builtins-shapes.cc b/gcc/config/aarch64/aarch64-neon-builtins-shapes.cc\nindex e88307eedf63..6059cee19a67 100644\n--- a/gcc/config/aarch64/aarch64-neon-builtins-shapes.cc\n+++ b/gcc/config/aarch64/aarch64-neon-builtins-shapes.cc\n@@ -73,6 +73,14 @@ lane (function_checker &c)\n   return c.require_immediate_range (PARAM_INDEX, 0, element_count - 1);\n }\n \n+/* Require that the parameter at PARAM_INDEX is a valid shift amount.  */\n+template <unsigned int PARAM_INDEX>\n+bool\n+shift (function_checker &c)\n+{\n+  auto bits = c.type_suffix (0).element_bits;\n+  return c.require_immediate_range (PARAM_INDEX, 0, bits - 1);\n+}\n \n /* A checker that always returns true.  */\n bool\ndiff --git a/gcc/config/aarch64/aarch64-simd-builtins.def b/gcc/config/aarch64/aarch64-simd-builtins.def\nindex b9600bdca30c..2d8c613ca5ef 100644\n--- a/gcc/config/aarch64/aarch64-simd-builtins.def\n+++ b/gcc/config/aarch64/aarch64-simd-builtins.def\n@@ -57,10 +57,6 @@\n   BUILTIN_VHSDF_HSDF (UNOP, sqrt, 2, FP)\n   BUILTIN_VDQ_I (BINOP, addp, 0, DEFAULT)\n   BUILTIN_VDQ_I (BINOPU, addp, 0, DEFAULT)\n-  BUILTIN_VDQ_BHSI (UNOP, clrsb, 2, DEFAULT)\n-  BUILTIN_VDQ_BHSI (UNOP, clz, 2, DEFAULT)\n-  BUILTIN_VS (UNOP, ctz, 2, DEFAULT)\n-  BUILTIN_VB (UNOP, popcount, 2, DEFAULT)\n \n   /* Implemented by aarch64_<sur>q<r>shl<mode>.  */\n   BUILTIN_VSDQ_I (BINOP, sqshl, 0, DEFAULT)\n@@ -648,10 +644,6 @@\n   VAR1 (UNOP, floatunsv4si, 2, FP, v4sf)\n   VAR1 (UNOP, floatunsv2di, 2, FP, v2df)\n \n-  VAR5 (UNOPU, bswap, 2, DEFAULT, v4hi, v8hi, v2si, v4si, v2di)\n-\n-  BUILTIN_VB (UNOP, rbit, 0, DEFAULT)\n-\n   /* Implemented by\n      aarch64_<PERMUTE:perm_insn><mode>.  
*/\n   BUILTIN_VALL (BINOP, zip1, 0, QUIET)\n@@ -713,12 +705,6 @@\n   BUILTIN_VDQSF (QUADOP_LANE, float_mla_laneq, 0, FP)\n   BUILTIN_VDQSF (QUADOP_LANE, float_mls_laneq, 0, FP)\n \n-  /* Implemented by aarch64_simd_bsl<mode>.  */\n-  BUILTIN_VDQQH (BSL_P, simd_bsl, 0, DEFAULT)\n-  VAR2 (BSL_P, simd_bsl,0, DEFAULT, di, v2di)\n-  BUILTIN_VSDQ_I_DI (BSL_U, simd_bsl, 0, DEFAULT)\n-  BUILTIN_VALLDIF (BSL_S, simd_bsl, 0, QUIET)\n-\n   /* Implemented by aarch64_crypto_aes<op><mode>.  */\n   VAR1 (BINOPU, crypto_aese, 0, DEFAULT, v16qi)\n   VAR1 (BINOPU, crypto_aesd, 0, DEFAULT, v16qi)\n@@ -881,16 +867,6 @@\n   VAR1 (BINOPU, crypto_sha512su0q, 0, DEFAULT, v2di)\n   /* Implemented by aarch64_crypto_sha512su1qv2di.  */\n   VAR1 (TERNOPU, crypto_sha512su1q, 0, DEFAULT, v2di)\n-  /* Implemented by eor3q<mode>4.  */\n-  BUILTIN_VQ_I (TERNOPU, eor3q, 4, DEFAULT)\n-  BUILTIN_VQ_I (TERNOP, eor3q, 4, DEFAULT)\n-  /* Implemented by aarch64_rax1qv2di.  */\n-  VAR1 (BINOPU, rax1q, 0, DEFAULT, v2di)\n-  /* Implemented by aarch64_xarqv2di.  */\n-  VAR1 (TERNOPUI, xarq, 0, DEFAULT, v2di)\n-  /* Implemented by bcaxq<mode>4.  */\n-  BUILTIN_VQ_I (TERNOPU, bcaxq, 4, DEFAULT)\n-  BUILTIN_VQ_I (TERNOP, bcaxq, 4, DEFAULT)\n \n   /* Implemented by aarch64_fml<f16mac1>l<f16quad>_low<mode>.  */\n   VAR1 (TERNOP, fmlal_low, 0, FP, v2sf)\ndiff --git a/gcc/config/aarch64/aarch64-simd-pragma-builtins.def b/gcc/config/aarch64/aarch64-simd-pragma-builtins.def\nindex e9e7e163def3..ebafcd618cd7 100644\n--- a/gcc/config/aarch64/aarch64-simd-pragma-builtins.def\n+++ b/gcc/config/aarch64/aarch64-simd-pragma-builtins.def\n@@ -196,12 +196,6 @@ ENTRY_FMA_FPM (vmlalltb, f32, UNSPEC_FMLALLTB_FP8)\n ENTRY_FMA_FPM (vmlalltt, f32, UNSPEC_FMLALLTT_FP8)\n #undef REQUIRED_EXTENSIONS\n \n-// bsl\n-#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n-ENTRY_TERNARY (vbsl_mf8, mf8, u8, mf8, mf8, UNSPEC_BSL, QUIET)\n-ENTRY_TERNARY (vbslq_mf8, mf8q, u8q, mf8q, mf8q, UNSPEC_BSL, QUIET)\n-#undef REQUIRED_EXTENSIONS\n-\n // ext\n #define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n ENTRY_BINARY_LANE (vext_mf8, mf8, mf8, mf8, UNSPEC_EXT, QUIET)\ndiff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md\nindex 2e142b1e1ee7..282b395abcc5 100644\n--- a/gcc/config/aarch64/aarch64-simd.md\n+++ b/gcc/config/aarch64/aarch64-simd.md\n@@ -400,7 +400,7 @@\n   [(set_attr \"type\" \"neon_rev<q>\")]\n )\n \n-(define_insn \"aarch64_rbit<mode><vczle><vczbe>\"\n+(define_insn \"@aarch64_rbit<mode><vczle><vczbe>\"\n   [(set (match_operand:VB 0 \"register_operand\" \"=w\")\n \t(bitreverse:VB (match_operand:VB 1 \"register_operand\" \"w\")))]\n   \"TARGET_SIMD\"\n@@ -9566,15 +9566,16 @@\n   [(set_attr \"type\" \"crypto_sha3\")]\n )\n \n+;; matches 'rotl (a, splat (1)) ^ b'\n (define_insn \"aarch64_rax1qv2di\"\n   [(set (match_operand:V2DI 0 \"register_operand\" \"=w\")\n \t(xor:V2DI\n \t (rotate:V2DI\n-\t  (match_operand:V2DI 2 \"register_operand\" \"w\")\n-\t  (const_int 1))\n-\t (match_operand:V2DI 1 \"register_operand\" \"w\")))]\n-  \"TARGET_SHA3\"\n-  \"rax1\\\\t%0.2d, %1.2d, %2.2d\"\n+\t  (match_operand:V2DI 1 \"register_operand\" \"w\")\n+\t  (match_operand:V2DI 2 \"aarch64_simd_lshift_imm\" \"Dl\"))\n+\t (match_operand:V2DI 3 \"register_operand\" \"w\")))]\n+  \"TARGET_SHA3 && INTVAL (unwrap_const_vec_duplicate (operands[2])) == 1\"\n+  \"rax1\\\\t%0.2d, %1.2d, %3.2d\"\n   [(set_attr \"type\" \"crypto_sha3\")]\n )\n \ndiff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md\nindex 
bbf8ec264841..ab71a49c839a 100644\n--- a/gcc/config/aarch64/aarch64.md\n+++ b/gcc/config/aarch64/aarch64.md\n@@ -227,7 +227,6 @@\n     UNSPEC_AUTIB1716\n     UNSPEC_AUTIASP\n     UNSPEC_AUTIBSP\n-    UNSPEC_BSL\n     UNSPEC_CALLEE_ABI\n     UNSPEC_CASESI\n     UNSPEC_CPYMEM\ndiff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h\nindex 2af9c54f1d8b..ec2383d870a6 100644\n--- a/gcc/config/aarch64/arm_neon.h\n+++ b/gcc/config/aarch64/arm_neon.h\n@@ -763,566 +763,6 @@ vmulq_p8 (poly8x16_t __a, poly8x16_t __b)\n   return __builtin_aarch64_pmulv16qi_ppp (__a, __b);\n }\n \n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vand_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vand_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vand_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vand_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vand_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vand_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline int64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vand_s64 (int64x1_t __a, int64x1_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vand_u64 (uint64x1_t __a, uint64x1_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vandq_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vandq_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vandq_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vandq_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vandq_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vandq_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vandq_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vandq_u64 (uint64x2_t 
__a, uint64x2_t __b)\n-{\n-  return __a & __b;\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorr_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorr_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorr_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorr_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorr_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorr_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline int64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorr_s64 (int64x1_t __a, int64x1_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorr_u64 (uint64x1_t __a, uint64x1_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorrq_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorrq_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorrq_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorrq_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorrq_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorrq_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorrq_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorrq_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-  return __a | __b;\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, 
__artificial__))\n-veor_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline int64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor_s64 (int64x1_t __a, int64x1_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor_u64 (uint64x1_t __a, uint64x1_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veorq_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veorq_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veorq_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veorq_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veorq_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veorq_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veorq_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veorq_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-  return __a ^ __b;\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbic_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbic_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbic_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbic_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbic_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ 
((__always_inline__, __gnu_inline__, __artificial__))\n-vbic_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline int64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbic_s64 (int64x1_t __a, int64x1_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbic_u64 (uint64x1_t __a, uint64x1_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbicq_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbicq_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbicq_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbicq_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbicq_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbicq_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbicq_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbicq_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-  return __a & ~__b;\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorn_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorn_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorn_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorn_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorn_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorn_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline int64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorn_s64 (int64x1_t __a, int64x1_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vorn_u64 (uint64x1_t __a, uint64x1_t __b)\n-{\n-  return __a | 
~__b;\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vornq_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vornq_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vornq_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vornq_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vornq_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vornq_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vornq_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vornq_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-  return __a | ~__b;\n-}\n-\n __extension__ extern __inline int8x8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n vsub_s8 (int8x8_t __a, int8x8_t __b)\n@@ -5843,338 +5283,137 @@ vabsd_s64 (int64_t __a)\n \n __extension__ extern __inline int64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddd_s64 (int64_t __a, int64_t __b)\n-{\n-  return __a + __b;\n-}\n-\n-__extension__ extern __inline uint64_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddd_u64 (uint64_t __a, uint64_t __b)\n-{\n-  return __a + __b;\n-}\n-\n-/* vaddv */\n-\n-__extension__ extern __inline int8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddv_s8 (int8x8_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v8qi (__a);\n-}\n-\n-__extension__ extern __inline int16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddv_s16 (int16x4_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v4hi (__a);\n-}\n-\n-__extension__ extern __inline int32_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddv_s32 (int32x2_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v2si (__a);\n-}\n-\n-__extension__ extern __inline uint8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddv_u8 (uint8x8_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v8qi_uu (__a);\n-}\n-\n-__extension__ extern __inline uint16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddv_u16 (uint16x4_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v4hi_uu (__a);\n-}\n-\n-__extension__ extern __inline uint32_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddv_u32 (uint32x2_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v2si_uu (__a);\n-}\n-\n-__extension__ extern __inline int8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_s8 (int8x16_t __a)\n-{\n-  return 
__builtin_aarch64_reduc_plus_scal_v16qi (__a);\n-}\n-\n-__extension__ extern __inline int16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_s16 (int16x8_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v8hi (__a);\n-}\n-\n-__extension__ extern __inline int32_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_s32 (int32x4_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v4si (__a);\n-}\n-\n-__extension__ extern __inline int64_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_s64 (int64x2_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v2di (__a);\n-}\n-\n-__extension__ extern __inline uint8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_u8 (uint8x16_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v16qi_uu (__a);\n-}\n-\n-__extension__ extern __inline uint16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_u16 (uint16x8_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v8hi_uu (__a);\n-}\n-\n-__extension__ extern __inline uint32_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_u32 (uint32x4_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v4si_uu (__a);\n-}\n-\n-__extension__ extern __inline uint64_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_u64 (uint64x2_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v2di_uu (__a);\n-}\n-\n-__extension__ extern __inline float32_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddv_f32 (float32x2_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v2sf (__a);\n-}\n-\n-__extension__ extern __inline float32_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_f32 (float32x4_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v4sf (__a);\n-}\n-\n-__extension__ extern __inline float64_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vaddvq_f64 (float64x2_t __a)\n-{\n-  return __builtin_aarch64_reduc_plus_scal_v2df (__a);\n-}\n-\n-/* vbsl  */\n-\n-__extension__ extern __inline float16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_f16 (uint16x4_t __a, float16x4_t __b, float16x4_t __c)\n-{\n-  return __builtin_aarch64_simd_bslv4hf_suss (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_f32 (uint32x2_t __a, float32x2_t __b, float32x2_t __c)\n-{\n-  return __builtin_aarch64_simd_bslv2sf_suss (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline float64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_f64 (uint64x1_t __a, float64x1_t __b, float64x1_t __c)\n-{\n-  return (float64x1_t)\n-    { __builtin_aarch64_simd_bsldf_suss (__a[0], __b[0], __c[0]) };\n-}\n-\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_p8 (uint8x8_t __a, poly8x8_t __b, poly8x8_t __c)\n-{\n-  return __builtin_aarch64_simd_bslv8qi_pupp (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline poly16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_p16 (uint16x4_t __a, poly16x4_t __b, poly16x4_t __c)\n-{\n-  return __builtin_aarch64_simd_bslv4hi_pupp (__a, __b, __c);\n-}\n-__extension__ extern __inline poly64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, 
__artificial__))\n-vbsl_p64 (uint64x1_t __a, poly64x1_t __b, poly64x1_t __c)\n-{\n-  return (poly64x1_t)\n-      {__builtin_aarch64_simd_bsldi_pupp (__a[0], __b[0], __c[0])};\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_s8 (uint8x8_t __a, int8x8_t __b, int8x8_t __c)\n-{\n-  return __builtin_aarch64_simd_bslv8qi_suss (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_s16 (uint16x4_t __a, int16x4_t __b, int16x4_t __c)\n-{\n-  return __builtin_aarch64_simd_bslv4hi_suss (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_s32 (uint32x2_t __a, int32x2_t __b, int32x2_t __c)\n-{\n-  return __builtin_aarch64_simd_bslv2si_suss (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_s64 (uint64x1_t __a, int64x1_t __b, int64x1_t __c)\n+vaddd_s64 (int64_t __a, int64_t __b)\n {\n-  return (int64x1_t)\n-      {__builtin_aarch64_simd_bsldi_suss (__a[0], __b[0], __c[0])};\n+  return __a + __b;\n }\n \n-__extension__ extern __inline uint8x8_t\n+__extension__ extern __inline uint64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_u8 (uint8x8_t __a, uint8x8_t __b, uint8x8_t __c)\n+vaddd_u64 (uint64_t __a, uint64_t __b)\n {\n-  return __builtin_aarch64_simd_bslv8qi_uuuu (__a, __b, __c);\n+  return __a + __b;\n }\n \n-__extension__ extern __inline uint16x4_t\n+/* vaddv */\n+\n+__extension__ extern __inline int8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_u16 (uint16x4_t __a, uint16x4_t __b, uint16x4_t __c)\n+vaddv_s8 (int8x8_t __a)\n {\n-  return __builtin_aarch64_simd_bslv4hi_uuuu (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v8qi (__a);\n }\n \n-__extension__ extern __inline uint32x2_t\n+__extension__ extern __inline int16_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_u32 (uint32x2_t __a, uint32x2_t __b, uint32x2_t __c)\n+vaddv_s16 (int16x4_t __a)\n {\n-  return __builtin_aarch64_simd_bslv2si_uuuu (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v4hi (__a);\n }\n \n-__extension__ extern __inline uint64x1_t\n+__extension__ extern __inline int32_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbsl_u64 (uint64x1_t __a, uint64x1_t __b, uint64x1_t __c)\n+vaddv_s32 (int32x2_t __a)\n {\n-  return (uint64x1_t)\n-      {__builtin_aarch64_simd_bsldi_uuuu (__a[0], __b[0], __c[0])};\n+  return __builtin_aarch64_reduc_plus_scal_v2si (__a);\n }\n \n-__extension__ extern __inline float16x8_t\n+__extension__ extern __inline uint8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_f16 (uint16x8_t __a, float16x8_t __b, float16x8_t __c)\n+vaddv_u8 (uint8x8_t __a)\n {\n-  return __builtin_aarch64_simd_bslv8hf_suss (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v8qi_uu (__a);\n }\n \n-__extension__ extern __inline float32x4_t\n+__extension__ extern __inline uint16_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_f32 (uint32x4_t __a, float32x4_t __b, float32x4_t __c)\n+vaddv_u16 (uint16x4_t __a)\n {\n-  return __builtin_aarch64_simd_bslv4sf_suss (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v4hi_uu (__a);\n }\n \n-__extension__ extern __inline 
float64x2_t\n+__extension__ extern __inline uint32_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_f64 (uint64x2_t __a, float64x2_t __b, float64x2_t __c)\n+vaddv_u32 (uint32x2_t __a)\n {\n-  return __builtin_aarch64_simd_bslv2df_suss (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v2si_uu (__a);\n }\n \n-__extension__ extern __inline poly8x16_t\n+__extension__ extern __inline int8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_p8 (uint8x16_t __a, poly8x16_t __b, poly8x16_t __c)\n+vaddvq_s8 (int8x16_t __a)\n {\n-  return __builtin_aarch64_simd_bslv16qi_pupp (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v16qi (__a);\n }\n \n-__extension__ extern __inline poly16x8_t\n+__extension__ extern __inline int16_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_p16 (uint16x8_t __a, poly16x8_t __b, poly16x8_t __c)\n+vaddvq_s16 (int16x8_t __a)\n {\n-  return __builtin_aarch64_simd_bslv8hi_pupp (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v8hi (__a);\n }\n \n-__extension__ extern __inline int8x16_t\n+__extension__ extern __inline int32_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_s8 (uint8x16_t __a, int8x16_t __b, int8x16_t __c)\n+vaddvq_s32 (int32x4_t __a)\n {\n-  return __builtin_aarch64_simd_bslv16qi_suss (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v4si (__a);\n }\n \n-__extension__ extern __inline int16x8_t\n+__extension__ extern __inline int64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_s16 (uint16x8_t __a, int16x8_t __b, int16x8_t __c)\n+vaddvq_s64 (int64x2_t __a)\n {\n-  return __builtin_aarch64_simd_bslv8hi_suss (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v2di (__a);\n }\n \n-__extension__ extern __inline poly64x2_t\n+__extension__ extern __inline uint8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_p64 (uint64x2_t __a, poly64x2_t __b, poly64x2_t __c)\n+vaddvq_u8 (uint8x16_t __a)\n {\n-  return __builtin_aarch64_simd_bslv2di_pupp (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v16qi_uu (__a);\n }\n \n-__extension__ extern __inline int32x4_t\n+__extension__ extern __inline uint16_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_s32 (uint32x4_t __a, int32x4_t __b, int32x4_t __c)\n+vaddvq_u16 (uint16x8_t __a)\n {\n-  return __builtin_aarch64_simd_bslv4si_suss (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v8hi_uu (__a);\n }\n \n-__extension__ extern __inline int64x2_t\n+__extension__ extern __inline uint32_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_s64 (uint64x2_t __a, int64x2_t __b, int64x2_t __c)\n+vaddvq_u32 (uint32x4_t __a)\n {\n-  return __builtin_aarch64_simd_bslv2di_suss (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v4si_uu (__a);\n }\n \n-__extension__ extern __inline uint8x16_t\n+__extension__ extern __inline uint64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_u8 (uint8x16_t __a, uint8x16_t __b, uint8x16_t __c)\n+vaddvq_u64 (uint64x2_t __a)\n {\n-  return __builtin_aarch64_simd_bslv16qi_uuuu (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v2di_uu (__a);\n }\n \n-__extension__ extern __inline uint16x8_t\n+__extension__ extern __inline float32_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_u16 (uint16x8_t __a, uint16x8_t 
__b, uint16x8_t __c)\n+vaddv_f32 (float32x2_t __a)\n {\n-  return __builtin_aarch64_simd_bslv8hi_uuuu (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v2sf (__a);\n }\n \n-__extension__ extern __inline uint32x4_t\n+__extension__ extern __inline float32_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_u32 (uint32x4_t __a, uint32x4_t __b, uint32x4_t __c)\n+vaddvq_f32 (float32x4_t __a)\n {\n-  return __builtin_aarch64_simd_bslv4si_uuuu (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v4sf (__a);\n }\n \n-__extension__ extern __inline uint64x2_t\n+__extension__ extern __inline float64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbslq_u64 (uint64x2_t __a, uint64x2_t __b, uint64x2_t __c)\n+vaddvq_f64 (float64x2_t __a)\n {\n-  return __builtin_aarch64_simd_bslv2di_uuuu (__a, __b, __c);\n+  return __builtin_aarch64_reduc_plus_scal_v2df (__a);\n }\n \n /* ARMv8.1-A intrinsics.  */\n@@ -8069,334 +7308,118 @@ vcltd_u64 (uint64_t __a, uint64_t __b)\n \n __extension__ extern __inline uint64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltd_f64 (float64_t __a, float64_t __b)\n-{\n-  return __a < __b ? -1ll : 0ll;\n-}\n-\n-/* vcltz - vector.  */\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltz_f32 (float32x2_t __a)\n-{\n-  return (uint32x2_t) (__a < 0.0f);\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltz_f64 (float64x1_t __a)\n-{\n-  return (uint64x1_t) (__a < (float64x1_t) {0.0});\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltz_s8 (int8x8_t __a)\n-{\n-  return (uint8x8_t) (__a < 0);\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltz_s16 (int16x4_t __a)\n-{\n-  return (uint16x4_t) (__a < 0);\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltz_s32 (int32x2_t __a)\n-{\n-  return (uint32x2_t) (__a < 0);\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltz_s64 (int64x1_t __a)\n-{\n-  return (uint64x1_t) (__a < __AARCH64_INT64_C (0));\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltzq_f32 (float32x4_t __a)\n-{\n-  return (uint32x4_t) (__a < 0.0f);\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltzq_f64 (float64x2_t __a)\n-{\n-  return (uint64x2_t) (__a < 0.0);\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltzq_s8 (int8x16_t __a)\n-{\n-  return (uint8x16_t) (__a < 0);\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltzq_s16 (int16x8_t __a)\n-{\n-  return (uint16x8_t) (__a < 0);\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltzq_s32 (int32x4_t __a)\n-{\n-  return (uint32x4_t) (__a < 0);\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltzq_s64 (int64x2_t __a)\n-{\n-  return (uint64x2_t) 
(__a < __AARCH64_INT64_C (0));\n-}\n-\n-/* vcltz - scalar.  */\n-\n-__extension__ extern __inline uint32_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltzs_f32 (float32_t __a)\n-{\n-  return __a < 0.0f ? -1 : 0;\n-}\n-\n-__extension__ extern __inline uint64_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltzd_s64 (int64_t __a)\n-{\n-  return __a < 0 ? -1ll : 0ll;\n-}\n-\n-__extension__ extern __inline uint64_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcltzd_f64 (float64_t __a)\n-{\n-  return __a < 0.0 ? -1ll : 0ll;\n-}\n-\n-/* vcls.  */\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcls_s8 (int8x8_t __a)\n-{\n-  return __builtin_aarch64_clrsbv8qi (__a);\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcls_s16 (int16x4_t __a)\n-{\n-  return __builtin_aarch64_clrsbv4hi (__a);\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcls_s32 (int32x2_t __a)\n-{\n-  return __builtin_aarch64_clrsbv2si (__a);\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclsq_s8 (int8x16_t __a)\n-{\n-  return __builtin_aarch64_clrsbv16qi (__a);\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclsq_s16 (int16x8_t __a)\n-{\n-  return __builtin_aarch64_clrsbv8hi (__a);\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclsq_s32 (int32x4_t __a)\n-{\n-  return __builtin_aarch64_clrsbv4si (__a);\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcls_u8 (uint8x8_t __a)\n-{\n-  return __builtin_aarch64_clrsbv8qi ((int8x8_t) __a);\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcls_u16 (uint16x4_t __a)\n-{\n-  return __builtin_aarch64_clrsbv4hi ((int16x4_t) __a);\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcls_u32 (uint32x2_t __a)\n-{\n-  return __builtin_aarch64_clrsbv2si ((int32x2_t) __a);\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclsq_u8 (uint8x16_t __a)\n-{\n-  return __builtin_aarch64_clrsbv16qi ((int8x16_t) __a);\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclsq_u16 (uint16x8_t __a)\n-{\n-  return __builtin_aarch64_clrsbv8hi ((int16x8_t) __a);\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclsq_u32 (uint32x4_t __a)\n-{\n-  return __builtin_aarch64_clrsbv4si ((int32x4_t) __a);\n-}\n-\n-/* vclz.  */\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclz_s8 (int8x8_t __a)\n+vcltd_f64 (float64_t __a, float64_t __b)\n {\n-  return __builtin_aarch64_clzv8qi (__a);\n+  return __a < __b ? -1ll : 0ll;\n }\n \n-__extension__ extern __inline int16x4_t\n+/* vcltz - vector.  
*/\n+\n+__extension__ extern __inline uint32x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclz_s16 (int16x4_t __a)\n+vcltz_f32 (float32x2_t __a)\n {\n-  return __builtin_aarch64_clzv4hi (__a);\n+  return (uint32x2_t) (__a < 0.0f);\n }\n \n-__extension__ extern __inline int32x2_t\n+__extension__ extern __inline uint64x1_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclz_s32 (int32x2_t __a)\n+vcltz_f64 (float64x1_t __a)\n {\n-  return __builtin_aarch64_clzv2si (__a);\n+  return (uint64x1_t) (__a < (float64x1_t) {0.0});\n }\n \n __extension__ extern __inline uint8x8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclz_u8 (uint8x8_t __a)\n+vcltz_s8 (int8x8_t __a)\n {\n-  return (uint8x8_t)__builtin_aarch64_clzv8qi ((int8x8_t)__a);\n+  return (uint8x8_t) (__a < 0);\n }\n \n __extension__ extern __inline uint16x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclz_u16 (uint16x4_t __a)\n+vcltz_s16 (int16x4_t __a)\n {\n-  return (uint16x4_t)__builtin_aarch64_clzv4hi ((int16x4_t)__a);\n+  return (uint16x4_t) (__a < 0);\n }\n \n __extension__ extern __inline uint32x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclz_u32 (uint32x2_t __a)\n+vcltz_s32 (int32x2_t __a)\n {\n-  return (uint32x2_t)__builtin_aarch64_clzv2si ((int32x2_t)__a);\n+  return (uint32x2_t) (__a < 0);\n }\n \n-__extension__ extern __inline int8x16_t\n+__extension__ extern __inline uint64x1_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclzq_s8 (int8x16_t __a)\n+vcltz_s64 (int64x1_t __a)\n {\n-  return __builtin_aarch64_clzv16qi (__a);\n+  return (uint64x1_t) (__a < __AARCH64_INT64_C (0));\n }\n \n-__extension__ extern __inline int16x8_t\n+__extension__ extern __inline uint32x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclzq_s16 (int16x8_t __a)\n+vcltzq_f32 (float32x4_t __a)\n {\n-  return __builtin_aarch64_clzv8hi (__a);\n+  return (uint32x4_t) (__a < 0.0f);\n }\n \n-__extension__ extern __inline int32x4_t\n+__extension__ extern __inline uint64x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclzq_s32 (int32x4_t __a)\n+vcltzq_f64 (float64x2_t __a)\n {\n-  return __builtin_aarch64_clzv4si (__a);\n+  return (uint64x2_t) (__a < 0.0);\n }\n \n __extension__ extern __inline uint8x16_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclzq_u8 (uint8x16_t __a)\n+vcltzq_s8 (int8x16_t __a)\n {\n-  return (uint8x16_t)__builtin_aarch64_clzv16qi ((int8x16_t)__a);\n+  return (uint8x16_t) (__a < 0);\n }\n \n __extension__ extern __inline uint16x8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclzq_u16 (uint16x8_t __a)\n+vcltzq_s16 (int16x8_t __a)\n {\n-  return (uint16x8_t)__builtin_aarch64_clzv8hi ((int16x8_t)__a);\n+  return (uint16x8_t) (__a < 0);\n }\n \n __extension__ extern __inline uint32x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vclzq_u32 (uint32x4_t __a)\n-{\n-  return (uint32x4_t)__builtin_aarch64_clzv4si ((int32x4_t)__a);\n-}\n-\n-/* vcnt.  
*/\n-\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcnt_p8 (poly8x8_t __a)\n+vcltzq_s32 (int32x4_t __a)\n {\n-  return (poly8x8_t) __builtin_aarch64_popcountv8qi ((int8x8_t) __a);\n+  return (uint32x4_t) (__a < 0);\n }\n \n-__extension__ extern __inline int8x8_t\n+__extension__ extern __inline uint64x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcnt_s8 (int8x8_t __a)\n+vcltzq_s64 (int64x2_t __a)\n {\n-  return __builtin_aarch64_popcountv8qi (__a);\n+  return (uint64x2_t) (__a < __AARCH64_INT64_C (0));\n }\n \n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcnt_u8 (uint8x8_t __a)\n-{\n-  return (uint8x8_t) __builtin_aarch64_popcountv8qi ((int8x8_t) __a);\n-}\n+/* vcltz - scalar.  */\n \n-__extension__ extern __inline poly8x16_t\n+__extension__ extern __inline uint32_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcntq_p8 (poly8x16_t __a)\n+vcltzs_f32 (float32_t __a)\n {\n-  return (poly8x16_t) __builtin_aarch64_popcountv16qi ((int8x16_t) __a);\n+  return __a < 0.0f ? -1 : 0;\n }\n \n-__extension__ extern __inline int8x16_t\n+__extension__ extern __inline uint64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcntq_s8 (int8x16_t __a)\n+vcltzd_s64 (int64_t __a)\n {\n-  return __builtin_aarch64_popcountv16qi (__a);\n+  return __a < 0 ? -1ll : 0ll;\n }\n \n-__extension__ extern __inline uint8x16_t\n+__extension__ extern __inline uint64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vcntq_u8 (uint8x16_t __a)\n+vcltzd_f64 (float64_t __a)\n {\n-  return (uint8x16_t) __builtin_aarch64_popcountv16qi ((int8x16_t) __a);\n+  return __a < 0.0 ? -1ll : 0ll;\n }\n \n /* vcvt (double -> float).  
*/\n@@ -14902,106 +13925,6 @@ vmulq_n_u32 (uint32x4_t __a, uint32_t __b)\n   return __a * __b;\n }\n \n-/* vmvn  */\n-\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvn_p8 (poly8x8_t __a)\n-{\n-  return (poly8x8_t) ~((int8x8_t) __a);\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvn_s8 (int8x8_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvn_s16 (int16x4_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvn_s32 (int32x2_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvn_u8 (uint8x8_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvn_u16 (uint16x4_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvn_u32 (uint32x2_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvnq_p8 (poly8x16_t __a)\n-{\n-  return (poly8x16_t) ~((int8x16_t) __a);\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvnq_s8 (int8x16_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvnq_s16 (int16x8_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvnq_s32 (int32x4_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvnq_u8 (uint8x16_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvnq_u16 (uint16x8_t __a)\n-{\n-  return ~__a;\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vmvnq_u32 (uint32x4_t __a)\n-{\n-  return ~__a;\n-}\n-\n /* vneg  */\n \n __extension__ extern __inline float32x2_t\n@@ -17258,50 +16181,6 @@ vqtbx4q_p8 (poly8x16_t __r, poly8x16x4_t __tab, uint8x16_t __idx)\n   return __builtin_aarch64_qtbx4v16qi_pppu (__r, __tab, __idx);\n }\n \n-/* vrbit  */\n-\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrbit_p8 (poly8x8_t __a)\n-{\n-  return (poly8x8_t) __builtin_aarch64_rbitv8qi ((int8x8_t) __a);\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrbit_s8 (int8x8_t __a)\n-{\n-  return __builtin_aarch64_rbitv8qi (__a);\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrbit_u8 (uint8x8_t __a)\n-{\n-  return (uint8x8_t) __builtin_aarch64_rbitv8qi ((int8x8_t) __a);\n-}\n-\n-__extension__ extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrbitq_p8 (poly8x16_t __a)\n-{\n-  return 
(poly8x16_t) __builtin_aarch64_rbitv16qi ((int8x16_t)__a);\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrbitq_s8 (int8x16_t __a)\n-{\n-  return __builtin_aarch64_rbitv16qi (__a);\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrbitq_u8 (uint8x16_t __a)\n-{\n-  return (uint8x16_t) __builtin_aarch64_rbitv16qi ((int8x16_t) __a);\n-}\n-\n /* vrecpe  */\n \n __extension__ extern __inline uint32x2_t\n@@ -24529,133 +23408,6 @@ vsha512su1q_u64 (uint64x2_t __a, uint64x2_t __b, uint64x2_t __c)\n   return __builtin_aarch64_crypto_sha512su1qv2di_uuuu (__a, __b, __c);\n }\n \n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor3q_u8 (uint8x16_t __a, uint8x16_t __b, uint8x16_t __c)\n-{\n-  return __builtin_aarch64_eor3qv16qi_uuuu (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor3q_u16 (uint16x8_t __a, uint16x8_t __b, uint16x8_t __c)\n-{\n-  return __builtin_aarch64_eor3qv8hi_uuuu (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor3q_u32 (uint32x4_t __a, uint32x4_t __b, uint32x4_t __c)\n-{\n-  return __builtin_aarch64_eor3qv4si_uuuu (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor3q_u64 (uint64x2_t __a, uint64x2_t __b, uint64x2_t __c)\n-{\n-  return __builtin_aarch64_eor3qv2di_uuuu (__a, __b, __c);\n-}\n-\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor3q_s8 (int8x16_t __a, int8x16_t __b, int8x16_t __c)\n-{\n-  return __builtin_aarch64_eor3qv16qi (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor3q_s16 (int16x8_t __a, int16x8_t __b, int16x8_t __c)\n-{\n-  return __builtin_aarch64_eor3qv8hi (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor3q_s32 (int32x4_t __a, int32x4_t __b, int32x4_t __c)\n-{\n-  return __builtin_aarch64_eor3qv4si (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-veor3q_s64 (int64x2_t __a, int64x2_t __b, int64x2_t __c)\n-{\n-  return __builtin_aarch64_eor3qv2di (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrax1q_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-  return __builtin_aarch64_rax1qv2di_uuu (__a, __b);\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vxarq_u64 (uint64x2_t __a, uint64x2_t __b, const int __imm6)\n-{\n-  return __builtin_aarch64_xarqv2di_uuus (__a, __b, __imm6);\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbcaxq_u8 (uint8x16_t __a, uint8x16_t __b, uint8x16_t __c)\n-{\n-  return __builtin_aarch64_bcaxqv16qi_uuuu (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbcaxq_u16 (uint16x8_t __a, uint16x8_t __b, 
uint16x8_t __c)\n-{\n-  return __builtin_aarch64_bcaxqv8hi_uuuu (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbcaxq_u32 (uint32x4_t __a, uint32x4_t __b, uint32x4_t __c)\n-{\n-  return __builtin_aarch64_bcaxqv4si_uuuu (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbcaxq_u64 (uint64x2_t __a, uint64x2_t __b, uint64x2_t __c)\n-{\n-  return __builtin_aarch64_bcaxqv2di_uuuu (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbcaxq_s8 (int8x16_t __a, int8x16_t __b, int8x16_t __c)\n-{\n-  return __builtin_aarch64_bcaxqv16qi (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbcaxq_s16 (int16x8_t __a, int16x8_t __b, int16x8_t __c)\n-{\n-  return __builtin_aarch64_bcaxqv8hi (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbcaxq_s32 (int32x4_t __a, int32x4_t __b, int32x4_t __c)\n-{\n-  return __builtin_aarch64_bcaxqv4si (__a, __b, __c);\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vbcaxq_s64 (int64x2_t __a, int64x2_t __b, int64x2_t __c)\n-{\n-  return __builtin_aarch64_bcaxqv2di (__a, __b, __c);\n-}\n-\n #pragma GCC pop_options\n \n /* AdvSIMD Complex numbers intrinsics.  */\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vand.c b/gcc/testsuite/gcc.target/aarch64/neon/vand.c\nnew file mode 100644\nindex 000000000000..fd85f8992e18\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vand.c\n@@ -0,0 +1,116 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vand_u8:\n+** and\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vand_u8, uint8x8_t)\n+\n+/*\n+** test_vand_s8:\n+** and\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vand_s8, int8x8_t)\n+\n+/*\n+** test_vand_u16:\n+** and\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vand_u16, uint16x4_t)\n+\n+/*\n+** test_vand_s16:\n+** and\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vand_s16, int16x4_t)\n+\n+/*\n+** test_vand_u32:\n+** and\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vand_u32, uint32x2_t)\n+\n+/*\n+** test_vand_s32:\n+** and\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vand_s32, int32x2_t)\n+\n+/*\n+** test_vand_u64:\n+** and\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vand_u64, uint64x1_t)\n+\n+/*\n+** test_vand_s64:\n+** and\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vand_s64, int64x1_t)\n+\n+/*\n+** test_vandq_u8:\n+** and\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vandq_u8, uint8x16_t)\n+\n+/*\n+** test_vandq_s8:\n+** and\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vandq_s8, int8x16_t)\n+\n+/*\n+** test_vandq_u16:\n+** and\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vandq_u16, 
uint16x8_t)\n+\n+/*\n+** test_vandq_s16:\n+** and\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vandq_s16, int16x8_t)\n+\n+/*\n+** test_vandq_u32:\n+** and\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vandq_u32, uint32x4_t)\n+\n+/*\n+** test_vandq_s32:\n+** and\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vandq_s32, int32x4_t)\n+\n+/*\n+** test_vandq_u64:\n+** and\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vandq_u64, uint64x2_t)\n+\n+/*\n+** test_vandq_s64:\n+** and\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vandq_s64, int64x2_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vbcax.c b/gcc/testsuite/gcc.target/aarch64/neon/vbcax.c\nnew file mode 100644\nindex 000000000000..ae61e65dc6a3\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vbcax.c\n@@ -0,0 +1,60 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vbcaxq_u8:\n+** bcax\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (vbcaxq_u8, uint8x16_t)\n+\n+/*\n+** test_vbcaxq_u16:\n+** bcax\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (vbcaxq_u16, uint16x8_t)\n+\n+/*\n+** test_vbcaxq_u32:\n+** bcax\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (vbcaxq_u32, uint32x4_t)\n+\n+/*\n+** test_vbcaxq_u64:\n+** bcax\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (vbcaxq_u64, uint64x2_t)\n+\n+/*\n+** test_vbcaxq_s8:\n+** bcax\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (vbcaxq_s8, int8x16_t)\n+\n+/*\n+** test_vbcaxq_s16:\n+** bcax\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (vbcaxq_s16, int16x8_t)\n+\n+/*\n+** test_vbcaxq_s32:\n+** bcax\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (vbcaxq_s32, int32x4_t)\n+\n+/*\n+** test_vbcaxq_s64:\n+** bcax\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (vbcaxq_s64, int64x2_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vbic.c b/gcc/testsuite/gcc.target/aarch64/neon/vbic.c\nnew file mode 100644\nindex 000000000000..d67cb72fda18\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vbic.c\n@@ -0,0 +1,116 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vbic_u8:\n+** bic\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbic_u8, uint8x8_t)\n+\n+/*\n+** test_vbic_s8:\n+** bic\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbic_s8, int8x8_t)\n+\n+/*\n+** test_vbic_u16:\n+** bic\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbic_u16, uint16x4_t)\n+\n+/*\n+** test_vbic_s16:\n+** bic\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbic_s16, int16x4_t)\n+\n+/*\n+** test_vbic_u32:\n+** bic\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbic_u32, uint32x2_t)\n+\n+/*\n+** test_vbic_s32:\n+** bic\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbic_s32, int32x2_t)\n+\n+/*\n+** test_vbic_u64:\n+** bic\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbic_u64, uint64x1_t)\n+\n+/*\n+** test_vbic_s64:\n+** bic\tv0\\.8b, v0\\.8b, v1\\.8b\n+** 
ret\n+*/\n+TEST_UNIFORM_BINARY (vbic_s64, int64x1_t)\n+\n+/*\n+** test_vbicq_u8:\n+** bic\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbicq_u8, uint8x16_t)\n+\n+/*\n+** test_vbicq_s8:\n+** bic\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbicq_s8, int8x16_t)\n+\n+/*\n+** test_vbicq_u16:\n+** bic\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbicq_u16, uint16x8_t)\n+\n+/*\n+** test_vbicq_s16:\n+** bic\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbicq_s16, int16x8_t)\n+\n+/*\n+** test_vbicq_u32:\n+** bic\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbicq_u32, uint32x4_t)\n+\n+/*\n+** test_vbicq_s32:\n+** bic\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbicq_s32, int32x4_t)\n+\n+/*\n+** test_vbicq_u64:\n+** bic\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbicq_u64, uint64x2_t)\n+\n+/*\n+** test_vbicq_s64:\n+** bic\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vbicq_s64, int64x2_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vbsl.c b/gcc/testsuite/gcc.target/aarch64/neon/vbsl.c\nnew file mode 100644\nindex 000000000000..9a677600ace6\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vbsl.c\n@@ -0,0 +1,214 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vbsl_u8:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_u8, uint8x8_t, uint8x8_t, uint8x8_t, uint8x8_t)\n+\n+/*\n+** test_vbsl_s8:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_s8, int8x8_t, uint8x8_t, int8x8_t, int8x8_t)\n+\n+/*\n+** test_vbsl_p8:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_p8, poly8x8_t, uint8x8_t, poly8x8_t, poly8x8_t)\n+\n+/*\n+** test_vbsl_mf8:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_mf8, mfloat8x8_t, uint8x8_t, mfloat8x8_t, mfloat8x8_t)\n+\n+/*\n+** test_vbsl_u16:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_u16, uint16x4_t, uint16x4_t, uint16x4_t, uint16x4_t)\n+\n+/*\n+** test_vbsl_s16:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_s16, int16x4_t, uint16x4_t, int16x4_t, int16x4_t)\n+\n+/*\n+** test_vbsl_p16:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_p16, poly16x4_t, uint16x4_t, poly16x4_t, poly16x4_t)\n+\n+/*\n+** test_vbsl_f16:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_f16, float16x4_t, uint16x4_t, float16x4_t, float16x4_t)\n+\n+/*\n+** test_vbsl_u32:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_u32, uint32x2_t, uint32x2_t, uint32x2_t, uint32x2_t)\n+\n+/*\n+** test_vbsl_s32:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_s32, int32x2_t, uint32x2_t, int32x2_t, int32x2_t)\n+\n+/*\n+** test_vbsl_f32:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_f32, float32x2_t, uint32x2_t, float32x2_t, float32x2_t)\n+\n+/*\n+** test_vbsl_u64:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_u64, uint64x1_t, uint64x1_t, uint64x1_t, uint64x1_t)\n+\n+/*\n+** test_vbsl_s64:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_s64, int64x1_t, uint64x1_t, int64x1_t, int64x1_t)\n+\n+/*\n+** test_vbsl_p64:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_p64, poly64x1_t, uint64x1_t, poly64x1_t, poly64x1_t)\n+\n+/*\n+** 
test_vbsl_f64:\n+** bsl\tv0\\.8b, v1\\.8b, v2\\.8b\n+** ret\n+*/\n+TEST_TERNARY (vbsl_f64, float64x1_t, uint64x1_t, float64x1_t, float64x1_t)\n+\n+/*\n+** test_vbslq_u8:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_u8, uint8x16_t, uint8x16_t, uint8x16_t, uint8x16_t)\n+\n+/*\n+** test_vbslq_s8:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_s8, int8x16_t, uint8x16_t, int8x16_t, int8x16_t)\n+\n+/*\n+** test_vbslq_p8:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_p8, poly8x16_t, uint8x16_t, poly8x16_t, poly8x16_t)\n+\n+/*\n+** test_vbslq_mf8:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_mf8, mfloat8x16_t, uint8x16_t, mfloat8x16_t, mfloat8x16_t)\n+\n+/*\n+** test_vbslq_u16:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_u16, uint16x8_t, uint16x8_t, uint16x8_t, uint16x8_t)\n+\n+/*\n+** test_vbslq_s16:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_s16, int16x8_t, uint16x8_t, int16x8_t, int16x8_t)\n+\n+/*\n+** test_vbslq_p16:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_p16, poly16x8_t, uint16x8_t, poly16x8_t, poly16x8_t)\n+\n+/*\n+** test_vbslq_f16:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_f16, float16x8_t, uint16x8_t, float16x8_t, float16x8_t)\n+\n+/*\n+** test_vbslq_u32:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_u32, uint32x4_t, uint32x4_t, uint32x4_t, uint32x4_t)\n+\n+/*\n+** test_vbslq_s32:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_s32, int32x4_t, uint32x4_t, int32x4_t, int32x4_t)\n+\n+/*\n+** test_vbslq_f32:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_f32, float32x4_t, uint32x4_t, float32x4_t, float32x4_t)\n+\n+/*\n+** test_vbslq_u64:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_u64, uint64x2_t, uint64x2_t, uint64x2_t, uint64x2_t)\n+\n+/*\n+** test_vbslq_s64:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_s64, int64x2_t, uint64x2_t, int64x2_t, int64x2_t)\n+\n+/*\n+** test_vbslq_p64:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_p64, poly64x2_t, uint64x2_t, poly64x2_t, poly64x2_t)\n+\n+/*\n+** test_vbslq_f64:\n+** bsl\tv0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_TERNARY (vbslq_f64, float64x2_t, uint64x2_t, float64x2_t, float64x2_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vcls.c b/gcc/testsuite/gcc.target/aarch64/neon/vcls.c\nnew file mode 100644\nindex 000000000000..83b3e2eb70c3\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vcls.c\n@@ -0,0 +1,88 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vcls_u8:\n+** cls\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNARY (vcls_u8, int8x8_t, uint8x8_t)\n+\n+/*\n+** test_vcls_s8:\n+** cls\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNARY (vcls_s8, int8x8_t, int8x8_t)\n+\n+/*\n+** test_vcls_u16:\n+** cls\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNARY (vcls_u16, int16x4_t, uint16x4_t)\n+\n+/*\n+** test_vcls_s16:\n+** cls\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNARY (vcls_s16, int16x4_t, int16x4_t)\n+\n+/*\n+** test_vcls_u32:\n+** cls\tv0\\.2s, v0\\.2s\n+** ret\n+*/\n+TEST_UNARY (vcls_u32, int32x2_t, uint32x2_t)\n+\n+/*\n+** test_vcls_s32:\n+** cls\tv0\\.2s, v0\\.2s\n+** ret\n+*/\n+TEST_UNARY (vcls_s32, int32x2_t, 
int32x2_t)\n+\n+/*\n+** test_vclsq_u8:\n+** cls\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNARY (vclsq_u8, int8x16_t, uint8x16_t)\n+\n+/*\n+** test_vclsq_s8:\n+** cls\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNARY (vclsq_s8, int8x16_t, int8x16_t)\n+\n+/*\n+** test_vclsq_u16:\n+** cls\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNARY (vclsq_u16, int16x8_t, uint16x8_t)\n+\n+/*\n+** test_vclsq_s16:\n+** cls\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNARY (vclsq_s16, int16x8_t, int16x8_t)\n+\n+/*\n+** test_vclsq_u32:\n+** cls\tv0\\.4s, v0\\.4s\n+** ret\n+*/\n+TEST_UNARY (vclsq_u32, int32x4_t, uint32x4_t)\n+\n+/*\n+** test_vclsq_s32:\n+** cls\tv0\\.4s, v0\\.4s\n+** ret\n+*/\n+TEST_UNARY (vclsq_s32, int32x4_t, int32x4_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vclz.c b/gcc/testsuite/gcc.target/aarch64/neon/vclz.c\nnew file mode 100644\nindex 000000000000..ad806367e13e\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vclz.c\n@@ -0,0 +1,88 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vclz_u8:\n+** clz\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNARY (vclz_u8, uint8x8_t, uint8x8_t)\n+\n+/*\n+** test_vclz_s8:\n+** clz\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNARY (vclz_s8, int8x8_t, int8x8_t)\n+\n+/*\n+** test_vclz_u16:\n+** clz\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNARY (vclz_u16, uint16x4_t, uint16x4_t)\n+\n+/*\n+** test_vclz_s16:\n+** clz\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNARY (vclz_s16, int16x4_t, int16x4_t)\n+\n+/*\n+** test_vclz_u32:\n+** clz\tv0\\.2s, v0\\.2s\n+** ret\n+*/\n+TEST_UNARY (vclz_u32, uint32x2_t, uint32x2_t)\n+\n+/*\n+** test_vclz_s32:\n+** clz\tv0\\.2s, v0\\.2s\n+** ret\n+*/\n+TEST_UNARY (vclz_s32, int32x2_t, int32x2_t)\n+\n+/*\n+** test_vclzq_u8:\n+** clz\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNARY (vclzq_u8, uint8x16_t, uint8x16_t)\n+\n+/*\n+** test_vclzq_s8:\n+** clz\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNARY (vclzq_s8, int8x16_t, int8x16_t)\n+\n+/*\n+** test_vclzq_u16:\n+** clz\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNARY (vclzq_u16, uint16x8_t, uint16x8_t)\n+\n+/*\n+** test_vclzq_s16:\n+** clz\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNARY (vclzq_s16, int16x8_t, int16x8_t)\n+\n+/*\n+** test_vclzq_u32:\n+** clz\tv0\\.4s, v0\\.4s\n+** ret\n+*/\n+TEST_UNARY (vclzq_u32, uint32x4_t, uint32x4_t)\n+\n+/*\n+** test_vclzq_s32:\n+** clz\tv0\\.4s, v0\\.4s\n+** ret\n+*/\n+TEST_UNARY (vclzq_s32, int32x4_t, int32x4_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vcnt.c b/gcc/testsuite/gcc.target/aarch64/neon/vcnt.c\nnew file mode 100644\nindex 000000000000..9e1ce67012f0\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vcnt.c\n@@ -0,0 +1,25 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vcnt_u8:\n+** cnt\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNARY (vcnt_u8, uint8x8_t, uint8x8_t)\n+\n+/*\n+** test_vcnt_s8:\n+** cnt\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNARY (vcnt_s8, int8x8_t, int8x8_t)\n+\n+/*\n+** test_vcnt_p8:\n+** cnt\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNARY (vcnt_p8, poly8x8_t, poly8x8_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/veor.c b/gcc/testsuite/gcc.target/aarch64/neon/veor.c\nnew file mode 100644\nindex 000000000000..fd2f4836929e\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/veor.c\n@@ -0,0 +1,116 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include 
\"arm_neon_test.h\"\n+\n+/*\n+** test_veor_u8:\n+** eor\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veor_u8, uint8x8_t)\n+\n+/*\n+** test_veor_s8:\n+** eor\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veor_s8, int8x8_t)\n+\n+/*\n+** test_veor_u16:\n+** eor\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veor_u16, uint16x4_t)\n+\n+/*\n+** test_veor_s16:\n+** eor\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veor_s16, int16x4_t)\n+\n+/*\n+** test_veor_u32:\n+** eor\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veor_u32, uint32x2_t)\n+\n+/*\n+** test_veor_s32:\n+** eor\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veor_s32, int32x2_t)\n+\n+/*\n+** test_veor_u64:\n+** eor\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veor_u64, uint64x1_t)\n+\n+/*\n+** test_veor_s64:\n+** eor\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veor_s64, int64x1_t)\n+\n+/*\n+** test_veorq_u8:\n+** eor\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veorq_u8, uint8x16_t)\n+\n+/*\n+** test_veorq_s8:\n+** eor\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veorq_s8, int8x16_t)\n+\n+/*\n+** test_veorq_u16:\n+** eor\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veorq_u16, uint16x8_t)\n+\n+/*\n+** test_veorq_s16:\n+** eor\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veorq_s16, int16x8_t)\n+\n+/*\n+** test_veorq_u32:\n+** eor\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veorq_u32, uint32x4_t)\n+\n+/*\n+** test_veorq_s32:\n+** eor\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veorq_s32, int32x4_t)\n+\n+/*\n+** test_veorq_u64:\n+** eor\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veorq_u64, uint64x2_t)\n+\n+/*\n+** test_veorq_s64:\n+** eor\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (veorq_s64, int64x2_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/veor3.c b/gcc/testsuite/gcc.target/aarch64/neon/veor3.c\nnew file mode 100644\nindex 000000000000..bda4040e5e54\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/veor3.c\n@@ -0,0 +1,60 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_veor3q_u8:\n+** eor3\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (veor3q_u8, uint8x16_t)\n+\n+/*\n+** test_veor3q_u16:\n+** eor3\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (veor3q_u16, uint16x8_t)\n+\n+/*\n+** test_veor3q_u32:\n+** eor3\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (veor3q_u32, uint32x4_t)\n+\n+/*\n+** test_veor3q_u64:\n+** eor3\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (veor3q_u64, uint64x2_t)\n+\n+/*\n+** test_veor3q_s8:\n+** eor3\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (veor3q_s8, int8x16_t)\n+\n+/*\n+** test_veor3q_s16:\n+** eor3\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (veor3q_s16, 
int16x8_t)\n+\n+/*\n+** test_veor3q_s32:\n+** eor3\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (veor3q_s32, int32x4_t)\n+\n+/*\n+** test_veor3q_s64:\n+** eor3\tv0\\.16b, v0\\.16b, v1\\.16b, v2\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_TERNARY (veor3q_s64, int64x2_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vmvn.c b/gcc/testsuite/gcc.target/aarch64/neon/vmvn.c\nnew file mode 100644\nindex 000000000000..83a591408bb3\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vmvn.c\n@@ -0,0 +1,102 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vmvn_u8:\n+** not\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvn_u8, uint8x8_t)\n+\n+/*\n+** test_vmvn_s8:\n+** not\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvn_s8, int8x8_t)\n+\n+/*\n+** test_vmvn_p8:\n+** not\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvn_p8, poly8x8_t)\n+\n+/*\n+** test_vmvn_u16:\n+** not\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvn_u16, uint16x4_t)\n+\n+/*\n+** test_vmvn_s16:\n+** not\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvn_s16, int16x4_t)\n+\n+/*\n+** test_vmvn_u32:\n+** not\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvn_u32, uint32x2_t)\n+\n+/*\n+** test_vmvn_s32:\n+** not\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvn_s32, int32x2_t)\n+\n+/*\n+** test_vmvnq_u8:\n+** not\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvnq_u8, uint8x16_t)\n+\n+/*\n+** test_vmvnq_s8:\n+** not\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvnq_s8, int8x16_t)\n+\n+/*\n+** test_vmvnq_p8:\n+** not\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvnq_p8, poly8x16_t)\n+\n+/*\n+** test_vmvnq_u16:\n+** not\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvnq_u16, uint16x8_t)\n+\n+/*\n+** test_vmvnq_s16:\n+** not\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvnq_s16, int16x8_t)\n+\n+/*\n+** test_vmvnq_u32:\n+** not\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvnq_u32, uint32x4_t)\n+\n+/*\n+** test_vmvnq_s32:\n+** not\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vmvnq_s32, int32x4_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vorn.c b/gcc/testsuite/gcc.target/aarch64/neon/vorn.c\nnew file mode 100644\nindex 000000000000..fd6c13c11408\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vorn.c\n@@ -0,0 +1,116 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vorn_u8:\n+** orn\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorn_u8, uint8x8_t)\n+\n+/*\n+** test_vorn_s8:\n+** orn\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorn_s8, int8x8_t)\n+\n+/*\n+** test_vorn_u16:\n+** orn\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorn_u16, uint16x4_t)\n+\n+/*\n+** test_vorn_s16:\n+** orn\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorn_s16, int16x4_t)\n+\n+/*\n+** test_vorn_u32:\n+** orn\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorn_u32, uint32x2_t)\n+\n+/*\n+** test_vorn_s32:\n+** orn\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorn_s32, int32x2_t)\n+\n+/*\n+** test_vorn_u64:\n+** orn\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorn_u64, uint64x1_t)\n+\n+/*\n+** test_vorn_s64:\n+** orn\tv0\\.8b, v0\\.8b, v1\\.8b\n+** 
ret\n+*/\n+TEST_UNIFORM_BINARY (vorn_s64, int64x1_t)\n+\n+/*\n+** test_vornq_u8:\n+** orn\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vornq_u8, uint8x16_t)\n+\n+/*\n+** test_vornq_s8:\n+** orn\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vornq_s8, int8x16_t)\n+\n+/*\n+** test_vornq_u16:\n+** orn\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vornq_u16, uint16x8_t)\n+\n+/*\n+** test_vornq_s16:\n+** orn\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vornq_s16, int16x8_t)\n+\n+/*\n+** test_vornq_u32:\n+** orn\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vornq_u32, uint32x4_t)\n+\n+/*\n+** test_vornq_s32:\n+** orn\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vornq_s32, int32x4_t)\n+\n+/*\n+** test_vornq_u64:\n+** orn\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vornq_u64, uint64x2_t)\n+\n+/*\n+** test_vornq_s64:\n+** orn\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vornq_s64, int64x2_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vorr.c b/gcc/testsuite/gcc.target/aarch64/neon/vorr.c\nnew file mode 100644\nindex 000000000000..d2c7b6b2c3db\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vorr.c\n@@ -0,0 +1,116 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vorr_u8:\n+** orr\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorr_u8, uint8x8_t)\n+\n+/*\n+** test_vorr_s8:\n+** orr\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorr_s8, int8x8_t)\n+\n+/*\n+** test_vorr_u16:\n+** orr\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorr_u16, uint16x4_t)\n+\n+/*\n+** test_vorr_s16:\n+** orr\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorr_s16, int16x4_t)\n+\n+/*\n+** test_vorr_u32:\n+** orr\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorr_u32, uint32x2_t)\n+\n+/*\n+** test_vorr_s32:\n+** orr\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorr_s32, int32x2_t)\n+\n+/*\n+** test_vorr_u64:\n+** orr\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorr_u64, uint64x1_t)\n+\n+/*\n+** test_vorr_s64:\n+** orr\tv0\\.8b, (v0\\.8b, v1\\.8b|v1\\.8b, v0\\.8b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorr_s64, int64x1_t)\n+\n+/*\n+** test_vorrq_u8:\n+** orr\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorrq_u8, uint8x16_t)\n+\n+/*\n+** test_vorrq_s8:\n+** orr\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorrq_s8, int8x16_t)\n+\n+/*\n+** test_vorrq_u16:\n+** orr\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorrq_u16, uint16x8_t)\n+\n+/*\n+** test_vorrq_s16:\n+** orr\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorrq_s16, int16x8_t)\n+\n+/*\n+** test_vorrq_u32:\n+** orr\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorrq_u32, uint32x4_t)\n+\n+/*\n+** test_vorrq_s32:\n+** orr\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorrq_s32, int32x4_t)\n+\n+/*\n+** test_vorrq_u64:\n+** orr\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** 
ret\n+*/\n+TEST_UNIFORM_BINARY (vorrq_u64, uint64x2_t)\n+\n+/*\n+** test_vorrq_s64:\n+** orr\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vorrq_s64, int64x2_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vrax1.c b/gcc/testsuite/gcc.target/aarch64/neon/vrax1.c\nnew file mode 100644\nindex 000000000000..0f5fdd088b4a\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vrax1.c\n@@ -0,0 +1,11 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vrax1q_u64:\n+** rax1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vrax1q_u64, uint64x2_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vrbit.c b/gcc/testsuite/gcc.target/aarch64/neon/vrbit.c\nnew file mode 100644\nindex 000000000000..9168d54c1108\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vrbit.c\n@@ -0,0 +1,46 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vrbit_u8:\n+** rbit\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrbit_u8, uint8x8_t)\n+\n+/*\n+** test_vrbit_s8:\n+** rbit\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrbit_s8, int8x8_t)\n+\n+/*\n+** test_vrbit_p8:\n+** rbit\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrbit_p8, poly8x8_t)\n+\n+/*\n+** test_vrbitq_u8:\n+** rbit\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrbitq_u8, uint8x16_t)\n+\n+/*\n+** test_vrbitq_s8:\n+** rbit\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrbitq_s8, int8x16_t)\n+\n+/*\n+** test_vrbitq_p8:\n+** rbit\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrbitq_p8, poly8x16_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vxar.c b/gcc/testsuite/gcc.target/aarch64/neon/vxar.c\nnew file mode 100644\nindex 000000000000..5893a83214d1\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vxar.c\n@@ -0,0 +1,25 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vxarq_u64_0:\n+** eor\tv0\\.16b, (v0\\.16b, v1\\.16b|v1\\.16b, v0\\.16b)\n+** ret\n+*/\n+uint64x2_t test_vxarq_u64_0 (uint64x2_t a, uint64x2_t b) { return vxarq_u64 (a, b, 0); }\n+\n+/*\n+** test_vxarq_u64_1:\n+** xar\tv0\\.2d, v0\\.2d, v1\\.2d, #?1\n+** ret\n+*/\n+uint64x2_t test_vxarq_u64_1 (uint64x2_t a, uint64x2_t b) { return vxarq_u64 (a, b, 1); }\n+\n+/*\n+** test_vxarq_u64_31:\n+** xar\tv0\\.2d, v0\\.2d, v1\\.2d, #?31\n+** ret\n+*/\n+uint64x2_t test_vxarq_u64_31 (uint64x2_t a, uint64x2_t b) { return vxarq_u64 (a, b, 31); }\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sha3_1.c b/gcc/testsuite/gcc.target/aarch64/sha3_1.c\nindex cf02865bfe85..189ee470c7dc 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sha3_1.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sha3_1.c\n@@ -1,5 +1,5 @@\n /* { dg-do compile } */\n-/* { dg-options \"-march=armv8.2-a+sha3\" } */\n+/* { dg-options \"-O1 -march=armv8.2-a+sha3\" } */\n \n #include \"sha3.h\"\n \ndiff --git a/gcc/testsuite/gcc.target/aarch64/sha3_2.c b/gcc/testsuite/gcc.target/aarch64/sha3_2.c\nindex 8b085cbe9803..c73ecb08ce65 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sha3_2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sha3_2.c\n@@ -1,5 +1,5 @@\n /* { dg-do compile } */\n-/* { dg-options \"-march=armv8.3-a+sha3\" } */\n+/* { dg-options \"-O1 -march=armv8.3-a+sha3\" } */\n \n #include \"sha3.h\"\n \ndiff --git 
a/gcc/testsuite/gcc.target/aarch64/sha3_3.c b/gcc/testsuite/gcc.target/aarch64/sha3_3.c\nindex 51ae0a4da6bb..74236ffeb2bf 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sha3_3.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sha3_3.c\n@@ -1,5 +1,5 @@\n /* { dg-do compile } */\n-/* { dg-options \"-march=armv8.4-a+sha3\" } */\n+/* { dg-options \"-O1 -march=armv8.4-a+sha3\" } */\n \n #include \"sha3.h\"\n \ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/inlining_10.c b/gcc/testsuite/gcc.target/aarch64/sme/inlining_10.c\nindex 78e737e2f40b..2231bc8e79b2 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/inlining_10.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/inlining_10.c\n@@ -17,12 +17,6 @@ call_vadd ()\n   neon[4] = vaddq_u8 (neon[5], neon[6]);\n }\n \n-inline void __attribute__((always_inline))\n-call_vbsl () // { dg-error \"inlining failed\" }\n-{\n-  neon[0] = vbslq_u8 (neon[1], neon[2], neon[3]);\n-}\n-\n inline void __attribute__((always_inline))\n call_svadd ()\n {\n@@ -51,7 +45,6 @@ void\n sc_caller () [[arm::inout(\"za\"), arm::streaming_compatible]]\n {\n   call_vadd ();\n-  call_vbsl ();\n   call_svadd ();\n   call_svld1_gather ();\n   call_svzero ();\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/inlining_11.c b/gcc/testsuite/gcc.target/aarch64/sme/inlining_11.c\nindex 0cd3487973e3..1cd52477b8d0 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/inlining_11.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/inlining_11.c\n@@ -17,12 +17,6 @@ call_vadd ()\n   neon[4] = vaddq_u8 (neon[5], neon[6]);\n }\n \n-inline void __attribute__((always_inline))\n-call_vbsl () // { dg-error \"inlining failed\" }\n-{\n-  neon[0] = vbslq_u8 (neon[1], neon[2], neon[3]);\n-}\n-\n inline void __attribute__((always_inline))\n call_svadd ()\n {\n@@ -51,7 +45,6 @@ void\n sc_caller () [[arm::inout(\"za\"), arm::streaming]]\n {\n   call_vadd ();\n-  call_vbsl ();\n   call_svadd ();\n   call_svld1_gather ();\n   call_svzero ();\ndiff --git a/gcc/testsuite/gcc.target/aarch64/target_attr_10.c b/gcc/testsuite/gcc.target/aarch64/target_attr_10.c\nindex d96a8733a575..fd02607ddfcb 100644\n--- a/gcc/testsuite/gcc.target/aarch64/target_attr_10.c\n+++ b/gcc/testsuite/gcc.target/aarch64/target_attr_10.c\n@@ -10,7 +10,5 @@ __attribute__ ((target (\"+nosimd\")))\n uint8x16_t\n foo (uint8x16_t a, uint8x16_t b, uint8x16_t c)\n {\n-  return vbslq_u8 (a, b, c); /* { dg-message \"called from here\" } */\n+  return vbslq_u8 (a, b, c); /* { dg-error {ACLE function 'vbslq_u8' requires ISA extension 'simd'} } */\n }\n-\n-/* { dg-error \"inlining failed in call to 'always_inline'\" \"\" { target *-*-* } 0 } */\n","prefixes":["v1","4/6"]}