{"id":2237928,"url":"http://patchwork.ozlabs.org/api/patches/2237928/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.5@forge-stage.sourceware.org/","project":{"id":17,"url":"http://patchwork.ozlabs.org/api/projects/17/?format=json","name":"GNU Compiler Collection","link_name":"gcc","list_id":"gcc-patches.gcc.gnu.org","list_email":"gcc-patches@gcc.gnu.org","web_url":null,"scm_url":null,"webscm_url":null,"list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.5@forge-stage.sourceware.org>","list_archive_url":null,"date":"2026-05-13T16:04:15","name":"[v1,5/6] aarch64: Port NEON permutation intrinsics to pragma-based framework","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"c37b0eafac8963faf0de8d29dfd3868b6424f995","submitter":{"id":92188,"url":"http://patchwork.ozlabs.org/api/people/92188/?format=json","name":"Karl Meakin via Sourceware Forge","email":"forge-bot+karmea01@forge-stage.sourceware.org"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.5@forge-stage.sourceware.org/mbox/","series":[{"id":504183,"url":"http://patchwork.ozlabs.org/api/series/504183/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/list/?series=504183","date":"2026-05-13T16:04:10","name":"aarch64: port NEON intrinsics to pragma-based framework","version":1,"mbox":"http://patchwork.ozlabs.org/series/504183/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2237928/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2237928/checks/","tags":{},"related":[],"headers":{"Return-Path":"<gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org>","X-Original-To":["incoming@patchwork.ozlabs.org","gcc-patches@gcc.gnu.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","gcc-patches@gcc.gnu.org"],"Authentication-Results":["legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=gcc.gnu.org\n (client-ip=2620:52:6:3111::32; helo=vm01.sourceware.org;\n envelope-from=gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org;\n receiver=patchwork.ozlabs.org)","sourceware.org; dmarc=none (p=none dis=none)\n header.from=forge-stage.sourceware.org","sourceware.org;\n spf=pass smtp.mailfrom=forge-stage.sourceware.org","sourceware.org;\n arc=none smtp.remote-ip=2620:52:6:3111::39"],"Received":["from vm01.sourceware.org (vm01.sourceware.org\n [IPv6:2620:52:6:3111::32])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4gFyzg31Xwz1y5L\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 14 May 2026 02:07:59 +1000 (AEST)","from vm01.sourceware.org (localhost [IPv6:::1])\n\tby sourceware.org (Postfix) with ESMTP id 910AB4BBC0A0\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 13 May 2026 16:07:57 +0000 (GMT)","from forge-stage.sourceware.org (vm08.sourceware.org\n [IPv6:2620:52:6:3111::39])\n by sourceware.org (Postfix) with ESMTPS id 02CBC4BB8F6B\n for <gcc-patches@gcc.gnu.org>; Wed, 13 May 2026 16:05:56 +0000 (GMT)","from forge-stage.sourceware.org (localhost [IPv6:::1])\n (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n key-exchange x25519 server-signature ECDSA (prime256v1) server-digest SHA256)\n (No client certificate requested)\n by 
forge-stage.sourceware.org (Postfix) with ESMTPS id 96E5342D19;\n Wed, 13 May 2026 16:05:10 +0000 (UTC)"],"DKIM-Filter":["OpenDKIM Filter v2.11.0 sourceware.org 910AB4BBC0A0","OpenDKIM Filter v2.11.0 sourceware.org 02CBC4BB8F6B"],"DMARC-Filter":"OpenDMARC Filter v1.4.2 sourceware.org 02CBC4BB8F6B","ARC-Filter":"OpenARC Filter v1.0.0 sourceware.org 02CBC4BB8F6B","ARC-Seal":"i=1; a=rsa-sha256; d=sourceware.org; s=key; t=1778688356; cv=none;\n b=nnfdeeaSOAYpCdSz+DCtqerEWGUdLT07NNjDBi+rI7n4Y2fdgiTvm3s2N8N2r3aUPooP0BpNNMTcvwg/K/9QHOwaJuag86f33/pGBDOBEveAi16Ifpk4HExSTyq1EokqRN/HttwrtpY7vaR1tXST9kRNP9speet4Mo8ZB/HvL70=","ARC-Message-Signature":"i=1; a=rsa-sha256; d=sourceware.org; s=key;\n t=1778688356; c=relaxed/simple;\n bh=llYzKjm6SnSJ/lhmeVG01ls6E+Hqt8AGdRSTlwvEZc8=;\n h=From:Date:Subject:To:Message-ID;\n b=CF1jD1qJkcDf8aNhWfKLxmogGiEnPGOw8N6Krgi9NRrgOPEaq1FiKOCqfQSSYTRa2EYOkF5DypIPneRxuSgyt4Z1fEh6fCPTvDlNaN3F5ASbRzisRl2XTUT4xSzeb/8KKY2CeCXl6uzCyrjTPhpaM3/nFwffRNOayi/BwBaWcwo=","ARC-Authentication-Results":"i=1; sourceware.org","From":"Karl Meakin via Sourceware Forge\n <forge-bot+karmea01@forge-stage.sourceware.org>","Date":"Wed, 13 May 2026 16:04:15 +0000","Subject":"[PATCH v1 5/6] aarch64: Port NEON permutation intrinsics to\n pragma-based framework","To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>","Cc":"ktkachov@nvidia.com, richard.earnshaw@arm.com, tamar.christina@arm.com,\n karl.meakin@arm.com","Message-ID":"\n <bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.5@forge-stage.sourceware.org>","X-Mailer":"batrachomyomachia","X-Pull-Request-Organization":"gcc","X-Pull-Request-Repository":"gcc-TEST","X-Pull-Request":"https://forge.sourceware.org/gcc/gcc-TEST/pulls/158","References":"\n <bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.0@forge-stage.sourceware.org>","In-Reply-To":"\n <bmm.hihq3sdm4a.gcc.gcc-TEST.karmea01.158.1.0@forge-stage.sourceware.org>","X-Patch-URL":"\n https://forge.sourceware.org/karmea01/gcc-TEST/commit/a641d1dd08ed45847cf839beb79452c24bb0275f","X-BeenThere":"gcc-patches@gcc.gnu.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Gcc-patches mailing list <gcc-patches.gcc.gnu.org>","List-Unsubscribe":"<https://gcc.gnu.org/mailman/options/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=unsubscribe>","List-Archive":"<https://gcc.gnu.org/pipermail/gcc-patches/>","List-Post":"<mailto:gcc-patches@gcc.gnu.org>","List-Help":"<mailto:gcc-patches-request@gcc.gnu.org?subject=help>","List-Subscribe":"<https://gcc.gnu.org/mailman/listinfo/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=subscribe>","Reply-To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>,\n ktkachov@nvidia.com, richard.earnshaw@arm.com, tamar.christina@arm.com,\n karl.meakin@arm.com, karmea01@sourceware.org","Errors-To":"gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org"},"content":"From: Karl Meakin <karl.meakin@arm.com>\n\nPort the following intrinsics to the pragma-based framework:\n* vext\n* vrev\n* vtrn\n* vuzp\n* vzip\n\ngcc/ChangeLog:\n\n\t* config/aarch64/aarch64-simd-pragma-builtins.def (vext_mf8, vextq_mf8, vrev64_mf8,\n\tvrev64q_mf8, vrev32_mf8, vrev32q_mf8, vrev16_mf8, vrev16q_mf8, vtrn1_mf8, vtrn1q_mf8,\n\tvtrn2_mf8, vtrn2q_mf8, vtrn_mf8, vtrnq_mf8, vuzp1_mf8, vuzp1q_mf8, vuzp2_mf8, vuzp2q_mf8,\n\tvuzp_mf8, vuzpq_mf8, vzip1_mf8, vzip1q_mf8, vzip2_mf8, vzip2q_mf8, vzip_mf8, vzipq_mf8):\n\tDelete functions.\n\t* config/aarch64/aarch64-sve-builtins-shapes.cc (parse_type): Handle `D` or `Q` followed by\n\t`x1,x2,x3,x4` to mean tuple types.\n\t* 
config/aarch64/aarch64-acle-builtins.h (TYPES_bh_poly, TYPES_bhs_neon, TYPES_neon_rev16,\n\tTYPES_neon_rev32, TYPES_neon_rev64): New type lists.\n\t(bh_poly, bhs_neon, neon_rev16, neon_rev32, neon_rev64): Likewise.\n\t* config/aarch64/aarch64-builtins.cc (aarch64_simd_tuple_types): Remove `static` qualifier.\n\t* config/aarch64/aarch64-builtins.h (aarch64_simd_tuple_types): New declaration.\n\t* config/aarch64/aarch64-neon-builtins-base.cc (build_tuple_get, build_tuple_set): New\n\tfunctions.\n\t(class gimple_permute, class gimple_permute_pair): New classes.\n\t(ext_mask, rev_mask, trn_mask, uzp_mask, zip_mask): New functions.\n\t(vext, vextq, vrev16, vrev16q, vrev32, vrev32q, vrev64, vrev64q, vtrn1, vtrn1q, vtrn2,\n\tvtrn2q, vtrn, vtrnq, vuzp1, vuzp1q, vuzp2, vuzp2q, vuzp, vuzpq, vzip1, vzip1q, vzip2,\n\tvzip2q, vzip, vzipq): New function bases.\n\t* config/aarch64/aarch64-neon-builtins-base.def (vext, vextq, vrev16, vrev16q, vrev32,\n\tvrev32q, vrev64, vrev64q, vtrn, vtrn1, vtrn1q, vtrn2, vtrn2q, vtrnq, vuzp, vuzp1, vuzp1q,\n\tvuzp2, vuzp2q, vuzpq, vzip, vzip1, vzip1q, vzip2, vzip2q, vzipq): New function groups.\n\t* config/aarch64/aarch64-simd-builtins.def (zip1, zip2, uzp1, uzp2, trn1, trn2): Delete\n\tbuiltin functions.\n\t* config/aarch64/arm_neon.h (vext_f16, vext_f32, vext_f64, vext_p8, vext_p16, vext_p64,\n\tvext_s8, vext_s16, vext_s32, vext_s64, vext_u8, vext_u16, vext_u32, vext_u64, vextq_f16,\n\tvextq_f32, vextq_f64, vextq_p8, vextq_p16, vextq_p64, vextq_s8, vextq_s16, vextq_s32,\n\tvextq_s64, vextq_u8, vextq_u16, vextq_u32, vextq_u64, vrev16_p8, vrev16_s8, vrev16_u8,\n\tvrev16q_p8, vrev16q_s8, vrev16q_u8, vrev32_p8, vrev32_p16, vrev32_s8, vrev32_s16, vrev32_u8,\n\tvrev32_u16, vrev32q_p8, vrev32q_p16, vrev32q_s8, vrev32q_s16, vrev32q_u8, vrev32q_u16,\n\tvrev64_f16, vrev64_f32, vrev64_p8, vrev64_p16, vrev64_s8, vrev64_s16, vrev64_s32, vrev64_u8,\n\tvrev64_u16, vrev64_u32, vrev64q_f16, vrev64q_f32, vrev64q_p8, vrev64q_p16, vrev64q_s8,\n\tvrev64q_s16, vrev64q_s32, vrev64q_u8, vrev64q_u16, vrev64q_u32, vtrn1_f16, vtrn1_f32,\n\tvtrn1_p8, vtrn1_p16, vtrn1_s8, vtrn1_s16, vtrn1_s32, vtrn1_u8, vtrn1_u16, vtrn1_u32,\n\tvtrn1q_f16, vtrn1q_f32, vtrn1q_f64, vtrn1q_p8, vtrn1q_p16, vtrn1q_s8, vtrn1q_s16,\n\tvtrn1q_s32, vtrn1q_s64, vtrn1q_u8, vtrn1q_u16, vtrn1q_u32, vtrn1q_p64, vtrn1q_u64,\n\tvtrn2_f16, vtrn2_f32, vtrn2_p8, vtrn2_p16, vtrn2_s8, vtrn2_s16, vtrn2_s32, vtrn2_u8,\n\tvtrn2_u16, vtrn2_u32, vtrn2q_f16, vtrn2q_f32, vtrn2q_f64, vtrn2q_p8, vtrn2q_p16, vtrn2q_s8,\n\tvtrn2q_s16, vtrn2q_s32, vtrn2q_s64, vtrn2q_u8, vtrn2q_u16, vtrn2q_u32, vtrn2q_u64,\n\tvtrn2q_p64, vtrn_f16, vtrn_f32, vtrn_p8, vtrn_p16, vtrn_s8, vtrn_s16, vtrn_s32, vtrn_u8,\n\tvtrn_u16, vtrn_u32, vtrnq_f16, vtrnq_f32, vtrnq_p8, vtrnq_p16, vtrnq_s8, vtrnq_s16,\n\tvtrnq_s32, vtrnq_u8, vtrnq_u16, vtrnq_u32, vuzp1_f16, vuzp1_f32, vuzp1_p8, vuzp1_p16,\n\tvuzp1_s8, vuzp1_s16, vuzp1_s32, vuzp1_u8, vuzp1_u16, vuzp1_u32, vuzp1q_f16, vuzp1q_f32,\n\tvuzp1q_f64, vuzp1q_p8, vuzp1q_p16, vuzp1q_s8, vuzp1q_s16, vuzp1q_s32, vuzp1q_s64, vuzp1q_u8,\n\tvuzp1q_u16, vuzp1q_u32, vuzp1q_u64, vuzp1q_p64, vuzp2_f16, vuzp2_f32, vuzp2_p8, vuzp2_p16,\n\tvuzp2_s8, vuzp2_s16, vuzp2_s32, vuzp2_u8, vuzp2_u16, vuzp2_u32, vuzp2q_f16, vuzp2q_f32,\n\tvuzp2q_f64, vuzp2q_p8, vuzp2q_p16, vuzp2q_s8, vuzp2q_s16, vuzp2q_s32, vuzp2q_s64, vuzp2q_u8,\n\tvuzp2q_u16, vuzp2q_u32, vuzp2q_u64, vuzp2q_p64, vzip1_f16, vzip1_f32, vzip1_p8, vzip1_p16,\n\tvzip1_s8, vzip1_s16, vzip1_s32, vzip1_u8, vzip1_u16, vzip1_u32, vzip1q_f16, vzip1q_f32,\n\tvzip1q_f64, vzip1q_p8, vzip1q_p16, vzip1q_s8, 
vzip1q_s16, vzip1q_s32, vzip1q_s64, vzip1q_u8,\n\tvzip1q_u16, vzip1q_u32, vzip1q_u64, vzip1q_p64, vzip2_f16, vzip2_f32, vzip2_p8, vzip2_p16,\n\tvzip2_s8, vzip2_s16, vzip2_s32, vzip2_u8, vzip2_u16, vzip2_u32, vzip2q_f16, vzip2q_f32,\n\tvzip2q_f64, vzip2q_p8, vzip2q_p16, vzip2q_s8, vzip2q_s16, vzip2q_s32, vzip2q_s64, vzip2q_u8,\n\tvzip2q_u16, vzip2q_u32, vzip2q_u64, vzip2q_p64): Delete functions.\n\ngcc/testsuite/ChangeLog:\n\n\t* gcc.target/aarch64/neon/vext.c: New test.\n\t* gcc.target/aarch64/neon/vrev.c: New test.\n\t* gcc.target/aarch64/neon/vtrn.c: New test.\n\t* gcc.target/aarch64/neon/vuzp.c: New test.\n\t* gcc.target/aarch64/neon/vzip.c: New test.\n---\n gcc/config/aarch64/aarch64-acle-builtins.h    |   41 +\n gcc/config/aarch64/aarch64-builtins.cc        |  164 +-\n gcc/config/aarch64/aarch64-builtins.h         |    1 +\n .../aarch64/aarch64-neon-builtins-base.cc     |  266 ++\n .../aarch64/aarch64-neon-builtins-base.def    |   30 +\n gcc/config/aarch64/aarch64-simd-builtins.def  |    9 -\n .../aarch64/aarch64-simd-pragma-builtins.def  |   48 -\n .../aarch64/aarch64-sve-builtins-shapes.cc    |   27 +-\n gcc/config/aarch64/arm_neon.h                 | 2685 +----------------\n gcc/testsuite/gcc.target/aarch64/neon/vext.c  |  216 ++\n gcc/testsuite/gcc.target/aarch64/neon/vrev.c  |  311 ++\n gcc/testsuite/gcc.target/aarch64/neon/vtrn.c  |  566 ++++\n gcc/testsuite/gcc.target/aarch64/neon/vuzp.c  |  566 ++++\n gcc/testsuite/gcc.target/aarch64/neon/vzip.c  |  559 ++++\n 14 files changed, 2688 insertions(+), 2801 deletions(-)\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vext.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vrev.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vtrn.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vuzp.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/neon/vzip.c","diff":"diff --git a/gcc/config/aarch64/aarch64-acle-builtins.h b/gcc/config/aarch64/aarch64-acle-builtins.h\nindex f570434810f5..837de2d12326 100644\n--- a/gcc/config/aarch64/aarch64-acle-builtins.h\n+++ b/gcc/config/aarch64/aarch64-acle-builtins.h\n@@ -1796,10 +1796,26 @@ void build_all (function_builder &b, const char *signature,\n #define TYPES_bhd_poly(S, D, T) \\\n   S (p8), S (p16), S (p64)\n \n+/* _p8 _p16.  */\n+#define TYPES_bh_poly(S, D, T) \\\n+  S (p8), S (p16)\n+\n+/* _p16.  */\n+#define TYPES_h_poly(S, D, T) \\\n+  S (p16)\n+\n /* _p8 _p16 _p64 _p128.  */\n #define TYPES_bhdq_poly(S, D, T) \\\n   S (p8), S (p16), S (p64), S (p128)\n \n+/* _p8  _s8  _u8  _mf8\n+   _p16 _s16 _u16 _f16\n+\t_s32 _u32 _f32.  */\n+#define TYPES_bhs_neon(S, D, T) \\\n+  TYPES_bh_poly (S, D, T), S (mf8), \\\n+  TYPES_bhs_integer (S, D, T), \\\n+  TYPES_hs_float (S, D, T)\n+\n /* _p8  _s8  _u8  _mf8\n    _p16 _s16 _u16 _f16\n \t_s32 _u32 _f32\n@@ -1822,6 +1838,25 @@ void build_all (function_builder &b, const char *signature,\n #define TYPES_b_neon(S, D, T) \\\n   S (p8), S (s8), S (u8)\n \n+/* _p8 _s8 _u8 _mf8.  */\n+#define TYPES_neon_rev16(S, D, T) \\\n+  S (p8), S (s8), S (u8), S (mf8)\n+\n+/* _p8  _s8  _u8  _mf8\n+   _p16 _s16 _u16.  */\n+#define TYPES_neon_rev32(S, D, T) \\\n+  S (p8),  S (s8),  S (u8), S (mf8), \\\n+  S (p16), S (s16), S (u16)\n+\n+/* _p8  _s8  _u8  _mf8\n+   _p16 _s16 _u16 _f16\n+\t_s32 _u32 _f32.  
*/\n+#define TYPES_neon_rev64(S, D, T) \\\n+  S (p8),  S (s8),  S (u8),  S (mf8), \\\n+  S (p16), S (s16), S (u16), S (f16), \\\n+\t   S (s32), S (u32), S (f32)\n+\n+\n /* Describe a tuple of type suffixes in which only the first is used.  */\n #define DEF_VECTOR_TYPE(X) \\\n   { TYPE_SUFFIX_ ## X, NUM_TYPE_SUFFIXES, NUM_TYPE_SUFFIXES }\n@@ -1960,10 +1995,16 @@ DEF_SVE_TYPES_ARRAY (all_neon);\n DEF_SVE_TYPES_ARRAY (b_neon);\n DEF_SVE_TYPES_ARRAY (h_neon);\n DEF_SVE_TYPES_ARRAY (b_poly);\n+DEF_SVE_TYPES_ARRAY (h_poly);\n+DEF_SVE_TYPES_ARRAY (bh_poly);\n DEF_SVE_TYPES_ARRAY (bhd_poly);\n DEF_SVE_TYPES_ARRAY (bhdq_poly);\n+DEF_SVE_TYPES_ARRAY (bhs_neon);\n DEF_SVE_TYPES_ARRAY (bhsd_neon);\n DEF_SVE_TYPES_ARRAY (neon_copy_lane);\n+DEF_SVE_TYPES_ARRAY (neon_rev16);\n+DEF_SVE_TYPES_ARRAY (neon_rev32);\n+DEF_SVE_TYPES_ARRAY (neon_rev64);\n \n static const group_suffix_index groups_none[] = {\n   GROUP_none, NUM_GROUP_SUFFIXES\ndiff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc\nindex fdf5468f93af..327ae8bc20bb 100644\n--- a/gcc/config/aarch64/aarch64-builtins.cc\n+++ b/gcc/config/aarch64/aarch64-builtins.cc\n@@ -941,7 +941,7 @@ struct aarch64_simd_type_info_trees\n aarch64_simd_types_trees[ARRAY_SIZE (aarch64_simd_types)];\n \n static machine_mode aarch64_simd_tuple_modes[ARM_NEON_H_TYPES_LAST][3];\n-static GTY(()) tree aarch64_simd_tuple_types[ARM_NEON_H_TYPES_LAST][3];\n+GTY(()) tree aarch64_simd_tuple_types[ARM_NEON_H_TYPES_LAST][3];\n \n static GTY(()) tree aarch64_simd_intOI_type_node = NULL_TREE;\n static GTY(()) tree aarch64_simd_intCI_type_node = NULL_TREE;\n@@ -2809,9 +2809,6 @@ aarch64_pragma_builtins_checker::check ()\n     case UNSPEC_ST4_LANE:\n       return require_immediate_lane_index (nargs - 1, nargs - 2);\n \n-    case UNSPEC_EXT:\n-      return require_immediate_range (2, 0, types[2].nunits () - 1);\n-\n     case UNSPEC_FDOT_LANE_FP8:\n       return require_immediate_lane_index (nargs - 2, nargs - 3, 0);\n \n@@ -3884,24 +3881,6 @@ aarch64_get_low_unspec (int unspec)\n     }\n }\n \n-/* OPS contains the operands for one of the permute pair functions vtrn,\n-   vuzp or vzip.  Expand the call, given that PERMUTE1 is the unspec for\n-   the first permute and PERMUTE2 is the unspec for the second permute.  */\n-static rtx\n-aarch64_expand_permute_pair (vec<expand_operand> &ops, int permute1,\n-\t\t\t     int permute2)\n-{\n-  rtx op0 = force_reg (ops[1].mode, ops[1].value);\n-  rtx op1 = force_reg (ops[2].mode, ops[2].value);\n-  rtx target = gen_reg_rtx (ops[0].mode);\n-  rtx target0 = gen_rtx_SUBREG (ops[1].mode, target, 0);\n-  rtx target1 = gen_rtx_SUBREG (ops[1].mode, target,\n-\t\t\t\tGET_MODE_SIZE (ops[1].mode));\n-  emit_insn (gen_aarch64 (permute1, ops[1].mode, target0, op0, op1));\n-  emit_insn (gen_aarch64 (permute2, ops[1].mode, target1, op0, op1));\n-  return target;\n-}\n-\n /* Emit a TBL or TBX instruction with inputs INPUTS and a result of mode\n    MODE.  
Return the result of the instruction.\n \n@@ -4080,11 +4059,6 @@ aarch64_expand_pragma_builtin (tree exp, rtx target,\n \taarch64_dereference_pointer (&ops[1], GET_MODE_INNER (ops[0].mode));\n       return expand_vector_broadcast (ops[0].mode, ops[1].value);\n \n-\n-    case UNSPEC_EXT:\n-      icode = code_for_aarch64_ext (ops[0].mode);\n-      break;\n-\n     case UNSPEC_FAMAX:\n     case UNSPEC_FAMIN:\n     case UNSPEC_FMMLA:\n@@ -4092,12 +4066,6 @@ aarch64_expand_pragma_builtin (tree exp, rtx target,\n     case UNSPEC_F2CVTL_FP8:\n     case UNSPEC_FDOT_FP8:\n     case UNSPEC_FSCALE:\n-    case UNSPEC_TRN1:\n-    case UNSPEC_TRN2:\n-    case UNSPEC_UZP1:\n-    case UNSPEC_UZP2:\n-    case UNSPEC_ZIP1:\n-    case UNSPEC_ZIP2:\n       icode = code_for_aarch64 (builtin_data.unspec, ops[0].mode);\n       break;\n \n@@ -4205,12 +4173,6 @@ aarch64_expand_pragma_builtin (tree exp, rtx target,\n       icode = code_for_aarch64_lut (ops[1].mode, ops[2].mode);\n       break;\n \n-    case UNSPEC_REV16:\n-    case UNSPEC_REV32:\n-    case UNSPEC_REV64:\n-      icode = code_for_aarch64_rev (builtin_data.unspec, ops[0].mode);\n-      break;\n-\n     case UNSPEC_SET_LANE:\n       if (builtin_data.signature == aarch64_builtin_signatures::load_lane)\n \taarch64_dereference_pointer (&ops[1], GET_MODE_INNER (ops[0].mode));\n@@ -4261,15 +4223,6 @@ aarch64_expand_pragma_builtin (tree exp, rtx target,\n     case UNSPEC_TBX:\n       return aarch64_expand_tbl_tbx (ops, builtin_data.unspec);\n \n-    case UNSPEC_TRN:\n-      return aarch64_expand_permute_pair (ops, UNSPEC_TRN1, UNSPEC_TRN2);\n-\n-    case UNSPEC_UZP:\n-      return aarch64_expand_permute_pair (ops, UNSPEC_UZP1, UNSPEC_UZP2);\n-\n-    case UNSPEC_ZIP:\n-      return aarch64_expand_permute_pair (ops, UNSPEC_ZIP1, UNSPEC_ZIP2);\n-\n     default:\n       gcc_unreachable ();\n     }\n@@ -4842,79 +4795,6 @@ aarch64_fold_store (gcall *stmt, tree type)\n   return nullptr;\n }\n \n-/* An aarch64_fold_permute callback for vext.  SELECTOR is the value of\n-   the final argument.  */\n-static unsigned int\n-aarch64_ext_index (unsigned int, unsigned int selector, unsigned int i)\n-{\n-  return selector + i;\n-}\n-\n-/* An aarch64_fold_permute callback for vrev.  SELECTOR is the number\n-   of elements in each reversal group.  */\n-static unsigned int\n-aarch64_rev_index (unsigned int, unsigned int selector, unsigned int i)\n-{\n-  return ROUND_DOWN (i, selector) + (selector - 1) - (i % selector);\n-}\n-\n-/* An aarch64_fold_permute callback for vtrn.  SELECTOR is 0 for TRN1\n-   and 1 for TRN2.  */\n-static unsigned int\n-aarch64_trn_index (unsigned int nelts, unsigned int selector, unsigned int i)\n-{\n-  return (i % 2) * nelts + ROUND_DOWN (i, 2) + selector;\n-}\n-\n-/* An aarch64_fold_permute callback for vuzp.  SELECTOR is 0 for UZP1\n-   and 1 for UZP2.  */\n-static unsigned int\n-aarch64_uzp_index (unsigned int, unsigned int selector, unsigned int i)\n-{\n-  return i * 2 + selector;\n-}\n-\n-/* An aarch64_fold_permute callback for vzip.  SELECTOR is 0 for ZIP1\n-   and 1 for ZIP2.  */\n-static unsigned int\n-aarch64_zip_index (unsigned int nelts, unsigned int selector, unsigned int i)\n-{\n-  return (i % 2) * nelts + (i / 2) + selector * (nelts / 2);\n-}\n-\n-/* Fold STMT to a VEC_PERM_EXPR on the first NINPUTS arguments.\n-   Make the VEC_PERM_EXPR emulate an NINPUTS-input TBL in which\n-   architectural lane I of the result selects architectural lane:\n-\n-     GET_INDEX (NELTS, SELECTOR, I)\n-\n-   of the input table.  
NELTS is the number of elements in one vector.  */\n-static gimple *\n-aarch64_fold_permute (gcall *stmt, unsigned int ninputs,\n-\t\t      unsigned int (*get_index) (unsigned int, unsigned int,\n-\t\t\t\t\t\t unsigned int),\n-\t\t      unsigned int selector)\n-{\n-  tree op0 = gimple_call_arg (stmt, 0);\n-  tree op1 = ninputs == 2 ? gimple_call_arg (stmt, 1) : op0;\n-  auto nelts = TYPE_VECTOR_SUBPARTS (TREE_TYPE (op0)).to_constant ();\n-  vec_perm_builder sel (nelts, nelts, 1);\n-  for (unsigned int i = 0; i < nelts; ++i)\n-    {\n-      unsigned int index = get_index (nelts, selector,\n-\t\t\t\t      ENDIAN_LANE_N (nelts, i));\n-      unsigned int vec = index / nelts;\n-      unsigned int elt = ENDIAN_LANE_N (nelts, index % nelts);\n-      sel.quick_push (vec * nelts + elt);\n-    }\n-\n-  vec_perm_indices indices (sel, ninputs, nelts);\n-  tree mask_type = build_vector_type (ssizetype, nelts);\n-  tree mask = vec_perm_indices_to_tree (mask_type, indices);\n-  return gimple_build_assign (gimple_call_lhs (stmt), VEC_PERM_EXPR,\n-\t\t\t      op0, op1, mask);\n-}\n-\n /* Try to fold STMT (at GSI), given that it is a call to the builtin\n    described by BUILTIN_DATA.  Return the new statement on success,\n    otherwise return null.  */\n@@ -4939,33 +4819,9 @@ aarch64_gimple_fold_pragma_builtin\n \treturn aarch64_fold_to_val (stmt, gsi, nullptr, dup);\n       }\n \n-    case UNSPEC_EXT:\n-      {\n-\tauto index = tree_to_uhwi (gimple_call_arg (stmt, 2));\n-\treturn aarch64_fold_permute (stmt, 2, aarch64_ext_index, index);\n-      }\n-\n     case UNSPEC_LD1:\n       return aarch64_fold_load (stmt, types[0].type ());\n \n-    case UNSPEC_REV16:\n-      {\n-\tauto selector = 16 / GET_MODE_UNIT_BITSIZE (types[0].mode);\n-\treturn aarch64_fold_permute (stmt, 1, aarch64_rev_index, selector);\n-      }\n-\n-    case UNSPEC_REV32:\n-      {\n-\tauto selector = 32 / GET_MODE_UNIT_BITSIZE (types[0].mode);\n-\treturn aarch64_fold_permute (stmt, 1, aarch64_rev_index, selector);\n-      }\n-\n-    case UNSPEC_REV64:\n-      {\n-\tauto selector = 64 / GET_MODE_UNIT_BITSIZE (types[0].mode);\n-\treturn aarch64_fold_permute (stmt, 1, aarch64_rev_index, selector);\n-      }\n-\n     case UNSPEC_SET_LANE:\n       {\n \ttree elt = gimple_call_arg (stmt, 0);\n@@ -4992,24 +4848,6 @@ aarch64_gimple_fold_pragma_builtin\n \treturn aarch64_copy_vops (gimple_build_assign (mem, val), stmt);\n       }\n \n-    case UNSPEC_TRN1:\n-      return aarch64_fold_permute (stmt, 2, aarch64_trn_index, 0);\n-\n-    case UNSPEC_TRN2:\n-      return aarch64_fold_permute (stmt, 2, aarch64_trn_index, 1);\n-\n-    case UNSPEC_UZP1:\n-      return aarch64_fold_permute (stmt, 2, aarch64_uzp_index, 0);\n-\n-    case UNSPEC_UZP2:\n-      return aarch64_fold_permute (stmt, 2, aarch64_uzp_index, 1);\n-\n-    case UNSPEC_ZIP1:\n-      return aarch64_fold_permute (stmt, 2, aarch64_zip_index, 0);\n-\n-    case UNSPEC_ZIP2:\n-      return aarch64_fold_permute (stmt, 2, aarch64_zip_index, 1);\n-\n     default:\n       return nullptr;\n     }\ndiff --git a/gcc/config/aarch64/aarch64-builtins.h b/gcc/config/aarch64/aarch64-builtins.h\nindex 0c1ee9d390c5..c4289c40ecbe 100644\n--- a/gcc/config/aarch64/aarch64-builtins.h\n+++ b/gcc/config/aarch64/aarch64-builtins.h\n@@ -110,5 +110,6 @@ struct GTY(()) aarch64_simd_type_info_trees\n \n extern const aarch64_simd_type_info aarch64_simd_types[];\n extern aarch64_simd_type_info_trees aarch64_simd_types_trees[];\n+extern tree aarch64_simd_tuple_types[ARM_NEON_H_TYPES_LAST][3];\n \n #endif\ndiff --git 
a/gcc/config/aarch64/aarch64-neon-builtins-base.cc b/gcc/config/aarch64/aarch64-neon-builtins-base.cc\nindex 63274487ef0f..a3239f0b72ca 100644\n--- a/gcc/config/aarch64/aarch64-neon-builtins-base.cc\n+++ b/gcc/config/aarch64/aarch64-neon-builtins-base.cc\n@@ -88,6 +88,50 @@ build_vec_dup (tree type, tree elem)\n \t   : fold_build1 (VEC_DUPLICATE_EXPR, type, elem);\n }\n \n+/* Build a `TUPLE.val[INDEX]` expression.  */\n+tree\n+build_tuple_get (tree tuple, tree index)\n+{\n+  auto tuple_type = TREE_TYPE (tuple);\n+  auto field = tuple_type_field (tuple_type);\n+  auto array_type = TREE_TYPE (field);\n+  auto vec_type = TREE_TYPE (array_type);\n+\n+  auto field_ref = fold_build3 (COMPONENT_REF, array_type, unshare_expr (tuple),\n+\t\t\t\tfield, NULL_TREE);\n+  auto array_ref\n+    = build4 (ARRAY_REF, vec_type, field_ref, index, NULL_TREE, NULL_TREE);\n+\n+  return array_ref;\n+}\n+\n+/* Build a `TUPLE.val[INDEX][LANE]` expression.  */\n+tree\n+build_tuple_get (gimple_folder &f, tree tuple, tree index, tree lane)\n+{\n+  auto vec = f.force_val (build_tuple_get (tuple, index));\n+  return build_lane_get (vec, lane);\n+}\n+\n+/* Build a `TUPLE.val[INDEX] = ELEM;` statement.\n+   Returns an expression representing the updated tuple.  */\n+tree\n+build_tuple_set (gimple_folder &f, tree tuple, tree index, tree elem)\n+{\n+  f.assign (build_tuple_get (tuple, index), elem);\n+  return tuple;\n+}\n+\n+/* Build a `TUPLE.val[INDEX][LANE] = ELEM;` statement.\n+   Returns an expression representing the updated tuple.  */\n+tree\n+build_tuple_set (gimple_folder &f, tree tuple, tree index, tree lane, tree elem)\n+{\n+  auto vec = f.force_val (build_tuple_get (tuple, index));\n+  vec = f.force_val (build_lane_set (vec, lane, elem));\n+  return build_tuple_set (f, tuple, index, vec);\n+}\n+\n /* Base class for all function expanders.\n    At least one of `expand` or `fold` must be overriden by derived classes.  */\n class gimple_function_base : public function_base\n@@ -421,6 +465,191 @@ public:\n   }\n };\n \n+using mask_fn_t = tree (*) (gimple_folder &);\n+\n+/* For intrinsics that map to a VEC_PERM (A, B, MASK) expression.\n+   A and B come from the intrinsic's arguments; MASK is generated by calling\n+   the provided MASK_FN on the gimple_folder.  */\n+class gimple_permute : public gimple_function_base\n+{\n+  mask_fn_t m_mask_fn;\n+\n+public:\n+  constexpr gimple_permute (mask_fn_t mask_fn)\n+    : m_mask_fn (mask_fn)\n+      {}\n+\n+  gimple *fold (gimple_folder &f) const override\n+  {\n+    auto a = gimple_call_arg (f.call, 0);\n+    auto b\n+      = gimple_call_num_args (f.call) >= 2 ? gimple_call_arg (f.call, 1) : a;\n+\n+    auto mask = this->m_mask_fn (f);\n+    return gimple_build_assign (f.lhs, VEC_PERM_EXPR, a, b, mask);\n+  }\n+};\n+\n+class gimple_permute_pair : public gimple_function_base\n+{\n+  mask_fn_t m_mask_fn_1;\n+  mask_fn_t m_mask_fn_2;\n+\n+public:\n+  constexpr gimple_permute_pair (mask_fn_t mask_fn_1, mask_fn_t mask_fn_2)\n+    : m_mask_fn_1 (mask_fn_1), m_mask_fn_2 (mask_fn_2)\n+  {}\n+\n+  gimple *fold (gimple_folder &f) const override\n+  {\n+    auto a = gimple_call_arg (f.call, 0);\n+    auto b = gimple_call_arg (f.call, 1);\n+\n+    auto arg_type = TREE_TYPE (a);\n+    gcc_assert (arg_type == TREE_TYPE (b));\n+\n+    auto tuple_type = TREE_TYPE (f.lhs);\n+    auto tuple = create_tmp_var (tuple_type);\n+    f.assign (tuple, build_clobber (tuple_type));\n+    for (auto i = 0; i < 2; i++)\n+      {\n+\tauto mask = i == 0 ? 
this->m_mask_fn_1 (f) : this->m_mask_fn_2 (f);\n+\tauto permuted\n+\t  = f.force_val (fold_build3 (VEC_PERM_EXPR, arg_type, a, b, mask));\n+\tbuild_tuple_set (f, tuple, size_int (i), permuted);\n+      }\n+\n+    return gimple_build_assign (f.lhs, tuple);\n+  }\n+};\n+\n+tree\n+ext_mask (gimple_folder &f)\n+{\n+  auto vec_type = TREE_TYPE (gimple_call_arg (f.call, 0));\n+  auto start = int_cst_value (gimple_call_arg (f.call, 2));\n+  auto len = TYPE_VECTOR_SUBPARTS (vec_type);\n+  auto mask_type = build_vector_type (sizetype, len);\n+  return build_vec_series (mask_type, size_int (start), size_int (1));\n+}\n+\n+/* vrev16_u8  => {{1, 0}, {3, 2}, {5, 4}, {7, 6}}\n+   vrev16q_u8 => {{1, 0}, {3, 2}, {5, 4}, {7, 6},\n+\t\t  {9, 8}, {11, 10}, {13, 12}, {15, 14}}\n+\n+   vrev32_u8   => {{3, 2, 1, 0}, {7, 6, 5, 4}}\n+   vrev32q_u8  => {{3, 2, 1, 0}, {7, 6, 5, 4}, {11, 10, 9, 8}, {15, 14, 13, 12}}\n+   vrev32_u16  => {{1, 0}, {3, 2}}\n+   vrev32q_p16 => {{1, 0}, {3, 2}, {5, 4}, {7, 6}}\n+\n+   rev64_u8   => {{7, 6, 5, 4, 3, 2, 1, 0}}\n+   rev64q_u8  => {{7, 6, 5, 4, 3, 2, 1, 0}, {15, 14, 13, 12, 11, 10, 9, 8}}\n+   rev64_u16  => {{3, 2, 1, 0}}\n+   rev64q_u16 => {{3, 2, 1, 0}, {7, 6, 5, 4}}\n+   rev64_u32  => {{1, 0}}\n+   rev64q_u32 => {{1, 0}, {3, 2}}\n+*/\n+\n+template <unsigned int bits_per_word>\n+tree\n+rev_mask (gimple_folder &f)\n+{\n+  auto vec_type = TREE_TYPE (gimple_call_arg (f.call, 0));\n+\n+  auto elem_type = TREE_TYPE (vec_type);\n+  auto len = TYPE_VECTOR_SUBPARTS (vec_type).to_constant ();\n+  auto mask_type = build_vector_type (sizetype, len);\n+\n+  auto num_elems_per_word\n+    = bits_per_word / int_cst_value (TYPE_SIZE (elem_type));\n+  auto num_groups = len / num_elems_per_word;\n+\n+  tree_vector_builder builder (mask_type, len, 1);\n+\n+  for (auto i = 1U; i <= num_groups; i++)\n+    for (auto j = 1U; j <= num_elems_per_word; j++)\n+      builder.quick_push (size_int (i * num_elems_per_word - j));\n+\n+  return builder.build ();\n+}\n+\n+/* TRN1 ({a0, a1},\t   {b0, b1})\t     = {a0, b0}\n+\t\t\t\t\t     = VEC_PERM (a, b, {0, 2})\n+   TRN1 ({a0, a1, a2, a3}, {b0, b1, b2, b3}) = {a0, b0, a2, b2}\n+\t\t\t\t\t     = VEC_PERM (a, b, {0, 4, 2, 6})\n+\n+   TRN2 ({a0, a1},\t   {b0, b1})\t     = {a1, b1}\n+\t\t\t\t\t     = VEC_PERM (a, b, {1, 3})\n+   TRN2 ({a0, a1, a2, a3}, {b0, b1, b2, b3}) = {a1, b1, a3, b3}\n+\t\t\t\t\t     = VEC_PERM (a, b, {1, 5, 3, 7})\n+*/\n+template <bool secondary_p>\n+tree\n+trn_mask (gimple_folder &f)\n+{\n+  auto vec_type = TREE_TYPE (gimple_call_arg (f.call, 0));\n+  auto len = TYPE_VECTOR_SUBPARTS (vec_type).to_constant ();\n+  auto mask_type = build_vector_type (sizetype, len);\n+  tree_vector_builder builder (mask_type, len, 1);\n+\n+  for (auto i = 0U; i < len / 2; i++)\n+    {\n+      builder.quick_push (size_int (i * 2 + secondary_p));\n+      builder.quick_push (size_int (len + i * 2 + secondary_p));\n+    }\n+\n+  return builder.build ();\n+}\n+\n+/* UZP1 ({a0, a1},\t   {b0, b1})\t     = {a0, b0}\n+\t\t\t\t\t     = VEC_PERM (a, b, {0, 2})\n+   UZP1 ({a0, a1, a2, a3}, {b0, b1, b2, b3}) = {a0, a2, b0, b2}\n+\t\t\t\t\t     = VEC_PERM (a, b, {0, 2, 4, 6})\n+\n+   UZP2 ({a0, a1},\t   {b0, b1})\t     = {a1, b1}\n+\t\t\t\t\t     = VEC_PERM (a, b, {1, 3})\n+   UZP2 ({a0, a1, a2, a3}, {b0, b1, b2, b3}) = {a1, a3, b1, b3}\n+\t\t\t\t\t     = VEC_PERM (a, b, {1, 3, 5, 7})\n+*/\n+template <bool secondary_p>\n+tree\n+uzp_mask (gimple_folder &f)\n+{\n+  auto vec_type = TREE_TYPE (gimple_call_arg (f.call, 0));\n+  auto len = TYPE_VECTOR_SUBPARTS 
(vec_type).to_constant ();\n+  auto mask_type = build_vector_type (sizetype, len);\n+  return build_vec_series (mask_type, size_int (secondary_p), size_int (2));\n+}\n+\n+/* ZIP1 ({a0, a1},\t   {b0, b1})\t     = {a0, b0}\n+\t\t\t\t\t     = VEC_PERM (a, b, {0, 2})\n+   ZIP1 ({a0, a1, a2, a3}, {b0, b1, b2, b3}) = {a0, b0, a1, b1}\n+\t\t\t\t\t     = VEC_PERM (a, b, {0, 4, 1, 5})\n+\n+   ZIP2 ({a0, a1},\t   {b0, b1})\t     = {a1, b1}\n+\t\t\t\t\t     = VEC_PERM (a, b, {1, 3})\n+   ZIP2 ({a0, a1, a2, a3}, {b0, b1, b2, b3}) = {a2, b2, a3, b3}\n+\t\t\t\t\t     = VEC_PERM (a, b, {2, 6, 3, 7})\n+*/\n+template <bool secondary_p>\n+tree\n+zip_mask (gimple_folder &f)\n+{\n+  auto vec_type = TREE_TYPE (gimple_call_arg (f.call, 0));\n+  auto len = TYPE_VECTOR_SUBPARTS (vec_type).to_constant ();\n+  auto mask_type = build_vector_type (sizetype, len);\n+\n+  auto start = secondary_p ? len / 2 : 0;\n+  tree_vector_builder builder (mask_type, len, 1);\n+\n+  for (auto i = 0U; i < len / 2; i++)\n+    {\n+      builder.quick_push (size_int (start + i));\n+      builder.quick_push (size_int (len + start + i));\n+    }\n+  return builder.build ();\n+}\n+\n // Lane get/set\n NEON_FUNCTION (vcreate,      gimple_create,)\n NEON_FUNCTION (vcombine,     gimple_combine,)\n@@ -487,3 +716,40 @@ NEON_FUNCTION (vclz,  gimple_ifn, (IFN_CLZ))\n NEON_FUNCTION (vclzq, gimple_ifn, (IFN_CLZ))\n NEON_FUNCTION (vcnt,  gimple_ifn, (IFN_POPCOUNT))\n NEON_FUNCTION (vcntq, gimple_ifn, (IFN_POPCOUNT))\n+\n+// Permutations\n+// Extract\n+NEON_FUNCTION (vext,  gimple_permute, (ext_mask))\n+NEON_FUNCTION (vextq, gimple_permute, (ext_mask))\n+\n+// Reverse\n+NEON_FUNCTION (vrev16,  gimple_permute, (rev_mask<16>))\n+NEON_FUNCTION (vrev16q, gimple_permute, (rev_mask<16>))\n+NEON_FUNCTION (vrev32,  gimple_permute, (rev_mask<32>))\n+NEON_FUNCTION (vrev32q, gimple_permute, (rev_mask<32>))\n+NEON_FUNCTION (vrev64,  gimple_permute, (rev_mask<64>))\n+NEON_FUNCTION (vrev64q, gimple_permute, (rev_mask<64>))\n+\n+// Transpose\n+NEON_FUNCTION (vtrn1,  gimple_permute,      (trn_mask<false>))\n+NEON_FUNCTION (vtrn1q, gimple_permute,      (trn_mask<false>))\n+NEON_FUNCTION (vtrn2,  gimple_permute,      (trn_mask<true>))\n+NEON_FUNCTION (vtrn2q, gimple_permute,      (trn_mask<true>))\n+NEON_FUNCTION (vtrn,   gimple_permute_pair, (trn_mask<false>, trn_mask<true>))\n+NEON_FUNCTION (vtrnq,  gimple_permute_pair, (trn_mask<false>, trn_mask<true>))\n+\n+// Unzip\n+NEON_FUNCTION (vuzp1,  gimple_permute,      (uzp_mask<false>))\n+NEON_FUNCTION (vuzp1q, gimple_permute,      (uzp_mask<false>))\n+NEON_FUNCTION (vuzp2,  gimple_permute,      (uzp_mask<true>))\n+NEON_FUNCTION (vuzp2q, gimple_permute,      (uzp_mask<true>))\n+NEON_FUNCTION (vuzp,   gimple_permute_pair, (uzp_mask<false>, uzp_mask<true>))\n+NEON_FUNCTION (vuzpq,  gimple_permute_pair, (uzp_mask<false>, uzp_mask<true>))\n+\n+// Zip\n+NEON_FUNCTION (vzip1,  gimple_permute,      (zip_mask<false>))\n+NEON_FUNCTION (vzip1q, gimple_permute,      (zip_mask<false>))\n+NEON_FUNCTION (vzip2,  gimple_permute,      (zip_mask<true>))\n+NEON_FUNCTION (vzip2q, gimple_permute,      (zip_mask<true>))\n+NEON_FUNCTION (vzip,   gimple_permute_pair, (zip_mask<false>, zip_mask<true>))\n+NEON_FUNCTION (vzipq,  gimple_permute_pair, (zip_mask<false>, zip_mask<true>))\ndiff --git a/gcc/config/aarch64/aarch64-neon-builtins-base.def b/gcc/config/aarch64/aarch64-neon-builtins-base.def\nindex e963e506571c..3ed6be649b9c 100644\n--- a/gcc/config/aarch64/aarch64-neon-builtins-base.def\n+++ 
b/gcc/config/aarch64/aarch64-neon-builtins-base.def\n@@ -109,3 +109,33 @@ DEF_NEON_FUNCTION (vclzq, bhs_integer, (\"Q0,Q0\"))\n DEF_NEON_FUNCTION (vcnt,  b_neon,      (\"D0,D0\"))\n DEF_NEON_FUNCTION (vcntq, b_neon,      (\"Q0,Q0\"))\n #undef REQUIRED_EXTENSIONS\n+\n+// Permutations\n+#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n+DEF_NEON_FUNCTION (vext,    bhsd_neon,  (\"D0,D0,D0,ss32\", lane<2>))\n+DEF_NEON_FUNCTION (vextq,   bhsd_neon,  (\"Q0,Q0,Q0,ss32\", lane<2>))\n+DEF_NEON_FUNCTION (vrev16,  neon_rev16, (\"D0,D0\"))\n+DEF_NEON_FUNCTION (vrev16q, neon_rev16, (\"Q0,Q0\"))\n+DEF_NEON_FUNCTION (vrev32,  neon_rev32, (\"D0,D0\"))\n+DEF_NEON_FUNCTION (vrev32q, neon_rev32, (\"Q0,Q0\"))\n+DEF_NEON_FUNCTION (vrev64,  neon_rev64, (\"D0,D0\"))\n+DEF_NEON_FUNCTION (vrev64q, neon_rev64, (\"Q0,Q0\"))\n+DEF_NEON_FUNCTION (vtrn,    bhs_neon,   (\"D0x2,D0,D0\"))\n+DEF_NEON_FUNCTION (vtrn1,   bhs_neon,   (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vtrn1q,  bhsd_neon,  (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vtrn2,   bhs_neon,   (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vtrn2q,  bhsd_neon,  (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vtrnq,   bhs_neon,   (\"Q0x2,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vuzp,    bhs_neon,   (\"D0x2,D0,D0\"))\n+DEF_NEON_FUNCTION (vuzp1,   bhs_neon,   (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vuzp1q,  bhsd_neon,  (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vuzp2,   bhs_neon,   (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vuzp2q,  bhsd_neon,  (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vuzpq,   bhs_neon,   (\"Q0x2,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vzip,    bhs_neon,   (\"D0x2,D0,D0\"))\n+DEF_NEON_FUNCTION (vzip1,   bhs_neon,   (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vzip1q,  bhsd_neon,  (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vzip2,   bhs_neon,   (\"D0,D0,D0\"))\n+DEF_NEON_FUNCTION (vzip2q,  bhsd_neon,  (\"Q0,Q0,Q0\"))\n+DEF_NEON_FUNCTION (vzipq,   bhs_neon,   (\"Q0x2,Q0,Q0\"))\n+#undef REQUIRED_EXTENSIONS\ndiff --git a/gcc/config/aarch64/aarch64-simd-builtins.def b/gcc/config/aarch64/aarch64-simd-builtins.def\nindex 2d8c613ca5ef..ff05265a8da1 100644\n--- a/gcc/config/aarch64/aarch64-simd-builtins.def\n+++ b/gcc/config/aarch64/aarch64-simd-builtins.def\n@@ -644,15 +644,6 @@\n   VAR1 (UNOP, floatunsv4si, 2, FP, v4sf)\n   VAR1 (UNOP, floatunsv2di, 2, FP, v2df)\n \n-  /* Implemented by\n-     aarch64_<PERMUTE:perm_insn><mode>.  
*/\n-  BUILTIN_VALL (BINOP, zip1, 0, QUIET)\n-  BUILTIN_VALL (BINOP, zip2, 0, QUIET)\n-  BUILTIN_VALL (BINOP, uzp1, 0, QUIET)\n-  BUILTIN_VALL (BINOP, uzp2, 0, QUIET)\n-  BUILTIN_VALL (BINOP, trn1, 0, QUIET)\n-  BUILTIN_VALL (BINOP, trn2, 0, QUIET)\n-\n   BUILTIN_GPF_F16 (UNOP, frecpe, 0, FP)\n   BUILTIN_GPF_F16 (UNOP, frecpx, 0, FP)\n \ndiff --git a/gcc/config/aarch64/aarch64-simd-pragma-builtins.def b/gcc/config/aarch64/aarch64-simd-pragma-builtins.def\nindex ebafcd618cd7..92ae493800ea 100644\n--- a/gcc/config/aarch64/aarch64-simd-pragma-builtins.def\n+++ b/gcc/config/aarch64/aarch64-simd-pragma-builtins.def\n@@ -196,12 +196,6 @@ ENTRY_FMA_FPM (vmlalltb, f32, UNSPEC_FMLALLTB_FP8)\n ENTRY_FMA_FPM (vmlalltt, f32, UNSPEC_FMLALLTT_FP8)\n #undef REQUIRED_EXTENSIONS\n \n-// ext\n-#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n-ENTRY_BINARY_LANE (vext_mf8, mf8, mf8, mf8, UNSPEC_EXT, QUIET)\n-ENTRY_BINARY_LANE (vextq_mf8, mf8q, mf8q, mf8q, UNSPEC_EXT, QUIET)\n-#undef REQUIRED_EXTENSIONS\n-\n // ld1\n #define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n ENTRY_LOAD (vld1_mf8, mf8, mf8_scalar_const_ptr, UNSPEC_LD1)\n@@ -261,18 +255,6 @@ ENTRY_TERNARY (vmmlaq_f16_mf8, f16q, f16q, mf8q, mf8q, UNSPEC_FMMLA, FP8)\n ENTRY_TERNARY (vmmlaq_f32_mf8, f32q, f32q, mf8q, mf8q, UNSPEC_FMMLA, FP8)\n #undef REQUIRED_EXTENSIONS\n \n-// rev\n-#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n-ENTRY_UNARY (vrev64_mf8, mf8, mf8, UNSPEC_REV64, QUIET)\n-ENTRY_UNARY (vrev64q_mf8, mf8q, mf8q, UNSPEC_REV64, QUIET)\n-\n-ENTRY_UNARY (vrev32_mf8, mf8, mf8, UNSPEC_REV32, QUIET)\n-ENTRY_UNARY (vrev32q_mf8, mf8q, mf8q, UNSPEC_REV32, QUIET)\n-\n-ENTRY_UNARY (vrev16_mf8, mf8, mf8, UNSPEC_REV16, QUIET)\n-ENTRY_UNARY (vrev16q_mf8, mf8q, mf8q, UNSPEC_REV16, QUIET)\n-#undef REQUIRED_EXTENSIONS\n-\n // st1\n #define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n ENTRY_STORE (vst1_mf8, mf8_scalar_ptr, mf8, UNSPEC_ST1)\n@@ -339,33 +321,3 @@ ENTRY_TERNARY (vqtbx3q_mf8, mf8q, mf8q, mf8qx3, u8q, UNSPEC_TBX, QUIET)\n ENTRY_TERNARY (vqtbx4_mf8, mf8, mf8, mf8qx4, u8, UNSPEC_TBX, QUIET)\n ENTRY_TERNARY (vqtbx4q_mf8, mf8q, mf8q, mf8qx4, u8q, UNSPEC_TBX, QUIET)\n #undef REQUIRED_EXTENSIONS\n-\n-// trn<n>\n-#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n-ENTRY_BINARY (vtrn1_mf8, mf8, mf8, mf8, UNSPEC_TRN1, QUIET)\n-ENTRY_BINARY (vtrn1q_mf8, mf8q, mf8q, mf8q, UNSPEC_TRN1, QUIET)\n-ENTRY_BINARY (vtrn2_mf8, mf8, mf8, mf8, UNSPEC_TRN2, QUIET)\n-ENTRY_BINARY (vtrn2q_mf8, mf8q, mf8q, mf8q, UNSPEC_TRN2, QUIET)\n-ENTRY_BINARY (vtrn_mf8, mf8x2, mf8, mf8, UNSPEC_TRN, QUIET)\n-ENTRY_BINARY (vtrnq_mf8, mf8qx2, mf8q, mf8q, UNSPEC_TRN, QUIET)\n-#undef REQUIRED_EXTENSIONS\n-\n-// uzp<n>\n-#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n-ENTRY_BINARY (vuzp1_mf8, mf8, mf8, mf8, UNSPEC_UZP1, QUIET)\n-ENTRY_BINARY (vuzp1q_mf8, mf8q, mf8q, mf8q, UNSPEC_UZP1, QUIET)\n-ENTRY_BINARY (vuzp2_mf8, mf8, mf8, mf8, UNSPEC_UZP2, QUIET)\n-ENTRY_BINARY (vuzp2q_mf8, mf8q, mf8q, mf8q, UNSPEC_UZP2, QUIET)\n-ENTRY_BINARY (vuzp_mf8, mf8x2, mf8, mf8, UNSPEC_UZP, QUIET)\n-ENTRY_BINARY (vuzpq_mf8, mf8qx2, mf8q, mf8q, UNSPEC_UZP, QUIET)\n-#undef REQUIRED_EXTENSIONS\n-\n-// zip<n>\n-#define REQUIRED_EXTENSIONS nonstreaming_only (AARCH64_FL_SIMD)\n-ENTRY_BINARY (vzip1_mf8, mf8, mf8, mf8, UNSPEC_ZIP1, QUIET)\n-ENTRY_BINARY (vzip1q_mf8, mf8q, mf8q, mf8q, UNSPEC_ZIP1, QUIET)\n-ENTRY_BINARY (vzip2_mf8, mf8, mf8, mf8, UNSPEC_ZIP2, QUIET)\n-ENTRY_BINARY (vzip2q_mf8, mf8q, mf8q, mf8q, UNSPEC_ZIP2, 
QUIET)\n-ENTRY_BINARY (vzip_mf8, mf8x2, mf8, mf8, UNSPEC_ZIP, QUIET)\n-ENTRY_BINARY (vzipq_mf8, mf8qx2, mf8q, mf8q, UNSPEC_ZIP, QUIET)\n-#undef REQUIRED_EXTENSIONS\ndiff --git a/gcc/config/aarch64/aarch64-sve-builtins-shapes.cc b/gcc/config/aarch64/aarch64-sve-builtins-shapes.cc\nindex b51115359eaf..8d3f1235ceb8 100644\n--- a/gcc/config/aarch64/aarch64-sve-builtins-shapes.cc\n+++ b/gcc/config/aarch64/aarch64-sve-builtins-shapes.cc\n@@ -205,6 +205,8 @@ parse_element_type (const function_instance &instance, const char *&format)\n    v<elt>  - a vector with the given element suffix\n    D<elt>  - a 64 bit neon vector\n    Q<elt>  - a 128 bit neon vector\n+   D<elt>x<n>  - an n-tuple of 64 bit neon vectors\n+   Q<elt>x<n>  - an n-tuple of 128 bit neon vectors\n \n    where <elt> has the format described above parse_element_type\n \n@@ -321,18 +323,23 @@ parse_type (const function_instance &instance, const char *&format)\n       return acle_vector_types[0][type_suffixes[suffix].vector_type];\n     }\n \n-  if (ch == 'D')\n+  if (ch == 'D' || ch == 'Q')\n     {\n       type_suffix_index suffix = parse_element_type (instance, format);\n-      int neon_index = type_suffixes[suffix].neon64_type;\n-      return aarch64_simd_types_trees[neon_index].itype;\n-    }\n-\n-  if (ch == 'Q')\n-    {\n-      type_suffix_index suffix = parse_element_type (instance, format);\n-      int neon_index = type_suffixes[suffix].neon128_type;\n-      return aarch64_simd_types_trees[neon_index].itype;\n+      aarch64_simd_type neon_index = ch == 'D'\n+\t\t\t\t       ? type_suffixes[suffix].neon64_type\n+\t\t\t\t       : type_suffixes[suffix].neon128_type;\n+      unsigned int num_vectors = 1;\n+      if (format[0] == 'x')\n+\t{\n+\t  int ch = format[1];\n+\t  format += 2;\n+\t  gcc_assert (IN_RANGE (ch, '1', '4'));\n+\t  num_vectors = ch - '0';\n+\t}\n+      return num_vectors == 1\n+\t       ? aarch64_simd_types_trees[neon_index].itype\n+\t       : aarch64_simd_tuple_types[neon_index][num_vectors - 2];\n     }\n \n   gcc_unreachable ();\ndiff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h\nindex ec2383d870a6..25d4526befaa 100644\n--- a/gcc/config/aarch64/arm_neon.h\n+++ b/gcc/config/aarch64/arm_neon.h\n@@ -8175,368 +8175,6 @@ vcvtpq_u64_f64 (float64x2_t __a)\n   return __builtin_aarch64_lceiluv2dfv2di_us (__a);\n }\n \n-/* vext  */\n-\n-__extension__ extern __inline float16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_f16 (float16x4_t __a, float16x4_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a,\n-\t\t\t    (uint16x4_t) {4 - __c, 5 - __c, 6 - __c, 7 - __c});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-\t\t\t    (uint16x4_t) {__c, __c + 1, __c + 2, __c + 3});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_f32 (float32x2_t __a, float32x2_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint32x2_t) {2-__c, 3-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {__c, __c+1});\n-#endif\n-}\n-\n-__extension__ extern __inline float64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_f64 (float64x1_t __a, float64x1_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-  /* The only possible index to the assembler instruction returns element 0.  
*/\n-  return __a;\n-}\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_p8 (poly8x8_t __a, poly8x8_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint8x8_t)\n-      {8-__c, 9-__c, 10-__c, 11-__c, 12-__c, 13-__c, 14-__c, 15-__c});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x8_t) {__c, __c+1, __c+2, __c+3, __c+4, __c+5, __c+6, __c+7});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_p16 (poly16x4_t __a, poly16x4_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a,\n-      (uint16x4_t) {4-__c, 5-__c, 6-__c, 7-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {__c, __c+1, __c+2, __c+3});\n-#endif\n-}\n-\n-__extension__ extern __inline poly64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_p64 (poly64x1_t __a, poly64x1_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-  /* The only possible index to the assembler instruction returns element 0.  */\n-  return __a;\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_s8 (int8x8_t __a, int8x8_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint8x8_t)\n-      {8-__c, 9-__c, 10-__c, 11-__c, 12-__c, 13-__c, 14-__c, 15-__c});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x8_t) {__c, __c+1, __c+2, __c+3, __c+4, __c+5, __c+6, __c+7});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_s16 (int16x4_t __a, int16x4_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a,\n-      (uint16x4_t) {4-__c, 5-__c, 6-__c, 7-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {__c, __c+1, __c+2, __c+3});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_s32 (int32x2_t __a, int32x2_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint32x2_t) {2-__c, 3-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {__c, __c+1});\n-#endif\n-}\n-\n-__extension__ extern __inline int64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_s64 (int64x1_t __a, int64x1_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-  /* The only possible index to the assembler instruction returns element 0.  
*/\n-  return __a;\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_u8 (uint8x8_t __a, uint8x8_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint8x8_t)\n-      {8-__c, 9-__c, 10-__c, 11-__c, 12-__c, 13-__c, 14-__c, 15-__c});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x8_t) {__c, __c+1, __c+2, __c+3, __c+4, __c+5, __c+6, __c+7});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_u16 (uint16x4_t __a, uint16x4_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a,\n-      (uint16x4_t) {4-__c, 5-__c, 6-__c, 7-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {__c, __c+1, __c+2, __c+3});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_u32 (uint32x2_t __a, uint32x2_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint32x2_t) {2-__c, 3-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {__c, __c+1});\n-#endif\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vext_u64 (uint64x1_t __a, uint64x1_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-  /* The only possible index to the assembler instruction returns element 0.  */\n-  return __a;\n-}\n-\n-__extension__ extern __inline float16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_f16 (float16x8_t __a, float16x8_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a,\n-\t\t\t    (uint16x8_t) {8 - __c, 9 - __c, 10 - __c, 11 - __c,\n-\t\t\t\t\t  12 - __c, 13 - __c, 14 - __c,\n-\t\t\t\t\t  15 - __c});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-\t\t\t    (uint16x8_t) {__c, __c + 1, __c + 2, __c + 3,\n-\t\t\t\t\t  __c + 4, __c + 5, __c + 6, __c + 7});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_f32 (float32x4_t __a, float32x4_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a,\n-      (uint32x4_t) {4-__c, 5-__c, 6-__c, 7-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {__c, __c+1, __c+2, __c+3});\n-#endif\n-}\n-\n-__extension__ extern __inline float64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_f64 (float64x2_t __a, float64x2_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint64x2_t) {2-__c, 3-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {__c, __c+1});\n-#endif\n-}\n-\n-__extension__ extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_p8 (poly8x16_t __a, poly8x16_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint8x16_t)\n-      {16-__c, 17-__c, 18-__c, 19-__c, 20-__c, 21-__c, 22-__c, 23-__c,\n-       24-__c, 25-__c, 26-__c, 27-__c, 28-__c, 29-__c, 30-__c, 
31-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {__c, __c+1, __c+2, __c+3, __c+4, __c+5, __c+6, __c+7,\n-       __c+8, __c+9, __c+10, __c+11, __c+12, __c+13, __c+14, __c+15});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_p16 (poly16x8_t __a, poly16x8_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint16x8_t)\n-      {8-__c, 9-__c, 10-__c, 11-__c, 12-__c, 13-__c, 14-__c, 15-__c});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint16x8_t) {__c, __c+1, __c+2, __c+3, __c+4, __c+5, __c+6, __c+7});\n-#endif\n-}\n-\n-__extension__ extern __inline poly64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_p64 (poly64x2_t __a, poly64x2_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint64x2_t) {2-__c, 3-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {__c, __c+1});\n-#endif\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_s8 (int8x16_t __a, int8x16_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint8x16_t)\n-      {16-__c, 17-__c, 18-__c, 19-__c, 20-__c, 21-__c, 22-__c, 23-__c,\n-       24-__c, 25-__c, 26-__c, 27-__c, 28-__c, 29-__c, 30-__c, 31-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {__c, __c+1, __c+2, __c+3, __c+4, __c+5, __c+6, __c+7,\n-       __c+8, __c+9, __c+10, __c+11, __c+12, __c+13, __c+14, __c+15});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_s16 (int16x8_t __a, int16x8_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint16x8_t)\n-      {8-__c, 9-__c, 10-__c, 11-__c, 12-__c, 13-__c, 14-__c, 15-__c});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint16x8_t) {__c, __c+1, __c+2, __c+3, __c+4, __c+5, __c+6, __c+7});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_s32 (int32x4_t __a, int32x4_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a,\n-      (uint32x4_t) {4-__c, 5-__c, 6-__c, 7-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {__c, __c+1, __c+2, __c+3});\n-#endif\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_s64 (int64x2_t __a, int64x2_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint64x2_t) {2-__c, 3-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {__c, __c+1});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_u8 (uint8x16_t __a, uint8x16_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint8x16_t)\n-      {16-__c, 17-__c, 18-__c, 19-__c, 20-__c, 21-__c, 22-__c, 23-__c,\n-       24-__c, 25-__c, 26-__c, 27-__c, 28-__c, 29-__c, 30-__c, 
31-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {__c, __c+1, __c+2, __c+3, __c+4, __c+5, __c+6, __c+7,\n-       __c+8, __c+9, __c+10, __c+11, __c+12, __c+13, __c+14, __c+15});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_u16 (uint16x8_t __a, uint16x8_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint16x8_t)\n-      {8-__c, 9-__c, 10-__c, 11-__c, 12-__c, 13-__c, 14-__c, 15-__c});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint16x8_t) {__c, __c+1, __c+2, __c+3, __c+4, __c+5, __c+6, __c+7});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_u32 (uint32x4_t __a, uint32x4_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a,\n-      (uint32x4_t) {4-__c, 5-__c, 6-__c, 7-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {__c, __c+1, __c+2, __c+3});\n-#endif\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vextq_u64 (uint64x2_t __a, uint64x2_t __b, __const int __c)\n-{\n-  __AARCH64_LANE_CHECK (__a, __c);\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__b, __a, (uint64x2_t) {2-__c, 3-__c});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {__c, __c+1});\n-#endif\n-}\n-\n /* vfma  */\n \n __extension__ extern __inline float64x1_t\n@@ -16300,354 +15938,76 @@ vrecpxd_f64 (float64_t __a)\n   return __builtin_aarch64_frecpxdf (__a);\n }\n \n+/* vrnd  */\n \n-/* vrev  */\n-\n-__extension__ extern __inline poly8x8_t\n+__extension__ extern __inline float32x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev16_p8 (poly8x8_t __a)\n+vrnd_f32 (float32x2_t __a)\n {\n-  return __builtin_shuffle (__a, (uint8x8_t) { 1, 0, 3, 2, 5, 4, 7, 6 });\n+  return __builtin_aarch64_btruncv2sf (__a);\n }\n \n-__extension__ extern __inline int8x8_t\n+__extension__ extern __inline float64x1_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev16_s8 (int8x8_t __a)\n+vrnd_f64 (float64x1_t __a)\n {\n-  return __builtin_shuffle (__a, (uint8x8_t) { 1, 0, 3, 2, 5, 4, 7, 6 });\n+  return vset_lane_f64 (__builtin_trunc (vget_lane_f64 (__a, 0)), __a, 0);\n }\n \n-__extension__ extern __inline uint8x8_t\n+__extension__ extern __inline float32x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev16_u8 (uint8x8_t __a)\n+vrndq_f32 (float32x4_t __a)\n {\n-  return __builtin_shuffle (__a, (uint8x8_t) { 1, 0, 3, 2, 5, 4, 7, 6 });\n+  return __builtin_aarch64_btruncv4sf (__a);\n }\n \n-__extension__ extern __inline poly8x16_t\n+__extension__ extern __inline float64x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev16q_p8 (poly8x16_t __a)\n+vrndq_f64 (float64x2_t __a)\n {\n-  return __builtin_shuffle (__a,\n-      (uint8x16_t) { 1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14 });\n+  return __builtin_aarch64_btruncv2df (__a);\n }\n \n-__extension__ extern __inline int8x16_t\n+/* vrnda  */\n+\n+__extension__ extern __inline float32x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev16q_s8 (int8x16_t __a)\n+vrnda_f32 (float32x2_t __a)\n {\n-  return __builtin_shuffle (__a,\n-      (uint8x16_t) { 1, 0, 3, 2, 5, 4, 7, 6, 9, 
8, 11, 10, 13, 12, 15, 14 });\n+  return __builtin_aarch64_roundv2sf (__a);\n }\n \n-__extension__ extern __inline uint8x16_t\n+__extension__ extern __inline float64x1_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev16q_u8 (uint8x16_t __a)\n+vrnda_f64 (float64x1_t __a)\n {\n-  return __builtin_shuffle (__a,\n-      (uint8x16_t) { 1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14 });\n+  return vset_lane_f64 (__builtin_round (vget_lane_f64 (__a, 0)), __a, 0);\n }\n \n-__extension__ extern __inline poly8x8_t\n+__extension__ extern __inline float32x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32_p8 (poly8x8_t __a)\n+vrndaq_f32 (float32x4_t __a)\n {\n-  return __builtin_shuffle (__a, (uint8x8_t) { 3, 2, 1, 0, 7, 6, 5, 4 });\n+  return __builtin_aarch64_roundv4sf (__a);\n }\n \n-__extension__ extern __inline poly16x4_t\n+__extension__ extern __inline float64x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32_p16 (poly16x4_t __a)\n+vrndaq_f64 (float64x2_t __a)\n {\n-  return __builtin_shuffle (__a, (uint16x4_t) { 1, 0, 3, 2 });\n+  return __builtin_aarch64_roundv2df (__a);\n }\n \n-__extension__ extern __inline int8x8_t\n+/* vrndi  */\n+\n+__extension__ extern __inline float32x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32_s8 (int8x8_t __a)\n+vrndi_f32 (float32x2_t __a)\n {\n-  return __builtin_shuffle (__a, (uint8x8_t) { 3, 2, 1, 0, 7, 6, 5, 4 });\n+  return __builtin_aarch64_nearbyintv2sf (__a);\n }\n \n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32_s16 (int16x4_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x4_t) { 1, 0, 3, 2 });\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32_u8 (uint8x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint8x8_t) { 3, 2, 1, 0, 7, 6, 5, 4 });\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32_u16 (uint16x4_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x4_t) { 1, 0, 3, 2 });\n-}\n-\n-__extension__ extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32q_p8 (poly8x16_t __a)\n-{\n-  return __builtin_shuffle (__a,\n-      (uint8x16_t) { 3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12 });\n-}\n-\n-__extension__ extern __inline poly16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32q_p16 (poly16x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x8_t) { 1, 0, 3, 2, 5, 4, 7, 6 });\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32q_s8 (int8x16_t __a)\n-{\n-  return __builtin_shuffle (__a,\n-      (uint8x16_t) { 3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12 });\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32q_s16 (int16x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x8_t) { 1, 0, 3, 2, 5, 4, 7, 6 });\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32q_u8 (uint8x16_t __a)\n-{\n-  return __builtin_shuffle (__a,\n-      (uint8x16_t) { 3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12 });\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ 
((__always_inline__, __gnu_inline__, __artificial__))\n-vrev32q_u16 (uint16x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x8_t) { 1, 0, 3, 2, 5, 4, 7, 6 });\n-}\n-\n-__extension__ extern __inline float16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_f16 (float16x4_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x4_t) { 3, 2, 1, 0 });\n-}\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_f32 (float32x2_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint32x2_t) { 1, 0 });\n-}\n-\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_p8 (poly8x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint8x8_t) { 7, 6, 5, 4, 3, 2, 1, 0 });\n-}\n-\n-__extension__ extern __inline poly16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_p16 (poly16x4_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x4_t) { 3, 2, 1, 0 });\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_s8 (int8x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint8x8_t) { 7, 6, 5, 4, 3, 2, 1, 0 });\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_s16 (int16x4_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x4_t) { 3, 2, 1, 0 });\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_s32 (int32x2_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint32x2_t) { 1, 0 });\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_u8 (uint8x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint8x8_t) { 7, 6, 5, 4, 3, 2, 1, 0 });\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_u16 (uint16x4_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x4_t) { 3, 2, 1, 0 });\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64_u32 (uint32x2_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint32x2_t) { 1, 0 });\n-}\n-\n-__extension__ extern __inline float16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_f16 (float16x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x8_t) { 3, 2, 1, 0, 7, 6, 5, 4 });\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_f32 (float32x4_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint32x4_t) { 1, 0, 3, 2 });\n-}\n-\n-__extension__ extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_p8 (poly8x16_t __a)\n-{\n-  return __builtin_shuffle (__a,\n-      (uint8x16_t) { 7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 });\n-}\n-\n-__extension__ extern __inline poly16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_p16 (poly16x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x8_t) { 3, 2, 1, 0, 7, 6, 5, 4 });\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_s8 (int8x16_t __a)\n-{\n-  return __builtin_shuffle (__a,\n-      (uint8x16_t) { 7, 6, 5, 4, 3, 2, 1, 0, 15, 
14, 13, 12, 11, 10, 9, 8 });\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_s16 (int16x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x8_t) { 3, 2, 1, 0, 7, 6, 5, 4 });\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_s32 (int32x4_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint32x4_t) { 1, 0, 3, 2 });\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_u8 (uint8x16_t __a)\n-{\n-  return __builtin_shuffle (__a,\n-      (uint8x16_t) { 7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 });\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_u16 (uint16x8_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint16x8_t) { 3, 2, 1, 0, 7, 6, 5, 4 });\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrev64q_u32 (uint32x4_t __a)\n-{\n-  return __builtin_shuffle (__a, (uint32x4_t) { 1, 0, 3, 2 });\n-}\n-\n-/* vrnd  */\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrnd_f32 (float32x2_t __a)\n-{\n-  return __builtin_aarch64_btruncv2sf (__a);\n-}\n-\n-__extension__ extern __inline float64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrnd_f64 (float64x1_t __a)\n-{\n-  return vset_lane_f64 (__builtin_trunc (vget_lane_f64 (__a, 0)), __a, 0);\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrndq_f32 (float32x4_t __a)\n-{\n-  return __builtin_aarch64_btruncv4sf (__a);\n-}\n-\n-__extension__ extern __inline float64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrndq_f64 (float64x2_t __a)\n-{\n-  return __builtin_aarch64_btruncv2df (__a);\n-}\n-\n-/* vrnda  */\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrnda_f32 (float32x2_t __a)\n-{\n-  return __builtin_aarch64_roundv2sf (__a);\n-}\n-\n-__extension__ extern __inline float64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrnda_f64 (float64x1_t __a)\n-{\n-  return vset_lane_f64 (__builtin_round (vget_lane_f64 (__a, 0)), __a, 0);\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrndaq_f32 (float32x4_t __a)\n-{\n-  return __builtin_aarch64_roundv4sf (__a);\n-}\n-\n-__extension__ extern __inline float64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrndaq_f64 (float64x2_t __a)\n-{\n-  return __builtin_aarch64_roundv2df (__a);\n-}\n-\n-/* vrndi  */\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vrndi_f32 (float32x2_t __a)\n-{\n-  return __builtin_aarch64_nearbyintv2sf (__a);\n-}\n-\n-__extension__ extern __inline float64x1_t\n+__extension__ extern __inline float64x1_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n vrndi_f64 (float64x1_t __a)\n {\n@@ -20033,2037 +19393,220 @@ vtbx4_p8 (poly8x8_t __r, poly8x8x4_t __tab, uint8x8_t __idx)\n   return __builtin_aarch64_qtbx2v8qi_pppu (__r, __temp, __idx);\n }\n \n-/* vtrn */\n+/* vtst */\n \n-__extension__ extern __inline 
float16x4_t\n+__extension__ extern __inline uint8x8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_f16 (float16x4_t __a, float16x4_t __b)\n+vtst_s8 (int8x8_t __a, int8x8_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {5, 1, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 4, 2, 6});\n-#endif\n+  return (uint8x8_t) ((__a & __b) != 0);\n }\n \n-__extension__ extern __inline float32x2_t\n+__extension__ extern __inline uint16x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_f32 (float32x2_t __a, float32x2_t __b)\n+vtst_s16 (int16x4_t __a, int16x4_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {0, 2});\n-#endif\n+  return (uint16x4_t) ((__a & __b) != 0);\n }\n \n-__extension__ extern __inline poly8x8_t\n+__extension__ extern __inline uint32x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_p8 (poly8x8_t __a, poly8x8_t __b)\n+vtst_s32 (int32x2_t __a, int32x2_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {9, 1, 11, 3, 13, 5, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {0, 8, 2, 10, 4, 12, 6, 14});\n-#endif\n+  return (uint32x2_t) ((__a & __b) != 0);\n }\n \n-__extension__ extern __inline poly16x4_t\n+__extension__ extern __inline uint64x1_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_p16 (poly16x4_t __a, poly16x4_t __b)\n+vtst_s64 (int64x1_t __a, int64x1_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {5, 1, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 4, 2, 6});\n-#endif\n+  return (uint64x1_t) ((__a & __b) != __AARCH64_INT64_C (0));\n }\n \n-__extension__ extern __inline int8x8_t\n+__extension__ extern __inline uint8x8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_s8 (int8x8_t __a, int8x8_t __b)\n+vtst_u8 (uint8x8_t __a, uint8x8_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {9, 1, 11, 3, 13, 5, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {0, 8, 2, 10, 4, 12, 6, 14});\n-#endif\n+  return ((__a & __b) != 0);\n }\n \n-__extension__ extern __inline int16x4_t\n+__extension__ extern __inline uint16x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_s16 (int16x4_t __a, int16x4_t __b)\n+vtst_u16 (uint16x4_t __a, uint16x4_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {5, 1, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 4, 2, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {9, 1, 11, 3, 13, 5, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {0, 8, 2, 10, 4, 12, 6, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline 
uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {5, 1, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 4, 2, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_f16 (float16x8_t __a, float16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {9, 1, 11, 3, 13, 5, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 8, 2, 10, 4, 12, 6, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_f32 (float32x4_t __a, float32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {5, 1, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {0, 4, 2, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline float64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_f64 (float64x2_t __a, float64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_p8 (poly8x16_t __a, poly8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {17, 1, 19, 3, 21, 5, 23, 7, 25, 9, 27, 11, 29, 13, 31, 15});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {0, 16, 2, 18, 4, 20, 6, 22, 8, 24, 10, 26, 12, 28, 14, 30});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_p16 (poly16x8_t __a, poly16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {9, 1, 11, 3, 13, 5, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 8, 2, 10, 4, 12, 6, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {17, 1, 19, 3, 21, 5, 23, 7, 25, 9, 27, 11, 29, 13, 31, 15});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {0, 16, 2, 18, 4, 20, 6, 22, 8, 24, 10, 26, 12, 28, 14, 30});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {9, 1, 11, 3, 13, 5, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 8, 2, 10, 4, 12, 6, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  
return __builtin_shuffle (__a, __b, (uint32x4_t) {5, 1, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {0, 4, 2, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {17, 1, 19, 3, 21, 5, 23, 7, 25, 9, 27, 11, 29, 13, 31, 15});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {0, 16, 2, 18, 4, 20, 6, 22, 8, 24, 10, 26, 12, 28, 14, 30});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {9, 1, 11, 3, 13, 5, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 8, 2, 10, 4, 12, 6, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {5, 1, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {0, 4, 2, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline poly64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_p64 (poly64x2_t __a, poly64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn1q_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_f16 (float16x4_t __a, float16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 0, 6, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {1, 5, 3, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_f32 (float32x2_t __a, float32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_p8 (poly8x8_t __a, poly8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {8, 0, 10, 2, 12, 4, 14, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {1, 9, 3, 11, 5, 13, 7, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_p16 (poly16x4_t __a, poly16x4_t 
__b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 0, 6, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {1, 5, 3, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {8, 0, 10, 2, 12, 4, 14, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {1, 9, 3, 11, 5, 13, 7, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 0, 6, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {1, 5, 3, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {8, 0, 10, 2, 12, 4, 14, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {1, 9, 3, 11, 5, 13, 7, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 0, 6, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {1, 5, 3, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_f16 (float16x8_t __a, float16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 0, 10, 2, 12, 4, 14, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {1, 9, 3, 11, 5, 13, 7, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_f32 (float32x4_t __a, float32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {4, 0, 6, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {1, 5, 3, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline float64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_f64 (float64x2_t __a, float64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_p8 (poly8x16_t __a, poly8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return 
__builtin_shuffle (__a, __b,\n-      (uint8x16_t) {16, 0, 18, 2, 20, 4, 22, 6, 24, 8, 26, 10, 28, 12, 30, 14});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {1, 17, 3, 19, 5, 21, 7, 23, 9, 25, 11, 27, 13, 29, 15, 31});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_p16 (poly16x8_t __a, poly16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 0, 10, 2, 12, 4, 14, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {1, 9, 3, 11, 5, 13, 7, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {16, 0, 18, 2, 20, 4, 22, 6, 24, 8, 26, 10, 28, 12, 30, 14});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {1, 17, 3, 19, 5, 21, 7, 23, 9, 25, 11, 27, 13, 29, 15, 31});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 0, 10, 2, 12, 4, 14, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {1, 9, 3, 11, 5, 13, 7, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {4, 0, 6, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {1, 5, 3, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {16, 0, 18, 2, 20, 4, 22, 6, 24, 8, 26, 10, 28, 12, 30, 14});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {1, 17, 3, 19, 5, 21, 7, 23, 9, 25, 11, 27, 13, 29, 15, 31});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 0, 10, 2, 12, 4, 14, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {1, 9, 3, 11, 5, 13, 7, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {4, 0, 6, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {1, 5, 3, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {2, 
0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {1, 3});\n-#endif\n-}\n-\n-\n-__extension__ extern __inline poly64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn2q_p64 (poly64x2_t __a, poly64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x4x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_f16 (float16x4_t __a, float16x4_t __b)\n-{\n-  return (float16x4x2_t) {vtrn1_f16 (__a, __b), vtrn2_f16 (__a, __b)};\n-}\n-\n-__extension__ extern __inline float32x2x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_f32 (float32x2_t __a, float32x2_t __b)\n-{\n-  return (float32x2x2_t) {vtrn1_f32 (__a, __b), vtrn2_f32 (__a, __b)};\n-}\n-\n-__extension__ extern __inline poly8x8x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_p8 (poly8x8_t __a, poly8x8_t __b)\n-{\n-  return (poly8x8x2_t) {vtrn1_p8 (__a, __b), vtrn2_p8 (__a, __b)};\n-}\n-\n-__extension__ extern __inline poly16x4x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_p16 (poly16x4_t __a, poly16x4_t __b)\n-{\n-  return (poly16x4x2_t) {vtrn1_p16 (__a, __b), vtrn2_p16 (__a, __b)};\n-}\n-\n-__extension__ extern __inline int8x8x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-  return (int8x8x2_t) {vtrn1_s8 (__a, __b), vtrn2_s8 (__a, __b)};\n-}\n-\n-__extension__ extern __inline int16x4x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-  return (int16x4x2_t) {vtrn1_s16 (__a, __b), vtrn2_s16 (__a, __b)};\n-}\n-\n-__extension__ extern __inline int32x2x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-  return (int32x2x2_t) {vtrn1_s32 (__a, __b), vtrn2_s32 (__a, __b)};\n-}\n-\n-__extension__ extern __inline uint8x8x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-  return (uint8x8x2_t) {vtrn1_u8 (__a, __b), vtrn2_u8 (__a, __b)};\n-}\n-\n-__extension__ extern __inline uint16x4x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-  return (uint16x4x2_t) {vtrn1_u16 (__a, __b), vtrn2_u16 (__a, __b)};\n-}\n-\n-__extension__ extern __inline uint32x2x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrn_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-  return (uint32x2x2_t) {vtrn1_u32 (__a, __b), vtrn2_u32 (__a, __b)};\n-}\n-\n-__extension__ extern __inline float16x8x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_f16 (float16x8_t __a, float16x8_t __b)\n-{\n-  return (float16x8x2_t) {vtrn1q_f16 (__a, __b), vtrn2q_f16 (__a, __b)};\n-}\n-\n-__extension__ extern __inline float32x4x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_f32 (float32x4_t __a, float32x4_t __b)\n-{\n-  return (float32x4x2_t) {vtrn1q_f32 (__a, __b), vtrn2q_f32 (__a, __b)};\n-}\n-\n-__extension__ extern __inline poly8x16x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_p8 (poly8x16_t __a, poly8x16_t __b)\n-{\n-  return (poly8x16x2_t) {vtrn1q_p8 (__a, __b), vtrn2q_p8 (__a, 
__b)};\n-}\n-\n-__extension__ extern __inline poly16x8x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_p16 (poly16x8_t __a, poly16x8_t __b)\n-{\n-  return (poly16x8x2_t) {vtrn1q_p16 (__a, __b), vtrn2q_p16 (__a, __b)};\n-}\n-\n-__extension__ extern __inline int8x16x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-  return (int8x16x2_t) {vtrn1q_s8 (__a, __b), vtrn2q_s8 (__a, __b)};\n-}\n-\n-__extension__ extern __inline int16x8x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-  return (int16x8x2_t) {vtrn1q_s16 (__a, __b), vtrn2q_s16 (__a, __b)};\n-}\n-\n-__extension__ extern __inline int32x4x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-  return (int32x4x2_t) {vtrn1q_s32 (__a, __b), vtrn2q_s32 (__a, __b)};\n-}\n-\n-__extension__ extern __inline uint8x16x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-  return (uint8x16x2_t) {vtrn1q_u8 (__a, __b), vtrn2q_u8 (__a, __b)};\n-}\n-\n-__extension__ extern __inline uint16x8x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-  return (uint16x8x2_t) {vtrn1q_u16 (__a, __b), vtrn2q_u16 (__a, __b)};\n-}\n-\n-__extension__ extern __inline uint32x4x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtrnq_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-  return (uint32x4x2_t) {vtrn1q_u32 (__a, __b), vtrn2q_u32 (__a, __b)};\n-}\n-\n-/* vtst */\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtst_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-  return (uint8x8_t) ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtst_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-  return (uint16x4_t) ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtst_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-  return (uint32x2_t) ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtst_s64 (int64x1_t __a, int64x1_t __b)\n-{\n-  return (uint64x1_t) ((__a & __b) != __AARCH64_INT64_C (0));\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtst_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-  return ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtst_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-  return ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtst_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-  return ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtst_u64 (uint64x1_t __a, uint64x1_t __b)\n-{\n-  return ((__a & __b) != __AARCH64_UINT64_C (0));\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstq_s8 (int8x16_t __a, int8x16_t 
__b)\n-{\n-  return (uint8x16_t) ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstq_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-  return (uint16x8_t) ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstq_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-  return (uint32x4_t) ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstq_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-  return (uint64x2_t) ((__a & __b) != __AARCH64_INT64_C (0));\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstq_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-  return ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstq_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-  return ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstq_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-  return ((__a & __b) != 0);\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstq_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-  return ((__a & __b) != __AARCH64_UINT64_C (0));\n-}\n-\n-__extension__ extern __inline uint64_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstd_s64 (int64_t __a, int64_t __b)\n-{\n-  return (__a & __b) ? -1ll : 0ll;\n-}\n-\n-__extension__ extern __inline uint64_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vtstd_u64 (uint64_t __a, uint64_t __b)\n-{\n-  return (__a & __b) ? 
-1ll : 0ll;\n-}\n-\n-/* vuqadd */\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqadd_s8 (int8x8_t __a, uint8x8_t __b)\n-{\n-  return __builtin_aarch64_suqaddv8qi_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqadd_s16 (int16x4_t __a, uint16x4_t __b)\n-{\n-  return __builtin_aarch64_suqaddv4hi_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqadd_s32 (int32x2_t __a, uint32x2_t __b)\n-{\n-  return __builtin_aarch64_suqaddv2si_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int64x1_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqadd_s64 (int64x1_t __a, uint64x1_t __b)\n-{\n-  return (int64x1_t) {__builtin_aarch64_suqadddi_ssu (__a[0], __b[0])};\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqaddq_s8 (int8x16_t __a, uint8x16_t __b)\n-{\n-  return __builtin_aarch64_suqaddv16qi_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqaddq_s16 (int16x8_t __a, uint16x8_t __b)\n-{\n-  return __builtin_aarch64_suqaddv8hi_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqaddq_s32 (int32x4_t __a, uint32x4_t __b)\n-{\n-  return __builtin_aarch64_suqaddv4si_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqaddq_s64 (int64x2_t __a, uint64x2_t __b)\n-{\n-  return __builtin_aarch64_suqaddv2di_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqaddb_s8 (int8_t __a, uint8_t __b)\n-{\n-  return __builtin_aarch64_suqaddqi_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqaddh_s16 (int16_t __a, uint16_t __b)\n-{\n-  return __builtin_aarch64_suqaddhi_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int32_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqadds_s32 (int32_t __a, uint32_t __b)\n-{\n-  return __builtin_aarch64_suqaddsi_ssu (__a,  __b);\n-}\n-\n-__extension__ extern __inline int64_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuqaddd_s64 (int64_t __a, uint64_t __b)\n-{\n-  return __builtin_aarch64_suqadddi_ssu (__a,  __b);\n-}\n-\n-#define __DEFINTERLEAVE(op, rettype, intype, funcsuffix, Q) \t\t\\\n-  __extension__ extern __inline rettype\t\t\t\t\t\\\n-  __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) \\\n-  v ## op ## Q ## _ ## funcsuffix (intype a, intype b)\t\t\t\\\n-  {\t\t\t\t\t\t\t\t\t\\\n-    return (rettype) {v ## op ## 1 ## Q ## _ ## funcsuffix (a, b),\t\\\n-\t\t      v ## op ## 2 ## Q ## _ ## funcsuffix (a, b)};\t\\\n-  }\n-\n-#define __INTERLEAVE_LIST(op)\t\t\t\t\t\\\n-  __DEFINTERLEAVE (op, float16x4x2_t, float16x4_t, f16,)\t\\\n-  __DEFINTERLEAVE (op, float32x2x2_t, float32x2_t, f32,)\t\\\n-  __DEFINTERLEAVE (op, poly8x8x2_t, poly8x8_t, p8,)\t\t\\\n-  __DEFINTERLEAVE (op, poly16x4x2_t, poly16x4_t, p16,)\t\t\\\n-  __DEFINTERLEAVE (op, int8x8x2_t, int8x8_t, s8,)\t\t\\\n-  __DEFINTERLEAVE (op, int16x4x2_t, int16x4_t, 
s16,)\t\t\\\n-  __DEFINTERLEAVE (op, int32x2x2_t, int32x2_t, s32,)\t\t\\\n-  __DEFINTERLEAVE (op, uint8x8x2_t, uint8x8_t, u8,)\t\t\\\n-  __DEFINTERLEAVE (op, uint16x4x2_t, uint16x4_t, u16,)\t\t\\\n-  __DEFINTERLEAVE (op, uint32x2x2_t, uint32x2_t, u32,)\t\t\\\n-  __DEFINTERLEAVE (op, float16x8x2_t, float16x8_t, f16, q)\t\\\n-  __DEFINTERLEAVE (op, float32x4x2_t, float32x4_t, f32, q)\t\\\n-  __DEFINTERLEAVE (op, poly8x16x2_t, poly8x16_t, p8, q)\t\t\\\n-  __DEFINTERLEAVE (op, poly16x8x2_t, poly16x8_t, p16, q)\t\\\n-  __DEFINTERLEAVE (op, int8x16x2_t, int8x16_t, s8, q)\t\t\\\n-  __DEFINTERLEAVE (op, int16x8x2_t, int16x8_t, s16, q)\t\t\\\n-  __DEFINTERLEAVE (op, int32x4x2_t, int32x4_t, s32, q)\t\t\\\n-  __DEFINTERLEAVE (op, uint8x16x2_t, uint8x16_t, u8, q)\t\t\\\n-  __DEFINTERLEAVE (op, uint16x8x2_t, uint16x8_t, u16, q)\t\\\n-  __DEFINTERLEAVE (op, uint32x4x2_t, uint32x4_t, u32, q)\n-\n-/* vuzp */\n-\n-__extension__ extern __inline float16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_f16 (float16x4_t __a, float16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {5, 7, 1, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 2, 4, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_f32 (float32x2_t __a, float32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_p8 (poly8x8_t __a, poly8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {9, 11, 13, 15, 1, 3, 5, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {0, 2, 4, 6, 8, 10, 12, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_p16 (poly16x4_t __a, poly16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {5, 7, 1, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 2, 4, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {9, 11, 13, 15, 1, 3, 5, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {0, 2, 4, 6, 8, 10, 12, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {5, 7, 1, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 2, 4, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle 
(__a, __b, (uint8x8_t) {9, 11, 13, 15, 1, 3, 5, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {0, 2, 4, 6, 8, 10, 12, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {5, 7, 1, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 2, 4, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_f16 (float16x8_t __a, float16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {9, 11, 13, 15, 1, 3, 5, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 2, 4, 6, 8, 10, 12, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_f32 (float32x4_t __a, float32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {5, 7, 1, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {0, 2, 4, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline float64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_f64 (float64x2_t __a, float64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_p8 (poly8x16_t __a, poly8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {17, 19, 21, 23, 25, 27, 29, 31, 1, 3, 5, 7, 9, 11, 13, 15});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_p16 (poly16x8_t __a, poly16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {9, 11, 13, 15, 1, 3, 5, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 2, 4, 6, 8, 10, 12, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {17, 19, 21, 23, 25, 27, 29, 31, 1, 3, 5, 7, 9, 11, 13, 15});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {9, 11, 13, 15, 1, 3, 5, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 2, 4, 6, 8, 10, 12, 
14});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {5, 7, 1, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {0, 2, 4, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {17, 19, 21, 23, 25, 27, 29, 31, 1, 3, 5, 7, 9, 11, 13, 15});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {9, 11, 13, 15, 1, 3, 5, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 2, 4, 6, 8, 10, 12, 14});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {5, 7, 1, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {0, 2, 4, 6});\n-#endif\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline poly64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp1q_p64 (poly64x2_t __a, poly64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_f16 (float16x4_t __a, float16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 6, 0, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {1, 3, 5, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_f32 (float32x2_t __a, float32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_p8 (poly8x8_t __a, poly8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {8, 10, 12, 14, 0, 2, 4, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {1, 3, 
5, 7, 9, 11, 13, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_p16 (poly16x4_t __a, poly16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 6, 0, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {1, 3, 5, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {8, 10, 12, 14, 0, 2, 4, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {1, 3, 5, 7, 9, 11, 13, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 6, 0, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {1, 3, 5, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {8, 10, 12, 14, 0, 2, 4, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {1, 3, 5, 7, 9, 11, 13, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 6, 0, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {1, 3, 5, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_f16 (float16x8_t __a, float16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 10, 12, 14, 0, 2, 4, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {1, 3, 5, 7, 9, 11, 13, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_f32 (float32x4_t __a, float32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {4, 6, 0, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {1, 3, 5, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline float64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_f64 (float64x2_t __a, float64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ 
extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_p8 (poly8x16_t __a, poly8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {16, 18, 20, 22, 24, 26, 28, 30, 0, 2, 4, 6, 8, 10, 12, 14});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_p16 (poly16x8_t __a, poly16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 10, 12, 14, 0, 2, 4, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {1, 3, 5, 7, 9, 11, 13, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline int8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_s8 (int8x16_t __a, int8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {16, 18, 20, 22, 24, 26, 28, 30, 0, 2, 4, 6, 8, 10, 12, 14});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-      (uint8x16_t) {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_s16 (int16x8_t __a, int16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 10, 12, 14, 0, 2, 4, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {1, 3, 5, 7, 9, 11, 13, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_s32 (int32x4_t __a, int32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {4, 6, 0, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {1, 3, 5, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline int64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_u8 (uint8x16_t __a, uint8x16_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {16, 18, 20, 22, 24, 26, 28, 30, 0, 2, 4, 6, 8, 10, 12, 14});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_u16 (uint16x8_t __a, uint16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 10, 12, 14, 0, 2, 4, 6});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {1, 3, 5, 7, 9, 11, 13, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_u32 (uint32x4_t __a, uint32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {4, 6, 0, 2});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {1, 3, 5, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline uint64x2_t\n-__attribute__ 
((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline poly64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vuzp2q_p64 (poly64x2_t __a, poly64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {1, 3});\n-#endif\n-}\n-\n-__INTERLEAVE_LIST (uzp)\n-\n-/* vzip */\n-\n-__extension__ extern __inline float16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1_f16 (float16x4_t __a, float16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {6, 2, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 4, 1, 5});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1_f32 (float32x2_t __a, float32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline poly8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1_p8 (poly8x8_t __a, poly8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {12, 4, 13, 5, 14, 6, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {0, 8, 1, 9, 2, 10, 3, 11});\n-#endif\n-}\n-\n-__extension__ extern __inline poly16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1_p16 (poly16x4_t __a, poly16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {6, 2, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 4, 1, 5});\n-#endif\n-}\n-\n-__extension__ extern __inline int8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1_s8 (int8x8_t __a, int8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {12, 4, 13, 5, 14, 6, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {0, 8, 1, 9, 2, 10, 3, 11});\n-#endif\n-}\n-\n-__extension__ extern __inline int16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1_s16 (int16x4_t __a, int16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {6, 2, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 4, 1, 5});\n-#endif\n-}\n-\n-__extension__ extern __inline int32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {12, 4, 13, 5, 14, 6, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {0, 8, 1, 9, 2, 10, 3, 11});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, 
__artificial__))\n-vzip1_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {6, 2, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {0, 4, 1, 5});\n-#endif\n-}\n-\n-__extension__ extern __inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_f16 (float16x8_t __a, float16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-\t\t\t    (uint16x8_t) {12, 4, 13, 5, 14, 6, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-\t\t\t    (uint16x8_t) {0, 8, 1, 9, 2, 10, 3, 11});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_f32 (float32x4_t __a, float32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {6, 2, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {0, 4, 1, 5});\n-#endif\n-}\n-\n-__extension__ extern __inline float64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_f64 (float64x2_t __a, float64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {0, 2});\n-#endif\n+  return ((__a & __b) != 0);\n }\n \n-__extension__ extern __inline poly8x16_t\n+__extension__ extern __inline uint32x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_p8 (poly8x16_t __a, poly8x16_t __b)\n+vtst_u32 (uint32x2_t __a, uint32x2_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {24, 8, 25, 9, 26, 10, 27, 11, 28, 12, 29, 13, 30, 14, 31, 15});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {0, 16, 1, 17, 2, 18, 3, 19, 4, 20, 5, 21, 6, 22, 7, 23});\n-#endif\n+  return ((__a & __b) != 0);\n }\n \n-__extension__ extern __inline poly16x8_t\n+__extension__ extern __inline uint64x1_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_p16 (poly16x8_t __a, poly16x8_t __b)\n+vtst_u64 (uint64x1_t __a, uint64x1_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t)\n-      {12, 4, 13, 5, 14, 6, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 8, 1, 9, 2, 10, 3, 11});\n-#endif\n+  return ((__a & __b) != __AARCH64_UINT64_C (0));\n }\n \n-__extension__ extern __inline int8x16_t\n+__extension__ extern __inline uint8x16_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_s8 (int8x16_t __a, int8x16_t __b)\n+vtstq_s8 (int8x16_t __a, int8x16_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {24, 8, 25, 9, 26, 10, 27, 11, 28, 12, 29, 13, 30, 14, 31, 15});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {0, 16, 1, 17, 2, 18, 3, 19, 4, 20, 5, 21, 6, 22, 7, 23});\n-#endif\n+  return (uint8x16_t) ((__a & __b) != 0);\n }\n \n-__extension__ extern __inline int16x8_t\n+__extension__ extern __inline uint16x8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_s16 
(int16x8_t __a, int16x8_t __b)\n+vtstq_s16 (int16x8_t __a, int16x8_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t)\n-      {12, 4, 13, 5, 14, 6, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 8, 1, 9, 2, 10, 3, 11});\n-#endif\n+  return (uint16x8_t) ((__a & __b) != 0);\n }\n \n-__extension__ extern __inline int32x4_t\n+__extension__ extern __inline uint32x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_s32 (int32x4_t __a, int32x4_t __b)\n+vtstq_s32 (int32x4_t __a, int32x4_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {6, 2, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {0, 4, 1, 5});\n-#endif\n+  return (uint32x4_t) ((__a & __b) != 0);\n }\n \n-__extension__ extern __inline int64x2_t\n+__extension__ extern __inline uint64x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_s64 (int64x2_t __a, int64x2_t __b)\n+vtstq_s64 (int64x2_t __a, int64x2_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {0, 2});\n-#endif\n+  return (uint64x2_t) ((__a & __b) != __AARCH64_INT64_C (0));\n }\n \n __extension__ extern __inline uint8x16_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_u8 (uint8x16_t __a, uint8x16_t __b)\n+vtstq_u8 (uint8x16_t __a, uint8x16_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {24, 8, 25, 9, 26, 10, 27, 11, 28, 12, 29, 13, 30, 14, 31, 15});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {0, 16, 1, 17, 2, 18, 3, 19, 4, 20, 5, 21, 6, 22, 7, 23});\n-#endif\n+  return ((__a & __b) != 0);\n }\n \n __extension__ extern __inline uint16x8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_u16 (uint16x8_t __a, uint16x8_t __b)\n+vtstq_u16 (uint16x8_t __a, uint16x8_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t)\n-      {12, 4, 13, 5, 14, 6, 15, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {0, 8, 1, 9, 2, 10, 3, 11});\n-#endif\n+  return ((__a & __b) != 0);\n }\n \n __extension__ extern __inline uint32x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_u32 (uint32x4_t __a, uint32x4_t __b)\n+vtstq_u32 (uint32x4_t __a, uint32x4_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {6, 2, 7, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {0, 4, 1, 5});\n-#endif\n+  return ((__a & __b) != 0);\n }\n \n __extension__ extern __inline uint64x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_u64 (uint64x2_t __a, uint64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline poly64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip1q_p64 (poly64x2_t __a, poly64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {3, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {0, 2});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_f16 (float16x4_t __a, float16x4_t __b)\n+vtstq_u64 
(uint64x2_t __a, uint64x2_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 0, 5, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {2, 6, 3, 7});\n-#endif\n+  return ((__a & __b) != __AARCH64_UINT64_C (0));\n }\n \n-__extension__ extern __inline float32x2_t\n+__extension__ extern __inline uint64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_f32 (float32x2_t __a, float32x2_t __b)\n+vtstd_s64 (int64_t __a, int64_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {1, 3});\n-#endif\n+  return (__a & __b) ? -1ll : 0ll;\n }\n \n-__extension__ extern __inline poly8x8_t\n+__extension__ extern __inline uint64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_p8 (poly8x8_t __a, poly8x8_t __b)\n+vtstd_u64 (uint64_t __a, uint64_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {8, 0, 9, 1, 10, 2, 11, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {4, 12, 5, 13, 6, 14, 7, 15});\n-#endif\n+  return (__a & __b) ? -1ll : 0ll;\n }\n \n-__extension__ extern __inline poly16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_p16 (poly16x4_t __a, poly16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 0, 5, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {2, 6, 3, 7});\n-#endif\n-}\n+/* vuqadd */\n \n __extension__ extern __inline int8x8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_s8 (int8x8_t __a, int8x8_t __b)\n+vuqadd_s8 (int8x8_t __a, uint8x8_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {8, 0, 9, 1, 10, 2, 11, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {4, 12, 5, 13, 6, 14, 7, 15});\n-#endif\n+  return __builtin_aarch64_suqaddv8qi_ssu (__a,  __b);\n }\n \n __extension__ extern __inline int16x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_s16 (int16x4_t __a, int16x4_t __b)\n+vuqadd_s16 (int16x4_t __a, uint16x4_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 0, 5, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {2, 6, 3, 7});\n-#endif\n+  return __builtin_aarch64_suqaddv4hi_ssu (__a,  __b);\n }\n \n __extension__ extern __inline int32x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_s32 (int32x2_t __a, int32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_u8 (uint8x8_t __a, uint8x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {8, 0, 9, 1, 10, 2, 11, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x8_t) {4, 12, 5, 13, 6, 14, 7, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline uint16x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_u16 (uint16x4_t __a, uint16x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {4, 0, 5, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x4_t) {2, 6, 3, 7});\n-#endif\n-}\n-\n-__extension__ extern 
__inline uint32x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2_u32 (uint32x2_t __a, uint32x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline float16x8_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_f16 (float16x8_t __a, float16x8_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b,\n-\t\t\t    (uint16x8_t) {8, 0, 9, 1, 10, 2, 11, 3});\n-#else\n-  return __builtin_shuffle (__a, __b,\n-\t\t\t    (uint16x8_t) {4, 12, 5, 13, 6, 14, 7, 15});\n-#endif\n-}\n-\n-__extension__ extern __inline float32x4_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_f32 (float32x4_t __a, float32x4_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {4, 0, 5, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {2, 6, 3, 7});\n-#endif\n-}\n-\n-__extension__ extern __inline float64x2_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_f64 (float64x2_t __a, float64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline poly8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_p8 (poly8x16_t __a, poly8x16_t __b)\n+vuqadd_s32 (int32x2_t __a, uint32x2_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {16, 0, 17, 1, 18, 2, 19, 3, 20, 4, 21, 5, 22, 6, 23, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {8, 24, 9, 25, 10, 26, 11, 27, 12, 28, 13, 29, 14, 30, 15, 31});\n-#endif\n+  return __builtin_aarch64_suqaddv2si_ssu (__a,  __b);\n }\n \n-__extension__ extern __inline poly16x8_t\n+__extension__ extern __inline int64x1_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_p16 (poly16x8_t __a, poly16x8_t __b)\n+vuqadd_s64 (int64x1_t __a, uint64x1_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 0, 9, 1, 10, 2, 11, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t)\n-      {4, 12, 5, 13, 6, 14, 7, 15});\n-#endif\n+  return (int64x1_t) {__builtin_aarch64_suqadddi_ssu (__a[0], __b[0])};\n }\n \n __extension__ extern __inline int8x16_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_s8 (int8x16_t __a, int8x16_t __b)\n+vuqaddq_s8 (int8x16_t __a, uint8x16_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {16, 0, 17, 1, 18, 2, 19, 3, 20, 4, 21, 5, 22, 6, 23, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {8, 24, 9, 25, 10, 26, 11, 27, 12, 28, 13, 29, 14, 30, 15, 31});\n-#endif\n+  return __builtin_aarch64_suqaddv16qi_ssu (__a,  __b);\n }\n \n __extension__ extern __inline int16x8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_s16 (int16x8_t __a, int16x8_t __b)\n+vuqaddq_s16 (int16x8_t __a, uint16x8_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 0, 9, 1, 10, 2, 11, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t)\n-      {4, 12, 5, 13, 6, 14, 7, 15});\n-#endif\n+  return __builtin_aarch64_suqaddv8hi_ssu (__a,  __b);\n }\n \n 
__extension__ extern __inline int32x4_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_s32 (int32x4_t __a, int32x4_t __b)\n+vuqaddq_s32 (int32x4_t __a, uint32x4_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {4, 0, 5, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {2, 6, 3, 7});\n-#endif\n+  return __builtin_aarch64_suqaddv4si_ssu (__a,  __b);\n }\n \n __extension__ extern __inline int64x2_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_s64 (int64x2_t __a, int64x2_t __b)\n-{\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {1, 3});\n-#endif\n-}\n-\n-__extension__ extern __inline uint8x16_t\n-__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_u8 (uint8x16_t __a, uint8x16_t __b)\n+vuqaddq_s64 (int64x2_t __a, uint64x2_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {16, 0, 17, 1, 18, 2, 19, 3, 20, 4, 21, 5, 22, 6, 23, 7});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint8x16_t)\n-      {8, 24, 9, 25, 10, 26, 11, 27, 12, 28, 13, 29, 14, 30, 15, 31});\n-#endif\n+  return __builtin_aarch64_suqaddv2di_ssu (__a,  __b);\n }\n \n-__extension__ extern __inline uint16x8_t\n+__extension__ extern __inline int8_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_u16 (uint16x8_t __a, uint16x8_t __b)\n+vuqaddb_s8 (int8_t __a, uint8_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint16x8_t) {8, 0, 9, 1, 10, 2, 11, 3});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint16x8_t)\n-      {4, 12, 5, 13, 6, 14, 7, 15});\n-#endif\n+  return __builtin_aarch64_suqaddqi_ssu (__a,  __b);\n }\n \n-__extension__ extern __inline uint32x4_t\n+__extension__ extern __inline int16_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_u32 (uint32x4_t __a, uint32x4_t __b)\n+vuqaddh_s16 (int16_t __a, uint16_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {4, 0, 5, 1});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint32x4_t) {2, 6, 3, 7});\n-#endif\n+  return __builtin_aarch64_suqaddhi_ssu (__a,  __b);\n }\n \n-__extension__ extern __inline uint64x2_t\n+__extension__ extern __inline int32_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_u64 (uint64x2_t __a, uint64x2_t __b)\n+vuqadds_s32 (int32_t __a, uint32_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (uint64x2_t) {1, 3});\n-#endif\n+  return __builtin_aarch64_suqaddsi_ssu (__a,  __b);\n }\n \n-__extension__ extern __inline poly64x2_t\n+__extension__ extern __inline int64_t\n __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))\n-vzip2q_p64 (poly64x2_t __a, poly64x2_t __b)\n+vuqaddd_s64 (int64_t __a, uint64_t __b)\n {\n-#ifdef __AARCH64EB__\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {2, 0});\n-#else\n-  return __builtin_shuffle (__a, __b, (poly64x2_t) {1, 3});\n-#endif\n+  return __builtin_aarch64_suqadddi_ssu (__a,  __b);\n }\n \n-__INTERLEAVE_LIST (zip)\n-\n-#undef __INTERLEAVE_LIST\n-#undef __DEFINTERLEAVE\n-\n-/* End of optimal implementations in approved order.  */\n-\n #pragma GCC pop_options\n \n /* ARMv8.2-A FP16 intrinsics.  
*/\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vext.c b/gcc/testsuite/gcc.target/aarch64/neon/vext.c\nnew file mode 100644\nindex 000000000000..e155a3323984\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vext.c\n@@ -0,0 +1,219 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+#define TEST_EXT(NAME, TYPE, INDEX) \\\n+  TYPE test_##NAME##_##INDEX (TYPE a, TYPE b) { return NAME (a, b, INDEX); }\n+\n+/*\n+** test_vext_u8_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?1\n+** ret\n+*/\n+TEST_EXT (vext_u8, uint8x8_t, 1)\n+\n+/*\n+** test_vext_s8_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?1\n+** ret\n+*/\n+TEST_EXT (vext_s8, int8x8_t, 1)\n+\n+/*\n+** test_vext_p8_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?1\n+** ret\n+*/\n+TEST_EXT (vext_p8, poly8x8_t, 1)\n+\n+/*\n+** test_vext_mf8_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?1\n+** ret\n+*/\n+TEST_EXT (vext_mf8, mfloat8x8_t, 1)\n+\n+/*\n+** test_vext_u16_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?2\n+** ret\n+*/\n+TEST_EXT (vext_u16, uint16x4_t, 1)\n+\n+/*\n+** test_vext_s16_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?2\n+** ret\n+*/\n+TEST_EXT (vext_s16, int16x4_t, 1)\n+\n+/*\n+** test_vext_f16_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?2\n+** ret\n+*/\n+TEST_EXT (vext_f16, float16x4_t, 1)\n+\n+/*\n+** test_vext_p16_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?2\n+** ret\n+*/\n+TEST_EXT (vext_p16, poly16x4_t, 1)\n+\n+/*\n+** test_vext_u32_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?4\n+** ret\n+*/\n+TEST_EXT (vext_u32, uint32x2_t, 1)\n+\n+/*\n+** test_vext_s32_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?4\n+** ret\n+*/\n+TEST_EXT (vext_s32, int32x2_t, 1)\n+\n+/*\n+** test_vext_f32_1:\n+** ext\tv0\\.8b, v0\\.8b, v1\\.8b, #?4\n+** ret\n+*/\n+TEST_EXT (vext_f32, float32x2_t, 1)\n+\n+/*\n+** test_vext_u64_0:\n+** ret\n+*/\n+TEST_EXT (vext_u64, uint64x1_t, 0)\n+\n+/* The only allowable index for vext_{u64,s64,f64,p64} is 0, which just\n+   returns the whole `a` vector, so no instructions are emitted.  
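For the\n+   wider types, vext (a, b, n) selects a run of consecutive elements from\n+   the concatenation of `a` and `b`, starting at element n; e.g.\n+   vext_u8 (a, b, 1) is {a[1], ..., a[7], b[0]}.  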
*/\n+\n+/*\n+** test_vext_s64_0:\n+** ret\n+*/\n+TEST_EXT (vext_s64, int64x1_t, 0)\n+\n+/*\n+** test_vext_f64_0:\n+** ret\n+*/\n+TEST_EXT (vext_f64, float64x1_t, 0)\n+\n+/*\n+** test_vext_p64_0:\n+** ret\n+*/\n+TEST_EXT (vext_p64, poly64x1_t, 0)\n+\n+/*\n+** test_vextq_u8_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?1\n+** ret\n+*/\n+TEST_EXT (vextq_u8, uint8x16_t, 1)\n+\n+/*\n+** test_vextq_s8_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?1\n+** ret\n+*/\n+TEST_EXT (vextq_s8, int8x16_t, 1)\n+\n+/*\n+** test_vextq_p8_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?1\n+** ret\n+*/\n+TEST_EXT (vextq_p8, poly8x16_t, 1)\n+\n+/*\n+** test_vextq_mf8_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?1\n+** ret\n+*/\n+TEST_EXT (vextq_mf8, mfloat8x16_t, 1)\n+\n+/*\n+** test_vextq_u16_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?2\n+** ret\n+*/\n+TEST_EXT (vextq_u16, uint16x8_t, 1)\n+\n+/*\n+** test_vextq_s16_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?2\n+** ret\n+*/\n+TEST_EXT (vextq_s16, int16x8_t, 1)\n+\n+/*\n+** test_vextq_f16_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?2\n+** ret\n+*/\n+TEST_EXT (vextq_f16, float16x8_t, 1)\n+\n+/*\n+** test_vextq_p16_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?2\n+** ret\n+*/\n+TEST_EXT (vextq_p16, poly16x8_t, 1)\n+\n+/*\n+** test_vextq_u32_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?4\n+** ret\n+*/\n+TEST_EXT (vextq_u32, uint32x4_t, 1)\n+\n+/*\n+** test_vextq_s32_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?4\n+** ret\n+*/\n+TEST_EXT (vextq_s32, int32x4_t, 1)\n+\n+/*\n+** test_vextq_f32_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?4\n+** ret\n+*/\n+TEST_EXT (vextq_f32, float32x4_t, 1)\n+\n+/*\n+** test_vextq_u64_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?8\n+** ret\n+*/\n+TEST_EXT (vextq_u64, uint64x2_t, 1)\n+\n+/*\n+** test_vextq_s64_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?8\n+** ret\n+*/\n+TEST_EXT (vextq_s64, int64x2_t, 1)\n+\n+/*\n+** test_vextq_f64_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?8\n+** ret\n+*/\n+TEST_EXT (vextq_f64, float64x2_t, 1)\n+\n+/*\n+** test_vextq_p64_1:\n+** ext\tv0\\.16b, v0\\.16b, v1\\.16b, #?8\n+** ret\n+*/\n+TEST_EXT (vextq_p64, poly64x2_t, 1)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vrev.c b/gcc/testsuite/gcc.target/aarch64/neon/vrev.c\nnew file mode 100644\nindex 000000000000..37566160b98c\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vrev.c\n@@ -0,0 +1,311 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vrev16_u8:\n+** rev16\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev16_u8, uint8x8_t)\n+\n+/*\n+** test_vrev16_s8:\n+** rev16\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev16_s8, int8x8_t)\n+\n+/*\n+** test_vrev16_p8:\n+** rev16\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev16_p8, poly8x8_t)\n+\n+/*\n+** test_vrev16_mf8:\n+** rev16\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev16_mf8, mfloat8x8_t)\n+\n+/*\n+** test_vrev16q_u8:\n+** rev16\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev16q_u8, uint8x16_t)\n+\n+/*\n+** test_vrev16q_s8:\n+** rev16\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev16q_s8, int8x16_t)\n+\n+/*\n+** test_vrev16q_p8:\n+** rev16\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev16q_p8, poly8x16_t)\n+\n+/*\n+** test_vrev16q_mf8:\n+** rev16\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev16q_mf8, mfloat8x16_t)\n+\n+/*\n+** test_vrev32_u8:\n+** rev32\tv0\\.8b, v0\\.8b\n+** 
ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32_u8, uint8x8_t)\n+\n+/*\n+** test_vrev32_s8:\n+** rev32\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32_s8, int8x8_t)\n+\n+/*\n+** test_vrev32_p8:\n+** rev32\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32_p8, poly8x8_t)\n+\n+/*\n+** test_vrev32_mf8:\n+** rev32\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32_mf8, mfloat8x8_t)\n+\n+/*\n+** test_vrev32_u16:\n+** rev32\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32_u16, uint16x4_t)\n+\n+/*\n+** test_vrev32_s16:\n+** rev32\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32_s16, int16x4_t)\n+\n+/*\n+** test_vrev32_p16:\n+** rev32\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32_p16, poly16x4_t)\n+\n+/*\n+** test_vrev32q_u8:\n+** rev32\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32q_u8, uint8x16_t)\n+\n+/*\n+** test_vrev32q_s8:\n+** rev32\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32q_s8, int8x16_t)\n+\n+/*\n+** test_vrev32q_p8:\n+** rev32\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32q_p8, poly8x16_t)\n+\n+/*\n+** test_vrev32q_mf8:\n+** rev32\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32q_mf8, mfloat8x16_t)\n+\n+/*\n+** test_vrev32q_u16:\n+** rev32\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32q_u16, uint16x8_t)\n+\n+/*\n+** test_vrev32q_s16:\n+** rev32\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32q_s16, int16x8_t)\n+\n+/*\n+** test_vrev32q_p16:\n+** rev32\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev32q_p16, poly16x8_t)\n+\n+/*\n+** test_vrev64_u8:\n+** rev64\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_u8, uint8x8_t)\n+\n+/*\n+** test_vrev64_s8:\n+** rev64\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_s8, int8x8_t)\n+\n+/*\n+** test_vrev64_p8:\n+** rev64\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_p8, poly8x8_t)\n+\n+/*\n+** test_vrev64_mf8:\n+** rev64\tv0\\.8b, v0\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_mf8, mfloat8x8_t)\n+\n+/*\n+** test_vrev64q_u8:\n+** rev64\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_u8, uint8x16_t)\n+\n+/*\n+** test_vrev64q_s8:\n+** rev64\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_s8, int8x16_t)\n+\n+/*\n+** test_vrev64q_p8:\n+** rev64\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_p8, poly8x16_t)\n+\n+/*\n+** test_vrev64q_mf8:\n+** rev64\tv0\\.16b, v0\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_mf8, mfloat8x16_t)\n+\n+/*\n+** test_vrev64_u16:\n+** rev64\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_u16, uint16x4_t)\n+\n+/*\n+** test_vrev64_s16:\n+** rev64\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_s16, int16x4_t)\n+\n+/*\n+** test_vrev64_p16:\n+** rev64\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_p16, poly16x4_t)\n+\n+/*\n+** test_vrev64_f16:\n+** rev64\tv0\\.4h, v0\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_f16, float16x4_t)\n+\n+/*\n+** test_vrev64_f32:\n+** rev64\tv0\\.2s, v0\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_f32, float32x2_t)\n+\n+/*\n+** test_vrev64q_u16:\n+** rev64\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_u16, uint16x8_t)\n+/*\n+** test_vrev64q_s16:\n+** rev64\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_s16, int16x8_t)\n+\n+/*\n+** test_vrev64q_p16:\n+** rev64\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_p16, poly16x8_t)\n+\n+/*\n+** test_vrev64q_f16:\n+** 
rev64\tv0\\.8h, v0\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_f16, float16x8_t)\n+\n+/*\n+** test_vrev64_u32:\n+** rev64\tv0\\.2s, v0\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_u32, uint32x2_t)\n+\n+/*\n+** test_vrev64_s32:\n+** rev64\tv0\\.2s, v0\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64_s32, int32x2_t)\n+\n+/*\n+** test_vrev64q_u32:\n+** rev64\tv0\\.4s, v0\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_u32, uint32x4_t)\n+\n+/*\n+** test_vrev64q_s32:\n+** rev64\tv0\\.4s, v0\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_s32, int32x4_t)\n+\n+/*\n+** test_vrev64q_f32:\n+** rev64\tv0\\.4s, v0\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_UNARY (vrev64q_f32, float32x4_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vtrn.c b/gcc/testsuite/gcc.target/aarch64/neon/vtrn.c\nnew file mode 100644\nindex 000000000000..8049e2772761\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vtrn.c\n@@ -0,0 +1,566 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vtrn1_u8:\n+** trn1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_u8, uint8x8_t)\n+\n+/*\n+** test_vtrn1_s8:\n+** trn1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_s8, int8x8_t)\n+\n+/*\n+** test_vtrn1_p8:\n+** trn1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_p8, poly8x8_t)\n+\n+/*\n+** test_vtrn1_mf8:\n+** trn1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_mf8, mfloat8x8_t)\n+\n+/*\n+** test_vtrn1_u16:\n+** trn1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_u16, uint16x4_t)\n+\n+/*\n+** test_vtrn1_s16:\n+** trn1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_s16, int16x4_t)\n+\n+/*\n+** test_vtrn1_f16:\n+** trn1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_f16, float16x4_t)\n+\n+/*\n+** test_vtrn1_p16:\n+** trn1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_p16, poly16x4_t)\n+\n+/*\n+** test_vtrn1_u32:\n+** (trn|zip)1\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_u32, uint32x2_t)\n+\n+/*\n+** test_vtrn1_s32:\n+** (trn|zip)1\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_s32, int32x2_t)\n+\n+/*\n+** test_vtrn1_f32:\n+** (trn|zip)1\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1_f32, float32x2_t)\n+\n+/*\n+** test_vtrn1q_u8:\n+** trn1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_u8, uint8x16_t)\n+\n+/*\n+** test_vtrn1q_s8:\n+** trn1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_s8, int8x16_t)\n+\n+/*\n+** test_vtrn1q_p8:\n+** trn1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_p8, poly8x16_t)\n+\n+/*\n+** test_vtrn1q_mf8:\n+** trn1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_mf8, mfloat8x16_t)\n+\n+/*\n+** test_vtrn1q_u16:\n+** trn1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_u16, uint16x8_t)\n+\n+/*\n+** test_vtrn1q_s16:\n+** trn1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_s16, int16x8_t)\n+\n+/*\n+** test_vtrn1q_f16:\n+** trn1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_f16, float16x8_t)\n+\n+/*\n+** test_vtrn1q_p16:\n+** trn1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_p16, poly16x8_t)\n+\n+/*\n+** test_vtrn1q_u32:\n+** trn1\tv0\\.4s, v0\\.4s, v1\\.4s\n+** 
ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_u32, uint32x4_t)\n+\n+/*\n+** test_vtrn1q_s32:\n+** trn1\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_s32, int32x4_t)\n+\n+/*\n+** test_vtrn1q_f32:\n+** trn1\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_f32, float32x4_t)\n+\n+/*\n+** test_vtrn1q_u64:\n+** (trn|zip)1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_u64, uint64x2_t)\n+\n+/*\n+** test_vtrn1q_s64:\n+** (trn|zip)1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_s64, int64x2_t)\n+\n+/*\n+** test_vtrn1q_f64:\n+** (trn|zip)1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_f64, float64x2_t)\n+\n+/*\n+** test_vtrn1q_p64:\n+** (trn|zip)1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn1q_p64, poly64x2_t)\n+\n+/*\n+** test_vtrn2_u8:\n+** trn2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_u8, uint8x8_t)\n+\n+/*\n+** test_vtrn2_s8:\n+** trn2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_s8, int8x8_t)\n+\n+/*\n+** test_vtrn2_p8:\n+** trn2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_p8, poly8x8_t)\n+\n+/*\n+** test_vtrn2_mf8:\n+** trn2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_mf8, mfloat8x8_t)\n+\n+/*\n+** test_vtrn2_u16:\n+** trn2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_u16, uint16x4_t)\n+\n+/*\n+** test_vtrn2_s16:\n+** trn2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_s16, int16x4_t)\n+\n+/*\n+** test_vtrn2_f16:\n+** trn2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_f16, float16x4_t)\n+\n+/*\n+** test_vtrn2_p16:\n+** trn2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_p16, poly16x4_t)\n+\n+/*\n+** test_vtrn2_u32:\n+** (trn|zip)2\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_u32, uint32x2_t)\n+\n+/*\n+** test_vtrn2_s32:\n+** (trn|zip)2\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_s32, int32x2_t)\n+\n+/*\n+** test_vtrn2_f32:\n+** (trn|zip)2\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2_f32, float32x2_t)\n+\n+/*\n+** test_vtrn2q_u8:\n+** trn2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_u8, uint8x16_t)\n+\n+/*\n+** test_vtrn2q_s8:\n+** trn2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_s8, int8x16_t)\n+\n+/*\n+** test_vtrn2q_p8:\n+** trn2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_p8, poly8x16_t)\n+\n+/*\n+** test_vtrn2q_mf8:\n+** trn2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_mf8, mfloat8x16_t)\n+\n+/*\n+** test_vtrn2q_u16:\n+** trn2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_u16, uint16x8_t)\n+\n+/*\n+** test_vtrn2q_s16:\n+** trn2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_s16, int16x8_t)\n+\n+/*\n+** test_vtrn2q_f16:\n+** trn2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_f16, float16x8_t)\n+\n+/*\n+** test_vtrn2q_p16:\n+** trn2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_p16, poly16x8_t)\n+\n+/*\n+** test_vtrn2q_u32:\n+** trn2\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_u32, uint32x4_t)\n+\n+/*\n+** test_vtrn2q_s32:\n+** trn2\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_s32, int32x4_t)\n+\n+/*\n+** 
test_vtrn2q_f32:\n+** trn2\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_f32, float32x4_t)\n+\n+/*\n+** test_vtrn2q_u64:\n+** (trn|zip)2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_u64, uint64x2_t)\n+\n+/*\n+** test_vtrn2q_s64:\n+** (trn|zip)2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_s64, int64x2_t)\n+\n+/*\n+** test_vtrn2q_f64:\n+** (trn|zip)2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_f64, float64x2_t)\n+\n+/*\n+** test_vtrn2q_p64:\n+** (trn|zip)2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vtrn2q_p64, poly64x2_t)\n+\n+/*\n+** test_vtrn_u8:\n+** ...\n+** trn1\tv0\\.8b, .+\n+** trn2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_u8, uint8x8x2_t, uint8x8_t, uint8x8_t)\n+\n+/*\n+** test_vtrn_s8:\n+** ...\n+** trn1\tv0\\.8b, .+\n+** trn2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_s8, int8x8x2_t, int8x8_t, int8x8_t)\n+\n+/*\n+** test_vtrn_p8:\n+** ...\n+** trn1\tv0\\.8b, .+\n+** trn2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_p8, poly8x8x2_t, poly8x8_t, poly8x8_t)\n+\n+/*\n+** test_vtrn_mf8:\n+** ...\n+** trn1\tv0\\.8b, .+\n+** trn2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_mf8, mfloat8x8x2_t, mfloat8x8_t, mfloat8x8_t)\n+\n+/*\n+** test_vtrn_u16:\n+** ...\n+** trn1\tv0\\.4h, .+\n+** trn2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_u16, uint16x4x2_t, uint16x4_t, uint16x4_t)\n+\n+/*\n+** test_vtrn_s16:\n+** ...\n+** trn1\tv0\\.4h, .+\n+** trn2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_s16, int16x4x2_t, int16x4_t, int16x4_t)\n+\n+/*\n+** test_vtrn_f16:\n+** ...\n+** trn1\tv0\\.4h, .+\n+** trn2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_f16, float16x4x2_t, float16x4_t, float16x4_t)\n+\n+/*\n+** test_vtrn_p16:\n+** ...\n+** trn1\tv0\\.4h, .+\n+** trn2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_p16, poly16x4x2_t, poly16x4_t, poly16x4_t)\n+\n+/*\n+** test_vtrn_u32:\n+** ...\n+** (trn|zip)1\tv0\\.2s, .+\n+** (trn|zip)2\tv1\\.2s, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_u32, uint32x2x2_t, uint32x2_t, uint32x2_t)\n+\n+/*\n+** test_vtrn_s32:\n+** ...\n+** (trn|zip)1\tv0\\.2s, .+\n+** (trn|zip)2\tv1\\.2s, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_s32, int32x2x2_t, int32x2_t, int32x2_t)\n+\n+/*\n+** test_vtrn_f32:\n+** ...\n+** (trn|zip)1\tv0\\.2s, .+\n+** (trn|zip)2\tv1\\.2s, .+\n+** ret\n+*/\n+TEST_BINARY (vtrn_f32, float32x2x2_t, float32x2_t, float32x2_t)\n+\n+/*\n+** test_vtrnq_u8:\n+** ...\n+** trn1\tv0\\.16b, .+\n+** trn2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_u8, uint8x16x2_t, uint8x16_t, uint8x16_t)\n+\n+/*\n+** test_vtrnq_s8:\n+** ...\n+** trn1\tv0\\.16b, .+\n+** trn2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_s8, int8x16x2_t, int8x16_t, int8x16_t)\n+\n+/*\n+** test_vtrnq_p8:\n+** ...\n+** trn1\tv0\\.16b, .+\n+** trn2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_p8, poly8x16x2_t, poly8x16_t, poly8x16_t)\n+\n+/*\n+** test_vtrnq_mf8:\n+** ...\n+** trn1\tv0\\.16b, .+\n+** trn2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_mf8, mfloat8x16x2_t, mfloat8x16_t, mfloat8x16_t)\n+\n+/*\n+** test_vtrnq_u16:\n+** ...\n+** trn1\tv0\\.8h, .+\n+** trn2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_u16, uint16x8x2_t, uint16x8_t, uint16x8_t)\n+\n+/*\n+** test_vtrnq_s16:\n+** ...\n+** trn1\tv0\\.8h, .+\n+** trn2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_s16, int16x8x2_t, int16x8_t, int16x8_t)\n+\n+/*\n+** test_vtrnq_f16:\n+** ...\n+** trn1\tv0\\.8h, .+\n+** trn2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_f16, float16x8x2_t, 
float16x8_t, float16x8_t)\n+\n+/*\n+** test_vtrnq_p16:\n+** ...\n+** trn1\tv0\\.8h, .+\n+** trn2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_p16, poly16x8x2_t, poly16x8_t, poly16x8_t)\n+\n+/*\n+** test_vtrnq_u32:\n+** ...\n+** trn1\tv0\\.4s, .+\n+** trn2\tv1\\.4s, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_u32, uint32x4x2_t, uint32x4_t, uint32x4_t)\n+\n+/*\n+** test_vtrnq_s32:\n+** ...\n+** trn1\tv0\\.4s, .+\n+** trn2\tv1\\.4s, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_s32, int32x4x2_t, int32x4_t, int32x4_t)\n+\n+/*\n+** test_vtrnq_f32:\n+** ...\n+** trn1\tv0\\.4s, .+\n+** trn2\tv1\\.4s, .+\n+** ret\n+*/\n+TEST_BINARY (vtrnq_f32, float32x4x2_t, float32x4_t, float32x4_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vuzp.c b/gcc/testsuite/gcc.target/aarch64/neon/vuzp.c\nnew file mode 100644\nindex 000000000000..ea5a24bf3864\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vuzp.c\n@@ -0,0 +1,566 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vuzp1_u8:\n+** uzp1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_u8, uint8x8_t)\n+\n+/*\n+** test_vuzp1_s8:\n+** uzp1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_s8, int8x8_t)\n+\n+/*\n+** test_vuzp1_p8:\n+** uzp1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_p8, poly8x8_t)\n+\n+/*\n+** test_vuzp1_mf8:\n+** uzp1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_mf8, mfloat8x8_t)\n+\n+/*\n+** test_vuzp1_u16:\n+** uzp1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_u16, uint16x4_t)\n+\n+/*\n+** test_vuzp1_s16:\n+** uzp1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_s16, int16x4_t)\n+\n+/*\n+** test_vuzp1_f16:\n+** uzp1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_f16, float16x4_t)\n+\n+/*\n+** test_vuzp1_p16:\n+** uzp1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_p16, poly16x4_t)\n+\n+/*\n+** test_vuzp1_u32:\n+** (uzp|zip)1\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_u32, uint32x2_t)\n+\n+/*\n+** test_vuzp1_s32:\n+** (uzp|zip)1\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_s32, int32x2_t)\n+\n+/*\n+** test_vuzp1_f32:\n+** (uzp|zip)1\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1_f32, float32x2_t)\n+\n+/*\n+** test_vuzp1q_u8:\n+** uzp1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_u8, uint8x16_t)\n+\n+/*\n+** test_vuzp1q_s8:\n+** uzp1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_s8, int8x16_t)\n+\n+/*\n+** test_vuzp1q_p8:\n+** uzp1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_p8, poly8x16_t)\n+\n+/*\n+** test_vuzp1q_mf8:\n+** uzp1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_mf8, mfloat8x16_t)\n+\n+/*\n+** test_vuzp1q_u16:\n+** uzp1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_u16, uint16x8_t)\n+\n+/*\n+** test_vuzp1q_s16:\n+** uzp1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_s16, int16x8_t)\n+\n+/*\n+** test_vuzp1q_f16:\n+** uzp1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_f16, float16x8_t)\n+\n+/*\n+** test_vuzp1q_p16:\n+** uzp1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_p16, poly16x8_t)\n+\n+/*\n+** test_vuzp1q_u32:\n+** uzp1\tv0\\.4s, v0\\.4s, v1\\.4s\n+** 
ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_u32, uint32x4_t)\n+\n+/*\n+** test_vuzp1q_s32:\n+** uzp1\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_s32, int32x4_t)\n+\n+/*\n+** test_vuzp1q_f32:\n+** uzp1\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_f32, float32x4_t)\n+\n+/*\n+** test_vuzp1q_u64:\n+** (uzp|zip)1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_u64, uint64x2_t)\n+\n+/*\n+** test_vuzp1q_s64:\n+** (uzp|zip)1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_s64, int64x2_t)\n+\n+/*\n+** test_vuzp1q_f64:\n+** (uzp|zip)1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_f64, float64x2_t)\n+\n+/*\n+** test_vuzp1q_p64:\n+** (uzp|zip)1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp1q_p64, poly64x2_t)\n+\n+/*\n+** test_vuzp2_u8:\n+** uzp2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_u8, uint8x8_t)\n+\n+/*\n+** test_vuzp2_s8:\n+** uzp2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_s8, int8x8_t)\n+\n+/*\n+** test_vuzp2_p8:\n+** uzp2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_p8, poly8x8_t)\n+\n+/*\n+** test_vuzp2_mf8:\n+** uzp2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_mf8, mfloat8x8_t)\n+\n+/*\n+** test_vuzp2_u16:\n+** uzp2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_u16, uint16x4_t)\n+\n+/*\n+** test_vuzp2_s16:\n+** uzp2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_s16, int16x4_t)\n+\n+/*\n+** test_vuzp2_f16:\n+** uzp2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_f16, float16x4_t)\n+\n+/*\n+** test_vuzp2_p16:\n+** uzp2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_p16, poly16x4_t)\n+\n+/*\n+** test_vuzp2_u32:\n+** (uzp|zip)2\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_u32, uint32x2_t)\n+\n+/*\n+** test_vuzp2_s32:\n+** (uzp|zip)2\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_s32, int32x2_t)\n+\n+/*\n+** test_vuzp2_f32:\n+** (uzp|zip)2\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2_f32, float32x2_t)\n+\n+/*\n+** test_vuzp2q_u8:\n+** uzp2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_u8, uint8x16_t)\n+\n+/*\n+** test_vuzp2q_s8:\n+** uzp2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_s8, int8x16_t)\n+\n+/*\n+** test_vuzp2q_p8:\n+** uzp2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_p8, poly8x16_t)\n+\n+/*\n+** test_vuzp2q_mf8:\n+** uzp2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_mf8, mfloat8x16_t)\n+\n+/*\n+** test_vuzp2q_u16:\n+** uzp2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_u16, uint16x8_t)\n+\n+/*\n+** test_vuzp2q_s16:\n+** uzp2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_s16, int16x8_t)\n+\n+/*\n+** test_vuzp2q_f16:\n+** uzp2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_f16, float16x8_t)\n+\n+/*\n+** test_vuzp2q_p16:\n+** uzp2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_p16, poly16x8_t)\n+\n+/*\n+** test_vuzp2q_u32:\n+** uzp2\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_u32, uint32x4_t)\n+\n+/*\n+** test_vuzp2q_s32:\n+** uzp2\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_s32, int32x4_t)\n+\n+/*\n+** 
test_vuzp2q_f32:\n+** uzp2\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_f32, float32x4_t)\n+\n+/*\n+** test_vuzp2q_u64:\n+** (uzp|zip)2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_u64, uint64x2_t)\n+\n+/*\n+** test_vuzp2q_s64:\n+** (uzp|zip)2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_s64, int64x2_t)\n+\n+/*\n+** test_vuzp2q_f64:\n+** (uzp|zip)2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_f64, float64x2_t)\n+\n+/*\n+** test_vuzp2q_p64:\n+** (uzp|zip)2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vuzp2q_p64, poly64x2_t)\n+\n+/*\n+** test_vuzp_u8:\n+** ...\n+** uzp1\tv0\\.8b, .+\n+** uzp2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_u8, uint8x8x2_t, uint8x8_t, uint8x8_t)\n+\n+/*\n+** test_vuzp_s8:\n+** ...\n+** uzp1\tv0\\.8b, .+\n+** uzp2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_s8, int8x8x2_t, int8x8_t, int8x8_t)\n+\n+/*\n+** test_vuzp_p8:\n+** ...\n+** uzp1\tv0\\.8b, .+\n+** uzp2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_p8, poly8x8x2_t, poly8x8_t, poly8x8_t)\n+\n+/*\n+** test_vuzp_mf8:\n+** ...\n+** uzp1\tv0\\.8b, .+\n+** uzp2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_mf8, mfloat8x8x2_t, mfloat8x8_t, mfloat8x8_t)\n+\n+/*\n+** test_vuzp_u16:\n+** ...\n+** uzp1\tv0\\.4h, .+\n+** uzp2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_u16, uint16x4x2_t, uint16x4_t, uint16x4_t)\n+\n+/*\n+** test_vuzp_s16:\n+** ...\n+** uzp1\tv0\\.4h, .+\n+** uzp2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_s16, int16x4x2_t, int16x4_t, int16x4_t)\n+\n+/*\n+** test_vuzp_f16:\n+** ...\n+** uzp1\tv0\\.4h, .+\n+** uzp2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_f16, float16x4x2_t, float16x4_t, float16x4_t)\n+\n+/*\n+** test_vuzp_p16:\n+** ...\n+** uzp1\tv0\\.4h, .+\n+** uzp2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_p16, poly16x4x2_t, poly16x4_t, poly16x4_t)\n+\n+/*\n+** test_vuzp_u32:\n+** ...\n+** (uzp|zip)1\tv0\\.2s, .+\n+** (uzp|zip)2\tv1\\.2s, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_u32, uint32x2x2_t, uint32x2_t, uint32x2_t)\n+\n+/*\n+** test_vuzp_s32:\n+** ...\n+** (uzp|zip)1\tv0\\.2s, .+\n+** (uzp|zip)2\tv1\\.2s, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_s32, int32x2x2_t, int32x2_t, int32x2_t)\n+\n+/*\n+** test_vuzp_f32:\n+** ...\n+** (uzp|zip)1\tv0\\.2s, .+\n+** (uzp|zip)2\tv1\\.2s, .+\n+** ret\n+*/\n+TEST_BINARY (vuzp_f32, float32x2x2_t, float32x2_t, float32x2_t)\n+\n+/*\n+** test_vuzpq_u8:\n+** ...\n+** uzp1\tv0\\.16b, .+\n+** uzp2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_u8, uint8x16x2_t, uint8x16_t, uint8x16_t)\n+\n+/*\n+** test_vuzpq_s8:\n+** ...\n+** uzp1\tv0\\.16b, .+\n+** uzp2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_s8, int8x16x2_t, int8x16_t, int8x16_t)\n+\n+/*\n+** test_vuzpq_p8:\n+** ...\n+** uzp1\tv0\\.16b, .+\n+** uzp2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_p8, poly8x16x2_t, poly8x16_t, poly8x16_t)\n+\n+/*\n+** test_vuzpq_mf8:\n+** ...\n+** uzp1\tv0\\.16b, .+\n+** uzp2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_mf8, mfloat8x16x2_t, mfloat8x16_t, mfloat8x16_t)\n+\n+/*\n+** test_vuzpq_u16:\n+** ...\n+** uzp1\tv0\\.8h, .+\n+** uzp2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_u16, uint16x8x2_t, uint16x8_t, uint16x8_t)\n+\n+/*\n+** test_vuzpq_s16:\n+** ...\n+** uzp1\tv0\\.8h, .+\n+** uzp2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_s16, int16x8x2_t, int16x8_t, int16x8_t)\n+\n+/*\n+** test_vuzpq_f16:\n+** ...\n+** uzp1\tv0\\.8h, .+\n+** uzp2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_f16, float16x8x2_t, 
float16x8_t, float16x8_t)\n+\n+/*\n+** test_vuzpq_p16:\n+** ...\n+** uzp1\tv0\\.8h, .+\n+** uzp2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_p16, poly16x8x2_t, poly16x8_t, poly16x8_t)\n+\n+/*\n+** test_vuzpq_u32:\n+** ...\n+** uzp1\tv0\\.4s, .+\n+** uzp2\tv1\\.4s, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_u32, uint32x4x2_t, uint32x4_t, uint32x4_t)\n+\n+/*\n+** test_vuzpq_s32:\n+** ...\n+** uzp1\tv0\\.4s, .+\n+** uzp2\tv1\\.4s, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_s32, int32x4x2_t, int32x4_t, int32x4_t)\n+\n+/*\n+** test_vuzpq_f32:\n+** ...\n+** uzp1\tv0\\.4s, .+\n+** uzp2\tv1\\.4s, .+\n+** ret\n+*/\n+TEST_BINARY (vuzpq_f32, float32x4x2_t, float32x4_t, float32x4_t)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/neon/vzip.c b/gcc/testsuite/gcc.target/aarch64/neon/vzip.c\nnew file mode 100644\nindex 000000000000..d90dc4fd8e58\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/neon/vzip.c\n@@ -0,0 +1,566 @@\n+/* { dg-do compile } */\n+/* { dg-final { check-function-bodies \"**\" \"\" } } */\n+\n+#include \"arm_neon_test.h\"\n+\n+/*\n+** test_vzip1_u8:\n+** zip1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_u8, uint8x8_t)\n+\n+/*\n+** test_vzip1_s8:\n+** zip1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_s8, int8x8_t)\n+\n+/*\n+** test_vzip1_p8:\n+** zip1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_p8, poly8x8_t)\n+\n+/*\n+** test_vzip1_mf8:\n+** zip1\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_mf8, mfloat8x8_t)\n+\n+/*\n+** test_vzip1_u16:\n+** zip1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_u16, uint16x4_t)\n+\n+/*\n+** test_vzip1_s16:\n+** zip1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_s16, int16x4_t)\n+\n+/*\n+** test_vzip1_f16:\n+** zip1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_f16, float16x4_t)\n+\n+/*\n+** test_vzip1_p16:\n+** zip1\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_p16, poly16x4_t)\n+\n+/*\n+** test_vzip1_u32:\n+** zip1\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_u32, uint32x2_t)\n+\n+/*\n+** test_vzip1_s32:\n+** zip1\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_s32, int32x2_t)\n+\n+/*\n+** test_vzip1_f32:\n+** zip1\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1_f32, float32x2_t)\n+\n+/*\n+** test_vzip1q_u8:\n+** zip1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_u8, uint8x16_t)\n+\n+/*\n+** test_vzip1q_s8:\n+** zip1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_s8, int8x16_t)\n+\n+/*\n+** test_vzip1q_p8:\n+** zip1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_p8, poly8x16_t)\n+\n+/*\n+** test_vzip1q_mf8:\n+** zip1\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_mf8, mfloat8x16_t)\n+\n+/*\n+** test_vzip1q_u16:\n+** zip1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_u16, uint16x8_t)\n+\n+/*\n+** test_vzip1q_s16:\n+** zip1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_s16, int16x8_t)\n+\n+/*\n+** test_vzip1q_f16:\n+** zip1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_f16, float16x8_t)\n+\n+/*\n+** test_vzip1q_p16:\n+** zip1\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_p16, poly16x8_t)\n+\n+/*\n+** test_vzip1q_u32:\n+** zip1\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY 
(vzip1q_u32, uint32x4_t)\n+\n+/*\n+** test_vzip1q_s32:\n+** zip1\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_s32, int32x4_t)\n+\n+/*\n+** test_vzip1q_f32:\n+** zip1\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_f32, float32x4_t)\n+\n+/*\n+** test_vzip1q_u64:\n+** zip1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_u64, uint64x2_t)\n+\n+/*\n+** test_vzip1q_s64:\n+** zip1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_s64, int64x2_t)\n+\n+/*\n+** test_vzip1q_f64:\n+** zip1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_f64, float64x2_t)\n+\n+/*\n+** test_vzip1q_p64:\n+** zip1\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip1q_p64, poly64x2_t)\n+\n+/*\n+** test_vzip2_u8:\n+** zip2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_u8, uint8x8_t)\n+\n+/*\n+** test_vzip2_s8:\n+** zip2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_s8, int8x8_t)\n+\n+/*\n+** test_vzip2_p8:\n+** zip2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_p8, poly8x8_t)\n+\n+/*\n+** test_vzip2_mf8:\n+** zip2\tv0\\.8b, v0\\.8b, v1\\.8b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_mf8, mfloat8x8_t)\n+\n+/*\n+** test_vzip2_u16:\n+** zip2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_u16, uint16x4_t)\n+\n+/*\n+** test_vzip2_s16:\n+** zip2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_s16, int16x4_t)\n+\n+/*\n+** test_vzip2_f16:\n+** zip2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_f16, float16x4_t)\n+\n+/*\n+** test_vzip2_p16:\n+** zip2\tv0\\.4h, v0\\.4h, v1\\.4h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_p16, poly16x4_t)\n+\n+/*\n+** test_vzip2_u32:\n+** zip2\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_u32, uint32x2_t)\n+\n+/*\n+** test_vzip2_s32:\n+** zip2\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_s32, int32x2_t)\n+\n+/*\n+** test_vzip2_f32:\n+** zip2\tv0\\.2s, v0\\.2s, v1\\.2s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2_f32, float32x2_t)\n+\n+/*\n+** test_vzip2q_u8:\n+** zip2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_u8, uint8x16_t)\n+\n+/*\n+** test_vzip2q_s8:\n+** zip2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_s8, int8x16_t)\n+\n+/*\n+** test_vzip2q_p8:\n+** zip2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_p8, poly8x16_t)\n+\n+/*\n+** test_vzip2q_mf8:\n+** zip2\tv0\\.16b, v0\\.16b, v1\\.16b\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_mf8, mfloat8x16_t)\n+\n+/*\n+** test_vzip2q_u16:\n+** zip2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_u16, uint16x8_t)\n+\n+/*\n+** test_vzip2q_s16:\n+** zip2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_s16, int16x8_t)\n+\n+/*\n+** test_vzip2q_f16:\n+** zip2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_f16, float16x8_t)\n+\n+/*\n+** test_vzip2q_p16:\n+** zip2\tv0\\.8h, v0\\.8h, v1\\.8h\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_p16, poly16x8_t)\n+\n+/*\n+** test_vzip2q_u32:\n+** zip2\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_u32, uint32x4_t)\n+\n+/*\n+** test_vzip2q_s32:\n+** zip2\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_s32, int32x4_t)\n+\n+/*\n+** test_vzip2q_f32:\n+** zip2\tv0\\.4s, v0\\.4s, v1\\.4s\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_f32, float32x4_t)\n+\n+/*\n+** test_vzip2q_u64:\n+** zip2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** 
ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_u64, uint64x2_t)\n+\n+/*\n+** test_vzip2q_s64:\n+** zip2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_s64, int64x2_t)\n+\n+/*\n+** test_vzip2q_f64:\n+** zip2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_f64, float64x2_t)\n+\n+/*\n+** test_vzip2q_p64:\n+** zip2\tv0\\.2d, v0\\.2d, v1\\.2d\n+** ret\n+*/\n+TEST_UNIFORM_BINARY (vzip2q_p64, poly64x2_t)\n+\n+/*\n+** test_vzip_u8:\n+** ...\n+** zip1\tv0\\.8b, .+\n+** zip2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_u8, uint8x8x2_t, uint8x8_t, uint8x8_t)\n+\n+/*\n+** test_vzip_s8:\n+** ...\n+** zip1\tv0\\.8b, .+\n+** zip2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_s8, int8x8x2_t, int8x8_t, int8x8_t)\n+\n+/*\n+** test_vzip_p8:\n+** ...\n+** zip1\tv0\\.8b, .+\n+** zip2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_p8, poly8x8x2_t, poly8x8_t, poly8x8_t)\n+\n+/*\n+** test_vzip_mf8:\n+** ...\n+** zip1\tv0\\.8b, .+\n+** zip2\tv1\\.8b, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_mf8, mfloat8x8x2_t, mfloat8x8_t, mfloat8x8_t)\n+\n+/*\n+** test_vzip_u16:\n+** ...\n+** zip1\tv0\\.4h, .+\n+** zip2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_u16, uint16x4x2_t, uint16x4_t, uint16x4_t)\n+\n+/*\n+** test_vzip_s16:\n+** ...\n+** zip1\tv0\\.4h, .+\n+** zip2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_s16, int16x4x2_t, int16x4_t, int16x4_t)\n+\n+/*\n+** test_vzip_f16:\n+** ...\n+** zip1\tv0\\.4h, .+\n+** zip2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_f16, float16x4x2_t, float16x4_t, float16x4_t)\n+\n+/*\n+** test_vzip_p16:\n+** ...\n+** zip1\tv0\\.4h, .+\n+** zip2\tv1\\.4h, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_p16, poly16x4x2_t, poly16x4_t, poly16x4_t)\n+\n+/*\n+** test_vzip_u32:\n+** ...\n+** zip1\tv0\\.2s, .+\n+** zip2\tv1\\.2s, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_u32, uint32x2x2_t, uint32x2_t, uint32x2_t)\n+\n+/*\n+** test_vzip_s32:\n+** ...\n+** zip1\tv0\\.2s, .+\n+** zip2\tv1\\.2s, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_s32, int32x2x2_t, int32x2_t, int32x2_t)\n+\n+/*\n+** test_vzip_f32:\n+** ...\n+** zip1\tv0\\.2s, .+\n+** zip2\tv1\\.2s, .+\n+** ret\n+*/\n+TEST_BINARY (vzip_f32, float32x2x2_t, float32x2_t, float32x2_t)\n+\n+/*\n+** test_vzipq_u8:\n+** ...\n+** zip1\tv0\\.16b, .+\n+** zip2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_u8, uint8x16x2_t, uint8x16_t, uint8x16_t)\n+\n+/*\n+** test_vzipq_s8:\n+** ...\n+** zip1\tv0\\.16b, .+\n+** zip2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_s8, int8x16x2_t, int8x16_t, int8x16_t)\n+\n+/*\n+** test_vzipq_p8:\n+** ...\n+** zip1\tv0\\.16b, .+\n+** zip2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_p8, poly8x16x2_t, poly8x16_t, poly8x16_t)\n+\n+/*\n+** test_vzipq_mf8:\n+** ...\n+** zip1\tv0\\.16b, .+\n+** zip2\tv1\\.16b, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_mf8, mfloat8x16x2_t, mfloat8x16_t, mfloat8x16_t)\n+\n+/*\n+** test_vzipq_u16:\n+** ...\n+** zip1\tv0\\.8h, .+\n+** zip2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_u16, uint16x8x2_t, uint16x8_t, uint16x8_t)\n+\n+/*\n+** test_vzipq_s16:\n+** ...\n+** zip1\tv0\\.8h, .+\n+** zip2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_s16, int16x8x2_t, int16x8_t, int16x8_t)\n+\n+/*\n+** test_vzipq_f16:\n+** ...\n+** zip1\tv0\\.8h, .+\n+** zip2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_f16, float16x8x2_t, float16x8_t, float16x8_t)\n+\n+/*\n+** test_vzipq_p16:\n+** ...\n+** zip1\tv0\\.8h, .+\n+** zip2\tv1\\.8h, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_p16, poly16x8x2_t, poly16x8_t, poly16x8_t)\n+\n+/*\n+** test_vzipq_u32:\n+** ...\n+** zip1\tv0\\.4s, .+\n+** 
zip2\tv1\\.4s, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_u32, uint32x4x2_t, uint32x4_t, uint32x4_t)\n+\n+/*\n+** test_vzipq_s32:\n+** ...\n+** zip1\tv0\\.4s, .+\n+** zip2\tv1\\.4s, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_s32, int32x4x2_t, int32x4_t, int32x4_t)\n+\n+/*\n+** test_vzipq_f32:\n+** ...\n+** zip1\tv0\\.4s, .+\n+** zip2\tv1\\.4s, .+\n+** ret\n+*/\n+TEST_BINARY (vzipq_f32, float32x4x2_t, float32x4_t, float32x4_t)\n","prefixes":["v1","5/6"]}
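The TEST_UNIFORM_BINARY and TEST_BINARY wrappers used throughout these tests come from arm_neon_test.h, which is introduced elsewhere in this series and is not shown in this excerpt. Below is a minimal sketch of how such a harness could be written, assuming only the expansion pattern implied by the test_<name> labels and the check-function-bodies templates above; the definitions are illustrative, not the header's actual contents.

/* Illustrative stand-in for arm_neon_test.h; the real header in the
   series may differ.  Each macro defines a function test_<NAME> whose
   generated assembly is matched against the preceding "**" template
   by check-function-bodies.  */
#include <arm_neon.h>

/* For intrinsics whose result and both operands share one vector
   type, e.g. vzip1q_u8.  */
#define TEST_UNIFORM_BINARY(NAME, TYPE)		\
  TYPE						\
  test_##NAME (TYPE a, TYPE b)			\
  {						\
    return NAME (a, b);				\
  }

/* For intrinsics whose result type differs from the operand types,
   e.g. vzipq_u8, which returns a uint8x16x2_t tuple.  */
#define TEST_BINARY(NAME, RES, ARG1, ARG2)	\
  RES						\
  test_##NAME (ARG1 a, ARG2 b)			\
  {						\
    return NAME (a, b);				\
  }

Under this reading, TEST_UNIFORM_BINARY (vzip1q_u8, uint8x16_t) yields a function whose entire body is the single zip1 v0.16b, v0.16b, v1.16b plus ret, exactly what the corresponding template asserts. The tuple-returning vzip/vuzp tests instead allow arbitrary set-up (the "..." lines) before checking that a zip1/zip2 or uzp1/uzp2 pair writes the two result registers v0 and v1, since ZIP1 interleaves the low halves of the operands and ZIP2 the high halves.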