{"id":2175589,"url":"http://patchwork.ozlabs.org/api/1.0/patches/2175589/?format=json","project":{"id":17,"url":"http://patchwork.ozlabs.org/api/1.0/projects/17/?format=json","name":"GNU Compiler Collection","link_name":"gcc","list_id":"gcc-patches.gcc.gnu.org","list_email":"gcc-patches@gcc.gnu.org","web_url":null,"scm_url":null,"webscm_url":null},"msgid":"<20251218142621.57402-2-claudio.bantaloukas@arm.com>","date":"2025-12-18T14:26:13","name":"[v4,1/8] aarch64: extend sme intrinsics to mfp8","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"393ea3cb3c6c30633f113b431208febcd22968ef","submitter":{"id":88972,"url":"http://patchwork.ozlabs.org/api/1.0/people/88972/?format=json","name":"Claudio Bantaloukas","email":"claudio.bantaloukas@arm.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/gcc/patch/20251218142621.57402-2-claudio.bantaloukas@arm.com/mbox/","series":[{"id":485861,"url":"http://patchwork.ozlabs.org/api/1.0/series/485861/?format=json","date":"2025-12-18T14:26:12","name":"aarch64: Add fp8 sme 2.1 features per ACLE 2024Q4","version":4,"mbox":"http://patchwork.ozlabs.org/series/485861/mbox/"}],"check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2175589/checks/","tags":{},"headers":{"Return-Path":"<gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org>","X-Original-To":["incoming@patchwork.ozlabs.org","gcc-patches@gcc.gnu.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","gcc-patches@gcc.gnu.org"],"Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (1024-bit key;\n unprotected) header.d=arm.com header.i=@arm.com header.a=rsa-sha256\n header.s=selector1 header.b=KG4VzVvj;\n\tdkim=pass (1024-bit key) header.d=arm.com header.i=@arm.com\n header.a=rsa-sha256 header.s=selector1 header.b=KG4VzVvj;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=gcc.gnu.org\n (client-ip=38.145.34.32; helo=vm01.sourceware.org;\n 
envelope-from=gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org;\n receiver=patchwork.ozlabs.org)","sourceware.org;\n\tdkim=pass (1024-bit key,\n unprotected) header.d=arm.com header.i=@arm.com header.a=rsa-sha256\n header.s=selector1 header.b=KG4VzVvj;\n\tdkim=pass (1024-bit key) header.d=arm.com header.i=@arm.com\n header.a=rsa-sha256 header.s=selector1 header.b=KG4VzVvj","sourceware.org;\n dmarc=pass (p=none dis=none) header.from=arm.com","sourceware.org; spf=pass smtp.mailfrom=arm.com","server2.sourceware.org;\n arc=pass smtp.remote-ip=40.107.162.39"],"Received":["from vm01.sourceware.org (vm01.sourceware.org [38.145.34.32])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4dXCmB2FKFz1y2F\n\tfor <incoming@patchwork.ozlabs.org>; Fri, 19 Dec 2025 01:31:54 +1100 (AEDT)","from vm01.sourceware.org (localhost [127.0.0.1])\n\tby sourceware.org (Postfix) with ESMTP id 3B1934BA23DC\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 18 Dec 2025 14:31:52 +0000 (GMT)","from PA4PR04CU001.outbound.protection.outlook.com\n (mail-francecentralazon11013039.outbound.protection.outlook.com\n [40.107.162.39])\n by sourceware.org (Postfix) with ESMTPS id 5B46B4BA2E32\n for <gcc-patches@gcc.gnu.org>; Thu, 18 Dec 2025 14:28:00 +0000 (GMT)","from AM9P193CA0005.EURP193.PROD.OUTLOOK.COM (2603:10a6:20b:21e::10)\n by DB9PR08MB7771.eurprd08.prod.outlook.com (2603:10a6:10:397::11)\n with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.9434.6; Thu, 18 Dec\n 2025 14:27:42 +0000","from AM4PEPF00027A6A.eurprd04.prod.outlook.com\n (2603:10a6:20b:21e:cafe::48) by AM9P193CA0005.outlook.office365.com\n (2603:10a6:20b:21e::10) with Microsoft SMTP Server (version=TLS1_3,\n cipher=TLS_AES_256_GCM_SHA384) id 15.20.9434.6 via Frontend Transport; Thu,\n 18 Dec 2025 
14:27:33 +0000","from outbound-uk1.az.dlp.m.darktrace.com (4.158.2.129) by\n AM4PEPF00027A6A.mail.protection.outlook.com (10.167.16.88) with Microsoft\n SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.9434.6\n via Frontend Transport; Thu, 18 Dec 2025 14:27:41 +0000","from DUZP191CA0005.EURP191.PROD.OUTLOOK.COM (2603:10a6:10:4f9::19)\n by DBBPR08MB10697.eurprd08.prod.outlook.com (2603:10a6:10:52a::5) with\n Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.9434.8; Thu, 18 Dec\n 2025 14:26:31 +0000","from DB1PEPF000509E5.eurprd03.prod.outlook.com\n (2603:10a6:10:4f9:cafe::59) by DUZP191CA0005.outlook.office365.com\n (2603:10a6:10:4f9::19) with Microsoft SMTP Server (version=TLS1_3,\n cipher=TLS_AES_256_GCM_SHA384) id 15.20.9434.8 via Frontend Transport; Thu,\n 18 Dec 2025 14:26:31 +0000","from nebula.arm.com (172.205.89.229) by\n DB1PEPF000509E5.mail.protection.outlook.com (10.167.242.55) with Microsoft\n SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id\n 15.20.9412.4 via Frontend Transport; Thu, 18 Dec 2025 14:26:31 +0000","from AZ-NEU-EX04.Arm.com (10.240.25.138) by AZ-NEU-EX04.Arm.com\n (10.240.25.138) with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.2562.29; Thu, 18 Dec\n 2025 14:26:27 +0000","from e72c20ac6da1.eu-west-1.compute.internal (10.249.56.29) by\n mail.arm.com (10.240.25.138) with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.2562.29 via Frontend\n Transport; Thu, 18 Dec 2025 14:26:27 +0000"],"DKIM-Filter":["OpenDKIM Filter v2.11.0 sourceware.org 3B1934BA23DC","OpenDKIM Filter v2.11.0 sourceware.org 5B46B4BA2E32"],"DMARC-Filter":"OpenDMARC Filter v1.4.2 sourceware.org 5B46B4BA2E32","ARC-Filter":"OpenARC Filter v1.0.0 sourceware.org 5B46B4BA2E32","ARC-Seal":["i=3; a=rsa-sha256; d=sourceware.org; s=key; t=1766068080; cv=pass;\n 
b=v0h0sWcllA1CBxieWu5aDa2twXvTiXw67UO2oidHeqqmkpJYtTnJltyZP3TBmaOU7ZE4XNSWtYi5otw4Av0jTFzqaqaRyOoU6H+UVSEyHMB8g9P33KbmSQPT0FUnQMcb3zvO0vnlvWF1kEudL8LlG33vTYm8Csi1MTqEPF1ADfg=","i=2; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=pass;\n b=YLx0kbpTQBbtleADX/fK+le2I7q8YBM2g08zbkwXszRrLHTRDpFp+Dqj22Drx0AxQeLgNT2wD74rtO/DxnTteF7P4gA0lLaj3R0mxoahrNlmuyQ51MpFRQq2noAF6fb2Bp8jA63RAYlFGbobkIH5a/kIv21IPugB802LqIUMhcNvsCslfLOP4DXS0EziNjHcyAwxmM6V2L80zhHsq8DT4VKaCKBdAI3Jw/9TygMlsEcSqUPq9ox3Qz4ft68jyu1zdJdXzSOB2Koy8kYxmenRMk/RofgQN235F/yVul6XyTyRU7G+BF2JsjoVpweyGpd8J/2QnwIR/4RxBiczXfj5YA==","i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none;\n b=kddyLulPUh+3yxCMw9hBhYQajGch8g5uuuf/2S6BRu4P/HVEu/IyWgmPAnERgrRLwb85DCuwbxdBfA2FUOo6WcUL7wHJrSm4nKWnyEgzwIEdqDmd0UMWWSmF0TjEK9z2woVTHTxv6haXsC43H9BNT266cIrCdFNkHjmrSVWCrNn13IE7G6vXxl9m9/ITv4yS4jJ3LeyiCja9Sr1mu/cl2sSdaFef4AY5u/DfyP6Hd++CFItjGk3zmZ5sfuJ6/KWdsFT4+QfeEYA17/4zFxqVQhIMPF7s3gwBBSBMkdmmGnOLOSdXRC1W2NkdWuGnYTBCrsDTfggS40MLuPjdOoiy4g=="],"ARC-Message-Signature":["i=3; a=rsa-sha256; d=sourceware.org; s=key;\n t=1766068080; c=relaxed/simple;\n bh=7AXI9Bf7PWkxEwODLGsc/1wNJoWZgLGKtx1f/jJF9gI=;\n h=DKIM-Signature:DKIM-Signature:From:To:Subject:Date:Message-ID:\n MIME-Version;\n b=KfOUTCHWNjJv9XEt/9X4vFv65BW6EmuJx6Sp5GtbmAi+FS/UiusnkcQY9InLJtD298Eta4mFhiniHjqEtv1cP6WAxhDcJF4DJlFOeiy5fQRoWEq3IwzoWkwixhA7LqcGTbxIbd3xH0fEEpOv0fSx9eDZoZckvbUBaQArBSAhNWI=","i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector10001;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=u17N086EWe8S85YzDxGLnzWvmQnQkMO0wAw/bPxi23Q=;\n 
b=Joej6BFCCef5kEBb2meaIfR17us0tafnGKnC5K8QXdqzBINSxo2agP+iiQAbQge4gY93N+r+BhBsoFgoI/WoabcKDWChSUj/PHAcYZBbR5Oa0UTrSipDDqB4aYWG5+J+toXBVb2mBXw8m5/WOsUs99UKXFFb8yE36tIlakFDhvEgjk1jIQ0ItR8DpaS5jw9Qs1GAbZsRUivmJA3jb0WNB2UomU1zUaxlUFGreF9MXRx98yU8dlzj5y/y/mXBndyhoDKzRSjRztxTMeckLHw5lXyZpISQIZQz/eyoQyN0JOOirJ9Cr9K08EEDTRgzi24L9wPky7GLsyEH9ouPsYLP9g==","i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector10001;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=u17N086EWe8S85YzDxGLnzWvmQnQkMO0wAw/bPxi23Q=;\n b=uHOlYKMJXOR9Iu8VJe9L09LcgvZ2NJ02CO+xpcYUxBQoEXABXKKeuhQMTKm6L+eFBT9/ycWMNp2eu4qnHprU94K9Q+KZgLUuY7x+c4evyr6688ynVcvpAJoHYAlnNdoKoqcyshwTSdh9t4hhJxzlez2FyPN+QtNSqDMjucw0NknyDoKr9oi+kFDcLm95nbZ4z7GkbnSajewnyVlwc/9zbqUEFk1YabK1aEe5u8eTxHfIb4YCu0fIqOZHR0mHRn0iSRknbpJazkEcvLZBsNIFTMlZW/J0eZkE/ZkJdb6/u+a1nIHgwoQ4WieSaoSvy7Oy/xREsok/AEcuRgC5oBzGQg=="],"ARC-Authentication-Results":["i=3; server2.sourceware.org","i=2; mx.microsoft.com 1; spf=pass (sender ip is\n 4.158.2.129) smtp.rcpttodomain=oss.qualcomm.com smtp.mailfrom=arm.com;\n dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;\n dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1\n spf=[1,1,smtp.mailfrom=arm.com] dmarc=[1,1,header.from=arm.com])","i=1; mx.microsoft.com 1; spf=pass (sender ip is\n 172.205.89.229) smtp.rcpttodomain=gcc.gnu.org smtp.mailfrom=arm.com;\n dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;\n dkim=none (message not signed); arc=none (0)"],"DKIM-Signature":["v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=u17N086EWe8S85YzDxGLnzWvmQnQkMO0wAw/bPxi23Q=;\n 
b=KG4VzVvjC4Pr+4QzQl/Ed36zATOJ2S1TlLa+BSTyWTvFUv0RBifzDSFPzKcGX8OKV5MMhmBhmzFawgmq1OC6wRs2nkV2Yk4Rc3DmsTPxGebrfO4PYyFOS+OphEeUXNfnu1rKdYDeorolQhx4EAlkzOys2jdBMUjRhBYIMIt9fAI=","v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=u17N086EWe8S85YzDxGLnzWvmQnQkMO0wAw/bPxi23Q=;\n b=KG4VzVvjC4Pr+4QzQl/Ed36zATOJ2S1TlLa+BSTyWTvFUv0RBifzDSFPzKcGX8OKV5MMhmBhmzFawgmq1OC6wRs2nkV2Yk4Rc3DmsTPxGebrfO4PYyFOS+OphEeUXNfnu1rKdYDeorolQhx4EAlkzOys2jdBMUjRhBYIMIt9fAI="],"X-MS-Exchange-Authentication-Results":["spf=pass (sender IP is 4.158.2.129)\n smtp.mailfrom=arm.com; dkim=pass (signature was verified)\n header.d=arm.com;dmarc=pass action=none header.from=arm.com;","spf=pass (sender IP is 172.205.89.229)\n smtp.mailfrom=arm.com; dkim=none (message not signed)\n header.d=none;dmarc=pass action=none header.from=arm.com;"],"Received-SPF":["Pass (protection.outlook.com: domain of arm.com designates\n 4.158.2.129 as permitted sender) receiver=protection.outlook.com;\n client-ip=4.158.2.129; helo=outbound-uk1.az.dlp.m.darktrace.com; pr=C","Pass (protection.outlook.com: domain of arm.com designates\n 172.205.89.229 as permitted sender) receiver=protection.outlook.com;\n client-ip=172.205.89.229; helo=nebula.arm.com; pr=C"],"From":"Claudio Bantaloukas <claudio.bantaloukas@arm.com>","To":"Gcc Patches ML <gcc-patches@gcc.gnu.org>","CC":"Alex Coplan <alex.coplan@arm.com>, Alice Carlotti\n <alice.carlotti@arm.com>, Andrew Pinski <andrew.pinski@oss.qualcomm.com>,\n Kyrylo Tkachov <ktkachov@nvidia.com>, Richard Earnshaw\n <richard.earnshaw@arm.com>, Tamar Christina <tamar.christina@arm.com>, \"Wilco\n Dijkstra\" <wilco.dijkstra@arm.com>, Claudio Bantaloukas\n <claudio.bantaloukas@arm.com>","Subject":"[PATCH v4 1/8] aarch64: extend sme intrinsics to mfp8","Date":"Thu, 18 Dec 2025 14:26:13 +0000","Message-ID":"<20251218142621.57402-2-claudio.bantaloukas@arm.com>","X-Mailer":"git-send-email 
2.51.0","In-Reply-To":"<20251218142621.57402-1-claudio.bantaloukas@arm.com>","References":"<20251218142621.57402-1-claudio.bantaloukas@arm.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","Content-Type":"text/plain","X-EOPAttributedMessage":"1","X-MS-TrafficTypeDiagnostic":"\n DB1PEPF000509E5:EE_|DBBPR08MB10697:EE_|AM4PEPF00027A6A:EE_|DB9PR08MB7771:EE_","X-MS-Office365-Filtering-Correlation-Id":"c8ac3f2c-3594-49f1-3f06-08de3e419a50","x-checkrecipientrouted":"true","NoDisclaimer":"true","X-MS-Exchange-SenderADCheck":"1","X-MS-Exchange-AntiSpam-Relay":"0","X-Microsoft-Antispam-Untrusted":"BCL:0;\n ARA:13230040|36860700013|1800799024|82310400026|376014|13003099007;","X-Microsoft-Antispam-Message-Info-Original":"\n i/R9cmIP3LjeHEpEQeYCsfyWuyuJWC60UZ0EpHYp/qBnQN4jFtGnVy5tyBuRmlRHwXoO2cP1A38mRqR76NUz28vh2m42RxL0IxTYq6X6cHc90hjEg6lLMyNvKErBJNzz/HbvlWAJ1is3r+kxvRRz876uTtSFkLxoMbtbPHzpm+3YjYyUSv52CHk6TDbu0rsCijBuU8c7C9/5T7/caIhoN8Iv3jr4kJY1Oyn9lkli3d0mH7Aznh2xigHfNYY7X20R5Uc7kPz4oLM5Gr8iSw0Awy95CSlm1vTzkvNw5uVcHqrJukdAL5Gm1dHo1uoTTOSAFGN/ii7r55lXDghQ+7wvdLjR8gZg9H6RfZEQzhWPZYIH1/jg6A0bkTF/DTb22/X962LSYS8XXAZhueNw1OUApLwFTeQtdqZTiccYd1Kho5EDSWUWeZSKFE+qPobfjUyelWcD4znO+Eg4crlHQiDPD4wGSjam8GfAUGOZ95Gk/wysu+7slqOfwhUx+YG5RLMW0U0fkfHMAHkNCEIy9fyA+omNPHYwNolIWH/xMzc5ybUZQooUqeJzMA3S8H3uX+yutXZAxmjHJp9FKv0prHusEUo+JT8GhVJHU2kgCUyjKTGxDeR6+7Ujwfnc3uDvj+bViRBTEOBumPIjnbT5dn8maEzAuXbLxgKW7TsHPlSyVJ4JdUOMgnD909AjMmH8NWWixSS/We/51FQuJsvEOrbV4B0XFjQrWCRg+UYIuJZBR74C7bZfXl+zD/96AqOLM4XffjvTP0nQVqbu+8wjkVVMTH52PQgLbNsI7SY3TCWNiUA1BScLablNx1A62enWxz+CJRBcK4uv4TxdoUqZzUMh5/SXIJt/mcwhg7E0gMSFEMCUx91bzvpzv9Afeu6Jr/izI3JbXDCUt2IKZUq2dwJJ/I/JjGSVlTjgQ6KdWpq/mj3mR5u0NsO36R7g1v0kJ8uq/ghTVhs7mAK0xoUsy7qF/Kx6rJIZXK5y9PAqNFZ6bNZC71aMC37XF9eLgK17Q/eZXjQLycSTxDxDNaK1Q5SDJxdFGMAlTe3HhM4HA3b3/MA4+H2sit+S5885QR59yDrmaEnygXPF/cDivv5xSBJMbWmcmHfgUfR+N8bSe3zQGDe6TzmoQSFFolAUljHVgPPU7Galb9VIwFIN5+HAqsET4/z2/t+KSU445HprmO/rerilSJ0Cn51tuTu8f6QZfXHEB9hBOz4NUygUIPQOJEXa+IUaNH+byyqnzfiVQlVXAWuGcu/wMW
J2g7pwZ/gDR5o6jA3mpgMLX8kWVCicWkJJ8Axo5I2oh3XfFR/4KwbUQpMYfD2Ozkh7rNJWOYFS6wD9Z+Y/84klMP+J80UF8KquUN1JkPx2UFmhp2RJ7qrgZ27qmtBz60ZekdFtd+XLWjW3Zq3Z/S69EwND9PTCfy3l6yYFhXRluGjo8lnr+Knr4f+GbyGs5XX4ByUtTE3VmjfxsBaWexp96287/UT9/IwiEaw6nP7ycjdYYVpKdYXXXL0VQojpyvQhZ10pQatqnsv1Ap/X/rGCDAfh8l9iIh5yvz70CcWcyBJfsU6Edp6DzAs=","X-Forefront-Antispam-Report-Untrusted":"CIP:172.205.89.229; CTRY:IE; LANG:en;\n SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:nebula.arm.com; PTR:InfoDomainNonexistent;\n CAT:NONE;\n SFS:(13230040)(36860700013)(1800799024)(82310400026)(376014)(13003099007);\n DIR:OUT; SFP:1101;","X-MS-Exchange-Transport-CrossTenantHeadersStamped":["DBBPR08MB10697","DB9PR08MB7771"],"X-MS-Exchange-Transport-CrossTenantHeadersStripped":"\n AM4PEPF00027A6A.eurprd04.prod.outlook.com","X-MS-PublicTrafficType":"Email","X-MS-Office365-Filtering-Correlation-Id-Prvs":"\n 50a093ca-d0d9-40e6-e4e9-08de3e417073","X-Microsoft-Antispam":"BCL:0;\n ARA:13230040|35042699022|14060799003|36860700013|376014|82310400026|1800799024|13003099007;","X-Microsoft-Antispam-Message-Info":"\n 
T8nAbxGUbSAXmNReuBChaXxwJl3ngINF+t2aAZi/6nFFb19Qd/Gppnu+xic8EYiYCO/7DmtOVK3DFgRKIO7xyNHej8dQA+0igRCwDAfrzRwdJmqCiXxoXNhIoyJHR/MyY6wlOzNvladq+VLXnWLytU3XAmRItyzhNu/DbGEVHev2HjuEvJcxdm/Gpl1EjGM3oVpdqhza5Pg6VDq6Lwt+g0cCbx/zOySl/KKi72cDDChRQ1iZ6uQR/g7TE87F37HanEPeajhBUGSB7I2nxwAx7yPhuUIFkCkoxBb2JEbie0Npa9WrNGcafep5sceoJpSQxzZSCu+6tuzJOXNBoWH3DfO+dt6FJ5T9z4I6zO5R+DmbDZYRJxkcMJIZJ7w4MepmAM4jPtPRlhDMoAuanUGV54Cog7+zlG7RTl+mxLlcaF7SdC+hBdnfZ/PmFZgXS6X8YSa3E5W9RNAvH3dJ5JmRYcjgo/MlQMo1k+3pVNRgCR60uTmfAhC8ibHneoZUY/OLasu2pEID/Tajqfb3mwkMvwHDxkRrdASqQPfiSfZ7OeuZ1zklq6jXEsEd7JBJ6GRcTu9JBU4XCCldQ2GB47NKzQJyA+WeZ+LzHmZ4hobEkDpoMpy3f9XmWsquwX8pf/CbVEyczCthWPRU8Rgvd0ggqhRBFf2DBIVBRIvRoGvTqN/742DccU8wRivelmcZ2LnhN8sBCom9zUkXYL+IkhkWCmuo66uqUJfkAp2VZc2OuiFXRKfgiHO3ytrRnY94G8yuY7DVrfl5+xApYBgzqqk7rouHYSOLumQQ1JxM9JeokdUGMF9R+uFehGza3bY7++SkxNTiFrN/Z8+yLROZQ0CwQNWmr6r2AEFZwqQV4CqPQoab5vzeAETvlZAULOP3fXu8ziNXH0PXuzdx/ouN3pIWFLMKvnHF5R+Hk3LTprKibzdFuDLoC57w9duHr/Qq0Qy6IlAszepwFKf8rmrx5wU9ULxqmmswMwPbquH1amHro4Qa8JMFePo57aPrWueKN0YUROfF117UTATxF5YAv4I2pCQkweDW8fIct7vYlcEmnEl4Y+KjI3mWdvltUVHE3mx4eoGfCplNmkUFjDrtYMb9qSkODUhUeFX2IP36AT099Ft9682RAyiM1EbJ1Vg0cGQFc4N7RazCyqb/C7XzqSkmkfPDtKbwVEJf821JXf8Wu1me9K7zdt41aewQD8R35XgUOk9UPsApo/gFAdv0pntNlAG6yBiYKZxEq3Q0zsWUN1Xex3lpsE+8nFQQNheNbt9GMhI5o5rchvYNdfDq0I4V05UH0nRHmY88RZvAZmtx18bf3p72PwQZYzN+E410OYWl1iVgCEeOVnXK1DSIeDGh+h+YGyNXrwvkss9vRxFrCwqV2wKhnzkGDhZ23M7VT9+CABuLdbT5ByKsUtcXVLU95KYBfT/X5vSzk4jCbLuo4jASw8bni7sTmJEKUpNCi08DfH2HqAYsRATToo0poD0yEfy3klUHEknPXTu1729TnNXTtFdmRqK53cOHJ3lAR7fPzxJ9PgQjjxEuH35OKMRtBHi934IX2pzYleFrY40qR3I=","X-Forefront-Antispam-Report":"CIP:4.158.2.129; CTRY:GB; LANG:en; SCL:1; SRV:;\n IPV:NLI; SFV:NSPM; H:outbound-uk1.az.dlp.m.darktrace.com;\n PTR:InfoDomainNonexistent; CAT:NONE;\n SFS:(13230040)(35042699022)(14060799003)(36860700013)(376014)(82310400026)(1800799024)(13003099007);\n DIR:OUT; SFP:1101;","X-OriginatorOrg":"arm.com","X-MS-Exchange-CrossTenant-OriginalArrivalTime":"18 Dec 2025 14:27:41.4901 
(UTC)","X-MS-Exchange-CrossTenant-Network-Message-Id":"\n c8ac3f2c-3594-49f1-3f06-08de3e419a50","X-MS-Exchange-CrossTenant-Id":"f34e5979-57d9-4aaa-ad4d-b122a662184d","X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp":"\n TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[4.158.2.129];\n Helo=[outbound-uk1.az.dlp.m.darktrace.com]","X-MS-Exchange-CrossTenant-AuthSource":"\n AM4PEPF00027A6A.eurprd04.prod.outlook.com","X-MS-Exchange-CrossTenant-AuthAs":"Anonymous","X-MS-Exchange-CrossTenant-FromEntityHeader":"HybridOnPrem","X-BeenThere":"gcc-patches@gcc.gnu.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Gcc-patches mailing list <gcc-patches.gcc.gnu.org>","List-Unsubscribe":"<https://gcc.gnu.org/mailman/options/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=unsubscribe>","List-Archive":"<https://gcc.gnu.org/pipermail/gcc-patches/>","List-Post":"<mailto:gcc-patches@gcc.gnu.org>","List-Help":"<mailto:gcc-patches-request@gcc.gnu.org?subject=help>","List-Subscribe":"<https://gcc.gnu.org/mailman/listinfo/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=subscribe>","Errors-To":"gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org"},"content":"This patch extends the following intrinsics to support svmfloat8_t types and\nadds tests based on the equivalent ones for svuint8_t.\n\nSME:\n- svread_hor_za8[_mf8]_m, svread_hor_za128[_mf8]_m and related ver.\n- svwrite_hor_za8[_mf8]_m, svwrite_hor_za128[_mf8]_m and related ver.\n\nSME2:\n- svread_hor_za8_mf8_vg2, svread_hor_za8_mf8_vg4 and related ver.\n- svwrite_hor_za8[_mf8]_vg2, svwrite_hor_za8[_mf8]_vg4 and related ver.\n- svread_za8[_mf8]_vg1x2, svread_za8[_mf8]_vg1x4.\n- svwrite_za8[_mf8]_vg1x2, svwrite_za8[_mf8]_vg1x4.\n- svsel[_mf8_x2], svsel[_mf8_x4].\n- svzip[_mf8_x2], svzip[_mf8_x4].\n- svzipq[_mf8_x2], svzipq[_mf8_x4].\n- svuzp[_mf8_x2], svuzp[_mf8_x4].\n- svuzpq[_mf8_x2], svuzpq[_mf8_x4].\n- svld1[_mf8]_x2, svld1[_mf8]_x4.\n- svld1_vnum[_mf8]_x2, 
svld1_vnum[_mf8]_x4.\n\nSVE2.1/SME2:\n- svldnt1[_mf8]_x2, svldnt1[_mf8]_x4.\n- svldnt1_vnum[_mf8]_x2, svldnt1_vnum[_mf8]_x4.\n- svrevd[_mf8]_m, svrevd[_mf8]_z, svrevd[_mf8]_x.\n- svst1[_mf8_x2], svst1[_mf8_x4].\n- svst1_vnum[_mf8_x2], svst1_vnum[_mf8_x4].\n- svstnt1[_mf8_x2], svstnt1[_mf8_x4].\n- svstnt1_vnum[_mf8_x2], svstnt1_vnum[_mf8_x4].\n\nSME2.1:\n- svreadz_hor_za8_u8, svreadz_hor_za8_u8_vg2, svreadz_hor_za8_u8_vg4 and related\n  ver.\n- svreadz_hor_za128_u8, svreadz_ver_za128_u8.\n- svreadz_za8_u8_vg1x2, svreadz_za8_u8_vg1x4.\n\nThis change follows ACLE 2024Q4.\n\ngcc/\n\t* config/aarch64/aarch64-sve-builtins.cc (TYPES_za_bhsd_data): Add\n\tD (za8, mf8) combination to za_bhsd_data.\n\ngcc/testsuite/\n\t* gcc.target/aarch64/sme/acle-asm/revd_mf8.c: Added test file.\n\t* gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_ver_za128.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/sel_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/sel_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/st1_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/st1_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/zip_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/zip_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x4.c: Likewise.\n\t* 
gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sve2/acle/asm/revd_mf8.c: Likewise.\n\t* gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x2.c: Likewise.\n\t* gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x4.c: Likewise.\n\t* gcc.target/aarch64/sme/acle-asm/read_hor_za128.c: Added mf8 tests.\n\t* gcc.target/aarch64/sme/acle-asm/read_hor_za8.c: Likewise.\n\t* gcc.target/aarch64/sme/acle-asm/read_ver_za128.c: Likewise.\n\t* gcc.target/aarch64/sme/acle-asm/read_ver_za8.c: Likewise.\n\t* gcc.target/aarch64/sme/acle-asm/write_hor_za128.c: Likewise.\n\t* gcc.target/aarch64/sme/acle-asm/write_hor_za8.c: Likewise.\n\t* gcc.target/aarch64/sme/acle-asm/write_ver_za128.c: Likewise.\n\t* gcc.target/aarch64/sme/acle-asm/write_ver_za8.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_hor_za128.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_hor_za8.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_ver_za8.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x4.c: Likewise.\n\t* 
gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg4.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x2.c: Likewise.\n\t* gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x4.c: Likewise.\n---\n gcc/config/aarch64/aarch64-sve-builtins.cc    |   4 +-\n .../aarch64/sme/acle-asm/read_hor_za128.c     |  31 ++\n .../aarch64/sme/acle-asm/read_hor_za8.c       |  31 ++\n .../aarch64/sme/acle-asm/read_ver_za128.c     |  31 ++\n .../aarch64/sme/acle-asm/read_ver_za8.c       |  31 ++\n .../aarch64/sme/acle-asm/revd_mf8.c           |  76 ++++\n .../aarch64/sme/acle-asm/write_hor_za128.c    |  10 +\n .../aarch64/sme/acle-asm/write_hor_za8.c      |  10 +\n .../aarch64/sme/acle-asm/write_ver_za128.c    |  10 +\n .../aarch64/sme/acle-asm/write_ver_za8.c      |  10 +\n .../aarch64/sme2/acle-asm/ld1_mf8_x2.c        | 262 +++++++++++++\n .../aarch64/sme2/acle-asm/ld1_mf8_x4.c        | 354 +++++++++++++++++\n .../aarch64/sme2/acle-asm/ldnt1_mf8_x2.c      | 262 +++++++++++++\n .../aarch64/sme2/acle-asm/ldnt1_mf8_x4.c      | 354 +++++++++++++++++\n .../aarch64/sme2/acle-asm/read_hor_za8_vg2.c  |  78 ++++\n .../aarch64/sme2/acle-asm/read_hor_za8_vg4.c  |  91 +++++\n .../aarch64/sme2/acle-asm/read_ver_za8_vg2.c  |  78 ++++\n .../aarch64/sme2/acle-asm/read_ver_za8_vg4.c  |  91 +++++\n .../aarch64/sme2/acle-asm/read_za8_vg1x2.c    |  48 +++\n .../aarch64/sme2/acle-asm/read_za8_vg1x4.c    |  54 +++\n .../aarch64/sme2/acle-asm/readz_hor_za128.c   |  10 +\n .../aarch64/sme2/acle-asm/readz_hor_za8.c     |  10 +\n .../aarch64/sme2/acle-asm/readz_hor_za8_vg2.c |  78 ++++\n .../aarch64/sme2/acle-asm/readz_hor_za8_vg4.c |  91 +++++\n .../aarch64/sme2/acle-asm/readz_ver_za128.c   | 197 ++++++++++\n .../aarch64/sme2/acle-asm/readz_ver_za8.c     |  10 +\n 
.../aarch64/sme2/acle-asm/readz_ver_za8_vg2.c |  77 ++++\n .../aarch64/sme2/acle-asm/readz_ver_za8_vg4.c |  90 +++++\n .../aarch64/sme2/acle-asm/readz_za8_vg1x2.c   |  48 +++\n .../aarch64/sme2/acle-asm/readz_za8_vg1x4.c   |  56 +++\n .../aarch64/sme2/acle-asm/sel_mf8_x2.c        |  92 +++++\n .../aarch64/sme2/acle-asm/sel_mf8_x4.c        |  92 +++++\n .../aarch64/sme2/acle-asm/st1_mf8_x2.c        | 262 +++++++++++++\n .../aarch64/sme2/acle-asm/st1_mf8_x4.c        | 354 +++++++++++++++++\n .../aarch64/sme2/acle-asm/stnt1_mf8_x2.c      | 262 +++++++++++++\n .../aarch64/sme2/acle-asm/stnt1_mf8_x4.c      | 354 +++++++++++++++++\n .../aarch64/sme2/acle-asm/uzp_mf8_x2.c        |  77 ++++\n .../aarch64/sme2/acle-asm/uzp_mf8_x4.c        |  73 ++++\n .../aarch64/sme2/acle-asm/uzpq_mf8_x2.c       |  77 ++++\n .../aarch64/sme2/acle-asm/uzpq_mf8_x4.c       |  73 ++++\n .../aarch64/sme2/acle-asm/write_hor_za8_vg2.c |  78 ++++\n .../aarch64/sme2/acle-asm/write_hor_za8_vg4.c |  91 +++++\n .../aarch64/sme2/acle-asm/write_ver_za8_vg2.c |  78 ++++\n .../aarch64/sme2/acle-asm/write_ver_za8_vg4.c |  91 +++++\n .../aarch64/sme2/acle-asm/write_za8_vg1x2.c   |  48 +++\n .../aarch64/sme2/acle-asm/write_za8_vg1x4.c   |  54 +++\n .../aarch64/sme2/acle-asm/zip_mf8_x2.c        |  77 ++++\n .../aarch64/sme2/acle-asm/zip_mf8_x4.c        |  73 ++++\n .../aarch64/sme2/acle-asm/zipq_mf8_x2.c       |  77 ++++\n .../aarch64/sme2/acle-asm/zipq_mf8_x4.c       |  73 ++++\n .../aarch64/sve2/acle/asm/ld1_mf8_x2.c        | 269 +++++++++++++\n .../aarch64/sve2/acle/asm/ld1_mf8_x4.c        | 361 ++++++++++++++++++\n .../aarch64/sve2/acle/asm/ldnt1_mf8_x2.c      | 269 +++++++++++++\n .../aarch64/sve2/acle/asm/ldnt1_mf8_x4.c      | 361 ++++++++++++++++++\n .../aarch64/sve2/acle/asm/revd_mf8.c          |  80 ++++\n .../aarch64/sve2/acle/asm/stnt1_mf8_x2.c      | 269 +++++++++++++\n .../aarch64/sve2/acle/asm/stnt1_mf8_x4.c      | 361 ++++++++++++++++++\n 57 files changed, 7007 insertions(+), 2 deletions(-)\n 
create mode 100644 gcc/testsuite/gcc.target/aarch64/sme/acle-asm/revd_mf8.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za128.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/sel_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/sel_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/st1_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/st1_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zip_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zip_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x4.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x4.c\n create mode 100644 
gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/revd_mf8.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x2.c\n create mode 100644 gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x4.c","diff":"diff --git a/gcc/config/aarch64/aarch64-sve-builtins.cc b/gcc/config/aarch64/aarch64-sve-builtins.cc\nindex dbd80cab627..e8eeedb4d36 100644\n--- a/gcc/config/aarch64/aarch64-sve-builtins.cc\n+++ b/gcc/config/aarch64/aarch64-sve-builtins.cc\n@@ -640,7 +640,7 @@ CONSTEXPR const group_suffix_info group_suffixes[] = {\n #define TYPES_d_za(S, D) \\\n   S (za64)\n \n-/* {   _za8 } x {             _s8  _u8 }\n+/* {   _za8 } x {  _mf8       _s8  _u8 }\n \n    {  _za16 } x { _bf16 _f16 _s16 _u16 }\n \n@@ -648,7 +648,7 @@ CONSTEXPR const group_suffix_info group_suffixes[] = {\n \n    {  _za64 } x {       _f64 _s64 _u64 }.  */\n #define TYPES_za_bhsd_data(S, D) \\\n-  D (za8, s8), D (za8, u8), \\\n+  D (za8, mf8), D (za8, s8), D (za8, u8), \\\n   D (za16, bf16), D (za16, f16), D (za16, s16), D (za16, u16), \\\n   D (za32, f32), D (za32, s32), D (za32, u32), \\\n   D (za64, f64), D (za64, s64), D (za64, u64)\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_hor_za128.c b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_hor_za128.c\nindex c8eef3b16fd..fedefe5b824 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_hor_za128.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_hor_za128.c\n@@ -103,6 +103,16 @@ TEST_READ_ZA (read_za128_u8_0_w0_tied, svuint8_t,\n \t      z0 = svread_hor_za128_u8_m (z0, p0, 0, w0),\n \t      z0 = svread_hor_za128_m (z0, p0, 0, w0))\n \n+/*\n+** read_za128_mf8_0_w0_tied:\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz0\\.q, p0/m, za0h\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (read_za128_mf8_0_w0_tied, svmfloat8_t,\n+\t      z0 = svread_hor_za128_mf8_m (z0, p0, 0, w0),\n+\t      z0 = svread_hor_za128_m (z0, p0, 0, w0))\n+\n /*\n ** read_za128_u8_0_w0_untied:\n ** (\n@@ -124,6 +134,27 @@ 
TEST_READ_ZA (read_za128_u8_0_w0_untied, svuint8_t,\n \t      z0 = svread_hor_za128_u8_m (z1, p0, 0, w0),\n \t      z0 = svread_hor_za128_m (z1, p0, 0, w0))\n \n+/*\n+** read_za128_mf8_0_w0_untied:\n+** (\n+**\tmov\t(w1[2-5]), w0\n+**\tmov\tz0\\.d, z1\\.d\n+**\tmova\tz0\\.q, p0/m, za0h\\.q\\[\\1, 0\\]\n+** |\n+**\tmov\tz0\\.d, z1\\.d\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz0\\.q, p0/m, za0h\\.q\\[\\2, 0\\]\n+** |\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz1\\.q, p0/m, za0h\\.q\\[\\3, 0\\]\n+**\tmov\tz0\\.d, z1\\.d\n+** )\n+**\tret\n+*/\n+TEST_READ_ZA (read_za128_mf8_0_w0_untied, svmfloat8_t,\n+\t      z0 = svread_hor_za128_mf8_m (z1, p0, 0, w0),\n+\t      z0 = svread_hor_za128_m (z1, p0, 0, w0))\n+\n /*\n ** read_za128_s16_0_w0_tied:\n **\tmov\t(w1[2-5]), w0\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_hor_za8.c b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_hor_za8.c\nindex 0ad5a953f6b..7c04ef30fd0 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_hor_za8.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_hor_za8.c\n@@ -103,6 +103,16 @@ TEST_READ_ZA (read_za8_u8_0_w0_tied, svuint8_t,\n \t      z0 = svread_hor_za8_u8_m (z0, p0, 0, w0),\n \t      z0 = svread_hor_za8_m (z0, p0, 0, w0))\n \n+/*\n+** read_za8_mf8_0_w0_tied:\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz0\\.b, p0/m, za0h\\.b\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (read_za8_mf8_0_w0_tied, svmfloat8_t,\n+\t      z0 = svread_hor_za8_mf8_m (z0, p0, 0, w0),\n+\t      z0 = svread_hor_za8_m (z0, p0, 0, w0))\n+\n /*\n ** read_za8_u8_0_w0_untied:\n ** (\n@@ -123,3 +133,24 @@ TEST_READ_ZA (read_za8_u8_0_w0_tied, svuint8_t,\n TEST_READ_ZA (read_za8_u8_0_w0_untied, svuint8_t,\n \t      z0 = svread_hor_za8_u8_m (z1, p0, 0, w0),\n \t      z0 = svread_hor_za8_m (z1, p0, 0, w0))\n+\n+/*\n+** read_za8_mf8_0_w0_untied:\n+** (\n+**\tmov\t(w1[2-5]), w0\n+**\tmov\tz0\\.d, z1\\.d\n+**\tmova\tz0\\.b, p0/m, za0h\\.b\\[\\1, 0\\]\n+** |\n+**\tmov\tz0\\.d, z1\\.d\n+**\tmov\t(w1[2-5]), 
w0\n+**\tmova\tz0\\.b, p0/m, za0h\\.b\\[\\2, 0\\]\n+** |\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz1\\.b, p0/m, za0h\\.b\\[\\3, 0\\]\n+**\tmov\tz0\\.d, z1\\.d\n+** )\n+**\tret\n+*/\n+TEST_READ_ZA (read_za8_mf8_0_w0_untied, svmfloat8_t,\n+\t      z0 = svread_hor_za8_mf8_m (z1, p0, 0, w0),\n+\t      z0 = svread_hor_za8_m (z1, p0, 0, w0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_ver_za128.c b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_ver_za128.c\nindex 93d5d60ea57..c4214d19e5d 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_ver_za128.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_ver_za128.c\n@@ -103,6 +103,16 @@ TEST_READ_ZA (read_za128_u8_0_w0_tied, svuint8_t,\n \t      z0 = svread_ver_za128_u8_m (z0, p0, 0, w0),\n \t      z0 = svread_ver_za128_m (z0, p0, 0, w0))\n \n+/*\n+** read_za128_mf8_0_w0_tied:\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz0\\.q, p0/m, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (read_za128_mf8_0_w0_tied, svmfloat8_t,\n+\t      z0 = svread_ver_za128_mf8_m (z0, p0, 0, w0),\n+\t      z0 = svread_ver_za128_m (z0, p0, 0, w0))\n+\n /*\n ** read_za128_u8_0_w0_untied:\n ** (\n@@ -124,6 +134,27 @@ TEST_READ_ZA (read_za128_u8_0_w0_untied, svuint8_t,\n \t      z0 = svread_ver_za128_u8_m (z1, p0, 0, w0),\n \t      z0 = svread_ver_za128_m (z1, p0, 0, w0))\n \n+/*\n+** read_za128_mf8_0_w0_untied:\n+** (\n+**\tmov\t(w1[2-5]), w0\n+**\tmov\tz0\\.d, z1\\.d\n+**\tmova\tz0\\.q, p0/m, za0v\\.q\\[\\1, 0\\]\n+** |\n+**\tmov\tz0\\.d, z1\\.d\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz0\\.q, p0/m, za0v\\.q\\[\\2, 0\\]\n+** |\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz1\\.q, p0/m, za0v\\.q\\[\\3, 0\\]\n+**\tmov\tz0\\.d, z1\\.d\n+** )\n+**\tret\n+*/\n+TEST_READ_ZA (read_za128_mf8_0_w0_untied, svmfloat8_t,\n+\t      z0 = svread_ver_za128_mf8_m (z1, p0, 0, w0),\n+\t      z0 = svread_ver_za128_m (z1, p0, 0, w0))\n+\n /*\n ** read_za128_s16_0_w0_tied:\n **\tmov\t(w1[2-5]), w0\ndiff --git 
a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_ver_za8.c b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_ver_za8.c\nindex 87564d1fa68..3859b2351fb 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_ver_za8.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/read_ver_za8.c\n@@ -103,6 +103,16 @@ TEST_READ_ZA (read_za8_u8_0_w0_tied, svuint8_t,\n \t      z0 = svread_ver_za8_u8_m (z0, p0, 0, w0),\n \t      z0 = svread_ver_za8_m (z0, p0, 0, w0))\n \n+/*\n+** read_za8_mf8_0_w0_tied:\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz0\\.b, p0/m, za0v\\.b\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (read_za8_mf8_0_w0_tied, svmfloat8_t,\n+\t      z0 = svread_ver_za8_mf8_m (z0, p0, 0, w0),\n+\t      z0 = svread_ver_za8_m (z0, p0, 0, w0))\n+\n /*\n ** read_za8_u8_0_w0_untied:\n ** (\n@@ -123,3 +133,24 @@ TEST_READ_ZA (read_za8_u8_0_w0_tied, svuint8_t,\n TEST_READ_ZA (read_za8_u8_0_w0_untied, svuint8_t,\n \t      z0 = svread_ver_za8_u8_m (z1, p0, 0, w0),\n \t      z0 = svread_ver_za8_m (z1, p0, 0, w0))\n+\n+/*\n+** read_za8_mf8_0_w0_untied:\n+** (\n+**\tmov\t(w1[2-5]), w0\n+**\tmov\tz0\\.d, z1\\.d\n+**\tmova\tz0\\.b, p0/m, za0v\\.b\\[\\1, 0\\]\n+** |\n+**\tmov\tz0\\.d, z1\\.d\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz0\\.b, p0/m, za0v\\.b\\[\\2, 0\\]\n+** |\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tz1\\.b, p0/m, za0v\\.b\\[\\3, 0\\]\n+**\tmov\tz0\\.d, z1\\.d\n+** )\n+**\tret\n+*/\n+TEST_READ_ZA (read_za8_mf8_0_w0_untied, svmfloat8_t,\n+\t      z0 = svread_ver_za8_mf8_m (z1, p0, 0, w0),\n+\t      z0 = svread_ver_za8_m (z1, p0, 0, w0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/revd_mf8.c b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/revd_mf8.c\nnew file mode 100644\nindex 00000000000..611714b539b\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/revd_mf8.c\n@@ -0,0 +1,76 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sme_acle.h\"\n+\n+/*\n+** revd_mf8_m_tied12:\n+**\trevd\tz0\\.q, p0/m, 
z0\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_m_tied12, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_m (z0, p0, z0),\n+\t\tz0 = svrevd_m (z0, p0, z0))\n+\n+/*\n+** revd_mf8_m_tied1:\n+**\trevd\tz0\\.q, p0/m, z1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_m_tied1, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_m (z0, p0, z1),\n+\t\tz0 = svrevd_m (z0, p0, z1))\n+\n+/*\n+** revd_mf8_m_tied2:\n+**\tmov\t(z[0-9]+)\\.d, z0\\.d\n+**\tmovprfx\tz0, z1\n+**\trevd\tz0\\.q, p0/m, \\1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_m_tied2, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_m (z1, p0, z0),\n+\t\tz0 = svrevd_m (z1, p0, z0))\n+\n+/*\n+** revd_mf8_m_untied:\n+**\tmovprfx\tz0, z2\n+**\trevd\tz0\\.q, p0/m, z1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_m_untied, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_m (z2, p0, z1),\n+\t\tz0 = svrevd_m (z2, p0, z1))\n+\n+/* Awkward register allocation.  Don't require specific output.  */\n+TEST_UNIFORM_Z (revd_mf8_z_tied1, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_z (p0, z0),\n+\t\tz0 = svrevd_z (p0, z0))\n+\n+/*\n+** revd_mf8_z_untied:\n+**\tmovi?\t[vdz]0\\.?(?:[0-9]*[bhsd])?, #?0\n+**\trevd\tz0\\.q, p0/m, z1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_z_untied, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_z (p0, z1),\n+\t\tz0 = svrevd_z (p0, z1))\n+\n+/*\n+** revd_mf8_x_tied1:\n+**\trevd\tz0\\.q, p0/m, z0\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_x_tied1, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_x (p0, z0),\n+\t\tz0 = svrevd_x (p0, z0))\n+\n+/*\n+** revd_mf8_x_untied:\n+**\tmovprfx\tz0, z1\n+**\trevd\tz0\\.q, p0/m, z1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_x_untied, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_x (p0, z1),\n+\t\tz0 = svrevd_x (p0, z1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_hor_za128.c b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_hor_za128.c\nindex 119a2535e99..09447b35619 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_hor_za128.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_hor_za128.c\n@@ -92,6 +92,16 
@@ TEST_WRITE_ZA (write_za128_u8_0_w0_z0, svuint8_t,\n \t       svwrite_hor_za128_u8_m (0, w0, p0, z0),\n \t       svwrite_hor_za128_m (0, w0, p0, z0))\n \n+/*\n+** write_za128_mf8_0_w0_z0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tza0h\\.q\\[\\1, 0\\], p0/m, z0\\.q\n+**\tret\n+*/\n+TEST_WRITE_ZA (write_za128_mf8_0_w0_z0, svmfloat8_t,\n+\t       svwrite_hor_za128_mf8_m (0, w0, p0, z0),\n+\t       svwrite_hor_za128_m (0, w0, p0, z0))\n+\n /*\n ** write_za128_s16_0_w0_z0:\n **\tmov\t(w1[2-5]), w0\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_hor_za8.c b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_hor_za8.c\nindex 683e1a64ab3..6529f9597fc 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_hor_za8.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_hor_za8.c\n@@ -91,3 +91,13 @@ TEST_WRITE_ZA (write_za8_s8_0_w0_z1, svint8_t,\n TEST_WRITE_ZA (write_za8_u8_0_w0_z0, svuint8_t,\n \t       svwrite_hor_za8_u8_m (0, w0, p0, z0),\n \t       svwrite_hor_za8_m (0, w0, p0, z0))\n+\n+/*\n+** write_za8_mf8_0_w0_z0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tza0h\\.b\\[\\1, 0\\], p0/m, z0\\.b\n+**\tret\n+*/\n+TEST_WRITE_ZA (write_za8_mf8_0_w0_z0, svmfloat8_t,\n+\t       svwrite_hor_za8_mf8_m (0, w0, p0, z0),\n+\t       svwrite_hor_za8_m (0, w0, p0, z0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_ver_za128.c b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_ver_za128.c\nindex 9622e99dde1..6c0d334c3dc 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_ver_za128.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_ver_za128.c\n@@ -92,6 +92,16 @@ TEST_WRITE_ZA (write_za128_u8_0_w0_z0, svuint8_t,\n \t       svwrite_ver_za128_u8_m (0, w0, p0, z0),\n \t       svwrite_ver_za128_m (0, w0, p0, z0))\n \n+/*\n+** write_za128_mf8_0_w0_z0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tza0v\\.q\\[\\1, 0\\], p0/m, z0\\.q\n+**\tret\n+*/\n+TEST_WRITE_ZA (write_za128_mf8_0_w0_z0, svmfloat8_t,\n+\t       
svwrite_ver_za128_mf8_m (0, w0, p0, z0),\n+\t       svwrite_ver_za128_m (0, w0, p0, z0))\n+\n /*\n ** write_za128_s16_0_w0_z0:\n **\tmov\t(w1[2-5]), w0\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_ver_za8.c b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_ver_za8.c\nindex dd61828219c..0e7cda809f2 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_ver_za8.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme/acle-asm/write_ver_za8.c\n@@ -91,3 +91,13 @@ TEST_WRITE_ZA (write_za8_s8_0_w0_z1, svint8_t,\n TEST_WRITE_ZA (write_za8_u8_0_w0_z0, svuint8_t,\n \t       svwrite_ver_za8_u8_m (0, w0, p0, z0),\n \t       svwrite_ver_za8_m (0, w0, p0, z0))\n+\n+/*\n+** write_za8_mf8_0_w0_z0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmova\tza0v\\.b\\[\\1, 0\\], p0/m, z0\\.b\n+**\tret\n+*/\n+TEST_WRITE_ZA (write_za8_mf8_0_w0_z0, svmfloat8_t,\n+\t       svwrite_ver_za8_mf8_m (0, w0, p0, z0),\n+\t       svwrite_ver_za8_m (0, w0, p0, z0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x2.c\nnew file mode 100644\nindex 00000000000..6891c5c009a\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x2.c\n@@ -0,0 +1,262 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! ilp32 } } } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** ld1_mf8_base:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_base, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0),\n+\t\t z0 = svld1_x2 (pn8, x0))\n+\n+/*\n+** ld1_mf8_index:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_index, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + x1),\n+\t\t z0 = svld1_x2 (pn8, x0 + x1))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_mf8_1:\n+**\tincb\tx0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + svcntb ()),\n+\t\t z0 = svld1_x2 (pn8, x0 + svcntb ()))\n+\n+/*\n+** ld1_mf8_2:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + svcntb () * 2),\n+\t\t z0 = svld1_x2 (pn8, x0 + svcntb () * 2))\n+\n+/*\n+** ld1_mf8_14:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + svcntb () * 14),\n+\t\t z0 = svld1_x2 (pn8, x0 + svcntb () * 14))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + svcntb () * 16),\n+\t\t z0 = svld1_x2 (pn8, x0 + svcntb () * 16))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_mf8_m1:\n+**\tdecb\tx0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 - svcntb ()),\n+\t\t z0 = svld1_x2 (pn8, x0 - svcntb ()))\n+\n+/*\n+** ld1_mf8_m2:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 - svcntb () * 2),\n+\t\t z0 = svld1_x2 (pn8, x0 - svcntb () * 2))\n+\n+/*\n+** ld1_mf8_m16:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 - svcntb () * 16),\n+\t\t z0 = svld1_x2 (pn8, x0 - svcntb () * 16))\n+\n+/*\n+** ld1_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 - svcntb () * 18),\n+\t\t z0 = svld1_x2 (pn8, x0 - svcntb () * 18))\n+\n+/*\n+** ld1_mf8_z17:\n+**\tld1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z17, svmfloat8x2_t, mfloat8_t,\n+\t\t z17 = svld1_mf8_x2 (pn8, x0),\n+\t\t z17 = svld1_x2 (pn8, x0))\n+\n+/*\n+** ld1_mf8_z22:\n+**\tld1b\t{z22\\.b(?: - |, )z23\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z22, svmfloat8x2_t, mfloat8_t,\n+\t\t z22 = svld1_mf8_x2 (pn8, x0),\n+\t\t z22 = svld1_x2 (pn8, x0))\n+\n+/*\n+** ld1_mf8_z28:\n+**\tld1b\t{z28\\.b(?: - |, )z29\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z28, svmfloat8x2_t, mfloat8_t,\n+\t\t z28 = svld1_mf8_x2 (pn8, x0),\n+\t\t z28 = svld1_x2 (pn8, x0))\n+\n+/*\n+** ld1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, p0\\.b\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn0, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 
(pn0, x0),\n+\t\t z0 = svld1_x2 (pn0, x0))\n+\n+/*\n+** ld1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn7, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn7, x0),\n+\t\t z0 = svld1_x2 (pn7, x0))\n+\n+/*\n+** ld1_mf8_pn15:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn15/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn15, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn15, x0),\n+\t\t z0 = svld1_x2 (pn15, x0))\n+\n+/*\n+** ld1_vnum_mf8_0:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_0, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 0),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 1),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 1))\n+\n+/*\n+** ld1_vnum_mf8_2:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 2),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 2))\n+\n+/*\n+** ld1_vnum_mf8_14:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 14),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 14))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_vnum_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 16),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 16))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, -1),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, -1))\n+\n+/*\n+** ld1_vnum_mf8_m2:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, -2),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, -2))\n+\n+/*\n+** ld1_vnum_mf8_m16:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, -16),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, -16))\n+\n+/*\n+** ld1_vnum_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, -18),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, -18))\n+\n+/*\n+** ld1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_x1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, x1),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, x1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x4.c\nnew file 
mode 100644\nindex 00000000000..a95a33e6665\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ld1_mf8_x4.c\n@@ -0,0 +1,354 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! ilp32 } } } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** ld1_mf8_base:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_base, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0),\n+\t\t z0 = svld1_x4 (pn8, x0))\n+\n+/*\n+** ld1_mf8_index:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_index, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + x1),\n+\t\t z0 = svld1_x4 (pn8, x0 + x1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_1:\n+**\tincb\tx0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb ()),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb ()))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 2),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 3),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 3))\n+\n+/*\n+** ld1_mf8_4:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 4),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 4))\n+\n+/*\n+** ld1_mf8_28:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 28),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 28))\n+\n+/*\n+** ld1_mf8_32:\n+**\t[^{]*\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 32),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 32))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_m1:\n+**\tdecb\tx0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb ()),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb ()))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 2),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 3),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 3))\n+\n+/*\n+** ld1_mf8_m4:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 4),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 4))\n+\n+/*\n+** ld1_mf8_m32:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 32),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 32))\n+\n+/*\n+** ld1_mf8_m36:\n+**\t[^{]*\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 36),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 36))\n+\n+/*\n+** ld1_mf8_z17:\n+**\tld1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z17, svmfloat8x4_t, mfloat8_t,\n+\t\t z17 = svld1_mf8_x4 (pn8, x0),\n+\t\t z17 = svld1_x4 (pn8, x0))\n+\n+/*\n+** ld1_mf8_z22:\n+**\tld1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z22, svmfloat8x4_t, mfloat8_t,\n+\t\t z22 = svld1_mf8_x4 (pn8, x0),\n+\t\t z22 = svld1_x4 (pn8, x0))\n+\n+/*\n+** ld1_mf8_z28:\n+**\tld1b\t{z28\\.b(?: - |, )z31\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z28, svmfloat8x4_t, mfloat8_t,\n+\t\t z28 = svld1_mf8_x4 (pn8, x0),\n+\t\t z28 = svld1_x4 (pn8, x0))\n+\n+/*\n+** ld1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, p0\\.b\n+**\tld1b\t{z0\\.b(?: - |, )z3\\.b}, pn\\1/z, 
\\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn0, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn0, x0),\n+\t\t z0 = svld1_x4 (pn0, x0))\n+\n+/*\n+** ld1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tld1b\t{z0\\.b(?: - |, )z3\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn7, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn7, x0),\n+\t\t z0 = svld1_x4 (pn7, x0))\n+\n+/*\n+** ld1_mf8_pn15:\n+**\tld1b\t{z0\\.b(?: - |, )z3\\.b}, pn15/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn15, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn15, x0),\n+\t\t z0 = svld1_x4 (pn15, x0))\n+\n+/*\n+** ld1_vnum_mf8_0:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_0, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 0),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 1),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 2),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_vnum_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 3),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 3))\n+\n+/*\n+** ld1_vnum_mf8_4:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 4),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 4))\n+\n+/*\n+** ld1_vnum_mf8_28:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 28),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 28))\n+\n+/*\n+** ld1_vnum_mf8_32:\n+**\t[^{]*\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 32),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 32))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -1),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -2),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_vnum_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -3),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -3))\n+\n+/*\n+** ld1_vnum_mf8_m4:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -4),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -4))\n+\n+/*\n+** ld1_vnum_mf8_m32:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -32),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -32))\n+\n+/*\n+** ld1_vnum_mf8_m36:\n+**\t[^{]*\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -36),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -36))\n+\n+/*\n+** ld1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_x1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, x1),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, x1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x2.c\nnew file mode 100644\nindex 00000000000..1855dd115c7\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x2.c\n@@ -0,0 +1,262 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! 
ilp32 } } } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** ldnt1_mf8_base:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_base, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0),\n+\t\t z0 = svldnt1_x2 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_index:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_index, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + x1),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + x1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_1:\n+**\tincb\tx0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + svcntb ()),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + svcntb ()))\n+\n+/*\n+** ldnt1_mf8_2:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + svcntb () * 2),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + svcntb () * 2))\n+\n+/*\n+** ldnt1_mf8_14:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + svcntb () * 14),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + svcntb () * 14))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + svcntb () * 16),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + svcntb () * 16))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_mf8_m1:\n+**\tdecb\tx0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 - svcntb ()),\n+\t\t z0 = svldnt1_x2 (pn8, x0 - svcntb ()))\n+\n+/*\n+** ldnt1_mf8_m2:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 - svcntb () * 2),\n+\t\t z0 = svldnt1_x2 (pn8, x0 - svcntb () * 2))\n+\n+/*\n+** ldnt1_mf8_m16:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 - svcntb () * 16),\n+\t\t z0 = svldnt1_x2 (pn8, x0 - svcntb () * 16))\n+\n+/*\n+** ldnt1_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 - svcntb () * 18),\n+\t\t z0 = svldnt1_x2 (pn8, x0 - svcntb () * 18))\n+\n+/*\n+** ldnt1_mf8_z17:\n+**\tldnt1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z17, svmfloat8x2_t, mfloat8_t,\n+\t\t z17 = svldnt1_mf8_x2 (pn8, x0),\n+\t\t z17 = svldnt1_x2 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_z22:\n+**\tldnt1b\t{z22\\.b(?: - |, )z23\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z22, svmfloat8x2_t, mfloat8_t,\n+\t\t z22 = svldnt1_mf8_x2 (pn8, x0),\n+\t\t z22 = svldnt1_x2 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_z28:\n+**\tldnt1b\t{z28\\.b(?: - |, )z29\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z28, svmfloat8x2_t, mfloat8_t,\n+\t\t z28 = svldnt1_mf8_x2 (pn8, x0),\n+\t\t z28 = svldnt1_x2 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, p0\\.b\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1/z, 
\\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn0, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn0, x0),\n+\t\t z0 = svldnt1_x2 (pn0, x0))\n+\n+/*\n+** ldnt1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn7, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn7, x0),\n+\t\t z0 = svldnt1_x2 (pn7, x0))\n+\n+/*\n+** ldnt1_mf8_pn15:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn15/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn15, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn15, x0),\n+\t\t z0 = svldnt1_x2 (pn15, x0))\n+\n+/*\n+** ldnt1_vnum_mf8_0:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_0, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 0),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 1),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 1))\n+\n+/*\n+** ldnt1_vnum_mf8_2:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 2),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 2))\n+\n+/*\n+** ldnt1_vnum_mf8_14:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 14),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 14))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_vnum_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 16),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 16))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, -1),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, -1))\n+\n+/*\n+** ldnt1_vnum_mf8_m2:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, -2),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, -2))\n+\n+/*\n+** ldnt1_vnum_mf8_m16:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, -16),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, -16))\n+\n+/*\n+** ldnt1_vnum_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, -18),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, -18))\n+\n+/*\n+** ldnt1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_x1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, x1),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, x1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x4.c 
b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x4.c\nnew file mode 100644\nindex 00000000000..0fad26f4616\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/ldnt1_mf8_x4.c\n@@ -0,0 +1,354 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! ilp32 } } } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** ldnt1_mf8_base:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_base, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0),\n+\t\t z0 = svldnt1_x4 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_index:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_index, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + x1),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + x1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_1:\n+**\tincb\tx0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb ()),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb ()))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 2),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 3),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 3))\n+\n+/*\n+** ldnt1_mf8_4:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 4),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 4))\n+\n+/*\n+** ldnt1_mf8_28:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 28),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 28))\n+\n+/*\n+** ldnt1_mf8_32:\n+**\t[^{]*\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 32),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 32))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_m1:\n+**\tdecb\tx0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb ()),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb ()))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 2),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 3),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 3))\n+\n+/*\n+** ldnt1_mf8_m4:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 4),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 4))\n+\n+/*\n+** ldnt1_mf8_m32:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 32),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 32))\n+\n+/*\n+** ldnt1_mf8_m36:\n+**\t[^{]*\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 36),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 36))\n+\n+/*\n+** ldnt1_mf8_z17:\n+**\tldnt1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z17, svmfloat8x4_t, mfloat8_t,\n+\t\t z17 = svldnt1_mf8_x4 (pn8, x0),\n+\t\t z17 = svldnt1_x4 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_z22:\n+**\tldnt1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z22, svmfloat8x4_t, mfloat8_t,\n+\t\t z22 = svldnt1_mf8_x4 (pn8, x0),\n+\t\t z22 = svldnt1_x4 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_z28:\n+**\tldnt1b\t{z28\\.b(?: - |, )z31\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z28, svmfloat8x4_t, mfloat8_t,\n+\t\t z28 = svldnt1_mf8_x4 (pn8, x0),\n+\t\t z28 = svldnt1_x4 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, 
p0\\.b\n+**\tldnt1b\t{z0\\.b(?: - |, )z3\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn0, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn0, x0),\n+\t\t z0 = svldnt1_x4 (pn0, x0))\n+\n+/*\n+** ldnt1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tldnt1b\t{z0\\.b(?: - |, )z3\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn7, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn7, x0),\n+\t\t z0 = svldnt1_x4 (pn7, x0))\n+\n+/*\n+** ldnt1_mf8_pn15:\n+**\tldnt1b\t{z0\\.b(?: - |, )z3\\.b}, pn15/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn15, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn15, x0),\n+\t\t z0 = svldnt1_x4 (pn15, x0))\n+\n+/*\n+** ldnt1_vnum_mf8_0:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_0, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 0),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 1),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 2),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_vnum_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 3),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 3))\n+\n+/*\n+** ldnt1_vnum_mf8_4:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 4),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 4))\n+\n+/*\n+** ldnt1_vnum_mf8_28:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 28),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 28))\n+\n+/*\n+** ldnt1_vnum_mf8_32:\n+**\t[^{]*\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 32),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 32))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -1),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -2),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_vnum_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -3),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -3))\n+\n+/*\n+** ldnt1_vnum_mf8_m4:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -4),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -4))\n+\n+/*\n+** ldnt1_vnum_mf8_m32:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -32),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -32))\n+\n+/*\n+** ldnt1_vnum_mf8_m36:\n+**\t[^{]*\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -36),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -36))\n+\n+/*\n+** ldnt1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_x1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, x1),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, x1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg2.c\nindex ec31a68b46e..724ba852ef4 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg2.c\n@@ -22,6 +22,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_1, svuint8x2_t,\n \t\t z4 = svread_hor_za8_u8_vg2 (0, 1),\n \t\t z4 = svread_hor_za8_u8_vg2 
(0, 1))\n \n+/*\n+** read_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmova\t{z4\\.b - z5\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_1, svmfloat8x2_t,\n+\t\t z4 = svread_hor_za8_mf8_vg2 (0, 1),\n+\t\t z4 = svread_hor_za8_mf8_vg2 (0, 1))\n+\n /*\n ** read_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -50,6 +60,15 @@ TEST_READ_ZA_XN (read_za8_u8_z18_0_w15, svuint8x2_t,\n \t\t z18 = svread_hor_za8_u8_vg2 (0, w15),\n \t\t z18 = svread_hor_za8_u8_vg2 (0, w15))\n \n+/*\n+** read_za8_mf8_z18_0_w15:\n+**\tmova\t{z18\\.b - z19\\.b}, za0h\\.b\\[w15, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z18_0_w15, svmfloat8x2_t,\n+\t\t z18 = svread_hor_za8_mf8_vg2 (0, w15),\n+\t\t z18 = svread_hor_za8_mf8_vg2 (0, w15))\n+\n /*\n ** read_za8_s8_z23_0_w12p14:\n **\tmova\t{[^\\n]+}, za0h\\.b\\[w12, 14:15\\]\n@@ -71,6 +90,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12p1, svuint8x2_t,\n \t\t z4 = svread_hor_za8_u8_vg2 (0, w12 + 1),\n \t\t z4 = svread_hor_za8_u8_vg2 (0, w12 + 1))\n \n+/*\n+** read_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmova\t{z4\\.b - z5\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w12p1, svmfloat8x2_t,\n+\t\t z4 = svread_hor_za8_mf8_vg2 (0, w12 + 1),\n+\t\t z4 = svread_hor_za8_mf8_vg2 (0, w12 + 1))\n+\n /*\n ** read_za8_s8_z28_0_w12p2:\n **\tmova\t{z28\\.b - z29\\.b}, za0h\\.b\\[w12, 2:3\\]\n@@ -90,6 +119,16 @@ TEST_READ_ZA_XN (read_za8_u8_z0_0_w15p3, svuint8x2_t,\n \t\t z0 = svread_hor_za8_u8_vg2 (0, w15 + 3),\n \t\t z0 = svread_hor_za8_u8_vg2 (0, w15 + 3))\n \n+/*\n+** read_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmova\t{z0\\.b - z1\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z0_0_w15p3, svmfloat8x2_t,\n+\t\t z0 = svread_hor_za8_mf8_vg2 (0, w15 + 3),\n+\t\t z0 = svread_hor_za8_mf8_vg2 (0, w15 + 3))\n+\n /*\n ** read_za8_u8_z4_0_w15p12:\n **\tmova\t{z4\\.b - z5\\.b}, za0h\\.b\\[w15, 12:13\\]\n@@ -99,6 +138,15 @@ 
TEST_READ_ZA_XN (read_za8_u8_z4_0_w15p12, svuint8x2_t,\n \t\t z4 = svread_hor_za8_u8_vg2 (0, w15 + 12),\n \t\t z4 = svread_hor_za8_u8_vg2 (0, w15 + 12))\n \n+/*\n+** read_za8_mf8_z4_0_w15p12:\n+**\tmova\t{z4\\.b - z5\\.b}, za0h\\.b\\[w15, 12:13\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w15p12, svmfloat8x2_t,\n+\t\t z4 = svread_hor_za8_mf8_vg2 (0, w15 + 12),\n+\t\t z4 = svread_hor_za8_mf8_vg2 (0, w15 + 12))\n+\n /*\n ** read_za8_u8_z28_0_w12p15:\n **\tadd\t(w[0-9]+), w12, #?15\n@@ -109,6 +157,16 @@ TEST_READ_ZA_XN (read_za8_u8_z28_0_w12p15, svuint8x2_t,\n \t\t z28 = svread_hor_za8_u8_vg2 (0, w12 + 15),\n \t\t z28 = svread_hor_za8_u8_vg2 (0, w12 + 15))\n \n+/*\n+** read_za8_mf8_z28_0_w12p15:\n+**\tadd\t(w[0-9]+), w12, #?15\n+**\tmova\t{z28\\.b - z29\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z28_0_w12p15, svmfloat8x2_t,\n+\t\t z28 = svread_hor_za8_mf8_vg2 (0, w12 + 15),\n+\t\t z28 = svread_hor_za8_mf8_vg2 (0, w12 + 15))\n+\n /*\n ** read_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -129,6 +187,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12m1, svuint8x2_t,\n \t\t z4 = svread_hor_za8_u8_vg2 (0, w12 - 1),\n \t\t z4 = svread_hor_za8_u8_vg2 (0, w12 - 1))\n \n+/*\n+** read_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmova\t{z4\\.b - z5\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w12m1, svmfloat8x2_t,\n+\t\t z4 = svread_hor_za8_mf8_vg2 (0, w12 - 1),\n+\t\t z4 = svread_hor_za8_mf8_vg2 (0, w12 - 1))\n+\n /*\n ** read_za8_u8_z18_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -138,3 +206,13 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12m1, svuint8x2_t,\n TEST_READ_ZA_XN (read_za8_u8_z18_0_w16, svuint8x2_t,\n \t\t z18 = svread_hor_za8_u8_vg2 (0, w16),\n \t\t z18 = svread_hor_za8_u8_vg2 (0, w16))\n+\n+/*\n+** read_za8_mf8_z18_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmova\t{z18\\.b - z19\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z18_0_w16, svmfloat8x2_t,\n+\t\t 
z18 = svread_hor_za8_mf8_vg2 (0, w16),\n+\t\t z18 = svread_hor_za8_mf8_vg2 (0, w16))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg4.c\nindex 261cbead442..2c3132dc6a8 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg4.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_hor_za8_vg4.c\n@@ -22,6 +22,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_1, svuint8x4_t,\n \t\t z4 = svread_hor_za8_u8_vg4 (0, 1),\n \t\t z4 = svread_hor_za8_u8_vg4 (0, 1))\n \n+/*\n+** read_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmova\t{z4\\.b - z7\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_1, svmfloat8x4_t,\n+\t\t z4 = svread_hor_za8_mf8_vg4 (0, 1),\n+\t\t z4 = svread_hor_za8_mf8_vg4 (0, 1))\n+\n /*\n ** read_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -54,6 +64,19 @@ TEST_READ_ZA_XN (read_za8_u8_z18_0_w15, svuint8x4_t,\n \t\t z18 = svread_hor_za8_u8_vg4 (0, w15),\n \t\t z18 = svread_hor_za8_u8_vg4 (0, w15))\n \n+/*\n+** read_za8_mf8_z18_0_w15:\n+**\tmova\t{[^\\n]+}, za0h\\.b\\[w15, 0:3\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z18_0_w15, svmfloat8x4_t,\n+\t\t z18 = svread_hor_za8_mf8_vg4 (0, w15),\n+\t\t z18 = svread_hor_za8_mf8_vg4 (0, w15))\n+\n /*\n ** read_za8_s8_z23_0_w12p12:\n **\tmova\t{[^\\n]+}, za0h\\.b\\[w12, 12:15\\]\n@@ -77,6 +100,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12p1, svuint8x4_t,\n \t\t z4 = svread_hor_za8_u8_vg4 (0, w12 + 1),\n \t\t z4 = svread_hor_za8_u8_vg4 (0, w12 + 1))\n \n+/*\n+** read_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmova\t{z4\\.b - z7\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w12p1, svmfloat8x4_t,\n+\t\t z4 = svread_hor_za8_mf8_vg4 (0, w12 + 1),\n+\t\t z4 = svread_hor_za8_mf8_vg4 (0, w12 + 1))\n+\n /*\n ** read_za8_s8_z28_0_w12p2:\n 
**\tadd\t(w[0-9]+), w12, #?2\n@@ -97,6 +130,16 @@ TEST_READ_ZA_XN (read_za8_u8_z0_0_w15p3, svuint8x4_t,\n \t\t z0 = svread_hor_za8_u8_vg4 (0, w15 + 3),\n \t\t z0 = svread_hor_za8_u8_vg4 (0, w15 + 3))\n \n+/*\n+** read_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmova\t{z0\\.b - z3\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z0_0_w15p3, svmfloat8x4_t,\n+\t\t z0 = svread_hor_za8_mf8_vg4 (0, w15 + 3),\n+\t\t z0 = svread_hor_za8_mf8_vg4 (0, w15 + 3))\n+\n /*\n ** read_za8_u8_z0_0_w12p4:\n **\tmova\t{z0\\.b - z3\\.b}, za0h\\.b\\[w12, 4:7\\]\n@@ -106,6 +149,15 @@ TEST_READ_ZA_XN (read_za8_u8_z0_0_w12p4, svuint8x4_t,\n \t\t z0 = svread_hor_za8_u8_vg4 (0, w12 + 4),\n \t\t z0 = svread_hor_za8_u8_vg4 (0, w12 + 4))\n \n+/*\n+** read_za8_mf8_z0_0_w12p4:\n+**\tmova\t{z0\\.b - z3\\.b}, za0h\\.b\\[w12, 4:7\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z0_0_w12p4, svmfloat8x4_t,\n+\t\t z0 = svread_hor_za8_mf8_vg4 (0, w12 + 4),\n+\t\t z0 = svread_hor_za8_mf8_vg4 (0, w12 + 4))\n+\n /*\n ** read_za8_u8_z4_0_w15p12:\n **\tmova\t{z4\\.b - z7\\.b}, za0h\\.b\\[w15, 12:15\\]\n@@ -115,6 +167,15 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w15p12, svuint8x4_t,\n \t\t z4 = svread_hor_za8_u8_vg4 (0, w15 + 12),\n \t\t z4 = svread_hor_za8_u8_vg4 (0, w15 + 12))\n \n+/*\n+** read_za8_mf8_z4_0_w15p12:\n+**\tmova\t{z4\\.b - z7\\.b}, za0h\\.b\\[w15, 12:15\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w15p12, svmfloat8x4_t,\n+\t\t z4 = svread_hor_za8_mf8_vg4 (0, w15 + 12),\n+\t\t z4 = svread_hor_za8_mf8_vg4 (0, w15 + 12))\n+\n /*\n ** read_za8_u8_z28_0_w12p14:\n **\tadd\t(w[0-9]+), w12, #?14\n@@ -125,6 +186,16 @@ TEST_READ_ZA_XN (read_za8_u8_z28_0_w12p14, svuint8x4_t,\n \t\t z28 = svread_hor_za8_u8_vg4 (0, w12 + 14),\n \t\t z28 = svread_hor_za8_u8_vg4 (0, w12 + 14))\n \n+/*\n+** read_za8_mf8_z28_0_w12p14:\n+**\tadd\t(w[0-9]+), w12, #?14\n+**\tmova\t{z28\\.b - z31\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z28_0_w12p14, 
svmfloat8x4_t,\n+\t\t z28 = svread_hor_za8_mf8_vg4 (0, w12 + 14),\n+\t\t z28 = svread_hor_za8_mf8_vg4 (0, w12 + 14))\n+\n /*\n ** read_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -145,6 +216,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12m1, svuint8x4_t,\n \t\t z4 = svread_hor_za8_u8_vg4 (0, w12 - 1),\n \t\t z4 = svread_hor_za8_u8_vg4 (0, w12 - 1))\n \n+/*\n+** read_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmova\t{z4\\.b - z7\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w12m1, svmfloat8x4_t,\n+\t\t z4 = svread_hor_za8_mf8_vg4 (0, w12 - 1),\n+\t\t z4 = svread_hor_za8_mf8_vg4 (0, w12 - 1))\n+\n /*\n ** read_za8_u8_z28_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -154,3 +235,13 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12m1, svuint8x4_t,\n TEST_READ_ZA_XN (read_za8_u8_z28_0_w16, svuint8x4_t,\n \t\t z28 = svread_hor_za8_u8_vg4 (0, w16),\n \t\t z28 = svread_hor_za8_u8_vg4 (0, w16))\n+\n+/*\n+** read_za8_mf8_z28_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmova\t{z28\\.b - z31\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z28_0_w16, svmfloat8x4_t,\n+\t\t z28 = svread_hor_za8_mf8_vg4 (0, w16),\n+\t\t z28 = svread_hor_za8_mf8_vg4 (0, w16))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg2.c\nindex 55970616ba8..5cd101a4988 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg2.c\n@@ -22,6 +22,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_1, svuint8x2_t,\n \t\t z4 = svread_ver_za8_u8_vg2 (0, 1),\n \t\t z4 = svread_ver_za8_u8_vg2 (0, 1))\n \n+/*\n+** read_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmova\t{z4\\.b - z5\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_1, svmfloat8x2_t,\n+\t\t z4 = svread_ver_za8_mf8_vg2 (0, 1),\n+\t\t z4 = svread_ver_za8_mf8_vg2 (0, 1))\n+\n /*\n ** 
read_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -50,6 +60,15 @@ TEST_READ_ZA_XN (read_za8_u8_z18_0_w15, svuint8x2_t,\n \t\t z18 = svread_ver_za8_u8_vg2 (0, w15),\n \t\t z18 = svread_ver_za8_u8_vg2 (0, w15))\n \n+/*\n+** read_za8_mf8_z18_0_w15:\n+**\tmova\t{z18\\.b - z19\\.b}, za0v\\.b\\[w15, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z18_0_w15, svmfloat8x2_t,\n+\t\t z18 = svread_ver_za8_mf8_vg2 (0, w15),\n+\t\t z18 = svread_ver_za8_mf8_vg2 (0, w15))\n+\n /*\n ** read_za8_s8_z23_0_w12p14:\n **\tmova\t{[^\\n]+}, za0v\\.b\\[w12, 14:15\\]\n@@ -71,6 +90,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12p1, svuint8x2_t,\n \t\t z4 = svread_ver_za8_u8_vg2 (0, w12 + 1),\n \t\t z4 = svread_ver_za8_u8_vg2 (0, w12 + 1))\n \n+/*\n+** read_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmova\t{z4\\.b - z5\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w12p1, svmfloat8x2_t,\n+\t\t z4 = svread_ver_za8_mf8_vg2 (0, w12 + 1),\n+\t\t z4 = svread_ver_za8_mf8_vg2 (0, w12 + 1))\n+\n /*\n ** read_za8_s8_z28_0_w12p2:\n **\tmova\t{z28\\.b - z29\\.b}, za0v\\.b\\[w12, 2:3\\]\n@@ -90,6 +119,16 @@ TEST_READ_ZA_XN (read_za8_u8_z0_0_w15p3, svuint8x2_t,\n \t\t z0 = svread_ver_za8_u8_vg2 (0, w15 + 3),\n \t\t z0 = svread_ver_za8_u8_vg2 (0, w15 + 3))\n \n+/*\n+** read_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmova\t{z0\\.b - z1\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z0_0_w15p3, svmfloat8x2_t,\n+\t\t z0 = svread_ver_za8_mf8_vg2 (0, w15 + 3),\n+\t\t z0 = svread_ver_za8_mf8_vg2 (0, w15 + 3))\n+\n /*\n ** read_za8_u8_z4_0_w15p12:\n **\tmova\t{z4\\.b - z5\\.b}, za0v\\.b\\[w15, 12:13\\]\n@@ -99,6 +138,15 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w15p12, svuint8x2_t,\n \t\t z4 = svread_ver_za8_u8_vg2 (0, w15 + 12),\n \t\t z4 = svread_ver_za8_u8_vg2 (0, w15 + 12))\n \n+/*\n+** read_za8_mf8_z4_0_w15p12:\n+**\tmova\t{z4\\.b - z5\\.b}, za0v\\.b\\[w15, 12:13\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN 
(read_za8_mf8_z4_0_w15p12, svmfloat8x2_t,\n+\t\t z4 = svread_ver_za8_mf8_vg2 (0, w15 + 12),\n+\t\t z4 = svread_ver_za8_mf8_vg2 (0, w15 + 12))\n+\n /*\n ** read_za8_u8_z28_0_w12p15:\n **\tadd\t(w[0-9]+), w12, #?15\n@@ -109,6 +157,16 @@ TEST_READ_ZA_XN (read_za8_u8_z28_0_w12p15, svuint8x2_t,\n \t\t z28 = svread_ver_za8_u8_vg2 (0, w12 + 15),\n \t\t z28 = svread_ver_za8_u8_vg2 (0, w12 + 15))\n \n+/*\n+** read_za8_mf8_z28_0_w12p15:\n+**\tadd\t(w[0-9]+), w12, #?15\n+**\tmova\t{z28\\.b - z29\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z28_0_w12p15, svmfloat8x2_t,\n+\t\t z28 = svread_ver_za8_mf8_vg2 (0, w12 + 15),\n+\t\t z28 = svread_ver_za8_mf8_vg2 (0, w12 + 15))\n+\n /*\n ** read_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -129,6 +187,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12m1, svuint8x2_t,\n \t\t z4 = svread_ver_za8_u8_vg2 (0, w12 - 1),\n \t\t z4 = svread_ver_za8_u8_vg2 (0, w12 - 1))\n \n+/*\n+** read_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmova\t{z4\\.b - z5\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w12m1, svmfloat8x2_t,\n+\t\t z4 = svread_ver_za8_mf8_vg2 (0, w12 - 1),\n+\t\t z4 = svread_ver_za8_mf8_vg2 (0, w12 - 1))\n+\n /*\n ** read_za8_u8_z18_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -138,3 +206,13 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12m1, svuint8x2_t,\n TEST_READ_ZA_XN (read_za8_u8_z18_0_w16, svuint8x2_t,\n \t\t z18 = svread_ver_za8_u8_vg2 (0, w16),\n \t\t z18 = svread_ver_za8_u8_vg2 (0, w16))\n+\n+/*\n+** read_za8_mf8_z18_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmova\t{z18\\.b - z19\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z18_0_w16, svmfloat8x2_t,\n+\t\t z18 = svread_ver_za8_mf8_vg2 (0, w16),\n+\t\t z18 = svread_ver_za8_mf8_vg2 (0, w16))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg4.c\nindex 6fd8a976d4f..daae8bc7285 100644\n--- 
a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg4.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_ver_za8_vg4.c\n@@ -22,6 +22,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_1, svuint8x4_t,\n \t\t z4 = svread_ver_za8_u8_vg4 (0, 1),\n \t\t z4 = svread_ver_za8_u8_vg4 (0, 1))\n \n+/*\n+** read_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmova\t{z4\\.b - z7\\.b}, za0v\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_1, svmfloat8x4_t,\n+\t\t z4 = svread_ver_za8_mf8_vg4 (0, 1),\n+\t\t z4 = svread_ver_za8_mf8_vg4 (0, 1))\n+\n /*\n ** read_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -54,6 +64,19 @@ TEST_READ_ZA_XN (read_za8_u8_z18_0_w15, svuint8x4_t,\n \t\t z18 = svread_ver_za8_u8_vg4 (0, w15),\n \t\t z18 = svread_ver_za8_u8_vg4 (0, w15))\n \n+/*\n+** read_za8_mf8_z18_0_w15:\n+**\tmova\t{[^\\n]+}, za0v\\.b\\[w15, 0:3\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z18_0_w15, svmfloat8x4_t,\n+\t\t z18 = svread_ver_za8_mf8_vg4 (0, w15),\n+\t\t z18 = svread_ver_za8_mf8_vg4 (0, w15))\n+\n /*\n ** read_za8_s8_z23_0_w12p12:\n **\tmova\t{[^\\n]+}, za0v\\.b\\[w12, 12:15\\]\n@@ -77,6 +100,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12p1, svuint8x4_t,\n \t\t z4 = svread_ver_za8_u8_vg4 (0, w12 + 1),\n \t\t z4 = svread_ver_za8_u8_vg4 (0, w12 + 1))\n \n+/*\n+** read_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmova\t{z4\\.b - z7\\.b}, za0v\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w12p1, svmfloat8x4_t,\n+\t\t z4 = svread_ver_za8_mf8_vg4 (0, w12 + 1),\n+\t\t z4 = svread_ver_za8_mf8_vg4 (0, w12 + 1))\n+\n /*\n ** read_za8_s8_z28_0_w12p2:\n **\tadd\t(w[0-9]+), w12, #?2\n@@ -97,6 +130,16 @@ TEST_READ_ZA_XN (read_za8_u8_z0_0_w15p3, svuint8x4_t,\n \t\t z0 = svread_ver_za8_u8_vg4 (0, w15 + 3),\n \t\t z0 = svread_ver_za8_u8_vg4 (0, w15 + 3))\n \n+/*\n+** read_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmova\t{z0\\.b - 
z3\\.b}, za0v\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z0_0_w15p3, svmfloat8x4_t,\n+\t\t z0 = svread_ver_za8_mf8_vg4 (0, w15 + 3),\n+\t\t z0 = svread_ver_za8_mf8_vg4 (0, w15 + 3))\n+\n /*\n ** read_za8_u8_z0_0_w12p4:\n **\tmova\t{z0\\.b - z3\\.b}, za0v\\.b\\[w12, 4:7\\]\n@@ -106,6 +149,15 @@ TEST_READ_ZA_XN (read_za8_u8_z0_0_w12p4, svuint8x4_t,\n \t\t z0 = svread_ver_za8_u8_vg4 (0, w12 + 4),\n \t\t z0 = svread_ver_za8_u8_vg4 (0, w12 + 4))\n \n+/*\n+** read_za8_mf8_z0_0_w12p4:\n+**\tmova\t{z0\\.b - z3\\.b}, za0v\\.b\\[w12, 4:7\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z0_0_w12p4, svmfloat8x4_t,\n+\t\t z0 = svread_ver_za8_mf8_vg4 (0, w12 + 4),\n+\t\t z0 = svread_ver_za8_mf8_vg4 (0, w12 + 4))\n+\n /*\n ** read_za8_u8_z4_0_w15p12:\n **\tmova\t{z4\\.b - z7\\.b}, za0v\\.b\\[w15, 12:15\\]\n@@ -115,6 +167,15 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w15p12, svuint8x4_t,\n \t\t z4 = svread_ver_za8_u8_vg4 (0, w15 + 12),\n \t\t z4 = svread_ver_za8_u8_vg4 (0, w15 + 12))\n \n+/*\n+** read_za8_mf8_z4_0_w15p12:\n+**\tmova\t{z4\\.b - z7\\.b}, za0v\\.b\\[w15, 12:15\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w15p12, svmfloat8x4_t,\n+\t\t z4 = svread_ver_za8_mf8_vg4 (0, w15 + 12),\n+\t\t z4 = svread_ver_za8_mf8_vg4 (0, w15 + 12))\n+\n /*\n ** read_za8_u8_z28_0_w12p14:\n **\tadd\t(w[0-9]+), w12, #?14\n@@ -125,6 +186,16 @@ TEST_READ_ZA_XN (read_za8_u8_z28_0_w12p14, svuint8x4_t,\n \t\t z28 = svread_ver_za8_u8_vg4 (0, w12 + 14),\n \t\t z28 = svread_ver_za8_u8_vg4 (0, w12 + 14))\n \n+/*\n+** read_za8_mf8_z28_0_w12p14:\n+**\tadd\t(w[0-9]+), w12, #?14\n+**\tmova\t{z28\\.b - z31\\.b}, za0v\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z28_0_w12p14, svmfloat8x4_t,\n+\t\t z28 = svread_ver_za8_mf8_vg4 (0, w12 + 14),\n+\t\t z28 = svread_ver_za8_mf8_vg4 (0, w12 + 14))\n+\n /*\n ** read_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -145,6 +216,16 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12m1, svuint8x4_t,\n \t\t z4 = 
svread_ver_za8_u8_vg4 (0, w12 - 1),\n \t\t z4 = svread_ver_za8_u8_vg4 (0, w12 - 1))\n \n+/*\n+** read_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmova\t{z4\\.b - z7\\.b}, za0v\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z4_0_w12m1, svmfloat8x4_t,\n+\t\t z4 = svread_ver_za8_mf8_vg4 (0, w12 - 1),\n+\t\t z4 = svread_ver_za8_mf8_vg4 (0, w12 - 1))\n+\n /*\n ** read_za8_u8_z28_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -154,3 +235,13 @@ TEST_READ_ZA_XN (read_za8_u8_z4_0_w12m1, svuint8x4_t,\n TEST_READ_ZA_XN (read_za8_u8_z28_0_w16, svuint8x4_t,\n \t\t z28 = svread_ver_za8_u8_vg4 (0, w16),\n \t\t z28 = svread_ver_za8_u8_vg4 (0, w16))\n+\n+/*\n+** read_za8_mf8_z28_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmova\t{z28\\.b - z31\\.b}, za0v\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_za8_mf8_z28_0_w16, svmfloat8x4_t,\n+\t\t z28 = svread_ver_za8_mf8_vg4 (0, w16),\n+\t\t z28 = svread_ver_za8_mf8_vg4 (0, w16))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x2.c\nindex 9b151abf4fa..819bf786a4f 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x2.c\n@@ -32,6 +32,16 @@ TEST_READ_ZA_XN (read_w7_z0, svuint8x2_t,\n \t\t z0 = svread_za8_u8_vg1x2 (w7),\n \t\t z0 = svread_za8_u8_vg1x2 (w7))\n \n+/*\n+** read_mf8_w7_z0:\n+**\tmov\t(w8|w9|w10|w11), w7\n+**\tmova\t{z0\\.d - z1\\.d}, za\\.d\\[\\1, 0, vgx2\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w7_z0, svmfloat8x2_t,\n+\t\t z0 = svread_za8_mf8_vg1x2 (w7),\n+\t\t z0 = svread_za8_mf8_vg1x2 (w7))\n+\n /*\n ** read_w8_z0:\n **\tmova\t{z0\\.d - z1\\.d}, za\\.d\\[w8, 0, vgx2\\]\n@@ -61,6 +71,16 @@ TEST_READ_ZA_XN (read_w12_z0, svuint8x2_t,\n \t\t z0 = svread_za8_u8_vg1x2 (w12),\n \t\t z0 = svread_za8_u8_vg1x2 (w12))\n \n+/*\n+** read_mf8_w12_z0:\n+**\tmov\t(w8|w9|w10|w11), w12\n+**\tmova\t{z0\\.d - z1\\.d}, za\\.d\\[\\1, 0, 
vgx2\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w12_z0, svmfloat8x2_t,\n+\t\t z0 = svread_za8_mf8_vg1x2 (w12),\n+\t\t z0 = svread_za8_mf8_vg1x2 (w12))\n+\n /*\n ** read_w8p7_z0:\n **\tmova\t{z0\\.d - z1\\.d}, za\\.d\\[w8, 7, vgx2\\]\n@@ -90,6 +110,16 @@ TEST_READ_ZA_XN (read_w8m1_z0, svuint8x2_t,\n \t\t z0 = svread_za8_u8_vg1x2 (w8 - 1),\n \t\t z0 = svread_za8_u8_vg1x2 (w8 - 1))\n \n+/*\n+** read_mf8_w8m1_z0:\n+**\tsub\t(w8|w9|w10|w11), w8, #?1\n+**\tmova\t{z0\\.d - z1\\.d}, za\\.d\\[\\1, 0, vgx2\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w8m1_z0, svmfloat8x2_t,\n+\t\t z0 = svread_za8_mf8_vg1x2 (w8 - 1),\n+\t\t z0 = svread_za8_mf8_vg1x2 (w8 - 1))\n+\n /*\n ** read_w8_z18:\n **\tmova\t{z18\\.d - z19\\.d}, za\\.d\\[w8, 0, vgx2\\]\n@@ -99,6 +129,15 @@ TEST_READ_ZA_XN (read_w8_z18, svuint8x2_t,\n \t\t z18 = svread_za8_u8_vg1x2 (w8),\n \t\t z18 = svread_za8_u8_vg1x2 (w8))\n \n+/*\n+** read_mf8_w8_z18:\n+**\tmova\t{z18\\.d - z19\\.d}, za\\.d\\[w8, 0, vgx2\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w8_z18, svmfloat8x2_t,\n+\t\t z18 = svread_za8_mf8_vg1x2 (w8),\n+\t\t z18 = svread_za8_mf8_vg1x2 (w8))\n+\n /* Leave the assembler to check for correctness for misaligned registers.  
*/\n \n /*\n@@ -120,3 +159,12 @@ TEST_READ_ZA_XN (read_w8_z23, svint8x2_t,\n TEST_READ_ZA_XN (read_w8_z28, svuint8x2_t,\n \t\t z28 = svread_za8_u8_vg1x2 (w8),\n \t\t z28 = svread_za8_u8_vg1x2 (w8))\n+\n+/*\n+** read_mf8_w8_z28:\n+**\tmova\t{z28\\.d - z29\\.d}, za\\.d\\[w8, 0, vgx2\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w8_z28, svmfloat8x2_t,\n+\t\t z28 = svread_za8_mf8_vg1x2 (w8),\n+\t\t z28 = svread_za8_mf8_vg1x2 (w8))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x4.c\nindex 80c81dde097..f8c6d2a3d43 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x4.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/read_za8_vg1x4.c\n@@ -22,6 +22,16 @@ TEST_READ_ZA_XN (read_w0_z0, svuint8x4_t,\n \t\t z0 = svread_za8_u8_vg1x4 (w0),\n \t\t z0 = svread_za8_u8_vg1x4 (w0))\n \n+/*\n+** read_mf8_w0_z0:\n+**\tmov\t(w8|w9|w10|w11), w0\n+**\tmova\t{z0\\.d - z3\\.d}, za\\.d\\[\\1, 0, vgx4\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w0_z0, svmfloat8x4_t,\n+\t\t z0 = svread_za8_mf8_vg1x4 (w0),\n+\t\t z0 = svread_za8_mf8_vg1x4 (w0))\n+\n /*\n ** read_w7_z0:\n **\tmov\t(w8|w9|w10|w11), w7\n@@ -50,6 +60,14 @@ TEST_READ_ZA_XN (read_w11_z0, svuint8x4_t,\n \t\t z0 = svread_za8_u8_vg1x4 (w11),\n \t\t z0 = svread_za8_u8_vg1x4 (w11))\n \n+/*\n+** read_mf8_w11_z0:\n+**\tmova\t{z0\\.d - z3\\.d}, za\\.d\\[w11, 0, vgx4\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w11_z0, svmfloat8x4_t,\n+\t\t z0 = svread_za8_mf8_vg1x4 (w11),\n+\t\t z0 = svread_za8_mf8_vg1x4 (w11))\n \n /*\n ** read_w12_z0:\n@@ -80,6 +98,16 @@ TEST_READ_ZA_XN (read_w8p8_z0, svuint8x4_t,\n \t\t z0 = svread_za8_u8_vg1x4 (w8 + 8),\n \t\t z0 = svread_za8_u8_vg1x4 (w8 + 8))\n \n+/*\n+** read_mf8_w8p8_z0:\n+**\tadd\t(w8|w9|w10|w11), w8, #?8\n+**\tmova\t{z0\\.d - z3\\.d}, za\\.d\\[\\1, 0, vgx4\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w8p8_z0, svmfloat8x4_t,\n+\t\t z0 = svread_za8_mf8_vg1x4 (w8 + 8),\n+\t\t z0 
= svread_za8_mf8_vg1x4 (w8 + 8))\n+\n /*\n ** read_w8m1_z0:\n **\tsub\t(w8|w9|w10|w11), w8, #?1\n@@ -114,6 +142,19 @@ TEST_READ_ZA_XN (read_w8_z18, svuint8x4_t,\n \t\t z18 = svread_za8_u8_vg1x4 (w8),\n \t\t z18 = svread_za8_u8_vg1x4 (w8))\n \n+/*\n+** read_mf8_w8_z18:\n+**\tmova\t[^\\n]+, za\\.d\\[w8, 0, vgx4\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w8_z18, svmfloat8x4_t,\n+\t\t z18 = svread_za8_mf8_vg1x4 (w8),\n+\t\t z18 = svread_za8_mf8_vg1x4 (w8))\n+\n /*\n ** read_w8_z23:\n **\tmova\t[^\\n]+, za\\.d\\[w8, 0, vgx4\\]\n@@ -127,6 +168,19 @@ TEST_READ_ZA_XN (read_w8_z23, svuint8x4_t,\n \t\t z23 = svread_za8_u8_vg1x4 (w8),\n \t\t z23 = svread_za8_u8_vg1x4 (w8))\n \n+/*\n+** read_mf8_w8_z23:\n+**\tmova\t[^\\n]+, za\\.d\\[w8, 0, vgx4\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_READ_ZA_XN (read_mf8_w8_z23, svmfloat8x4_t,\n+\t\t z23 = svread_za8_mf8_vg1x4 (w8),\n+\t\t z23 = svread_za8_mf8_vg1x4 (w8))\n+\n /*\n ** read_w8_z28:\n **\tmova\t{z28\\.d - z31\\.d}, za\\.d\\[w8, 0, vgx4\\]\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za128.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za128.c\nindex 8b6644f1d6e..aa29879331e 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za128.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za128.c\n@@ -86,6 +86,16 @@ TEST_READ_ZA (readz_za128_u8_0_w0, svuint8_t,\n \t      z0 = svreadz_hor_za128_u8 (0, w0),\n \t      z0 = svreadz_hor_za128_u8 (0, w0))\n \n+/*\n+** readz_za128_mf8_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0h\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_mf8_0_w0, svmfloat8_t,\n+\t      z0 = svreadz_hor_za128_mf8 (0, w0),\n+\t      z0 = svreadz_hor_za128_mf8 (0, w0))\n+\n /*\n ** readz_za128_s16_0_w0:\n **\tmov\t(w1[2-5]), w0\ndiff --git 
a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8.c\nindex 6fea16459e2..f6f595f5697 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8.c\n@@ -85,3 +85,13 @@ TEST_READ_ZA (readz_za8_s8_0_w0m1, svint8_t,\n TEST_READ_ZA (readz_za8_u8_0_w0, svuint8_t,\n \t      z0 = svreadz_hor_za8_u8 (0, w0),\n \t      z0 = svreadz_hor_za8_u8 (0, w0))\n+\n+/*\n+** readz_za8_mf8_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.b, za0h\\.b\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za8_mf8_0_w0, svmfloat8_t,\n+\t      z0 = svreadz_hor_za8_mf8 (0, w0),\n+\t      z0 = svreadz_hor_za8_mf8 (0, w0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg2.c\nindex a1a63104ad4..d09687e3674 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg2.c\n@@ -26,6 +26,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_1, svuint8x2_t,\n \t\t z4 = svreadz_hor_za8_u8_vg2 (0, 1),\n \t\t z4 = svreadz_hor_za8_u8_vg2 (0, 1))\n \n+/*\n+** readz_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmovaz\t{z4\\.b - z5\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_1, svmfloat8x2_t,\n+\t\t z4 = svreadz_hor_za8_mf8_vg2 (0, 1),\n+\t\t z4 = svreadz_hor_za8_mf8_vg2 (0, 1))\n+\n /*\n ** readz_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -54,6 +64,15 @@ TEST_READ_ZA_XN (readz_za8_u8_z18_0_w15, svuint8x2_t,\n \t\t z18 = svreadz_hor_za8_u8_vg2 (0, w15),\n \t\t z18 = svreadz_hor_za8_u8_vg2 (0, w15))\n \n+/*\n+** readz_za8_mf8_z18_0_w15:\n+**\tmovaz\t{z18\\.b - z19\\.b}, za0h\\.b\\[w15, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z18_0_w15, svmfloat8x2_t,\n+\t\t z18 = svreadz_hor_za8_mf8_vg2 (0, w15),\n+\t\t z18 = 
svreadz_hor_za8_mf8_vg2 (0, w15))\n+\n /*\n ** readz_za8_s8_z23_0_w12p14:\n **\tmovaz\t{[^\\n]+}, za0h\\.b\\[w12, 14:15\\]\n@@ -75,6 +94,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12p1, svuint8x2_t,\n \t\t z4 = svreadz_hor_za8_u8_vg2 (0, w12 + 1),\n \t\t z4 = svreadz_hor_za8_u8_vg2 (0, w12 + 1))\n \n+/*\n+** readz_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmovaz\t{z4\\.b - z5\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w12p1, svmfloat8x2_t,\n+\t\t z4 = svreadz_hor_za8_mf8_vg2 (0, w12 + 1),\n+\t\t z4 = svreadz_hor_za8_mf8_vg2 (0, w12 + 1))\n+\n /*\n ** readz_za8_s8_z28_0_w12p2:\n **\tmovaz\t{z28\\.b - z29\\.b}, za0h\\.b\\[w12, 2:3\\]\n@@ -94,6 +123,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z0_0_w15p3, svuint8x2_t,\n \t\t z0 = svreadz_hor_za8_u8_vg2 (0, w15 + 3),\n \t\t z0 = svreadz_hor_za8_u8_vg2 (0, w15 + 3))\n \n+/*\n+** readz_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmovaz\t{z0\\.b - z1\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z0_0_w15p3, svmfloat8x2_t,\n+\t\t z0 = svreadz_hor_za8_mf8_vg2 (0, w15 + 3),\n+\t\t z0 = svreadz_hor_za8_mf8_vg2 (0, w15 + 3))\n+\n /*\n ** readz_za8_u8_z4_0_w15p12:\n **\tmovaz\t{z4\\.b - z5\\.b}, za0h\\.b\\[w15, 12:13\\]\n@@ -103,6 +142,15 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w15p12, svuint8x2_t,\n \t\t z4 = svreadz_hor_za8_u8_vg2 (0, w15 + 12),\n \t\t z4 = svreadz_hor_za8_u8_vg2 (0, w15 + 12))\n \n+/*\n+** readz_za8_mf8_z4_0_w15p12:\n+**\tmovaz\t{z4\\.b - z5\\.b}, za0h\\.b\\[w15, 12:13\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w15p12, svmfloat8x2_t,\n+\t\t z4 = svreadz_hor_za8_mf8_vg2 (0, w15 + 12),\n+\t\t z4 = svreadz_hor_za8_mf8_vg2 (0, w15 + 12))\n+\n /*\n ** readz_za8_u8_z28_0_w12p15:\n **\tadd\t(w[0-9]+), w12, #?15\n@@ -113,6 +161,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z28_0_w12p15, svuint8x2_t,\n \t\t z28 = svreadz_hor_za8_u8_vg2 (0, w12 + 15),\n \t\t z28 = svreadz_hor_za8_u8_vg2 (0, w12 + 15))\n \n+/*\n+** 
readz_za8_mf8_z28_0_w12p15:\n+**\tadd\t(w[0-9]+), w12, #?15\n+**\tmovaz\t{z28\\.b - z29\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z28_0_w12p15, svmfloat8x2_t,\n+\t\t z28 = svreadz_hor_za8_mf8_vg2 (0, w12 + 15),\n+\t\t z28 = svreadz_hor_za8_mf8_vg2 (0, w12 + 15))\n+\n /*\n ** readz_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -133,6 +191,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12m1, svuint8x2_t,\n \t\t z4 = svreadz_hor_za8_u8_vg2 (0, w12 - 1),\n \t\t z4 = svreadz_hor_za8_u8_vg2 (0, w12 - 1))\n \n+/*\n+** readz_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmovaz\t{z4\\.b - z5\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w12m1, svmfloat8x2_t,\n+\t\t z4 = svreadz_hor_za8_mf8_vg2 (0, w12 - 1),\n+\t\t z4 = svreadz_hor_za8_mf8_vg2 (0, w12 - 1))\n+\n /*\n ** readz_za8_u8_z18_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -142,3 +210,13 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12m1, svuint8x2_t,\n TEST_READ_ZA_XN (readz_za8_u8_z18_0_w16, svuint8x2_t,\n \t\t z18 = svreadz_hor_za8_u8_vg2 (0, w16),\n \t\t z18 = svreadz_hor_za8_u8_vg2 (0, w16))\n+\n+/*\n+** readz_za8_mf8_z18_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmovaz\t{z18\\.b - z19\\.b}, za0h\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z18_0_w16, svmfloat8x2_t,\n+\t\t z18 = svreadz_hor_za8_mf8_vg2 (0, w16),\n+\t\t z18 = svreadz_hor_za8_mf8_vg2 (0, w16))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg4.c\nindex ca71bc513e3..eec47bf3152 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg4.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_hor_za8_vg4.c\n@@ -26,6 +26,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_1, svuint8x4_t,\n \t\t z4 = svreadz_hor_za8_u8_vg4 (0, 1),\n \t\t z4 = svreadz_hor_za8_u8_vg4 (0, 1))\n \n+/*\n+** readz_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmovaz\t{z4\\.b - 
z7\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_1, svmfloat8x4_t,\n+\t\t z4 = svreadz_hor_za8_mf8_vg4 (0, 1),\n+\t\t z4 = svreadz_hor_za8_mf8_vg4 (0, 1))\n+\n /*\n ** readz_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -58,6 +68,19 @@ TEST_READ_ZA_XN (readz_za8_u8_z18_0_w15, svuint8x4_t,\n \t\t z18 = svreadz_hor_za8_u8_vg4 (0, w15),\n \t\t z18 = svreadz_hor_za8_u8_vg4 (0, w15))\n \n+/*\n+** readz_za8_mf8_z18_0_w15:\n+**\tmovaz\t{[^\\n]+}, za0h\\.b\\[w15, 0:3\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z18_0_w15, svmfloat8x4_t,\n+\t\t z18 = svreadz_hor_za8_mf8_vg4 (0, w15),\n+\t\t z18 = svreadz_hor_za8_mf8_vg4 (0, w15))\n+\n /*\n ** readz_za8_s8_z23_0_w12p12:\n **\tmovaz\t{[^\\n]+}, za0h\\.b\\[w12, 12:15\\]\n@@ -81,6 +104,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12p1, svuint8x4_t,\n \t\t z4 = svreadz_hor_za8_u8_vg4 (0, w12 + 1),\n \t\t z4 = svreadz_hor_za8_u8_vg4 (0, w12 + 1))\n \n+/*\n+** readz_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmovaz\t{z4\\.b - z7\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w12p1, svmfloat8x4_t,\n+\t\t z4 = svreadz_hor_za8_mf8_vg4 (0, w12 + 1),\n+\t\t z4 = svreadz_hor_za8_mf8_vg4 (0, w12 + 1))\n+\n /*\n ** readz_za8_s8_z28_0_w12p2:\n **\tadd\t(w[0-9]+), w12, #?2\n@@ -101,6 +134,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z0_0_w15p3, svuint8x4_t,\n \t\t z0 = svreadz_hor_za8_u8_vg4 (0, w15 + 3),\n \t\t z0 = svreadz_hor_za8_u8_vg4 (0, w15 + 3))\n \n+/*\n+** readz_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmovaz\t{z0\\.b - z3\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z0_0_w15p3, svmfloat8x4_t,\n+\t\t z0 = svreadz_hor_za8_mf8_vg4 (0, w15 + 3),\n+\t\t z0 = svreadz_hor_za8_mf8_vg4 (0, w15 + 3))\n+\n /*\n ** readz_za8_u8_z0_0_w12p4:\n **\tmovaz\t{z0\\.b - z3\\.b}, za0h\\.b\\[w12, 4:7\\]\n@@ -110,6 +153,15 @@ TEST_READ_ZA_XN 
(readz_za8_u8_z0_0_w12p4, svuint8x4_t,\n \t\t z0 = svreadz_hor_za8_u8_vg4 (0, w12 + 4),\n \t\t z0 = svreadz_hor_za8_u8_vg4 (0, w12 + 4))\n \n+/*\n+** readz_za8_mf8_z0_0_w12p4:\n+**\tmovaz\t{z0\\.b - z3\\.b}, za0h\\.b\\[w12, 4:7\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z0_0_w12p4, svmfloat8x4_t,\n+\t\t z0 = svreadz_hor_za8_mf8_vg4 (0, w12 + 4),\n+\t\t z0 = svreadz_hor_za8_mf8_vg4 (0, w12 + 4))\n+\n /*\n ** readz_za8_u8_z4_0_w15p12:\n **\tmovaz\t{z4\\.b - z7\\.b}, za0h\\.b\\[w15, 12:15\\]\n@@ -119,6 +171,15 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w15p12, svuint8x4_t,\n \t\t z4 = svreadz_hor_za8_u8_vg4 (0, w15 + 12),\n \t\t z4 = svreadz_hor_za8_u8_vg4 (0, w15 + 12))\n \n+/*\n+** readz_za8_mf8_z4_0_w15p12:\n+**\tmovaz\t{z4\\.b - z7\\.b}, za0h\\.b\\[w15, 12:15\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w15p12, svmfloat8x4_t,\n+\t\t z4 = svreadz_hor_za8_mf8_vg4 (0, w15 + 12),\n+\t\t z4 = svreadz_hor_za8_mf8_vg4 (0, w15 + 12))\n+\n /*\n ** readz_za8_u8_z28_0_w12p14:\n **\tadd\t(w[0-9]+), w12, #?14\n@@ -129,6 +190,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z28_0_w12p14, svuint8x4_t,\n \t\t z28 = svreadz_hor_za8_u8_vg4 (0, w12 + 14),\n \t\t z28 = svreadz_hor_za8_u8_vg4 (0, w12 + 14))\n \n+/*\n+** readz_za8_mf8_z28_0_w12p14:\n+**\tadd\t(w[0-9]+), w12, #?14\n+**\tmovaz\t{z28\\.b - z31\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z28_0_w12p14, svmfloat8x4_t,\n+\t\t z28 = svreadz_hor_za8_mf8_vg4 (0, w12 + 14),\n+\t\t z28 = svreadz_hor_za8_mf8_vg4 (0, w12 + 14))\n+\n /*\n ** readz_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -149,6 +220,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12m1, svuint8x4_t,\n \t\t z4 = svreadz_hor_za8_u8_vg4 (0, w12 - 1),\n \t\t z4 = svreadz_hor_za8_u8_vg4 (0, w12 - 1))\n \n+/*\n+** readz_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmovaz\t{z4\\.b - z7\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w12m1, svmfloat8x4_t,\n+\t\t z4 = 
svreadz_hor_za8_mf8_vg4 (0, w12 - 1),\n+\t\t z4 = svreadz_hor_za8_mf8_vg4 (0, w12 - 1))\n+\n /*\n ** readz_za8_u8_z28_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -158,3 +239,13 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12m1, svuint8x4_t,\n TEST_READ_ZA_XN (readz_za8_u8_z28_0_w16, svuint8x4_t,\n \t\t z28 = svreadz_hor_za8_u8_vg4 (0, w16),\n \t\t z28 = svreadz_hor_za8_u8_vg4 (0, w16))\n+\n+/*\n+** readz_za8_mf8_z28_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmovaz\t{z28\\.b - z31\\.b}, za0h\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z28_0_w16, svmfloat8x4_t,\n+\t\t z28 = svreadz_hor_za8_mf8_vg4 (0, w16),\n+\t\t z28 = svreadz_hor_za8_mf8_vg4 (0, w16))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za128.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za128.c\nnew file mode 100644\nindex 00000000000..401543cbbcd\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za128.c\n@@ -0,0 +1,197 @@\n+/* { dg-do assemble { target aarch64_asm_sme2p1_ok } } */\n+/* { dg-do compile { target { ! 
aarch64_asm_sme2p1_ok } } } */\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+#pragma GCC target \"+sme2p1\"\n+\n+/*\n+** readz_za128_s8_0_0:\n+**\tmov\t(w1[2-5]), (?:wzr|#?0)\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s8_0_0, svint8_t,\n+\t      z0 = svreadz_ver_za128_s8 (0, 0),\n+\t      z0 = svreadz_ver_za128_s8 (0, 0))\n+\n+/*\n+** readz_za128_s8_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s8_0_1, svint8_t,\n+\t      z0 = svreadz_ver_za128_s8 (0, 1),\n+\t      z0 = svreadz_ver_za128_s8 (0, 1))\n+\n+/*\n+** readz_za128_s8_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s8_0_w0, svint8_t,\n+\t      z0 = svreadz_ver_za128_s8 (0, w0),\n+\t      z0 = svreadz_ver_za128_s8 (0, w0))\n+\n+/*\n+** readz_za128_s8_0_w0p1:\n+**\tadd\t(w1[2-5]), w0, #?1\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s8_0_w0p1, svint8_t,\n+\t      z0 = svreadz_ver_za128_s8 (0, w0 + 1),\n+\t      z0 = svreadz_ver_za128_s8 (0, w0 + 1))\n+\n+/*\n+** readz_za128_s8_0_w0m1:\n+**\tsub\t(w1[2-5]), w0, #?1\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s8_0_w0m1, svint8_t,\n+\t      z0 = svreadz_ver_za128_s8 (0, w0 - 1),\n+\t      z0 = svreadz_ver_za128_s8 (0, w0 - 1))\n+\n+/*\n+** readz_za128_s8_1_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za1v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s8_1_w0, svint8_t,\n+\t      z0 = svreadz_ver_za128_s8 (1, w0),\n+\t      z0 = svreadz_ver_za128_s8 (1, w0))\n+\n+/*\n+** readz_za128_s8_15_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za15v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s8_15_w0, svint8_t,\n+\t      z0 = svreadz_ver_za128_s8 (15, w0),\n+\t      z0 = svreadz_ver_za128_s8 (15, 
w0))\n+\n+/*\n+** readz_za128_u8_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_u8_0_w0, svuint8_t,\n+\t      z0 = svreadz_ver_za128_u8 (0, w0),\n+\t      z0 = svreadz_ver_za128_u8 (0, w0))\n+\n+/*\n+** readz_za128_mf8_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_mf8_0_w0, svmfloat8_t,\n+\t      z0 = svreadz_ver_za128_mf8 (0, w0),\n+\t      z0 = svreadz_ver_za128_mf8 (0, w0))\n+\n+/*\n+** readz_za128_s16_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s16_0_w0, svint16_t,\n+\t      z0 = svreadz_ver_za128_s16 (0, w0),\n+\t      z0 = svreadz_ver_za128_s16 (0, w0))\n+\n+/*\n+** readz_za128_u16_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_u16_0_w0, svuint16_t,\n+\t      z0 = svreadz_ver_za128_u16 (0, w0),\n+\t      z0 = svreadz_ver_za128_u16 (0, w0))\n+\n+/*\n+** readz_za128_f16_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_f16_0_w0, svfloat16_t,\n+\t      z0 = svreadz_ver_za128_f16 (0, w0),\n+\t      z0 = svreadz_ver_za128_f16 (0, w0))\n+\n+/*\n+** readz_za128_bf16_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_bf16_0_w0, svbfloat16_t,\n+\t      z0 = svreadz_ver_za128_bf16 (0, w0),\n+\t      z0 = svreadz_ver_za128_bf16 (0, w0))\n+\n+/*\n+** readz_za128_s32_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s32_0_w0, svint32_t,\n+\t      z0 = svreadz_ver_za128_s32 (0, w0),\n+\t      z0 = svreadz_ver_za128_s32 (0, w0))\n+\n+/*\n+** readz_za128_u32_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_u32_0_w0, svuint32_t,\n+\t      z0 
= svreadz_ver_za128_u32 (0, w0),\n+\t      z0 = svreadz_ver_za128_u32 (0, w0))\n+\n+/*\n+** readz_za128_f32_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_f32_0_w0, svfloat32_t,\n+\t      z0 = svreadz_ver_za128_f32 (0, w0),\n+\t      z0 = svreadz_ver_za128_f32 (0, w0))\n+\n+/*\n+** readz_za128_s64_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_s64_0_w0, svint64_t,\n+\t      z0 = svreadz_ver_za128_s64 (0, w0),\n+\t      z0 = svreadz_ver_za128_s64 (0, w0))\n+\n+/*\n+** readz_za128_u64_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_u64_0_w0, svuint64_t,\n+\t      z0 = svreadz_ver_za128_u64 (0, w0),\n+\t      z0 = svreadz_ver_za128_u64 (0, w0))\n+\n+/*\n+** readz_za128_f64_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.q, za0v\\.q\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za128_f64_0_w0, svfloat64_t,\n+\t      z0 = svreadz_ver_za128_f64 (0, w0),\n+\t      z0 = svreadz_ver_za128_f64 (0, w0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8.c\nindex 4bd5ae783ef..66c42cecd31 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8.c\n@@ -85,3 +85,13 @@ TEST_READ_ZA (readz_za8_s8_0_w0m1, svint8_t,\n TEST_READ_ZA (readz_za8_u8_0_w0, svuint8_t,\n \t      z0 = svreadz_ver_za8_u8 (0, w0),\n \t      z0 = svreadz_ver_za8_u8 (0, w0))\n+\n+/*\n+** readz_za8_mf8_0_w0:\n+**\tmov\t(w1[2-5]), w0\n+**\tmovaz\tz0\\.b, za0v\\.b\\[\\1, 0\\]\n+**\tret\n+*/\n+TEST_READ_ZA (readz_za8_mf8_0_w0, svmfloat8_t,\n+\t      z0 = svreadz_ver_za8_mf8 (0, w0),\n+\t      z0 = svreadz_ver_za8_mf8 (0, w0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg2.c 
b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg2.c\nindex 940a5619a13..daa6b131587 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg2.c\n@@ -26,6 +26,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_1, svuint8x2_t,\n \t\t z4 = svreadz_ver_za8_u8_vg2 (0, 1),\n \t\t z4 = svreadz_ver_za8_u8_vg2 (0, 1))\n \n+/*\n+** readz_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmovaz\t{z4\\.b - z5\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_1, svmfloat8x2_t,\n+\t\t z4 = svreadz_ver_za8_mf8_vg2 (0, 1),\n+\t\t z4 = svreadz_ver_za8_mf8_vg2 (0, 1))\n+\n /*\n ** readz_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -54,6 +64,15 @@ TEST_READ_ZA_XN (readz_za8_u8_z18_0_w15, svuint8x2_t,\n \t\t z18 = svreadz_ver_za8_u8_vg2 (0, w15),\n \t\t z18 = svreadz_ver_za8_u8_vg2 (0, w15))\n \n+/*\n+** readz_za8_mf8_z18_0_w15:\n+**\tmovaz\t{z18\\.b - z19\\.b}, za0v\\.b\\[w15, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z18_0_w15, svmfloat8x2_t,\n+\t\t z18 = svreadz_ver_za8_mf8_vg2 (0, w15),\n+\t\t z18 = svreadz_ver_za8_mf8_vg2 (0, w15))\n+\n /*\n ** readz_za8_s8_z23_0_w12p14:\n **\tmovaz\t{[^\\n]+}, za0v\\.b\\[w12, 14:15\\]\n@@ -75,6 +94,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12p1, svuint8x2_t,\n \t\t z4 = svreadz_ver_za8_u8_vg2 (0, w12 + 1),\n \t\t z4 = svreadz_ver_za8_u8_vg2 (0, w12 + 1))\n \n+/*\n+** readz_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmovaz\t{z4\\.b - z5\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w12p1, svmfloat8x2_t,\n+\t\t z4 = svreadz_ver_za8_mf8_vg2 (0, w12 + 1),\n+\t\t z4 = svreadz_ver_za8_mf8_vg2 (0, w12 + 1))\n+\n /*\n ** readz_za8_s8_z28_0_w12p2:\n **\tmovaz\t{z28\\.b - z29\\.b}, za0v\\.b\\[w12, 2:3\\]\n@@ -94,6 +123,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z0_0_w15p3, svuint8x2_t,\n \t\t z0 = svreadz_ver_za8_u8_vg2 (0, w15 + 3),\n \t\t z0 = svreadz_ver_za8_u8_vg2 
(0, w15 + 3))\n \n+/*\n+** readz_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmovaz\t{z0\\.b - z1\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z0_0_w15p3, svmfloat8x2_t,\n+\t\t z0 = svreadz_ver_za8_mf8_vg2 (0, w15 + 3),\n+\t\t z0 = svreadz_ver_za8_mf8_vg2 (0, w15 + 3))\n+\n /*\n ** readz_za8_u8_z4_0_w15p12:\n **\tmovaz\t{z4\\.b - z5\\.b}, za0v\\.b\\[w15, 12:13\\]\n@@ -103,6 +142,15 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w15p12, svuint8x2_t,\n \t\t z4 = svreadz_ver_za8_u8_vg2 (0, w15 + 12),\n \t\t z4 = svreadz_ver_za8_u8_vg2 (0, w15 + 12))\n \n+/*\n+** readz_za8_mf8_z4_0_w15p12:\n+**\tmovaz\t{z4\\.b - z5\\.b}, za0v\\.b\\[w15, 12:13\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w15p12, svmfloat8x2_t,\n+\t\t z4 = svreadz_ver_za8_mf8_vg2 (0, w15 + 12),\n+\t\t z4 = svreadz_ver_za8_mf8_vg2 (0, w15 + 12))\n+\n /*\n ** readz_za8_u8_z28_0_w12p15:\n **\tadd\t(w[0-9]+), w12, #?15\n@@ -113,6 +161,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z28_0_w12p15, svuint8x2_t,\n \t\t z28 = svreadz_ver_za8_u8_vg2 (0, w12 + 15),\n \t\t z28 = svreadz_ver_za8_u8_vg2 (0, w12 + 15))\n \n+/*\n+** readz_za8_mf8_z28_0_w12p15:\n+**\tadd\t(w[0-9]+), w12, #?15\n+**\tmovaz\t{z28\\.b - z29\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z28_0_w12p15, svmfloat8x2_t,\n+\t\t z28 = svreadz_ver_za8_mf8_vg2 (0, w12 + 15),\n+\t\t z28 = svreadz_ver_za8_mf8_vg2 (0, w12 + 15))\n+\n /*\n ** readz_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -133,6 +191,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12m1, svuint8x2_t,\n \t\t z4 = svreadz_ver_za8_u8_vg2 (0, w12 - 1),\n \t\t z4 = svreadz_ver_za8_u8_vg2 (0, w12 - 1))\n \n+/*\n+** readz_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmovaz\t{z4\\.b - z5\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w12m1, svmfloat8x2_t,\n+\t\t z4 = svreadz_ver_za8_mf8_vg2 (0, w12 - 1),\n+\t\t z4 = svreadz_ver_za8_mf8_vg2 (0, w12 - 1))\n+\n /*\n ** 
readz_za8_u8_z18_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -142,3 +210,12 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12m1, svuint8x2_t,\n TEST_READ_ZA_XN (readz_za8_u8_z18_0_w16, svuint8x2_t,\n \t\t z18 = svreadz_ver_za8_u8_vg2 (0, w16),\n \t\t z18 = svreadz_ver_za8_u8_vg2 (0, w16))\n+/*\n+** readz_za8_mf8_z18_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmovaz\t{z18\\.b - z19\\.b}, za0v\\.b\\[\\1, 0:1\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z18_0_w16, svmfloat8x2_t,\n+\t\t z18 = svreadz_ver_za8_mf8_vg2 (0, w16),\n+\t\t z18 = svreadz_ver_za8_mf8_vg2 (0, w16))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg4.c\nindex 9f776ded80f..f3c06d8f029 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg4.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_ver_za8_vg4.c\n@@ -26,6 +26,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_1, svuint8x4_t,\n \t\t z4 = svreadz_ver_za8_u8_vg4 (0, 1),\n \t\t z4 = svreadz_ver_za8_u8_vg4 (0, 1))\n \n+/*\n+** readz_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmovaz\t{z4\\.b - z7\\.b}, za0v\\.b\\[\\1, 0:3\\]\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_1, svmfloat8x4_t,\n+\t\t z4 = svreadz_ver_za8_mf8_vg4 (0, 1),\n+\t\t z4 = svreadz_ver_za8_mf8_vg4 (0, 1))\n+\n /*\n ** readz_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -58,6 +68,19 @@ TEST_READ_ZA_XN (readz_za8_u8_z18_0_w15, svuint8x4_t,\n \t\t z18 = svreadz_ver_za8_u8_vg4 (0, w15),\n \t\t z18 = svreadz_ver_za8_u8_vg4 (0, w15))\n \n+/*\n+** readz_za8_mf8_z18_0_w15:\n+**\tmovaz\t{[^\\n]+}, za0v\\.b\\[w15, 0:3\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_READ_ZA_XN (readz_za8_mf8_z18_0_w15, svmfloat8x4_t,\n+\t\t z18 = svreadz_ver_za8_mf8_vg4 (0, w15),\n+\t\t z18 = svreadz_ver_za8_mf8_vg4 (0, w15))\n+\n /*\n ** readz_za8_s8_z23_0_w12p12:\n **\tmovaz\t{[^\\n]+}, za0v\\.b\\[w12, 12:15\\]\n@@ -81,6 +104,16 @@ 
TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12p1, svuint8x4_t,
 		 z4 = svreadz_ver_za8_u8_vg4 (0, w12 + 1),
 		 z4 = svreadz_ver_za8_u8_vg4 (0, w12 + 1))
 
+/*
+** readz_za8_mf8_z4_0_w12p1:
+**	add	(w[0-9]+), w12, #?1
+**	movaz	{z4\.b - z7\.b}, za0v\.b\[\1, 0:3\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w12p1, svmfloat8x4_t,
+		 z4 = svreadz_ver_za8_mf8_vg4 (0, w12 + 1),
+		 z4 = svreadz_ver_za8_mf8_vg4 (0, w12 + 1))
+
 /*
 ** readz_za8_s8_z28_0_w12p2:
 **	add	(w[0-9]+), w12, #?2
@@ -101,6 +134,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z0_0_w15p3, svuint8x4_t,
 		 z0 = svreadz_ver_za8_u8_vg4 (0, w15 + 3),
 		 z0 = svreadz_ver_za8_u8_vg4 (0, w15 + 3))
 
+/*
+** readz_za8_mf8_z0_0_w15p3:
+**	add	(w[0-9]+), w15, #?3
+**	movaz	{z0\.b - z3\.b}, za0v\.b\[\1, 0:3\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_za8_mf8_z0_0_w15p3, svmfloat8x4_t,
+		 z0 = svreadz_ver_za8_mf8_vg4 (0, w15 + 3),
+		 z0 = svreadz_ver_za8_mf8_vg4 (0, w15 + 3))
+
 /*
 ** readz_za8_u8_z0_0_w12p4:
 **	movaz	{z0\.b - z3\.b}, za0v\.b\[w12, 4:7\]
@@ -110,6 +153,15 @@ TEST_READ_ZA_XN (readz_za8_u8_z0_0_w12p4, svuint8x4_t,
 		 z0 = svreadz_ver_za8_u8_vg4 (0, w12 + 4),
 		 z0 = svreadz_ver_za8_u8_vg4 (0, w12 + 4))
 
+/*
+** readz_za8_mf8_z0_0_w12p4:
+**	movaz	{z0\.b - z3\.b}, za0v\.b\[w12, 4:7\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_za8_mf8_z0_0_w12p4, svmfloat8x4_t,
+		 z0 = svreadz_ver_za8_mf8_vg4 (0, w12 + 4),
+		 z0 = svreadz_ver_za8_mf8_vg4 (0, w12 + 4))
+
 /*
 ** readz_za8_u8_z4_0_w15p12:
 **	movaz	{z4\.b - z7\.b}, za0v\.b\[w15, 12:15\]
@@ -119,6 +171,15 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w15p12, svuint8x4_t,
 		 z4 = svreadz_ver_za8_u8_vg4 (0, w15 + 12),
 		 z4 = svreadz_ver_za8_u8_vg4 (0, w15 + 12))
 
+/*
+** readz_za8_mf8_z4_0_w15p12:
+**	movaz	{z4\.b - z7\.b}, za0v\.b\[w15, 12:15\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w15p12, svmfloat8x4_t,
+		 z4 = svreadz_ver_za8_mf8_vg4 (0, w15 + 12),
+		 z4 = svreadz_ver_za8_mf8_vg4 (0, w15 + 12))
+
 /*
 ** readz_za8_u8_z28_0_w12p14:
 **	add	(w[0-9]+), w12, #?14
@@ -129,6 +190,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z28_0_w12p14, svuint8x4_t,
 		 z28 = svreadz_ver_za8_u8_vg4 (0, w12 + 14),
 		 z28 = svreadz_ver_za8_u8_vg4 (0, w12 + 14))
 
+/*
+** readz_za8_mf8_z28_0_w12p14:
+**	add	(w[0-9]+), w12, #?14
+**	movaz	{z28\.b - z31\.b}, za0v\.b\[\1, 0:3\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_za8_mf8_z28_0_w12p14, svmfloat8x4_t,
+		 z28 = svreadz_ver_za8_mf8_vg4 (0, w12 + 14),
+		 z28 = svreadz_ver_za8_mf8_vg4 (0, w12 + 14))
+
 /*
 ** readz_za8_s8_z0_0_w15p16:
 **	add	(w[0-9]+), w15, #?16
@@ -149,6 +220,16 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12m1, svuint8x4_t,
 		 z4 = svreadz_ver_za8_u8_vg4 (0, w12 - 1),
 		 z4 = svreadz_ver_za8_u8_vg4 (0, w12 - 1))
 
+/*
+** readz_za8_mf8_z4_0_w12m1:
+**	sub	(w[0-9]+), w12, #?1
+**	movaz	{z4\.b - z7\.b}, za0v\.b\[\1, 0:3\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_za8_mf8_z4_0_w12m1, svmfloat8x4_t,
+		 z4 = svreadz_ver_za8_mf8_vg4 (0, w12 - 1),
+		 z4 = svreadz_ver_za8_mf8_vg4 (0, w12 - 1))
+
 /*
 ** readz_za8_u8_z28_0_w16:
 **	mov	(w1[2-5]), w16
@@ -158,3 +239,12 @@ TEST_READ_ZA_XN (readz_za8_u8_z4_0_w12m1, svuint8x4_t,
 TEST_READ_ZA_XN (readz_za8_u8_z28_0_w16, svuint8x4_t,
 		 z28 = svreadz_ver_za8_u8_vg4 (0, w16),
 		 z28 = svreadz_ver_za8_u8_vg4 (0, w16))
+/*
+** readz_za8_mf8_z28_0_w16:
+**	mov	(w1[2-5]), w16
+**	movaz	{z28\.b - z31\.b}, za0v\.b\[\1, 0:3\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_za8_mf8_z28_0_w16, svmfloat8x4_t,
+		 z28 = svreadz_ver_za8_mf8_vg4 (0, w16),
+		 z28 = svreadz_ver_za8_mf8_vg4 (0, w16))
diff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x2.c
index 7bdb17d7e79..f4d40315acd 100644
--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x2.c
+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x2.c
@@ -36,6 +36,16 @@ TEST_READ_ZA_XN (readz_w7_z0, svuint8x2_t,
 		 z0 = svreadz_za8_u8_vg1x2 (w7),
 		 z0 = svreadz_za8_u8_vg1x2 (w7))
 
+/*
+** readz_mf8_w7_z0:
+**	mov	(w8|w9|w10|w11), w7
+**	movaz	{z0\.d - z1\.d}, za\.d\[\1, 0, vgx2\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w7_z0, svmfloat8x2_t,
+		 z0 = svreadz_za8_mf8_vg1x2 (w7),
+		 z0 = svreadz_za8_mf8_vg1x2 (w7))
+
 /*
 ** readz_w8_z0:
 **	movaz	{z0\.d - z1\.d}, za\.d\[w8, 0, vgx2\]
@@ -65,6 +75,16 @@ TEST_READ_ZA_XN (readz_w12_z0, svuint8x2_t,
 		 z0 = svreadz_za8_u8_vg1x2 (w12),
 		 z0 = svreadz_za8_u8_vg1x2 (w12))
 
+/*
+** readz_mf8_w12_z0:
+**	mov	(w8|w9|w10|w11), w12
+**	movaz	{z0\.d - z1\.d}, za\.d\[\1, 0, vgx2\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w12_z0, svmfloat8x2_t,
+		 z0 = svreadz_za8_mf8_vg1x2 (w12),
+		 z0 = svreadz_za8_mf8_vg1x2 (w12))
+
 /*
 ** readz_w8p7_z0:
 **	movaz	{z0\.d - z1\.d}, za\.d\[w8, 7, vgx2\]
@@ -94,6 +114,16 @@ TEST_READ_ZA_XN (readz_w8m1_z0, svuint8x2_t,
 		 z0 = svreadz_za8_u8_vg1x2 (w8 - 1),
 		 z0 = svreadz_za8_u8_vg1x2 (w8 - 1))
 
+/*
+** readz_mf8_w8m1_z0:
+**	sub	(w8|w9|w10|w11), w8, #?1
+**	movaz	{z0\.d - z1\.d}, za\.d\[\1, 0, vgx2\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w8m1_z0, svmfloat8x2_t,
+		 z0 = svreadz_za8_mf8_vg1x2 (w8 - 1),
+		 z0 = svreadz_za8_mf8_vg1x2 (w8 - 1))
+
 /*
 ** readz_w8_z18:
 **	movaz	{z18\.d - z19\.d}, za\.d\[w8, 0, vgx2\]
@@ -103,6 +133,15 @@ TEST_READ_ZA_XN (readz_w8_z18, svuint8x2_t,
 		 z18 = svreadz_za8_u8_vg1x2 (w8),
 		 z18 = svreadz_za8_u8_vg1x2 (w8))
 
+/*
+** readz_mf8_w8_z18:
+**	movaz	{z18\.d - z19\.d}, za\.d\[w8, 0, vgx2\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w8_z18, svmfloat8x2_t,
+		 z18 = svreadz_za8_mf8_vg1x2 (w8),
+		 z18 = svreadz_za8_mf8_vg1x2 (w8))
+
 /* Leave the assembler to check for correctness for misaligned registers.  */
 
 /*
@@ -124,3 +163,12 @@ TEST_READ_ZA_XN (readz_w8_z23, svint8x2_t,
 TEST_READ_ZA_XN (readz_w8_z28, svuint8x2_t,
 		 z28 = svreadz_za8_u8_vg1x2 (w8),
 		 z28 = svreadz_za8_u8_vg1x2 (w8))
+
+/*
+** readz_mf8_w8_z28:
+**	movaz	{z28\.d - z29\.d}, za\.d\[w8, 0, vgx2\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w8_z28, svmfloat8x2_t,
+		 z28 = svreadz_za8_mf8_vg1x2 (w8),
+		 z28 = svreadz_za8_mf8_vg1x2 (w8))
diff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x4.c
index 02beaae85c6..d9be244c62c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x4.c
+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/readz_za8_vg1x4.c
@@ -26,6 +26,16 @@ TEST_READ_ZA_XN (readz_w0_z0, svuint8x4_t,
 		 z0 = svreadz_za8_u8_vg1x4 (w0),
 		 z0 = svreadz_za8_u8_vg1x4 (w0))
 
+/*
+** readz_mf8_w0_z0:
+**	mov	(w8|w9|w10|w11), w0
+**	movaz	{z0\.d - z3\.d}, za\.d\[\1, 0, vgx4\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w0_z0, svmfloat8x4_t,
+		 z0 = svreadz_za8_mf8_vg1x4 (w0),
+		 z0 = svreadz_za8_mf8_vg1x4 (w0))
+
 /*
 ** readz_w7_z0:
 **	mov	(w8|w9|w10|w11), w7
@@ -55,6 +65,16 @@ TEST_READ_ZA_XN (readz_w11_z0, svuint8x4_t,
 		 z0 = svreadz_za8_u8_vg1x4 (w11))
 
 
+/*
+** readz_mf8_w11_z0:
+**	movaz	{z0\.d - z3\.d}, za\.d\[w11, 0, vgx4\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w11_z0, svmfloat8x4_t,
+		 z0 = svreadz_za8_mf8_vg1x4 (w11),
+		 z0 = svreadz_za8_mf8_vg1x4 (w11))
+
+
 /*
 ** readz_w12_z0:
 **	mov	(w8|w9|w10|w11), w12
@@ -84,6 +104,16 @@ TEST_READ_ZA_XN (readz_w8p8_z0, svuint8x4_t,
 		 z0 = svreadz_za8_u8_vg1x4 (w8 + 8),
 		 z0 = svreadz_za8_u8_vg1x4 (w8 + 8))
 
+/*
+** readz_mf8_w8p8_z0:
+**	add	(w8|w9|w10|w11), w8, #?8
+**	movaz	{z0\.d - z3\.d}, za\.d\[\1, 0, vgx4\]
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w8p8_z0, svmfloat8x4_t,
+		 z0 = svreadz_za8_mf8_vg1x4 (w8 + 8),
+		 z0 = svreadz_za8_mf8_vg1x4 (w8 + 8))
+
 /*
 ** readz_w8m1_z0:
 **	sub	(w8|w9|w10|w11), w8, #?1
@@ -118,6 +148,19 @@ TEST_READ_ZA_XN (readz_w8_z18, svuint8x4_t,
 		 z18 = svreadz_za8_u8_vg1x4 (w8),
 		 z18 = svreadz_za8_u8_vg1x4 (w8))
 
+/*
+** readz_mf8_w8_z18:
+**	movaz	[^\n]+, za\.d\[w8, 0, vgx4\]
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w8_z18, svmfloat8x4_t,
+		 z18 = svreadz_za8_mf8_vg1x4 (w8),
+		 z18 = svreadz_za8_mf8_vg1x4 (w8))
+
 /*
 ** readz_w8_z23:
 **	movaz	[^\n]+, za\.d\[w8, 0, vgx4\]
@@ -131,6 +174,19 @@ TEST_READ_ZA_XN (readz_w8_z23, svuint8x4_t,
 		 z23 = svreadz_za8_u8_vg1x4 (w8),
 		 z23 = svreadz_za8_u8_vg1x4 (w8))
 
+/*
+** readz_mf8_w8_z23:
+**	movaz	[^\n]+, za\.d\[w8, 0, vgx4\]
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	ret
+*/
+TEST_READ_ZA_XN (readz_mf8_w8_z23, svmfloat8x4_t,
+		 z23 = svreadz_za8_mf8_vg1x4 (w8),
+		 z23 = svreadz_za8_mf8_vg1x4 (w8))
+
 /*
 ** readz_w8_z28:
 **	movaz	{z28\.d - z31\.d}, za\.d\[w8, 0, vgx4\]
diff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/sel_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/sel_mf8_x2.c
new file mode 100644
index 00000000000..1192aa84dc2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/sel_mf8_x2.c
@@ -0,0 +1,92 @@
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
+
+#include "test_sme2_acle.h"
+
+/*
+** sel_z0_pn0_z0_z4:
+**	mov	p([0-9]+)\.b, p0\.b
+**	sel	{z0\.b - z1\.b}, pn\1, {z0\.b - z1\.b}, {z4\.b - z5\.b}
+**	ret
+*/
+TEST_XN (sel_z0_pn0_z0_z4, svmfloat8x2_t, z0,
+	 svsel_mf8_x2 (pn0, z0, z4),
+	 svsel (pn0, z0, z4))
+
+/*
+** sel_z0_pn7_z0_z4:
+**	mov	p([0-9]+)\.b, p7\.b
+**	sel	{z0\.b - z1\.b}, pn\1, {z0\.b - z1\.b}, {z4\.b - z5\.b}
+**	ret
+*/
+TEST_XN (sel_z0_pn7_z0_z4, svmfloat8x2_t, z0,
+	 svsel_mf8_x2 (pn7, z0, z4),
+	 svsel (pn7, z0, z4))
+
+/*
+** sel_z0_pn8_z4_z28:
+**	sel	{z0\.b - z1\.b}, pn8, {z4\.b - z5\.b}, {z28\.b - z29\.b}
+**	ret
+*/
+TEST_XN (sel_z0_pn8_z4_z28, svmfloat8x2_t, z0,
+	 svsel_mf8_x2 (pn8, z4, z28),
+	 svsel (pn8, z4, z28))
+
+/*
+** sel_z4_pn8_z18_z0:
+**	sel	{z4\.b - z5\.b}, pn8, {z18\.b - z19\.b}, {z0\.b - z1\.b}
+**	ret
+*/
+TEST_XN (sel_z4_pn8_z18_z0, svmfloat8x2_t, z4,
+	 svsel_mf8_x2 (pn8, z18, z0),
+	 svsel (pn8, z18, z0))
+
+/*
+** sel_z18_pn15_z28_z4:
+**	sel	{z18\.b - z19\.b}, pn15, {z28\.b - z29\.b}, {z4\.b - z5\.b}
+**	ret
+*/
+TEST_XN (sel_z18_pn15_z28_z4, svmfloat8x2_t, z18,
+	 svsel_mf8_x2 (pn15, z28, z4),
+	 svsel (pn15, z28, z4))
+
+/*
+** sel_z18_pn8_z18_z4:
+**	sel	{z18\.b - z19\.b}, pn8, {z18\.b - z19\.b}, {z4\.b - z5\.b}
+**	ret
+*/
+TEST_XN (sel_z18_pn8_z18_z4, svmfloat8x2_t, z18,
+	 svsel_mf8_x2 (pn8, z18, z4),
+	 svsel (pn8, z18, z4))
+
+/*
+** sel_z23_pn15_z0_z18:
+**	sel	[^\n]+, pn15, {z0\.b - z1\.b}, {z18\.b - z19\.b}
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	ret
+*/
+TEST_XN (sel_z23_pn15_z0_z18, svmfloat8x2_t, z23,
+	 svsel_mf8_x2 (pn15, z0, z18),
+	 svsel (pn15, z0, z18))
+
+/*
+** sel_z0_pn15_z23_z28:
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	sel	{z0\.b - z1\.b}, pn15, {[^}]+}, {z28\.b - z29\.b}
+**	ret
+*/
+TEST_XN (sel_z0_pn15_z23_z28, svmfloat8x2_t, z0,
+	 svsel_mf8_x2 (pn15, z23, z28),
+	 svsel (pn15, z23, z28))
+
+/*
+** sel_z0_pn8_z28_z23:
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	sel	{z0\.b - z1\.b}, pn8, {z28\.b - z29\.b}, {[^}]+}
+**	ret
+*/
+TEST_XN (sel_z0_pn8_z28_z23, svmfloat8x2_t, z0,
+	 svsel_mf8_x2 (pn8, z28, z23),
+	 svsel (pn8, z28, z23))
diff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/sel_mf8_x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/sel_mf8_x4.c
new file mode 100644
index 00000000000..ddcba0318d9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/sel_mf8_x4.c
@@ -0,0 +1,92 @@
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" } } */
+
+#include "test_sme2_acle.h"
+
+/*
+** sel_z0_pn0_z0_z4:
+**	mov	p([0-9]+)\.b, p0\.b
+**	sel	{z0\.b - z3\.b}, pn\1, {z0\.b - z3\.b}, {z4\.b - z7\.b}
+**	ret
+*/
+TEST_XN (sel_z0_pn0_z0_z4, svmfloat8x4_t, z0,
+	 svsel_mf8_x4 (pn0, z0, z4),
+	 svsel (pn0, z0, z4))
+
+/*
+** sel_z0_pn7_z0_z4:
+**	mov	p([0-9]+)\.b, p7\.b
+**	sel	{z0\.b - z3\.b}, pn\1, {z0\.b - z3\.b}, {z4\.b - z7\.b}
+**	ret
+*/
+TEST_XN (sel_z0_pn7_z0_z4, svmfloat8x4_t, z0,
+	 svsel_mf8_x4 (pn7, z0, z4),
+	 svsel (pn7, z0, z4))
+
+/*
+** sel_z0_pn8_z4_z28:
+**	sel	{z0\.b - z3\.b}, pn8, {z4\.b - z7\.b}, {z28\.b - z31\.b}
+**	ret
+*/
+TEST_XN (sel_z0_pn8_z4_z28, svmfloat8x4_t, z0,
+	 svsel_mf8_x4 (pn8, z4, z28),
+	 svsel (pn8, z4, z28))
+
+/*
+** sel_z4_pn8_z18_z0:
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	sel	{z4\.b - z7\.b}, pn8, {[^}]+}, {z0\.b - z3\.b}
+**	ret
+*/
+TEST_XN (sel_z4_pn8_z18_z0, svmfloat8x4_t, z4,
+	 svsel_mf8_x4 (pn8, z18, z0),
+	 svsel (pn8, z18, z0))
+
+/*
+** sel_z18_pn15_z28_z4:
+**	sel	{[^}]+}, pn15, {z28\.b - z31\.b}, {z4\.b - z7\.b}
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	ret
+*/
+TEST_XN (sel_z18_pn15_z28_z4, svmfloat8x4_t, z18,
+	 svsel_mf8_x4 (pn15, z28, z4),
+	 svsel (pn15, z28, z4))
+
+/*
+** sel_z18_pn8_z18_z4:
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	sel	{[^}]+}, pn8, {[^}]+}, {z4\.b - z7\.b}
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	ret
+*/
+TEST_XN (sel_z18_pn8_z18_z4, svmfloat8x4_t, z18,
+	 svsel_mf8_x4 (pn8, z18, z4),
+	 svsel (pn8, z18, z4))
+
+/*
+** sel_z23_pn15_z0_z18:
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	sel	[^\n]+, pn15, {z0\.b - z3\.b}, {[^}]+}
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	ret
+*/
+TEST_XN (sel_z23_pn15_z0_z18, svmfloat8x4_t, z23,
+	 svsel_mf8_x4 (pn15, z0, z18),
+	 svsel (pn15, z0, z18))
diff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/st1_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/st1_mf8_x2.c
new file mode 100644
index 00000000000..c778c139e8e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/st1_mf8_x2.c
@@ -0,0 +1,262 @@
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" { target { ! ilp32 } } } } */
+
+#include "test_sme2_acle.h"
+
+/*
+** st1_mf8_base:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_base, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0, z0),
+		  svst1 (pn8, x0, z0))
+
+/*
+** st1_mf8_index:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, x1\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_index, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0 + x1, z0),
+		  svst1 (pn8, x0 + x1, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_mf8_1:
+**	incb	x0
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_1, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0 + svcntb (), z0),
+		  svst1 (pn8, x0 + svcntb (), z0))
+
+/*
+** st1_mf8_2:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #2, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_2, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0 + svcntb () * 2, z0),
+		  svst1 (pn8, x0 + svcntb () * 2, z0))
+
+/*
+** st1_mf8_14:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #14, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_14, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0 + svcntb () * 14, z0),
+		  svst1 (pn8, x0 + svcntb () * 14, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_mf8_16:
+**	incb	x0, all, mul #16
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_16, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0 + svcntb () * 16, z0),
+		  svst1 (pn8, x0 + svcntb () * 16, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_mf8_m1:
+**	decb	x0
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m1, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0 - svcntb (), z0),
+		  svst1 (pn8, x0 - svcntb (), z0))
+
+/*
+** st1_mf8_m2:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #-2, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m2, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0 - svcntb () * 2, z0),
+		  svst1 (pn8, x0 - svcntb () * 2, z0))
+
+/*
+** st1_mf8_m16:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #-16, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m16, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0 - svcntb () * 16, z0),
+		  svst1 (pn8, x0 - svcntb () * 16, z0))
+
+/*
+** st1_mf8_m18:
+**	addvl	(x[0-9]+), x0, #-18
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[\1\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m18, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0 - svcntb () * 18, z0),
+		  svst1 (pn8, x0 - svcntb () * 18, z0))
+
+/*
+** st1_mf8_z17:
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	st1b	{z[^\n]+}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_z17, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0, z17),
+		  svst1 (pn8, x0, z17))
+
+/*
+** st1_mf8_z22:
+**	st1b	{z22\.b(?: - |, )z23\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_z22, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0, z22),
+		  svst1 (pn8, x0, z22))
+
+/*
+** st1_mf8_z28:
+**	st1b	{z28\.b(?: - |, )z29\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_z28, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn8, x0, z28),
+		  svst1 (pn8, x0, z28))
+
+/*
+** st1_mf8_pn0:
+**	mov	p([89]|1[0-5])\.b, p0\.b
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn\1, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_pn0, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn0, x0, z0),
+		  svst1 (pn0, x0, z0))
+
+/*
+** st1_mf8_pn7:
+**	mov	p([89]|1[0-5])\.b, p7\.b
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn\1, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_pn7, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn7, x0, z0),
+		  svst1 (pn7, x0, z0))
+
+/*
+** st1_mf8_pn15:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn15, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_pn15, svmfloat8x2_t, mfloat8_t,
+		  svst1_mf8_x2 (pn15, x0, z0),
+		  svst1 (pn15, x0, z0))
+
+/*
+** st1_vnum_mf8_0:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_0, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, 0, z0),
+		  svst1_vnum (pn8, x0, 0, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_vnum_mf8_1:
+**	incb	x0
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_1, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, 1, z0),
+		  svst1_vnum (pn8, x0, 1, z0))
+
+/*
+** st1_vnum_mf8_2:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #2, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_2, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, 2, z0),
+		  svst1_vnum (pn8, x0, 2, z0))
+
+/*
+** st1_vnum_mf8_14:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #14, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_14, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, 14, z0),
+		  svst1_vnum (pn8, x0, 14, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_vnum_mf8_16:
+**	incb	x0, all, mul #16
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_16, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, 16, z0),
+		  svst1_vnum (pn8, x0, 16, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_vnum_mf8_m1:
+**	decb	x0
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m1, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, -1, z0),
+		  svst1_vnum (pn8, x0, -1, z0))
+
+/*
+** st1_vnum_mf8_m2:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #-2, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m2, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, -2, z0),
+		  svst1_vnum (pn8, x0, -2, z0))
+
+/*
+** st1_vnum_mf8_m16:
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #-16, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m16, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, -16, z0),
+		  svst1_vnum (pn8, x0, -16, z0))
+
+/*
+** st1_vnum_mf8_m18:
+**	addvl	(x[0-9]+), x0, #-18
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[\1\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m18, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, -18, z0),
+		  svst1_vnum (pn8, x0, -18, z0))
+
+/*
+** st1_vnum_mf8_x1:
+**	cntb	(x[0-9]+)
+** (
+**	madd	(x[0-9]+), (?:x1, \1|\1, x1), x0
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[\2\]
+** |
+**	mul	(x[0-9]+), (?:x1, \1|\1, x1)
+**	st1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, \3\]
+** )
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_x1, svmfloat8x2_t, mfloat8_t,
+		  svst1_vnum_mf8_x2 (pn8, x0, x1, z0),
+		  svst1_vnum (pn8, x0, x1, z0))
diff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/st1_mf8_x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/st1_mf8_x4.c
new file mode 100644
index 00000000000..5f60757f07b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/st1_mf8_x4.c
@@ -0,0 +1,354 @@
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" { target { ! ilp32 } } } } */
+
+#include "test_sme2_acle.h"
+
+/*
+** st1_mf8_base:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_base, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0, z0),
+		  svst1 (pn8, x0, z0))
+
+/*
+** st1_mf8_index:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, x1\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_index, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 + x1, z0),
+		  svst1 (pn8, x0 + x1, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_mf8_1:
+**	incb	x0
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_1, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 + svcntb (), z0),
+		  svst1 (pn8, x0 + svcntb (), z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_mf8_2:
+**	incb	x0, all, mul #2
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_2, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 + svcntb () * 2, z0),
+		  svst1 (pn8, x0 + svcntb () * 2, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_mf8_3:
+**	incb	x0, all, mul #3
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_3, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 + svcntb () * 3, z0),
+		  svst1 (pn8, x0 + svcntb () * 3, z0))
+
+/*
+** st1_mf8_4:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, #4, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_4, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 + svcntb () * 4, z0),
+		  svst1 (pn8, x0 + svcntb () * 4, z0))
+
+/*
+** st1_mf8_28:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, #28, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_28, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 + svcntb () * 28, z0),
+		  svst1 (pn8, x0 + svcntb () * 28, z0))
+
+/*
+** st1_mf8_32:
+**	[^{]*
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, x[0-9]+\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_32, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 + svcntb () * 32, z0),
+		  svst1 (pn8, x0 + svcntb () * 32, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_mf8_m1:
+**	decb	x0
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m1, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 - svcntb (), z0),
+		  svst1 (pn8, x0 - svcntb (), z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_mf8_m2:
+**	decb	x0, all, mul #2
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m2, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 - svcntb () * 2, z0),
+		  svst1 (pn8, x0 - svcntb () * 2, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_mf8_m3:
+**	decb	x0, all, mul #3
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m3, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 - svcntb () * 3, z0),
+		  svst1 (pn8, x0 - svcntb () * 3, z0))
+
+/*
+** st1_mf8_m4:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, #-4, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m4, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 - svcntb () * 4, z0),
+		  svst1 (pn8, x0 - svcntb () * 4, z0))
+
+/*
+** st1_mf8_m32:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, #-32, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m32, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 - svcntb () * 32, z0),
+		  svst1 (pn8, x0 - svcntb () * 32, z0))
+
+/*
+** st1_mf8_m36:
+**	[^{]*
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, x[0-9]+\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_m36, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0 - svcntb () * 36, z0),
+		  svst1 (pn8, x0 - svcntb () * 36, z0))
+
+/*
+** st1_mf8_z17:
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	st1b	{z[^\n]+}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_z17, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0, z17),
+		  svst1 (pn8, x0, z17))
+
+/*
+** st1_mf8_z22:
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	st1b	{z[^\n]+}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_z22, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0, z22),
+		  svst1 (pn8, x0, z22))
+
+/*
+** st1_mf8_z28:
+**	st1b	{z28\.b - z31\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_z28, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn8, x0, z28),
+		  svst1 (pn8, x0, z28))
+
+/*
+** st1_mf8_pn0:
+**	mov	p([89]|1[0-5])\.b, p0\.b
+**	st1b	{z0\.b - z3\.b}, pn\1, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_pn0, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn0, x0, z0),
+		  svst1 (pn0, x0, z0))
+
+/*
+** st1_mf8_pn7:
+**	mov	p([89]|1[0-5])\.b, p7\.b
+**	st1b	{z0\.b - z3\.b}, pn\1, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_pn7, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn7, x0, z0),
+		  svst1 (pn7, x0, z0))
+
+/*
+** st1_mf8_pn15:
+**	st1b	{z0\.b - z3\.b}, pn15, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_mf8_pn15, svmfloat8x4_t, mfloat8_t,
+		  svst1_mf8_x4 (pn15, x0, z0),
+		  svst1 (pn15, x0, z0))
+
+/*
+** st1_vnum_mf8_0:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_0, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, 0, z0),
+		  svst1_vnum (pn8, x0, 0, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_vnum_mf8_1:
+**	incb	x0
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_1, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, 1, z0),
+		  svst1_vnum (pn8, x0, 1, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_vnum_mf8_2:
+**	incb	x0, all, mul #2
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_2, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, 2, z0),
+		  svst1_vnum (pn8, x0, 2, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_vnum_mf8_3:
+**	incb	x0, all, mul #3
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_3, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, 3, z0),
+		  svst1_vnum (pn8, x0, 3, z0))
+
+/*
+** st1_vnum_mf8_4:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, #4, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_4, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, 4, z0),
+		  svst1_vnum (pn8, x0, 4, z0))
+
+/*
+** st1_vnum_mf8_28:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, #28, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_28, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, 28, z0),
+		  svst1_vnum (pn8, x0, 28, z0))
+
+/*
+** st1_vnum_mf8_32:
+**	[^{]*
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, x[0-9]+\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_32, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, 32, z0),
+		  svst1_vnum (pn8, x0, 32, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_vnum_mf8_m1:
+**	decb	x0
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m1, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, -1, z0),
+		  svst1_vnum (pn8, x0, -1, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_vnum_mf8_m2:
+**	decb	x0, all, mul #2
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m2, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, -2, z0),
+		  svst1_vnum (pn8, x0, -2, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** st1_vnum_mf8_m3:
+**	decb	x0, all, mul #3
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m3, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, -3, z0),
+		  svst1_vnum (pn8, x0, -3, z0))
+
+/*
+** st1_vnum_mf8_m4:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, #-4, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m4, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, -4, z0),
+		  svst1_vnum (pn8, x0, -4, z0))
+
+/*
+** st1_vnum_mf8_m32:
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, #-32, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m32, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, -32, z0),
+		  svst1_vnum (pn8, x0, -32, z0))
+
+/*
+** st1_vnum_mf8_m36:
+**	[^{]*
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, x[0-9]+\]
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_m36, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, -36, z0),
+		  svst1_vnum (pn8, x0, -36, z0))
+
+/*
+** st1_vnum_mf8_x1:
+**	cntb	(x[0-9]+)
+** (
+**	madd	(x[0-9]+), (?:x1, \1|\1, x1), x0
+**	st1b	{z0\.b - z3\.b}, pn8, \[\2\]
+** |
+**	mul	(x[0-9]+), (?:x1, \1|\1, x1)
+**	st1b	{z0\.b - z3\.b}, pn8, \[x0, \3\]
+** )
+**	ret
+*/
+TEST_STORE_COUNT (st1_vnum_mf8_x1, svmfloat8x4_t, mfloat8_t,
+		  svst1_vnum_mf8_x4 (pn8, x0, x1, z0),
+		  svst1_vnum (pn8, x0, x1, z0))
diff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x2.c
new file mode 100644
index 00000000000..f9a90fbe9b0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x2.c
@@ -0,0 +1,262 @@
+/* { dg-final { check-function-bodies "**" "" "-DCHECK_ASM" { target { ! ilp32 } } } } */
+
+#include "test_sme2_acle.h"
+
+/*
+** stnt1_mf8_base:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_base, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0, z0),
+		  svstnt1 (pn8, x0, z0))
+
+/*
+** stnt1_mf8_index:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, x1\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_index, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0 + x1, z0),
+		  svstnt1 (pn8, x0 + x1, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** stnt1_mf8_1:
+**	incb	x0
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_1, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0 + svcntb (), z0),
+		  svstnt1 (pn8, x0 + svcntb (), z0))
+
+/*
+** stnt1_mf8_2:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #2, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_2, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0 + svcntb () * 2, z0),
+		  svstnt1 (pn8, x0 + svcntb () * 2, z0))
+
+/*
+** stnt1_mf8_14:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #14, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_14, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0 + svcntb () * 14, z0),
+		  svstnt1 (pn8, x0 + svcntb () * 14, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** stnt1_mf8_16:
+**	incb	x0, all, mul #16
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_16, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0 + svcntb () * 16, z0),
+		  svstnt1 (pn8, x0 + svcntb () * 16, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** stnt1_mf8_m1:
+**	decb	x0
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_m1, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0 - svcntb (), z0),
+		  svstnt1 (pn8, x0 - svcntb (), z0))
+
+/*
+** stnt1_mf8_m2:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #-2, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_m2, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0 - svcntb () * 2, z0),
+		  svstnt1 (pn8, x0 - svcntb () * 2, z0))
+
+/*
+** stnt1_mf8_m16:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #-16, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_m16, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0 - svcntb () * 16, z0),
+		  svstnt1 (pn8, x0 - svcntb () * 16, z0))
+
+/*
+** stnt1_mf8_m18:
+**	addvl	(x[0-9]+), x0, #-18
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[\1\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_m18, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0 - svcntb () * 18, z0),
+		  svstnt1 (pn8, x0 - svcntb () * 18, z0))
+
+/*
+** stnt1_mf8_z17:
+**	mov	[^\n]+
+**	mov	[^\n]+
+**	stnt1b	{z[^\n]+}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_z17, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0, z17),
+		  svstnt1 (pn8, x0, z17))
+
+/*
+** stnt1_mf8_z22:
+**	stnt1b	{z22\.b(?: - |, )z23\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_z22, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0, z22),
+		  svstnt1 (pn8, x0, z22))
+
+/*
+** stnt1_mf8_z28:
+**	stnt1b	{z28\.b(?: - |, )z29\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_z28, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn8, x0, z28),
+		  svstnt1 (pn8, x0, z28))
+
+/*
+** stnt1_mf8_pn0:
+**	mov	p([89]|1[0-5])\.b, p0\.b
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn\1, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_pn0, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn0, x0, z0),
+		  svstnt1 (pn0, x0, z0))
+
+/*
+** stnt1_mf8_pn7:
+**	mov	p([89]|1[0-5])\.b, p7\.b
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn\1, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_pn7, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn7, x0, z0),
+		  svstnt1 (pn7, x0, z0))
+
+/*
+** stnt1_mf8_pn15:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn15, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_mf8_pn15, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_mf8_x2 (pn15, x0, z0),
+		  svstnt1 (pn15, x0, z0))
+
+/*
+** stnt1_vnum_mf8_0:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_0, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, 0, z0),
+		  svstnt1_vnum (pn8, x0, 0, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** stnt1_vnum_mf8_1:
+**	incb	x0
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_1, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, 1, z0),
+		  svstnt1_vnum (pn8, x0, 1, z0))
+
+/*
+** stnt1_vnum_mf8_2:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #2, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_2, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, 2, z0),
+		  svstnt1_vnum (pn8, x0, 2, z0))
+
+/*
+** stnt1_vnum_mf8_14:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #14, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_14, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, 14, z0),
+		  svstnt1_vnum (pn8, x0, 14, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** stnt1_vnum_mf8_16:
+**	incb	x0, all, mul #16
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_16, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, 16, z0),
+		  svstnt1_vnum (pn8, x0, 16, z0))
+
+/* Moving the constant into a register would also be OK.  */
+/*
+** stnt1_vnum_mf8_m1:
+**	decb	x0
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_m1, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, -1, z0),
+		  svstnt1_vnum (pn8, x0, -1, z0))
+
+/*
+** stnt1_vnum_mf8_m2:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #-2, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_m2, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, -2, z0),
+		  svstnt1_vnum (pn8, x0, -2, z0))
+
+/*
+** stnt1_vnum_mf8_m16:
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, #-16, mul vl\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_m16, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, -16, z0),
+		  svstnt1_vnum (pn8, x0, -16, z0))
+
+/*
+** stnt1_vnum_mf8_m18:
+**	addvl	(x[0-9]+), x0, #-18
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[\1\]
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_m18, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, -18, z0),
+		  svstnt1_vnum (pn8, x0, -18, z0))
+
+/*
+** stnt1_vnum_mf8_x1:
+**	cntb	(x[0-9]+)
+** (
+**	madd	(x[0-9]+), (?:x1, \1|\1, x1), x0
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[\2\]
+** |
+**	mul	(x[0-9]+), (?:x1, \1|\1, x1)
+**	stnt1b	{z0\.b(?: - |, )z1\.b}, pn8, \[x0, \3\]
+** )
+**	ret
+*/
+TEST_STORE_COUNT (stnt1_vnum_mf8_x1, svmfloat8x2_t, mfloat8_t,
+		  svstnt1_vnum_mf8_x2 (pn8, x0, x1, z0),
+		  svstnt1_vnum (pn8, x0, x1, z0))
diff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x4.c 
b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x4.c\nnew file mode 100644\nindex 00000000000..a204f796982\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/stnt1_mf8_x4.c\n@@ -0,0 +1,354 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! ilp32 } } } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** stnt1_mf8_base:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_base, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0, z0),\n+\t\t  svstnt1 (pn8, x0, z0))\n+\n+/*\n+** stnt1_mf8_index:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_index, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + x1, z0),\n+\t\t  svstnt1 (pn8, x0 + x1, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_mf8_1:\n+**\tincb\tx0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb (), z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb (), z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 2, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 2, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 3, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 3, z0))\n+\n+/*\n+** stnt1_mf8_4:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 4, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 4, z0))\n+\n+/*\n+** stnt1_mf8_28:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 28, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 28, z0))\n+\n+/*\n+** stnt1_mf8_32:\n+**\t[^{]*\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 32, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 32, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_mf8_m1:\n+**\tdecb\tx0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb (), z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb (), z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 2, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 2, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 3, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 3, z0))\n+\n+/*\n+** stnt1_mf8_m4:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 4, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 4, z0))\n+\n+/*\n+** stnt1_mf8_m32:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 32, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 32, z0))\n+\n+/*\n+** stnt1_mf8_m36:\n+**\t[^{]*\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 36, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 36, z0))\n+\n+/*\n+** stnt1_mf8_z17:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tstnt1b\t{z[^\\n]+}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_z17, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0, z17),\n+\t\t  svstnt1 (pn8, x0, z17))\n+\n+/*\n+** stnt1_mf8_z22:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tstnt1b\t{z[^\\n]+}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_z22, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0, z22),\n+\t\t  svstnt1 (pn8, x0, z22))\n+\n+/*\n+** stnt1_mf8_z28:\n+**\tstnt1b\t{z28\\.b - z31\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_z28, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0, z28),\n+\t\t  svstnt1 (pn8, x0, z28))\n+\n+/*\n+** stnt1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, p0\\.b\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn\\1, 
\\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_pn0, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn0, x0, z0),\n+\t\t  svstnt1 (pn0, x0, z0))\n+\n+/*\n+** stnt1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn\\1, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_pn7, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn7, x0, z0),\n+\t\t  svstnt1 (pn7, x0, z0))\n+\n+/*\n+** stnt1_mf8_pn15:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn15, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_pn15, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn15, x0, z0),\n+\t\t  svstnt1 (pn15, x0, z0))\n+\n+/*\n+** stnt1_vnum_mf8_0:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_0, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 0, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 0, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 1, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 1, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 2, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 2, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_vnum_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 3, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 3, z0))\n+\n+/*\n+** stnt1_vnum_mf8_4:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 4, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 4, z0))\n+\n+/*\n+** stnt1_vnum_mf8_28:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 28, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 28, z0))\n+\n+/*\n+** stnt1_vnum_mf8_32:\n+**\t[^{]*\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 32, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 32, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -1, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -1, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -2, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -2, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_vnum_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -3, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -3, z0))\n+\n+/*\n+** stnt1_vnum_mf8_m4:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -4, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -4, z0))\n+\n+/*\n+** stnt1_vnum_mf8_m32:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -32, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -32, z0))\n+\n+/*\n+** stnt1_vnum_mf8_m36:\n+**\t[^{]*\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -36, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -36, z0))\n+\n+/*\n+** stnt1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_x1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, x1, z0),\n+\t\t  svstnt1_vnum (pn8, x0, x1, z0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x2.c\nnew file mode 100644\nindex 00000000000..f107b4c7a18\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x2.c\n@@ -0,0 +1,77 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** uzp_z0_z0:\n+**\tuzp\t{z0\\.b - z1\\.b}, z0\\.b, z1\\.b\n+**\tret\n+*/\n+TEST_XN 
(uzp_z0_z0, svmfloat8x2_t, z0,\n+\t svuzp_mf8_x2 (z0),\n+\t svuzp (z0))\n+\n+/*\n+** uzp_z0_z4:\n+**\tuzp\t{z0\\.b - z1\\.b}, z4\\.b, z5\\.b\n+**\tret\n+*/\n+TEST_XN (uzp_z0_z4, svmfloat8x2_t, z0,\n+\t svuzp_mf8_x2 (z4),\n+\t svuzp (z4))\n+\n+/*\n+** uzp_z4_z18:\n+**\tuzp\t{z4\\.b - z5\\.b}, z18\\.b, z19\\.b\n+**\tret\n+*/\n+TEST_XN (uzp_z4_z18, svmfloat8x2_t, z4,\n+\t svuzp_mf8_x2 (z18),\n+\t svuzp (z18))\n+\n+/*\n+** uzp_z18_z23:\n+**\tuzp\t{z18\\.b - z19\\.b}, z23\\.b, z24\\.b\n+**\tret\n+*/\n+TEST_XN (uzp_z18_z23, svmfloat8x2_t, z18,\n+\t svuzp_mf8_x2 (z23),\n+\t svuzp (z23))\n+\n+/*\n+** uzp_z23_z28:\n+**\tuzp\t[^\\n]+, z28\\.b, z29\\.b\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (uzp_z23_z28, svmfloat8x2_t, z23,\n+\t svuzp_mf8_x2 (z28),\n+\t svuzp (z28))\n+\n+/*\n+** uzp_z28_z0:\n+**\tuzp\t{z28\\.b - z29\\.b}, z0\\.b, z1\\.b\n+**\tret\n+*/\n+TEST_XN (uzp_z28_z0, svmfloat8x2_t, z28,\n+\t svuzp_mf8_x2 (z0),\n+\t svuzp (z0))\n+\n+/*\n+** uzp_z28_z0_z23:\t{ xfail aarch64_big_endian }\n+**\tuzp\t{z28\\.b - z29\\.b}, z0\\.b, z23\\.b\n+**\tret\n+*/\n+TEST_XN (uzp_z28_z0_z23, svmfloat8x2_t, z28,\n+\t svuzp_mf8_x2 (svcreate2 (svget2 (z0, 0), svget2 (z23, 0))),\n+\t svuzp (svcreate2 (svget2 (z0, 0), svget2 (z23, 0))))\n+\n+/*\n+** uzp_z28_z5_z19:\n+**\tuzp\t{z28\\.b - z29\\.b}, z5\\.b, z19\\.b\n+**\tret\n+*/\n+TEST_XN (uzp_z28_z5_z19, svmfloat8x2_t, z28,\n+\t svuzp_mf8_x2 (svcreate2 (svget2 (z4, 1), svget2 (z18, 1))),\n+\t svuzp (svcreate2 (svget2 (z4, 1), svget2 (z18, 1))))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x4.c\nnew file mode 100644\nindex 00000000000..bbaf26c85a5\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzp_mf8_x4.c\n@@ -0,0 +1,73 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** uzp_z0_z0:\n+**\tuzp\t{z0\\.b - z3\\.b}, {z0\\.b - 
z3\\.b}\n+**\tret\n+*/\n+TEST_XN (uzp_z0_z0, svmfloat8x4_t, z0,\n+\t svuzp_mf8_x4 (z0),\n+\t svuzp (z0))\n+\n+/*\n+** uzp_z0_z4:\n+**\tuzp\t{z0\\.b - z3\\.b}, {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_XN (uzp_z0_z4, svmfloat8x4_t, z0,\n+\t svuzp_mf8_x4 (z4),\n+\t svuzp (z4))\n+\n+/*\n+** uzp_z4_z18:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tuzp\t{z4\\.b - z7\\.b}, [^\\n]+\n+**\tret\n+*/\n+TEST_XN (uzp_z4_z18, svmfloat8x4_t, z4,\n+\t svuzp_mf8_x4 (z18),\n+\t svuzp (z18))\n+\n+/*\n+** uzp_z18_z23:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tuzp\t{z[^\\n]+}, {z[^\\n]+}\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (uzp_z18_z23, svmfloat8x4_t, z18,\n+\t svuzp_mf8_x4 (z23),\n+\t svuzp (z23))\n+\n+/*\n+** uzp_z23_z28:\n+**\tuzp\t[^\\n]+, {z28\\.b - z31\\.b}\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (uzp_z23_z28, svmfloat8x4_t, z23,\n+\t svuzp_mf8_x4 (z28),\n+\t svuzp (z28))\n+\n+/*\n+** uzp_z28_z0:\n+**\tuzp\t{z28\\.b - z31\\.b}, {z0\\.b - z3\\.b}\n+**\tret\n+*/\n+TEST_XN (uzp_z28_z0, svmfloat8x4_t, z28,\n+\t svuzp_mf8_x4 (z0),\n+\t svuzp (z0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x2.c\nnew file mode 100644\nindex 00000000000..cef514c46e8\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x2.c\n@@ -0,0 +1,77 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** uzpq_z0_z0:\n+**\tuzp\t{z0\\.q - z1\\.q}, z0\\.q, z1\\.q\n+**\tret\n+*/\n+TEST_XN (uzpq_z0_z0, svmfloat8x2_t, z0,\n+\t svuzpq_mf8_x2 (z0),\n+\t svuzpq (z0))\n+\n+/*\n+** uzpq_z0_z4:\n+**\tuzp\t{z0\\.q - z1\\.q}, z4\\.q, z5\\.q\n+**\tret\n+*/\n+TEST_XN (uzpq_z0_z4, svmfloat8x2_t, z0,\n+\t svuzpq_mf8_x2 (z4),\n+\t svuzpq 
(z4))\n+\n+/*\n+** uzpq_z4_z18:\n+**\tuzp\t{z4\\.q - z5\\.q}, z18\\.q, z19\\.q\n+**\tret\n+*/\n+TEST_XN (uzpq_z4_z18, svmfloat8x2_t, z4,\n+\t svuzpq_mf8_x2 (z18),\n+\t svuzpq (z18))\n+\n+/*\n+** uzpq_z18_z23:\n+**\tuzp\t{z18\\.q - z19\\.q}, z23\\.q, z24\\.q\n+**\tret\n+*/\n+TEST_XN (uzpq_z18_z23, svmfloat8x2_t, z18,\n+\t svuzpq_mf8_x2 (z23),\n+\t svuzpq (z23))\n+\n+/*\n+** uzpq_z23_z28:\n+**\tuzp\t[^\\n]+, z28\\.q, z29\\.q\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (uzpq_z23_z28, svmfloat8x2_t, z23,\n+\t svuzpq_mf8_x2 (z28),\n+\t svuzpq (z28))\n+\n+/*\n+** uzpq_z28_z0:\n+**\tuzp\t{z28\\.q - z29\\.q}, z0\\.q, z1\\.q\n+**\tret\n+*/\n+TEST_XN (uzpq_z28_z0, svmfloat8x2_t, z28,\n+\t svuzpq_mf8_x2 (z0),\n+\t svuzpq (z0))\n+\n+/*\n+** uzpq_z28_z0_z23:\t{ xfail aarch64_big_endian }\n+**\tuzp\t{z28\\.q - z29\\.q}, z0\\.q, z23\\.q\n+**\tret\n+*/\n+TEST_XN (uzpq_z28_z0_z23, svmfloat8x2_t, z28,\n+\t svuzpq_mf8_x2 (svcreate2 (svget2 (z0, 0), svget2 (z23, 0))),\n+\t svuzpq (svcreate2 (svget2 (z0, 0), svget2 (z23, 0))))\n+\n+/*\n+** uzpq_z28_z5_z19:\n+**\tuzp\t{z28\\.q - z29\\.q}, z5\\.q, z19\\.q\n+**\tret\n+*/\n+TEST_XN (uzpq_z28_z5_z19, svmfloat8x2_t, z28,\n+\t svuzpq_mf8_x2 (svcreate2 (svget2 (z4, 1), svget2 (z18, 1))),\n+\t svuzpq (svcreate2 (svget2 (z4, 1), svget2 (z18, 1))))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x4.c\nnew file mode 100644\nindex 00000000000..6b348c95f83\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/uzpq_mf8_x4.c\n@@ -0,0 +1,73 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** uzpq_z0_z0:\n+**\tuzp\t{z0\\.q - z3\\.q}, {z0\\.q - z3\\.q}\n+**\tret\n+*/\n+TEST_XN (uzpq_z0_z0, svmfloat8x4_t, z0,\n+\t svuzpq_mf8_x4 (z0),\n+\t svuzpq (z0))\n+\n+/*\n+** uzpq_z0_z4:\n+**\tuzp\t{z0\\.q - z3\\.q}, {z4\\.q - z7\\.q}\n+**\tret\n+*/\n+TEST_XN (uzpq_z0_z4, 
svmfloat8x4_t, z0,\n+\t svuzpq_mf8_x4 (z4),\n+\t svuzpq (z4))\n+\n+/*\n+** uzpq_z4_z18:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tuzp\t{z4\\.q - z7\\.q}, [^\\n]+\n+**\tret\n+*/\n+TEST_XN (uzpq_z4_z18, svmfloat8x4_t, z4,\n+\t svuzpq_mf8_x4 (z18),\n+\t svuzpq (z18))\n+\n+/*\n+** uzpq_z18_z23:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tuzp\t{z[^\\n]+}, {z[^\\n]+}\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (uzpq_z18_z23, svmfloat8x4_t, z18,\n+\t svuzpq_mf8_x4 (z23),\n+\t svuzpq (z23))\n+\n+/*\n+** uzpq_z23_z28:\n+**\tuzp\t[^\\n]+, {z28\\.q - z31\\.q}\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (uzpq_z23_z28, svmfloat8x4_t, z23,\n+\t svuzpq_mf8_x4 (z28),\n+\t svuzpq (z28))\n+\n+/*\n+** uzpq_z28_z0:\n+**\tuzp\t{z28\\.q - z31\\.q}, {z0\\.q - z3\\.q}\n+**\tret\n+*/\n+TEST_XN (uzpq_z28_z0, svmfloat8x4_t, z28,\n+\t svuzpq_mf8_x4 (z0),\n+\t svuzpq (z0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg2.c\nindex a2af846b60b..8df504cb423 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg2.c\n@@ -22,6 +22,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_1, svuint8x2_t,\n \t    svwrite_hor_za8_u8_vg2 (0, 1, z4),\n \t    svwrite_hor_za8_u8_vg2 (0, 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmova\tza0h\\.b\\[\\1, 0:1\\], {z4\\.b - z5\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_1, svmfloat8x2_t,\n+\t    svwrite_hor_za8_mf8_vg2 (0, 1, z4),\n+\t    svwrite_hor_za8_mf8_vg2 (0, 1, z4))\n+\n /*\n ** write_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -50,6 +60,15 @@ TEST_ZA_XN (write_za8_u8_z18_0_w15, svuint8x2_t,\n \t    svwrite_hor_za8_u8_vg2 (0, w15, z18),\n \t    
svwrite_hor_za8_u8_vg2 (0, w15, z18))\n \n+/*\n+** write_za8_mf8_z18_0_w15:\n+**\tmova\tza0h\\.b\\[w15, 0:1\\], {z18\\.b - z19\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z18_0_w15, svmfloat8x2_t,\n+\t    svwrite_hor_za8_mf8_vg2 (0, w15, z18),\n+\t    svwrite_hor_za8_mf8_vg2 (0, w15, z18))\n+\n /*\n ** write_za8_s8_z23_0_w12p14:\n **\tmov\t[^\\n]+\n@@ -71,6 +90,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12p1, svuint8x2_t,\n \t    svwrite_hor_za8_u8_vg2 (0, w12 + 1, z4),\n \t    svwrite_hor_za8_u8_vg2 (0, w12 + 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmova\tza0h\\.b\\[\\1, 0:1\\], {z4\\.b - z5\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w12p1, svmfloat8x2_t,\n+\t    svwrite_hor_za8_mf8_vg2 (0, w12 + 1, z4),\n+\t    svwrite_hor_za8_mf8_vg2 (0, w12 + 1, z4))\n+\n /*\n ** write_za8_s8_z28_0_w12p2:\n **\tmova\tza0h\\.b\\[w12, 2:3\\], {z28\\.b - z29\\.b}\n@@ -90,6 +119,16 @@ TEST_ZA_XN (write_za8_u8_z0_0_w15p3, svuint8x2_t,\n \t    svwrite_hor_za8_u8_vg2 (0, w15 + 3, z0),\n \t    svwrite_hor_za8_u8_vg2 (0, w15 + 3, z0))\n \n+/*\n+** write_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmova\tza0h\\.b\\[\\1, 0:1\\], {z0\\.b - z1\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z0_0_w15p3, svmfloat8x2_t,\n+\t    svwrite_hor_za8_mf8_vg2 (0, w15 + 3, z0),\n+\t    svwrite_hor_za8_mf8_vg2 (0, w15 + 3, z0))\n+\n /*\n ** write_za8_u8_z4_0_w15p12:\n **\tmova\tza0h\\.b\\[w15, 12:13\\], {z4\\.b - z5\\.b}\n@@ -99,6 +138,15 @@ TEST_ZA_XN (write_za8_u8_z4_0_w15p12, svuint8x2_t,\n \t    svwrite_hor_za8_u8_vg2 (0, w15 + 12, z4),\n \t    svwrite_hor_za8_u8_vg2 (0, w15 + 12, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w15p12:\n+**\tmova\tza0h\\.b\\[w15, 12:13\\], {z4\\.b - z5\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w15p12, svmfloat8x2_t,\n+\t    svwrite_hor_za8_mf8_vg2 (0, w15 + 12, z4),\n+\t    svwrite_hor_za8_mf8_vg2 (0, w15 + 12, z4))\n+\n /*\n ** write_za8_u8_z28_0_w12p15:\n **\tadd\t(w[0-9]+), w12, #?15\n@@ -109,6 
+157,16 @@ TEST_ZA_XN (write_za8_u8_z28_0_w12p15, svuint8x2_t,\n \t    svwrite_hor_za8_u8_vg2 (0, w12 + 15, z28),\n \t    svwrite_hor_za8_u8_vg2 (0, w12 + 15, z28))\n \n+/*\n+** write_za8_mf8_z28_0_w12p15:\n+**\tadd\t(w[0-9]+), w12, #?15\n+**\tmova\tza0h\\.b\\[\\1, 0:1\\], {z28\\.b - z29\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z28_0_w12p15, svmfloat8x2_t,\n+\t    svwrite_hor_za8_mf8_vg2 (0, w12 + 15, z28),\n+\t    svwrite_hor_za8_mf8_vg2 (0, w12 + 15, z28))\n+\n /*\n ** write_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -129,6 +187,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12m1, svuint8x2_t,\n \t    svwrite_hor_za8_u8_vg2 (0, w12 - 1, z4),\n \t    svwrite_hor_za8_u8_vg2 (0, w12 - 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmova\tza0h\\.b\\[\\1, 0:1\\], {z4\\.b - z5\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w12m1, svmfloat8x2_t,\n+\t    svwrite_hor_za8_mf8_vg2 (0, w12 - 1, z4),\n+\t    svwrite_hor_za8_mf8_vg2 (0, w12 - 1, z4))\n+\n /*\n ** write_za8_u8_z18_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -138,3 +206,13 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12m1, svuint8x2_t,\n TEST_ZA_XN (write_za8_u8_z18_0_w16, svuint8x2_t,\n \t    svwrite_hor_za8_u8_vg2 (0, w16, z18),\n \t    svwrite_hor_za8_u8_vg2 (0, w16, z18))\n+\n+/*\n+** write_za8_mf8_z18_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmova\tza0h\\.b\\[\\1, 0:1\\], {z18\\.b - z19\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z18_0_w16, svmfloat8x2_t,\n+\t    svwrite_hor_za8_mf8_vg2 (0, w16, z18),\n+\t    svwrite_hor_za8_mf8_vg2 (0, w16, z18))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg4.c\nindex e333ce699e3..70a2e95db96 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg4.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_hor_za8_vg4.c\n@@ -22,6 +22,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_1, svuint8x4_t,\n \t    svwrite_hor_za8_u8_vg4 
(0, 1, z4),\n \t    svwrite_hor_za8_u8_vg4 (0, 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmova\tza0h\\.b\\[\\1, 0:3\\], {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_1, svmfloat8x4_t,\n+\t    svwrite_hor_za8_mf8_vg4 (0, 1, z4),\n+\t    svwrite_hor_za8_mf8_vg4 (0, 1, z4))\n+\n /*\n ** write_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -54,6 +64,19 @@ TEST_ZA_XN (write_za8_u8_z18_0_w15, svuint8x4_t,\n \t    svwrite_hor_za8_u8_vg4 (0, w15, z18),\n \t    svwrite_hor_za8_u8_vg4 (0, w15, z18))\n \n+/*\n+** write_za8_mf8_z18_0_w15:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmova\tza0h\\.b\\[w15, 0:3\\], {[^\\n]+}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z18_0_w15, svmfloat8x4_t,\n+\t    svwrite_hor_za8_mf8_vg4 (0, w15, z18),\n+\t    svwrite_hor_za8_mf8_vg4 (0, w15, z18))\n+\n /*\n ** write_za8_s8_z23_0_w12p12:\n **\tmov\t[^\\n]+\n@@ -77,6 +100,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12p1, svuint8x4_t,\n \t    svwrite_hor_za8_u8_vg4 (0, w12 + 1, z4),\n \t    svwrite_hor_za8_u8_vg4 (0, w12 + 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmova\tza0h\\.b\\[\\1, 0:3\\], {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w12p1, svmfloat8x4_t,\n+\t    svwrite_hor_za8_mf8_vg4 (0, w12 + 1, z4),\n+\t    svwrite_hor_za8_mf8_vg4 (0, w12 + 1, z4))\n+\n /*\n ** write_za8_s8_z28_0_w12p2:\n **\tadd\t(w[0-9]+), w12, #?2\n@@ -97,6 +130,16 @@ TEST_ZA_XN (write_za8_u8_z0_0_w15p3, svuint8x4_t,\n \t    svwrite_hor_za8_u8_vg4 (0, w15 + 3, z0),\n \t    svwrite_hor_za8_u8_vg4 (0, w15 + 3, z0))\n \n+/*\n+** write_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmova\tza0h\\.b\\[\\1, 0:3\\], {z0\\.b - z3\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z0_0_w15p3, svmfloat8x4_t,\n+\t    svwrite_hor_za8_mf8_vg4 (0, w15 + 3, z0),\n+\t    svwrite_hor_za8_mf8_vg4 (0, w15 + 3, z0))\n+\n /*\n ** write_za8_u8_z0_0_w12p4:\n **\tmova\tza0h\\.b\\[w12, 4:7\\], 
{z0\\.b - z3\\.b}\n@@ -106,6 +149,15 @@ TEST_ZA_XN (write_za8_u8_z0_0_w12p4, svuint8x4_t,\n \t    svwrite_hor_za8_u8_vg4 (0, w12 + 4, z0),\n \t    svwrite_hor_za8_u8_vg4 (0, w12 + 4, z0))\n \n+/*\n+** write_za8_mf8_z0_0_w12p4:\n+**\tmova\tza0h\\.b\\[w12, 4:7\\], {z0\\.b - z3\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z0_0_w12p4, svmfloat8x4_t,\n+\t    svwrite_hor_za8_mf8_vg4 (0, w12 + 4, z0),\n+\t    svwrite_hor_za8_mf8_vg4 (0, w12 + 4, z0))\n+\n /*\n ** write_za8_u8_z4_0_w15p12:\n **\tmova\tza0h\\.b\\[w15, 12:15\\], {z4\\.b - z7\\.b}\n@@ -115,6 +167,15 @@ TEST_ZA_XN (write_za8_u8_z4_0_w15p12, svuint8x4_t,\n \t    svwrite_hor_za8_u8_vg4 (0, w15 + 12, z4),\n \t    svwrite_hor_za8_u8_vg4 (0, w15 + 12, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w15p12:\n+**\tmova\tza0h\\.b\\[w15, 12:15\\], {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w15p12, svmfloat8x4_t,\n+\t    svwrite_hor_za8_mf8_vg4 (0, w15 + 12, z4),\n+\t    svwrite_hor_za8_mf8_vg4 (0, w15 + 12, z4))\n+\n /*\n ** write_za8_u8_z28_0_w12p14:\n **\tadd\t(w[0-9]+), w12, #?14\n@@ -125,6 +186,16 @@ TEST_ZA_XN (write_za8_u8_z28_0_w12p14, svuint8x4_t,\n \t    svwrite_hor_za8_u8_vg4 (0, w12 + 14, z28),\n \t    svwrite_hor_za8_u8_vg4 (0, w12 + 14, z28))\n \n+/*\n+** write_za8_mf8_z28_0_w12p14:\n+**\tadd\t(w[0-9]+), w12, #?14\n+**\tmova\tza0h\\.b\\[\\1, 0:3\\], {z28\\.b - z31\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z28_0_w12p14, svmfloat8x4_t,\n+\t    svwrite_hor_za8_mf8_vg4 (0, w12 + 14, z28),\n+\t    svwrite_hor_za8_mf8_vg4 (0, w12 + 14, z28))\n+\n /*\n ** write_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -145,6 +216,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12m1, svuint8x4_t,\n \t    svwrite_hor_za8_u8_vg4 (0, w12 - 1, z4),\n \t    svwrite_hor_za8_u8_vg4 (0, w12 - 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmova\tza0h\\.b\\[\\1, 0:3\\], {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w12m1, svmfloat8x4_t,\n+\t    
svwrite_hor_za8_mf8_vg4 (0, w12 - 1, z4),\n+\t    svwrite_hor_za8_mf8_vg4 (0, w12 - 1, z4))\n+\n /*\n ** write_za8_u8_z28_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -154,3 +235,13 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12m1, svuint8x4_t,\n TEST_ZA_XN (write_za8_u8_z28_0_w16, svuint8x4_t,\n \t    svwrite_hor_za8_u8_vg4 (0, w16, z28),\n \t    svwrite_hor_za8_u8_vg4 (0, w16, z28))\n+\n+/*\n+** write_za8_mf8_z28_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmova\tza0h\\.b\\[\\1, 0:3\\], {z28\\.b - z31\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z28_0_w16, svmfloat8x4_t,\n+\t    svwrite_hor_za8_mf8_vg4 (0, w16, z28),\n+\t    svwrite_hor_za8_mf8_vg4 (0, w16, z28))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg2.c\nindex ce3dbdd8729..a576b753301 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg2.c\n@@ -22,6 +22,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_1, svuint8x2_t,\n \t    svwrite_ver_za8_u8_vg2 (0, 1, z4),\n \t    svwrite_ver_za8_u8_vg2 (0, 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmova\tza0v\\.b\\[\\1, 0:1\\], {z4\\.b - z5\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_1, svmfloat8x2_t,\n+\t    svwrite_ver_za8_mf8_vg2 (0, 1, z4),\n+\t    svwrite_ver_za8_mf8_vg2 (0, 1, z4))\n+\n /*\n ** write_za8_s8_z28_0_w11:\n **\tmov\t(w1[2-5]), w11\n@@ -50,6 +60,15 @@ TEST_ZA_XN (write_za8_u8_z18_0_w15, svuint8x2_t,\n \t    svwrite_ver_za8_u8_vg2 (0, w15, z18),\n \t    svwrite_ver_za8_u8_vg2 (0, w15, z18))\n \n+/*\n+** write_za8_mf8_z18_0_w15:\n+**\tmova\tza0v\\.b\\[w15, 0:1\\], {z18\\.b - z19\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z18_0_w15, svmfloat8x2_t,\n+\t    svwrite_ver_za8_mf8_vg2 (0, w15, z18),\n+\t    svwrite_ver_za8_mf8_vg2 (0, w15, z18))\n+\n /*\n ** write_za8_s8_z23_0_w12p14:\n **\tmov\t[^\\n]+\n@@ -71,6 +90,16 @@ TEST_ZA_XN 
(write_za8_u8_z4_0_w12p1, svuint8x2_t,\n \t    svwrite_ver_za8_u8_vg2 (0, w12 + 1, z4),\n \t    svwrite_ver_za8_u8_vg2 (0, w12 + 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmova\tza0v\\.b\\[\\1, 0:1\\], {z4\\.b - z5\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w12p1, svmfloat8x2_t,\n+\t    svwrite_ver_za8_mf8_vg2 (0, w12 + 1, z4),\n+\t    svwrite_ver_za8_mf8_vg2 (0, w12 + 1, z4))\n+\n /*\n ** write_za8_s8_z28_0_w12p2:\n **\tmova\tza0v\\.b\\[w12, 2:3\\], {z28\\.b - z29\\.b}\n@@ -90,6 +119,16 @@ TEST_ZA_XN (write_za8_u8_z0_0_w15p3, svuint8x2_t,\n \t    svwrite_ver_za8_u8_vg2 (0, w15 + 3, z0),\n \t    svwrite_ver_za8_u8_vg2 (0, w15 + 3, z0))\n \n+/*\n+** write_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmova\tza0v\\.b\\[\\1, 0:1\\], {z0\\.b - z1\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z0_0_w15p3, svmfloat8x2_t,\n+\t    svwrite_ver_za8_mf8_vg2 (0, w15 + 3, z0),\n+\t    svwrite_ver_za8_mf8_vg2 (0, w15 + 3, z0))\n+\n /*\n ** write_za8_u8_z4_0_w15p12:\n **\tmova\tza0v\\.b\\[w15, 12:13\\], {z4\\.b - z5\\.b}\n@@ -99,6 +138,15 @@ TEST_ZA_XN (write_za8_u8_z4_0_w15p12, svuint8x2_t,\n \t    svwrite_ver_za8_u8_vg2 (0, w15 + 12, z4),\n \t    svwrite_ver_za8_u8_vg2 (0, w15 + 12, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w15p12:\n+**\tmova\tza0v\\.b\\[w15, 12:13\\], {z4\\.b - z5\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w15p12, svmfloat8x2_t,\n+\t    svwrite_ver_za8_mf8_vg2 (0, w15 + 12, z4),\n+\t    svwrite_ver_za8_mf8_vg2 (0, w15 + 12, z4))\n+\n /*\n ** write_za8_u8_z28_0_w12p15:\n **\tadd\t(w[0-9]+), w12, #?15\n@@ -109,6 +157,16 @@ TEST_ZA_XN (write_za8_u8_z28_0_w12p15, svuint8x2_t,\n \t    svwrite_ver_za8_u8_vg2 (0, w12 + 15, z28),\n \t    svwrite_ver_za8_u8_vg2 (0, w12 + 15, z28))\n \n+/*\n+** write_za8_mf8_z28_0_w12p15:\n+**\tadd\t(w[0-9]+), w12, #?15\n+**\tmova\tza0v\\.b\\[\\1, 0:1\\], {z28\\.b - z29\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z28_0_w12p15, svmfloat8x2_t,\n+\t    
svwrite_ver_za8_mf8_vg2 (0, w12 + 15, z28),\n+\t    svwrite_ver_za8_mf8_vg2 (0, w12 + 15, z28))\n+\n /*\n ** write_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -129,6 +187,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12m1, svuint8x2_t,\n \t    svwrite_ver_za8_u8_vg2 (0, w12 - 1, z4),\n \t    svwrite_ver_za8_u8_vg2 (0, w12 - 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmova\tza0v\\.b\\[\\1, 0:1\\], {z4\\.b - z5\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w12m1, svmfloat8x2_t,\n+\t    svwrite_ver_za8_mf8_vg2 (0, w12 - 1, z4),\n+\t    svwrite_ver_za8_mf8_vg2 (0, w12 - 1, z4))\n+\n /*\n ** write_za8_u8_z18_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -138,3 +206,13 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12m1, svuint8x2_t,\n TEST_ZA_XN (write_za8_u8_z18_0_w16, svuint8x2_t,\n \t    svwrite_ver_za8_u8_vg2 (0, w16, z18),\n \t    svwrite_ver_za8_u8_vg2 (0, w16, z18))\n+\n+/*\n+** write_za8_mf8_z18_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmova\tza0v\\.b\\[\\1, 0:1\\], {z18\\.b - z19\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z18_0_w16, svmfloat8x2_t,\n+\t    svwrite_ver_za8_mf8_vg2 (0, w16, z18),\n+\t    svwrite_ver_za8_mf8_vg2 (0, w16, z18))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg4.c\nindex 8972fed59e3..0444f80fa42 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg4.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_ver_za8_vg4.c\n@@ -22,6 +22,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_1, svuint8x4_t,\n \t    svwrite_ver_za8_u8_vg4 (0, 1, z4),\n \t    svwrite_ver_za8_u8_vg4 (0, 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_1:\n+**\tmov\t(w1[2-5]), #?1\n+**\tmova\tza0v\\.b\\[\\1, 0:3\\], {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_1, svmfloat8x4_t,\n+\t    svwrite_ver_za8_mf8_vg4 (0, 1, z4),\n+\t    svwrite_ver_za8_mf8_vg4 (0, 1, z4))\n+\n /*\n ** write_za8_s8_z28_0_w11:\n 
**\tmov\t(w1[2-5]), w11\n@@ -54,6 +64,19 @@ TEST_ZA_XN (write_za8_u8_z18_0_w15, svuint8x4_t,\n \t    svwrite_ver_za8_u8_vg4 (0, w15, z18),\n \t    svwrite_ver_za8_u8_vg4 (0, w15, z18))\n \n+/*\n+** write_za8_mf8_z18_0_w15:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmova\tza0v\\.b\\[w15, 0:3\\], {[^\\n]+}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z18_0_w15, svmfloat8x4_t,\n+\t    svwrite_ver_za8_mf8_vg4 (0, w15, z18),\n+\t    svwrite_ver_za8_mf8_vg4 (0, w15, z18))\n+\n /*\n ** write_za8_s8_z23_0_w12p12:\n **\tmov\t[^\\n]+\n@@ -77,6 +100,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12p1, svuint8x4_t,\n \t    svwrite_ver_za8_u8_vg4 (0, w12 + 1, z4),\n \t    svwrite_ver_za8_u8_vg4 (0, w12 + 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w12p1:\n+**\tadd\t(w[0-9]+), w12, #?1\n+**\tmova\tza0v\\.b\\[\\1, 0:3\\], {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w12p1, svmfloat8x4_t,\n+\t    svwrite_ver_za8_mf8_vg4 (0, w12 + 1, z4),\n+\t    svwrite_ver_za8_mf8_vg4 (0, w12 + 1, z4))\n+\n /*\n ** write_za8_s8_z28_0_w12p2:\n **\tadd\t(w[0-9]+), w12, #?2\n@@ -97,6 +130,16 @@ TEST_ZA_XN (write_za8_u8_z0_0_w15p3, svuint8x4_t,\n \t    svwrite_ver_za8_u8_vg4 (0, w15 + 3, z0),\n \t    svwrite_ver_za8_u8_vg4 (0, w15 + 3, z0))\n \n+/*\n+** write_za8_mf8_z0_0_w15p3:\n+**\tadd\t(w[0-9]+), w15, #?3\n+**\tmova\tza0v\\.b\\[\\1, 0:3\\], {z0\\.b - z3\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z0_0_w15p3, svmfloat8x4_t,\n+\t    svwrite_ver_za8_mf8_vg4 (0, w15 + 3, z0),\n+\t    svwrite_ver_za8_mf8_vg4 (0, w15 + 3, z0))\n+\n /*\n ** write_za8_u8_z0_0_w12p4:\n **\tmova\tza0v\\.b\\[w12, 4:7\\], {z0\\.b - z3\\.b}\n@@ -106,6 +149,15 @@ TEST_ZA_XN (write_za8_u8_z0_0_w12p4, svuint8x4_t,\n \t    svwrite_ver_za8_u8_vg4 (0, w12 + 4, z0),\n \t    svwrite_ver_za8_u8_vg4 (0, w12 + 4, z0))\n \n+/*\n+** write_za8_mf8_z0_0_w12p4:\n+**\tmova\tza0v\\.b\\[w12, 4:7\\], {z0\\.b - z3\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z0_0_w12p4, svmfloat8x4_t,\n+\t    
svwrite_ver_za8_mf8_vg4 (0, w12 + 4, z0),\n+\t    svwrite_ver_za8_mf8_vg4 (0, w12 + 4, z0))\n+\n /*\n ** write_za8_u8_z4_0_w15p12:\n **\tmova\tza0v\\.b\\[w15, 12:15\\], {z4\\.b - z7\\.b}\n@@ -115,6 +167,15 @@ TEST_ZA_XN (write_za8_u8_z4_0_w15p12, svuint8x4_t,\n \t    svwrite_ver_za8_u8_vg4 (0, w15 + 12, z4),\n \t    svwrite_ver_za8_u8_vg4 (0, w15 + 12, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w15p12:\n+**\tmova\tza0v\\.b\\[w15, 12:15\\], {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w15p12, svmfloat8x4_t,\n+\t    svwrite_ver_za8_mf8_vg4 (0, w15 + 12, z4),\n+\t    svwrite_ver_za8_mf8_vg4 (0, w15 + 12, z4))\n+\n /*\n ** write_za8_u8_z28_0_w12p14:\n **\tadd\t(w[0-9]+), w12, #?14\n@@ -125,6 +186,16 @@ TEST_ZA_XN (write_za8_u8_z28_0_w12p14, svuint8x4_t,\n \t    svwrite_ver_za8_u8_vg4 (0, w12 + 14, z28),\n \t    svwrite_ver_za8_u8_vg4 (0, w12 + 14, z28))\n \n+/*\n+** write_za8_mf8_z28_0_w12p14:\n+**\tadd\t(w[0-9]+), w12, #?14\n+**\tmova\tza0v\\.b\\[\\1, 0:3\\], {z28\\.b - z31\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z28_0_w12p14, svmfloat8x4_t,\n+\t    svwrite_ver_za8_mf8_vg4 (0, w12 + 14, z28),\n+\t    svwrite_ver_za8_mf8_vg4 (0, w12 + 14, z28))\n+\n /*\n ** write_za8_s8_z0_0_w15p16:\n **\tadd\t(w[0-9]+), w15, #?16\n@@ -145,6 +216,16 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12m1, svuint8x4_t,\n \t    svwrite_ver_za8_u8_vg4 (0, w12 - 1, z4),\n \t    svwrite_ver_za8_u8_vg4 (0, w12 - 1, z4))\n \n+/*\n+** write_za8_mf8_z4_0_w12m1:\n+**\tsub\t(w[0-9]+), w12, #?1\n+**\tmova\tza0v\\.b\\[\\1, 0:3\\], {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z4_0_w12m1, svmfloat8x4_t,\n+\t    svwrite_ver_za8_mf8_vg4 (0, w12 - 1, z4),\n+\t    svwrite_ver_za8_mf8_vg4 (0, w12 - 1, z4))\n+\n /*\n ** write_za8_u8_z28_0_w16:\n **\tmov\t(w1[2-5]), w16\n@@ -154,3 +235,13 @@ TEST_ZA_XN (write_za8_u8_z4_0_w12m1, svuint8x4_t,\n TEST_ZA_XN (write_za8_u8_z28_0_w16, svuint8x4_t,\n \t    svwrite_ver_za8_u8_vg4 (0, w16, z28),\n \t    svwrite_ver_za8_u8_vg4 (0, w16, 
z28))\n+\n+/*\n+** write_za8_mf8_z28_0_w16:\n+**\tmov\t(w1[2-5]), w16\n+**\tmova\tza0v\\.b\\[\\1, 0:3\\], {z28\\.b - z31\\.b}\n+**\tret\n+*/\n+TEST_ZA_XN (write_za8_mf8_z28_0_w16, svmfloat8x4_t,\n+\t    svwrite_ver_za8_mf8_vg4 (0, w16, z28),\n+\t    svwrite_ver_za8_mf8_vg4 (0, w16, z28))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x2.c\nindex 4b83a37edd2..836118b0fa7 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x2.c\n@@ -32,6 +32,16 @@ TEST_ZA_XN (write_w7_z0, svuint8x2_t,\n \t    svwrite_za8_u8_vg1x2 (w7, z0),\n \t    svwrite_za8_vg1x2 (w7, z0))\n \n+/*\n+** write_mf8_w7_z0:\n+**\tmov\t(w8|w9|w10|w11), w7\n+**\tmova\tza\\.d\\[\\1, 0, vgx2\\], {z0\\.d - z1\\.d}\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w7_z0, svmfloat8x2_t,\n+\t    svwrite_za8_mf8_vg1x2 (w7, z0),\n+\t    svwrite_za8_vg1x2 (w7, z0))\n+\n /*\n ** write_w8_z0:\n **\tmova\tza\\.d\\[w8, 0, vgx2\\], {z0\\.d - z1\\.d}\n@@ -61,6 +71,16 @@ TEST_ZA_XN (write_w12_z0, svuint8x2_t,\n \t    svwrite_za8_u8_vg1x2 (w12, z0),\n \t    svwrite_za8_vg1x2 (w12, z0))\n \n+/*\n+** write_mf8_w12_z0:\n+**\tmov\t(w8|w9|w10|w11), w12\n+**\tmova\tza\\.d\\[\\1, 0, vgx2\\], {z0\\.d - z1\\.d}\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w12_z0, svmfloat8x2_t,\n+\t    svwrite_za8_mf8_vg1x2 (w12, z0),\n+\t    svwrite_za8_vg1x2 (w12, z0))\n+\n /*\n ** write_w8p7_z0:\n **\tmova\tza\\.d\\[w8, 7, vgx2\\], {z0\\.d - z1\\.d}\n@@ -90,6 +110,16 @@ TEST_ZA_XN (write_w8m1_z0, svuint8x2_t,\n \t    svwrite_za8_u8_vg1x2 (w8 - 1, z0),\n \t    svwrite_za8_vg1x2 (w8 - 1, z0))\n \n+/*\n+** write_mf8_w8m1_z0:\n+**\tsub\t(w8|w9|w10|w11), w8, #?1\n+**\tmova\tza\\.d\\[\\1, 0, vgx2\\], {z0\\.d - z1\\.d}\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w8m1_z0, svmfloat8x2_t,\n+\t    svwrite_za8_mf8_vg1x2 (w8 - 1, z0),\n+\t    svwrite_za8_vg1x2 (w8 - 1, z0))\n+\n /*\n ** 
write_w8_z18:\n **\tmova\tza\\.d\\[w8, 0, vgx2\\], {z18\\.d - z19\\.d}\n@@ -99,6 +129,15 @@ TEST_ZA_XN (write_w8_z18, svuint8x2_t,\n \t    svwrite_za8_u8_vg1x2 (w8, z18),\n \t    svwrite_za8_vg1x2 (w8, z18))\n \n+/*\n+** write_mf8_w8_z18:\n+**\tmova\tza\\.d\\[w8, 0, vgx2\\], {z18\\.d - z19\\.d}\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w8_z18, svmfloat8x2_t,\n+\t    svwrite_za8_mf8_vg1x2 (w8, z18),\n+\t    svwrite_za8_vg1x2 (w8, z18))\n+\n /* Leave the assembler to check for correctness for misaligned registers.  */\n \n /*\n@@ -120,3 +159,12 @@ TEST_ZA_XN (write_w8_z23, svint8x2_t,\n TEST_ZA_XN (write_w8_z28, svuint8x2_t,\n \t    svwrite_za8_u8_vg1x2 (w8, z28),\n \t    svwrite_za8_vg1x2 (w8, z28))\n+\n+/*\n+** write_mf8_w8_z28:\n+**\tmova\tza\\.d\\[w8, 0, vgx2\\], {z28\\.d - z29\\.d}\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w8_z28, svmfloat8x2_t,\n+\t    svwrite_za8_mf8_vg1x2 (w8, z28),\n+\t    svwrite_za8_vg1x2 (w8, z28))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x4.c\nindex a529bf9fcca..649a5c0ca63 100644\n--- a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x4.c\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/write_za8_vg1x4.c\n@@ -22,6 +22,16 @@ TEST_ZA_XN (write_w0_z0, svuint8x4_t,\n \t    svwrite_za8_u8_vg1x4 (w0, z0),\n \t    svwrite_za8_vg1x4 (w0, z0))\n \n+/*\n+** write_mf8_w0_z0:\n+**\tmov\t(w8|w9|w10|w11), w0\n+**\tmova\tza\\.d\\[\\1, 0, vgx4\\], {z0\\.d - z3\\.d}\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w0_z0, svmfloat8x4_t,\n+\t    svwrite_za8_mf8_vg1x4 (w0, z0),\n+\t    svwrite_za8_vg1x4 (w0, z0))\n+\n /*\n ** write_w7_z0:\n **\tmov\t(w8|w9|w10|w11), w7\n@@ -50,6 +60,14 @@ TEST_ZA_XN (write_w11_z0, svuint8x4_t,\n \t    svwrite_za8_u8_vg1x4 (w11, z0),\n \t    svwrite_za8_vg1x4 (w11, z0))\n \n+/*\n+** write_mf8_w11_z0:\n+**\tmova\tza\\.d\\[w11, 0, vgx4\\], {z0\\.d - z3\\.d}\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w11_z0, svmfloat8x4_t,\n+\t    
svwrite_za8_mf8_vg1x4 (w11, z0),\n+\t    svwrite_za8_vg1x4 (w11, z0))\n \n /*\n ** write_w12_z0:\n@@ -80,6 +98,16 @@ TEST_ZA_XN (write_w8p8_z0, svuint8x4_t,\n \t    svwrite_za8_u8_vg1x4 (w8 + 8, z0),\n \t    svwrite_za8_vg1x4 (w8 + 8, z0))\n \n+/*\n+** write_mf8_w8p8_z0:\n+**\tadd\t(w8|w9|w10|w11), w8, #?8\n+**\tmova\tza\\.d\\[\\1, 0, vgx4\\], {z0\\.d - z3\\.d}\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w8p8_z0, svmfloat8x4_t,\n+\t    svwrite_za8_mf8_vg1x4 (w8 + 8, z0),\n+\t    svwrite_za8_vg1x4 (w8 + 8, z0))\n+\n /*\n ** write_w8m1_z0:\n **\tsub\t(w8|w9|w10|w11), w8, #?1\n@@ -114,6 +142,19 @@ TEST_ZA_XN (write_w8_z18, svuint8x4_t,\n \t    svwrite_za8_u8_vg1x4 (w8, z18),\n \t    svwrite_za8_vg1x4 (w8, z18))\n \n+/*\n+** write_mf8_w8_z18:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmova\tza\\.d\\[w8, 0, vgx4\\], [^\\n]+\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w8_z18, svmfloat8x4_t,\n+\t    svwrite_za8_mf8_vg1x4 (w8, z18),\n+\t    svwrite_za8_vg1x4 (w8, z18))\n+\n /*\n ** write_w8_z23:\n **\tmov\t[^\\n]+\n@@ -127,6 +168,19 @@ TEST_ZA_XN (write_w8_z23, svuint8x4_t,\n \t    svwrite_za8_u8_vg1x4 (w8, z23),\n \t    svwrite_za8_vg1x4 (w8, z23))\n \n+/*\n+** write_mf8_w8_z23:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmova\tza\\.d\\[w8, 0, vgx4\\], [^\\n]+\n+**\tret\n+*/\n+TEST_ZA_XN (write_mf8_w8_z23, svmfloat8x4_t,\n+\t    svwrite_za8_mf8_vg1x4 (w8, z23),\n+\t    svwrite_za8_vg1x4 (w8, z23))\n+\n /*\n ** write_w8_z28:\n **\tmova\tza\\.d\\[w8, 0, vgx4\\], {z28\\.d - z31\\.d}\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zip_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zip_mf8_x2.c\nnew file mode 100644\nindex 00000000000..834a0e680a8\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zip_mf8_x2.c\n@@ -0,0 +1,77 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** 
zip_z0_z0:\n+**\tzip\t{z0\\.b - z1\\.b}, z0\\.b, z1\\.b\n+**\tret\n+*/\n+TEST_XN (zip_z0_z0, svmfloat8x2_t, z0,\n+\t svzip_mf8_x2 (z0),\n+\t svzip (z0))\n+\n+/*\n+** zip_z0_z4:\n+**\tzip\t{z0\\.b - z1\\.b}, z4\\.b, z5\\.b\n+**\tret\n+*/\n+TEST_XN (zip_z0_z4, svmfloat8x2_t, z0,\n+\t svzip_mf8_x2 (z4),\n+\t svzip (z4))\n+\n+/*\n+** zip_z4_z18:\n+**\tzip\t{z4\\.b - z5\\.b}, z18\\.b, z19\\.b\n+**\tret\n+*/\n+TEST_XN (zip_z4_z18, svmfloat8x2_t, z4,\n+\t svzip_mf8_x2 (z18),\n+\t svzip (z18))\n+\n+/*\n+** zip_z18_z23:\n+**\tzip\t{z18\\.b - z19\\.b}, z23\\.b, z24\\.b\n+**\tret\n+*/\n+TEST_XN (zip_z18_z23, svmfloat8x2_t, z18,\n+\t svzip_mf8_x2 (z23),\n+\t svzip (z23))\n+\n+/*\n+** zip_z23_z28:\n+**\tzip\t[^\\n]+, z28\\.b, z29\\.b\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (zip_z23_z28, svmfloat8x2_t, z23,\n+\t svzip_mf8_x2 (z28),\n+\t svzip (z28))\n+\n+/*\n+** zip_z28_z0:\n+**\tzip\t{z28\\.b - z29\\.b}, z0\\.b, z1\\.b\n+**\tret\n+*/\n+TEST_XN (zip_z28_z0, svmfloat8x2_t, z28,\n+\t svzip_mf8_x2 (z0),\n+\t svzip (z0))\n+\n+/*\n+** zip_z28_z0_z23:\t{ xfail aarch64_big_endian }\n+**\tzip\t{z28\\.b - z29\\.b}, z0\\.b, z23\\.b\n+**\tret\n+*/\n+TEST_XN (zip_z28_z0_z23, svmfloat8x2_t, z28,\n+\t svzip_mf8_x2 (svcreate2 (svget2 (z0, 0), svget2 (z23, 0))),\n+\t svzip (svcreate2 (svget2 (z0, 0), svget2 (z23, 0))))\n+\n+/*\n+** zip_z28_z5_z19:\n+**\tzip\t{z28\\.b - z29\\.b}, z5\\.b, z19\\.b\n+**\tret\n+*/\n+TEST_XN (zip_z28_z5_z19, svmfloat8x2_t, z28,\n+\t svzip_mf8_x2 (svcreate2 (svget2 (z4, 1), svget2 (z18, 1))),\n+\t svzip (svcreate2 (svget2 (z4, 1), svget2 (z18, 1))))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zip_mf8_x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zip_mf8_x4.c\nnew file mode 100644\nindex 00000000000..487e9b2d3fb\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zip_mf8_x4.c\n@@ -0,0 +1,73 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include 
\"test_sme2_acle.h\"\n+\n+/*\n+** zip_z0_z0:\n+**\tzip\t{z0\\.b - z3\\.b}, {z0\\.b - z3\\.b}\n+**\tret\n+*/\n+TEST_XN (zip_z0_z0, svmfloat8x4_t, z0,\n+\t svzip_mf8_x4 (z0),\n+\t svzip (z0))\n+\n+/*\n+** zip_z0_z4:\n+**\tzip\t{z0\\.b - z3\\.b}, {z4\\.b - z7\\.b}\n+**\tret\n+*/\n+TEST_XN (zip_z0_z4, svmfloat8x4_t, z0,\n+\t svzip_mf8_x4 (z4),\n+\t svzip (z4))\n+\n+/*\n+** zip_z4_z18:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tzip\t{z4\\.b - z7\\.b}, [^\\n]+\n+**\tret\n+*/\n+TEST_XN (zip_z4_z18, svmfloat8x4_t, z4,\n+\t svzip_mf8_x4 (z18),\n+\t svzip (z18))\n+\n+/*\n+** zip_z18_z23:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tzip\t{z[^\\n]+}, {z[^\\n]+}\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (zip_z18_z23, svmfloat8x4_t, z18,\n+\t svzip_mf8_x4 (z23),\n+\t svzip (z23))\n+\n+/*\n+** zip_z23_z28:\n+**\tzip\t[^\\n]+, {z28\\.b - z31\\.b}\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (zip_z23_z28, svmfloat8x4_t, z23,\n+\t svzip_mf8_x4 (z28),\n+\t svzip (z28))\n+\n+/*\n+** zip_z28_z0:\n+**\tzip\t{z28\\.b - z31\\.b}, {z0\\.b - z3\\.b}\n+**\tret\n+*/\n+TEST_XN (zip_z28_z0, svmfloat8x4_t, z28,\n+\t svzip_mf8_x4 (z0),\n+\t svzip (z0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x2.c\nnew file mode 100644\nindex 00000000000..4dd4753461a\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x2.c\n@@ -0,0 +1,77 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** zipq_z0_z0:\n+**\tzip\t{z0\\.q - z1\\.q}, z0\\.q, z1\\.q\n+**\tret\n+*/\n+TEST_XN (zipq_z0_z0, svmfloat8x2_t, z0,\n+\t svzipq_mf8_x2 (z0),\n+\t svzipq (z0))\n+\n+/*\n+** zipq_z0_z4:\n+**\tzip\t{z0\\.q - z1\\.q}, z4\\.q, z5\\.q\n+**\tret\n+*/\n+TEST_XN 
(zipq_z0_z4, svmfloat8x2_t, z0,\n+\t svzipq_mf8_x2 (z4),\n+\t svzipq (z4))\n+\n+/*\n+** zipq_z4_z18:\n+**\tzip\t{z4\\.q - z5\\.q}, z18\\.q, z19\\.q\n+**\tret\n+*/\n+TEST_XN (zipq_z4_z18, svmfloat8x2_t, z4,\n+\t svzipq_mf8_x2 (z18),\n+\t svzipq (z18))\n+\n+/*\n+** zipq_z18_z23:\n+**\tzip\t{z18\\.q - z19\\.q}, z23\\.q, z24\\.q\n+**\tret\n+*/\n+TEST_XN (zipq_z18_z23, svmfloat8x2_t, z18,\n+\t svzipq_mf8_x2 (z23),\n+\t svzipq (z23))\n+\n+/*\n+** zipq_z23_z28:\n+**\tzip\t[^\\n]+, z28\\.q, z29\\.q\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (zipq_z23_z28, svmfloat8x2_t, z23,\n+\t svzipq_mf8_x2 (z28),\n+\t svzipq (z28))\n+\n+/*\n+** zipq_z28_z0:\n+**\tzip\t{z28\\.q - z29\\.q}, z0\\.q, z1\\.q\n+**\tret\n+*/\n+TEST_XN (zipq_z28_z0, svmfloat8x2_t, z28,\n+\t svzipq_mf8_x2 (z0),\n+\t svzipq (z0))\n+\n+/*\n+** zipq_z28_z0_z23:\t{ xfail aarch64_big_endian }\n+**\tzip\t{z28\\.q - z29\\.q}, z0\\.q, z23\\.q\n+**\tret\n+*/\n+TEST_XN (zipq_z28_z0_z23, svmfloat8x2_t, z28,\n+\t svzipq_mf8_x2 (svcreate2 (svget2 (z0, 0), svget2 (z23, 0))),\n+\t svzipq (svcreate2 (svget2 (z0, 0), svget2 (z23, 0))))\n+\n+/*\n+** zipq_z28_z5_z19:\n+**\tzip\t{z28\\.q - z29\\.q}, z5\\.q, z19\\.q\n+**\tret\n+*/\n+TEST_XN (zipq_z28_z5_z19, svmfloat8x2_t, z28,\n+\t svzipq_mf8_x2 (svcreate2 (svget2 (z4, 1), svget2 (z18, 1))),\n+\t svzipq (svcreate2 (svget2 (z4, 1), svget2 (z18, 1))))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x4.c b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x4.c\nnew file mode 100644\nindex 00000000000..417eb387e4b\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sme2/acle-asm/zipq_mf8_x4.c\n@@ -0,0 +1,73 @@\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sme2_acle.h\"\n+\n+/*\n+** zipq_z0_z0:\n+**\tzip\t{z0\\.q - z3\\.q}, {z0\\.q - z3\\.q}\n+**\tret\n+*/\n+TEST_XN (zipq_z0_z0, svmfloat8x4_t, z0,\n+\t svzipq_mf8_x4 (z0),\n+\t svzipq (z0))\n+\n+/*\n+** zipq_z0_z4:\n+**\tzip\t{z0\\.q - 
z3\\.q}, {z4\\.q - z7\\.q}\n+**\tret\n+*/\n+TEST_XN (zipq_z0_z4, svmfloat8x4_t, z0,\n+\t svzipq_mf8_x4 (z4),\n+\t svzipq (z4))\n+\n+/*\n+** zipq_z4_z18:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tzip\t{z4\\.q - z7\\.q}, [^\\n]+\n+**\tret\n+*/\n+TEST_XN (zipq_z4_z18, svmfloat8x4_t, z4,\n+\t svzipq_mf8_x4 (z18),\n+\t svzipq (z18))\n+\n+/*\n+** zipq_z18_z23:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tzip\t{z[^\\n]+}, {z[^\\n]+}\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (zipq_z18_z23, svmfloat8x4_t, z18,\n+\t svzipq_mf8_x4 (z23),\n+\t svzipq (z23))\n+\n+/*\n+** zipq_z23_z28:\n+**\tzip\t[^\\n]+, {z28\\.q - z31\\.q}\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_XN (zipq_z23_z28, svmfloat8x4_t, z23,\n+\t svzipq_mf8_x4 (z28),\n+\t svzipq (z28))\n+\n+/*\n+** zipq_z28_z0:\n+**\tzip\t{z28\\.q - z31\\.q}, {z0\\.q - z3\\.q}\n+**\tret\n+*/\n+TEST_XN (zipq_z28_z0, svmfloat8x4_t, z28,\n+\t svzipq_mf8_x4 (z0),\n+\t svzipq (z0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x2.c\nnew file mode 100644\nindex 00000000000..d4073ab279d\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x2.c\n@@ -0,0 +1,269 @@\n+/* { dg-do assemble { target aarch64_asm_sve2p1_ok } } */\n+/* { dg-do compile { target { ! aarch64_asm_sve2p1_ok } } } */\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! 
ilp32 } } } } */\n+\n+#include \"test_sve_acle.h\"\n+\n+#pragma GCC target \"+sve2p1\"\n+#ifdef STREAMING_COMPATIBLE\n+#pragma GCC target \"+sme2\"\n+#endif\n+\n+/*\n+** ld1_mf8_base:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_base, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0),\n+\t\t z0 = svld1_x2 (pn8, x0))\n+\n+/*\n+** ld1_mf8_index:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_index, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + x1),\n+\t\t z0 = svld1_x2 (pn8, x0 + x1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_1:\n+**\tincb\tx0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + svcntb ()),\n+\t\t z0 = svld1_x2 (pn8, x0 + svcntb ()))\n+\n+/*\n+** ld1_mf8_2:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + svcntb () * 2),\n+\t\t z0 = svld1_x2 (pn8, x0 + svcntb () * 2))\n+\n+/*\n+** ld1_mf8_14:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + svcntb () * 14),\n+\t\t z0 = svld1_x2 (pn8, x0 + svcntb () * 14))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 + svcntb () * 16),\n+\t\t z0 = svld1_x2 (pn8, x0 + svcntb () * 16))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_mf8_m1:\n+**\tdecb\tx0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 - svcntb ()),\n+\t\t z0 = svld1_x2 (pn8, x0 - svcntb ()))\n+\n+/*\n+** ld1_mf8_m2:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 - svcntb () * 2),\n+\t\t z0 = svld1_x2 (pn8, x0 - svcntb () * 2))\n+\n+/*\n+** ld1_mf8_m16:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 - svcntb () * 16),\n+\t\t z0 = svld1_x2 (pn8, x0 - svcntb () * 16))\n+\n+/*\n+** ld1_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn8, x0 - svcntb () * 18),\n+\t\t z0 = svld1_x2 (pn8, x0 - svcntb () * 18))\n+\n+/*\n+** ld1_mf8_z17:\n+**\tld1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z17, svmfloat8x2_t, mfloat8_t,\n+\t\t z17 = svld1_mf8_x2 (pn8, x0),\n+\t\t z17 = svld1_x2 (pn8, x0))\n+\n+/*\n+** ld1_mf8_z22:\n+**\tld1b\t{z22\\.b(?: - |, )z23\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z22, svmfloat8x2_t, mfloat8_t,\n+\t\t z22 = svld1_mf8_x2 (pn8, x0),\n+\t\t z22 = svld1_x2 (pn8, x0))\n+\n+/*\n+** ld1_mf8_z28:\n+**\tld1b\t{z28\\.b(?: - |, )z29\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z28, svmfloat8x2_t, mfloat8_t,\n+\t\t z28 = svld1_mf8_x2 (pn8, x0),\n+\t\t z28 = svld1_x2 (pn8, x0))\n+\n+/*\n+** ld1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, p0\\.b\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn0, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 
(pn0, x0),\n+\t\t z0 = svld1_x2 (pn0, x0))\n+\n+/*\n+** ld1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn7, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn7, x0),\n+\t\t z0 = svld1_x2 (pn7, x0))\n+\n+/*\n+** ld1_mf8_pn15:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn15/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn15, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x2 (pn15, x0),\n+\t\t z0 = svld1_x2 (pn15, x0))\n+\n+/*\n+** ld1_vnum_mf8_0:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_0, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 0),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 1),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 1))\n+\n+/*\n+** ld1_vnum_mf8_2:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 2),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 2))\n+\n+/*\n+** ld1_vnum_mf8_14:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 14),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 14))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_vnum_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, 16),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, 16))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, -1),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, -1))\n+\n+/*\n+** ld1_vnum_mf8_m2:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, -2),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, -2))\n+\n+/*\n+** ld1_vnum_mf8_m16:\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, -16),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, -16))\n+\n+/*\n+** ld1_vnum_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, -18),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, -18))\n+\n+/*\n+** ld1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tld1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_x1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x2 (pn8, x0, x1),\n+\t\t z0 = svld1_vnum_x2 (pn8, x0, x1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x4.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x4.c\nnew file 
mode 100644\nindex 00000000000..84d053a4261\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ld1_mf8_x4.c\n@@ -0,0 +1,361 @@\n+/* { dg-do assemble { target aarch64_asm_sve2p1_ok } } */\n+/* { dg-do compile { target { ! aarch64_asm_sve2p1_ok } } } */\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! ilp32 } } } } */\n+\n+#include \"test_sve_acle.h\"\n+\n+#pragma GCC target \"+sve2p1\"\n+#ifdef STREAMING_COMPATIBLE\n+#pragma GCC target \"+sme2\"\n+#endif\n+\n+/*\n+** ld1_mf8_base:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_base, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0),\n+\t\t z0 = svld1_x4 (pn8, x0))\n+\n+/*\n+** ld1_mf8_index:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_index, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + x1),\n+\t\t z0 = svld1_x4 (pn8, x0 + x1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_1:\n+**\tincb\tx0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb ()),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb ()))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 2),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 3),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 3))\n+\n+/*\n+** ld1_mf8_4:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 4),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 4))\n+\n+/*\n+** ld1_mf8_28:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 28),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 28))\n+\n+/*\n+** ld1_mf8_32:\n+**\t[^{]*\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 + svcntb () * 32),\n+\t\t z0 = svld1_x4 (pn8, x0 + svcntb () * 32))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_m1:\n+**\tdecb\tx0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb ()),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb ()))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 2),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 3),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 3))\n+\n+/*\n+** ld1_mf8_m4:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 4),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 4))\n+\n+/*\n+** ld1_mf8_m32:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 32),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 32))\n+\n+/*\n+** ld1_mf8_m36:\n+**\t[^{]*\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn8, x0 - svcntb () * 36),\n+\t\t z0 = svld1_x4 (pn8, x0 - svcntb () * 36))\n+\n+/*\n+** ld1_mf8_z17:\n+**\tld1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z17, svmfloat8x4_t, mfloat8_t,\n+\t\t z17 = svld1_mf8_x4 (pn8, x0),\n+\t\t z17 = svld1_x4 (pn8, x0))\n+\n+/*\n+** ld1_mf8_z22:\n+**\tld1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z22, svmfloat8x4_t, mfloat8_t,\n+\t\t z22 = svld1_mf8_x4 (pn8, x0),\n+\t\t z22 = svld1_x4 (pn8, x0))\n+\n+/*\n+** ld1_mf8_z28:\n+**\tld1b\t{z28\\.b(?: - |, )z31\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_z28, svmfloat8x4_t, mfloat8_t,\n+\t\t z28 = svld1_mf8_x4 (pn8, x0),\n+\t\t z28 = svld1_x4 (pn8, x0))\n+\n+/*\n+** ld1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, p0\\.b\n+**\tld1b\t{z0\\.b(?: - |, )z3\\.b}, pn\\1/z, 
\\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn0, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn0, x0),\n+\t\t z0 = svld1_x4 (pn0, x0))\n+\n+/*\n+** ld1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tld1b\t{z0\\.b(?: - |, )z3\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn7, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn7, x0),\n+\t\t z0 = svld1_x4 (pn7, x0))\n+\n+/*\n+** ld1_mf8_pn15:\n+**\tld1b\t{z0\\.b(?: - |, )z3\\.b}, pn15/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_mf8_pn15, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_mf8_x4 (pn15, x0),\n+\t\t z0 = svld1_x4 (pn15, x0))\n+\n+/*\n+** ld1_vnum_mf8_0:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_0, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 0),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 1),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 2),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_vnum_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 3),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 3))\n+\n+/*\n+** ld1_vnum_mf8_4:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 4),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 4))\n+\n+/*\n+** ld1_vnum_mf8_28:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 28),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 28))\n+\n+/*\n+** ld1_vnum_mf8_32:\n+**\t[^{]*\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, 32),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, 32))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -1),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ld1_vnum_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -2),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ld1_vnum_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -3),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -3))\n+\n+/*\n+** ld1_vnum_mf8_m4:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -4),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -4))\n+\n+/*\n+** ld1_vnum_mf8_m32:\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -32),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -32))\n+\n+/*\n+** ld1_vnum_mf8_m36:\n+**\t[^{]*\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, -36),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, -36))\n+\n+/*\n+** ld1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tld1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ld1_vnum_mf8_x1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svld1_vnum_mf8_x4 (pn8, x0, x1),\n+\t\t z0 = svld1_vnum_x4 (pn8, x0, x1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x2.c\nnew file mode 100644\nindex 00000000000..60d2caa1568\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x2.c\n@@ -0,0 +1,269 @@\n+/* { dg-do assemble { target aarch64_asm_sve2p1_ok } } */\n+/* { dg-do compile { target { ! aarch64_asm_sve2p1_ok } } } */\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! 
ilp32 } } } } */\n+\n+#include \"test_sve_acle.h\"\n+\n+#pragma GCC target \"+sve2p1\"\n+#ifdef STREAMING_COMPATIBLE\n+#pragma GCC target \"+sme2\"\n+#endif\n+\n+/*\n+** ldnt1_mf8_base:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_base, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0),\n+\t\t z0 = svldnt1_x2 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_index:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_index, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + x1),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + x1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_1:\n+**\tincb\tx0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + svcntb ()),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + svcntb ()))\n+\n+/*\n+** ldnt1_mf8_2:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + svcntb () * 2),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + svcntb () * 2))\n+\n+/*\n+** ldnt1_mf8_14:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + svcntb () * 14),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + svcntb () * 14))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 + svcntb () * 16),\n+\t\t z0 = svldnt1_x2 (pn8, x0 + svcntb () * 16))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_mf8_m1:\n+**\tdecb\tx0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 - svcntb ()),\n+\t\t z0 = svldnt1_x2 (pn8, x0 - svcntb ()))\n+\n+/*\n+** ldnt1_mf8_m2:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 - svcntb () * 2),\n+\t\t z0 = svldnt1_x2 (pn8, x0 - svcntb () * 2))\n+\n+/*\n+** ldnt1_mf8_m16:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 - svcntb () * 16),\n+\t\t z0 = svldnt1_x2 (pn8, x0 - svcntb () * 16))\n+\n+/*\n+** ldnt1_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn8, x0 - svcntb () * 18),\n+\t\t z0 = svldnt1_x2 (pn8, x0 - svcntb () * 18))\n+\n+/*\n+** ldnt1_mf8_z17:\n+**\tldnt1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z17, svmfloat8x2_t, mfloat8_t,\n+\t\t z17 = svldnt1_mf8_x2 (pn8, x0),\n+\t\t z17 = svldnt1_x2 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_z22:\n+**\tldnt1b\t{z22\\.b(?: - |, )z23\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z22, svmfloat8x2_t, mfloat8_t,\n+\t\t z22 = svldnt1_mf8_x2 (pn8, x0),\n+\t\t z22 = svldnt1_x2 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_z28:\n+**\tldnt1b\t{z28\\.b(?: - |, )z29\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z28, svmfloat8x2_t, mfloat8_t,\n+\t\t z28 = svldnt1_mf8_x2 (pn8, x0),\n+\t\t z28 = svldnt1_x2 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, p0\\.b\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1/z, 
\\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn0, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn0, x0),\n+\t\t z0 = svldnt1_x2 (pn0, x0))\n+\n+/*\n+** ldnt1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn7, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn7, x0),\n+\t\t z0 = svldnt1_x2 (pn7, x0))\n+\n+/*\n+** ldnt1_mf8_pn15:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn15/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn15, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x2 (pn15, x0),\n+\t\t z0 = svldnt1_x2 (pn15, x0))\n+\n+/*\n+** ldnt1_vnum_mf8_0:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_0, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 0),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 1),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 1))\n+\n+/*\n+** ldnt1_vnum_mf8_2:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 2),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 2))\n+\n+/*\n+** ldnt1_vnum_mf8_14:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 14),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 14))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_vnum_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, 16),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, 16))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, -1),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, -1))\n+\n+/*\n+** ldnt1_vnum_mf8_m2:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, -2),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, -2))\n+\n+/*\n+** ldnt1_vnum_mf8_m16:\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, -16),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, -16))\n+\n+/*\n+** ldnt1_vnum_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, -18),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, -18))\n+\n+/*\n+** ldnt1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tldnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8/z, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_x1, svmfloat8x2_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x2 (pn8, x0, x1),\n+\t\t z0 = svldnt1_vnum_x2 (pn8, x0, x1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x4.c 
b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x4.c\nnew file mode 100644\nindex 00000000000..976b1e6f61c\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/ldnt1_mf8_x4.c\n@@ -0,0 +1,361 @@\n+/* { dg-do assemble { target aarch64_asm_sve2p1_ok } } */\n+/* { dg-do compile { target { ! aarch64_asm_sve2p1_ok } } } */\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! ilp32 } } } } */\n+\n+#include \"test_sve_acle.h\"\n+\n+#pragma GCC target \"+sve2p1\"\n+#ifdef STREAMING_COMPATIBLE\n+#pragma GCC target \"+sme2\"\n+#endif\n+\n+/*\n+** ldnt1_mf8_base:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_base, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0),\n+\t\t z0 = svldnt1_x4 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_index:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_index, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + x1),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + x1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_1:\n+**\tincb\tx0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb ()),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb ()))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 2),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 3),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 3))\n+\n+/*\n+** ldnt1_mf8_4:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 4),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 4))\n+\n+/*\n+** ldnt1_mf8_28:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 28),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 28))\n+\n+/*\n+** ldnt1_mf8_32:\n+**\t[^{]*\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 + svcntb () * 32),\n+\t\t z0 = svldnt1_x4 (pn8, x0 + svcntb () * 32))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_m1:\n+**\tdecb\tx0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb ()),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb ()))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 2),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 3),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 3))\n+\n+/*\n+** ldnt1_mf8_m4:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 4),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 4))\n+\n+/*\n+** ldnt1_mf8_m32:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 32),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 32))\n+\n+/*\n+** ldnt1_mf8_m36:\n+**\t[^{]*\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn8, x0 - svcntb () * 36),\n+\t\t z0 = svldnt1_x4 (pn8, x0 - svcntb () * 36))\n+\n+/*\n+** ldnt1_mf8_z17:\n+**\tldnt1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z17, svmfloat8x4_t, mfloat8_t,\n+\t\t z17 = svldnt1_mf8_x4 (pn8, x0),\n+\t\t z17 = svldnt1_x4 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_z22:\n+**\tldnt1b\t{z[^\\n]+}, pn8/z, \\[x0\\]\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z22, svmfloat8x4_t, mfloat8_t,\n+\t\t z22 = svldnt1_mf8_x4 (pn8, x0),\n+\t\t z22 = svldnt1_x4 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_z28:\n+**\tldnt1b\t{z28\\.b(?: - |, )z31\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_z28, svmfloat8x4_t, mfloat8_t,\n+\t\t z28 = svldnt1_mf8_x4 (pn8, x0),\n+\t\t z28 = svldnt1_x4 (pn8, x0))\n+\n+/*\n+** ldnt1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, 
p0\\.b\n+**\tldnt1b\t{z0\\.b(?: - |, )z3\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn0, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn0, x0),\n+\t\t z0 = svldnt1_x4 (pn0, x0))\n+\n+/*\n+** ldnt1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tldnt1b\t{z0\\.b(?: - |, )z3\\.b}, pn\\1/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn7, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn7, x0),\n+\t\t z0 = svldnt1_x4 (pn7, x0))\n+\n+/*\n+** ldnt1_mf8_pn15:\n+**\tldnt1b\t{z0\\.b(?: - |, )z3\\.b}, pn15/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_mf8_pn15, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_mf8_x4 (pn15, x0),\n+\t\t z0 = svldnt1_x4 (pn15, x0))\n+\n+/*\n+** ldnt1_vnum_mf8_0:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_0, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 0),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 1),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 2),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_vnum_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 3),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 3))\n+\n+/*\n+** ldnt1_vnum_mf8_4:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 4),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 4))\n+\n+/*\n+** ldnt1_vnum_mf8_28:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 28),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 28))\n+\n+/*\n+** ldnt1_vnum_mf8_32:\n+**\t[^{]*\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, 32),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, 32))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -1),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -1))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** ldnt1_vnum_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -2),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -2))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** ldnt1_vnum_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -3),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -3))\n+\n+/*\n+** ldnt1_vnum_mf8_m4:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -4),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -4))\n+\n+/*\n+** ldnt1_vnum_mf8_m32:\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -32),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -32))\n+\n+/*\n+** ldnt1_vnum_mf8_m36:\n+**\t[^{]*\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, -36),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, -36))\n+\n+/*\n+** ldnt1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tldnt1b\t{z0\\.b - z3\\.b}, pn8/z, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_LOAD_COUNT (ldnt1_vnum_mf8_x1, svmfloat8x4_t, mfloat8_t,\n+\t\t z0 = svldnt1_vnum_mf8_x4 (pn8, x0, x1),\n+\t\t z0 = svldnt1_vnum_x4 (pn8, x0, x1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/revd_mf8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/revd_mf8.c\nnew file mode 100644\nindex 00000000000..64d08509c16\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/revd_mf8.c\n@@ -0,0 +1,80 @@\n+/* { dg-do assemble { target aarch64_asm_sve2p1_ok } } */\n+/* { dg-do compile { target { ! 
aarch64_asm_sve2p1_ok } } } */\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" } } */\n+\n+#include \"test_sve_acle.h\"\n+\n+#pragma GCC target \"+sve2p1\"\n+\n+/*\n+** revd_mf8_m_tied12:\n+**\trevd\tz0\\.q, p0/m, z0\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_m_tied12, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_m (z0, p0, z0),\n+\t\tz0 = svrevd_m (z0, p0, z0))\n+\n+/*\n+** revd_mf8_m_tied1:\n+**\trevd\tz0\\.q, p0/m, z1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_m_tied1, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_m (z0, p0, z1),\n+\t\tz0 = svrevd_m (z0, p0, z1))\n+\n+/*\n+** revd_mf8_m_tied2:\n+**\tmov\t(z[0-9]+)\\.d, z0\\.d\n+**\tmovprfx\tz0, z1\n+**\trevd\tz0\\.q, p0/m, \\1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_m_tied2, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_m (z1, p0, z0),\n+\t\tz0 = svrevd_m (z1, p0, z0))\n+\n+/*\n+** revd_mf8_m_untied:\n+**\tmovprfx\tz0, z2\n+**\trevd\tz0\\.q, p0/m, z1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_m_untied, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_m (z2, p0, z1),\n+\t\tz0 = svrevd_m (z2, p0, z1))\n+\n+/* Awkward register allocation.  Don't require specific output.  
*/\n+TEST_UNIFORM_Z (revd_mf8_z_tied1, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_z (p0, z0),\n+\t\tz0 = svrevd_z (p0, z0))\n+\n+/*\n+** revd_mf8_z_untied:\n+**\tmovi?\t[vdz]0\\.?(?:[0-9]*[bhsd])?, #?0\n+**\trevd\tz0\\.q, p0/m, z1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_z_untied, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_z (p0, z1),\n+\t\tz0 = svrevd_z (p0, z1))\n+\n+/*\n+** revd_mf8_x_tied1:\n+**\trevd\tz0\\.q, p0/m, z0\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_x_tied1, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_x (p0, z0),\n+\t\tz0 = svrevd_x (p0, z0))\n+\n+/*\n+** revd_mf8_x_untied:\n+**\tmovprfx\tz0, z1\n+**\trevd\tz0\\.q, p0/m, z1\\.q\n+**\tret\n+*/\n+TEST_UNIFORM_Z (revd_mf8_x_untied, svmfloat8_t,\n+\t\tz0 = svrevd_mf8_x (p0, z1),\n+\t\tz0 = svrevd_x (p0, z1))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x2.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x2.c\nnew file mode 100644\nindex 00000000000..489e4fff54d\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x2.c\n@@ -0,0 +1,269 @@\n+/* { dg-do assemble { target aarch64_asm_sve2p1_ok } } */\n+/* { dg-do compile { target { ! aarch64_asm_sve2p1_ok } } } */\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! ilp32 } } } } */\n+\n+#include \"test_sve_acle.h\"\n+\n+#pragma GCC target \"+sve2p1\"\n+#ifdef STREAMING_COMPATIBLE\n+#pragma GCC target \"+sme2\"\n+#endif\n+\n+/*\n+** stnt1_mf8_base:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_base, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0, z0),\n+\t\t  svstnt1 (pn8, x0, z0))\n+\n+/*\n+** stnt1_mf8_index:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_index, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0 + x1, z0),\n+\t\t  svstnt1 (pn8, x0 + x1, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_mf8_1:\n+**\tincb\tx0\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0 + svcntb (), z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb (), z0))\n+\n+/*\n+** stnt1_mf8_2:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0 + svcntb () * 2, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 2, z0))\n+\n+/*\n+** stnt1_mf8_14:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0 + svcntb () * 14, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 14, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0 + svcntb () * 16, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 16, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_mf8_m1:\n+**\tdecb\tx0\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0 - svcntb (), z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb (), z0))\n+\n+/*\n+** stnt1_mf8_m2:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0 - svcntb () * 2, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 2, z0))\n+\n+/*\n+** stnt1_mf8_m16:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0 - svcntb () * 16, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 16, z0))\n+\n+/*\n+** stnt1_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[\\1\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0 - svcntb () * 18, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 18, z0))\n+\n+/*\n+** stnt1_mf8_z17:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tstnt1b\t{z[^\\n]+}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_z17, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0, z17),\n+\t\t  svstnt1 (pn8, x0, z17))\n+\n+/*\n+** stnt1_mf8_z22:\n+**\tstnt1b\t{z22\\.b(?: - |, )z23\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_z22, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0, z22),\n+\t\t  svstnt1 (pn8, x0, z22))\n+\n+/*\n+** stnt1_mf8_z28:\n+**\tstnt1b\t{z28\\.b(?: - |, )z29\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_z28, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn8, x0, z28),\n+\t\t  svstnt1 (pn8, x0, z28))\n+\n+/*\n+** stnt1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, p0\\.b\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_pn0, 
svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn0, x0, z0),\n+\t\t  svstnt1 (pn0, x0, z0))\n+\n+/*\n+** stnt1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn\\1, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_pn7, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn7, x0, z0),\n+\t\t  svstnt1 (pn7, x0, z0))\n+\n+/*\n+** stnt1_mf8_pn15:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn15, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_pn15, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x2 (pn15, x0, z0),\n+\t\t  svstnt1 (pn15, x0, z0))\n+\n+/*\n+** stnt1_vnum_mf8_0:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_0, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, 0, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 0, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_1, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, 1, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 1, z0))\n+\n+/*\n+** stnt1_vnum_mf8_2:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, #2, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_2, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, 2, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 2, z0))\n+\n+/*\n+** stnt1_vnum_mf8_14:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, #14, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_14, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, 14, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 14, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_vnum_mf8_16:\n+**\tincb\tx0, all, mul #16\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_16, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, 16, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 16, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m1, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, -1, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -1, z0))\n+\n+/*\n+** stnt1_vnum_mf8_m2:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, #-2, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m2, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, -2, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -2, z0))\n+\n+/*\n+** stnt1_vnum_mf8_m16:\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, #-16, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m16, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, -16, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -16, z0))\n+\n+/*\n+** stnt1_vnum_mf8_m18:\n+**\taddvl\t(x[0-9]+), x0, #-18\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[\\1\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m18, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, -18, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -18, z0))\n+\n+/*\n+** stnt1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tstnt1b\t{z0\\.b(?: - |, )z1\\.b}, pn8, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_x1, svmfloat8x2_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x2 (pn8, x0, x1, z0),\n+\t\t  svstnt1_vnum (pn8, x0, x1, z0))\ndiff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x4.c 
b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x4.c\nnew file mode 100644\nindex 00000000000..4be364514ab\n--- /dev/null\n+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/stnt1_mf8_x4.c\n@@ -0,0 +1,361 @@\n+/* { dg-do assemble { target aarch64_asm_sve2p1_ok } } */\n+/* { dg-do compile { target { ! aarch64_asm_sve2p1_ok } } } */\n+/* { dg-final { check-function-bodies \"**\" \"\" \"-DCHECK_ASM\" { target { ! ilp32 } } } } */\n+\n+#include \"test_sve_acle.h\"\n+\n+#pragma GCC target \"+sve2p1\"\n+#ifdef STREAMING_COMPATIBLE\n+#pragma GCC target \"+sme2\"\n+#endif\n+\n+/*\n+** stnt1_mf8_base:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_base, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0, z0),\n+\t\t  svstnt1 (pn8, x0, z0))\n+\n+/*\n+** stnt1_mf8_index:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x1\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_index, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + x1, z0),\n+\t\t  svstnt1 (pn8, x0 + x1, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_mf8_1:\n+**\tincb\tx0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb (), z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb (), z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 2, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 2, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 3, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 3, z0))\n+\n+/*\n+** stnt1_mf8_4:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 4, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 4, z0))\n+\n+/*\n+** stnt1_mf8_28:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 28, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 28, z0))\n+\n+/*\n+** stnt1_mf8_32:\n+**\t[^{]*\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 + svcntb () * 32, z0),\n+\t\t  svstnt1 (pn8, x0 + svcntb () * 32, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_mf8_m1:\n+**\tdecb\tx0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb (), z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb (), z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 2, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 2, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 3, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 3, z0))\n+\n+/*\n+** stnt1_mf8_m4:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 4, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 4, z0))\n+\n+/*\n+** stnt1_mf8_m32:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 32, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 32, z0))\n+\n+/*\n+** stnt1_mf8_m36:\n+**\t[^{]*\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0 - svcntb () * 36, z0),\n+\t\t  svstnt1 (pn8, x0 - svcntb () * 36, z0))\n+\n+/*\n+** stnt1_mf8_z17:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tstnt1b\t{z[^\\n]+}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_z17, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0, z17),\n+\t\t  svstnt1 (pn8, x0, z17))\n+\n+/*\n+** stnt1_mf8_z22:\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tmov\t[^\\n]+\n+**\tstnt1b\t{z[^\\n]+}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_z22, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0, z22),\n+\t\t  svstnt1 (pn8, x0, z22))\n+\n+/*\n+** stnt1_mf8_z28:\n+**\tstnt1b\t{z28\\.b - z31\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_z28, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn8, x0, z28),\n+\t\t  svstnt1 (pn8, x0, z28))\n+\n+/*\n+** stnt1_mf8_pn0:\n+**\tmov\tp([89]|1[0-5])\\.b, p0\\.b\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn\\1, 
\\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_pn0, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn0, x0, z0),\n+\t\t  svstnt1 (pn0, x0, z0))\n+\n+/*\n+** stnt1_mf8_pn7:\n+**\tmov\tp([89]|1[0-5])\\.b, p7\\.b\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn\\1, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_pn7, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn7, x0, z0),\n+\t\t  svstnt1 (pn7, x0, z0))\n+\n+/*\n+** stnt1_mf8_pn15:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn15, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_mf8_pn15, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_mf8_x4 (pn15, x0, z0),\n+\t\t  svstnt1 (pn15, x0, z0))\n+\n+/*\n+** stnt1_vnum_mf8_0:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_0, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 0, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 0, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_1:\n+**\tincb\tx0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 1, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 1, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_2:\n+**\tincb\tx0, all, mul #2\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_2, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 2, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 2, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_vnum_mf8_3:\n+**\tincb\tx0, all, mul #3\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_3, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 3, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 3, z0))\n+\n+/*\n+** stnt1_vnum_mf8_4:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #4, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_4, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 4, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 4, z0))\n+\n+/*\n+** stnt1_vnum_mf8_28:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #28, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_28, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 28, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 28, z0))\n+\n+/*\n+** stnt1_vnum_mf8_32:\n+**\t[^{]*\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_32, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, 32, z0),\n+\t\t  svstnt1_vnum (pn8, x0, 32, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_m1:\n+**\tdecb\tx0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -1, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -1, z0))\n+\n+/* Moving the constant into a register would also be OK.  */\n+/*\n+** stnt1_vnum_mf8_m2:\n+**\tdecb\tx0, all, mul #2\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m2, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -2, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -2, z0))\n+\n+/* Moving the constant into a register would also be OK.  
*/\n+/*\n+** stnt1_vnum_mf8_m3:\n+**\tdecb\tx0, all, mul #3\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m3, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -3, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -3, z0))\n+\n+/*\n+** stnt1_vnum_mf8_m4:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #-4, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m4, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -4, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -4, z0))\n+\n+/*\n+** stnt1_vnum_mf8_m32:\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, #-32, mul vl\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m32, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -32, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -32, z0))\n+\n+/*\n+** stnt1_vnum_mf8_m36:\n+**\t[^{]*\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, x[0-9]+\\]\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_m36, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, -36, z0),\n+\t\t  svstnt1_vnum (pn8, x0, -36, z0))\n+\n+/*\n+** stnt1_vnum_mf8_x1:\n+**\tcntb\t(x[0-9]+)\n+** (\n+**\tmadd\t(x[0-9]+), (?:x1, \\1|\\1, x1), x0\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[\\2\\]\n+** |\n+**\tmul\t(x[0-9]+), (?:x1, \\1|\\1, x1)\n+**\tstnt1b\t{z0\\.b - z3\\.b}, pn8, \\[x0, \\3\\]\n+** )\n+**\tret\n+*/\n+TEST_STORE_COUNT (stnt1_vnum_mf8_x1, svmfloat8x4_t, mfloat8_t,\n+\t\t  svstnt1_vnum_mf8_x4 (pn8, x0, x1, z0),\n+\t\t  svstnt1_vnum (pn8, x0, x1, z0))\n","prefixes":["v4","1/8"]}