From patchwork Thu Mar 19 17:59:08 2020
X-Patchwork-Submitter: Srinath Parvathaneni
X-Patchwork-Id: 1258361
From: Srinath Parvathaneni <Srinath.Parvathaneni@arm.com>
Date: Thu, 19 Mar 2020 17:59:08 +0000
To: gcc-patches@gcc.gnu.org
Subject: [PATCH v2][ARM][GCC][6x]: MVE ACLE vaddq intrinsics using arithmetic plus operator.
List-Id: Gcc-patches mailing list

Hello Kyrill,

This patch addresses all the comments in patch version v2.
(version v2) https://gcc.gnu.org/pipermail/gcc-patches/2019-November/534349.html

####

Hello,

This patch supports the following MVE ACLE vaddq intrinsics.  The RTL
patterns for these intrinsics are added using the arithmetic "plus" operator.

vaddq_s8, vaddq_s16, vaddq_s32, vaddq_u8, vaddq_u16, vaddq_u32, vaddq_f16,
vaddq_f32.

Please refer to the M-profile Vector Extension (MVE) intrinsics [1] for more
details.
[1] https://developer.arm.com/architectures/instruction-sets/simd-isas/helium/mve-intrinsics

Regression tested on arm-none-eabi and found no regressions.

Ok for trunk?

Thanks,
Srinath.

gcc/ChangeLog:

2020-03-19  Srinath Parvathaneni
	    Andre Vieira
	    Mihail Ionescu

	* config/arm/arm_mve.h (vaddq_s8): Define macro.
	(vaddq_s16): Likewise.
	(vaddq_s32): Likewise.
	(vaddq_u8): Likewise.
	(vaddq_u16): Likewise.
	(vaddq_u32): Likewise.
	(vaddq_f16): Likewise.
	(vaddq_f32): Likewise.
	(__arm_vaddq_s8): Define intrinsic.
	(__arm_vaddq_s16): Likewise.
	(__arm_vaddq_s32): Likewise.
	(__arm_vaddq_u8): Likewise.
	(__arm_vaddq_u16): Likewise.
	(__arm_vaddq_u32): Likewise.
	(__arm_vaddq_f16): Likewise.
	(__arm_vaddq_f32): Likewise.
	(vaddq): Define polymorphic variant.
	* config/arm/iterators.md (VNIM): Define mode iterator for common
	types Neon, IWMMXT and MVE.
	(VNINOTM): Likewise.
	* config/arm/mve.md (mve_vaddq<mode>): Define RTL pattern.
	(mve_vaddq_f<mode>): Define RTL pattern.
	* config/arm/neon.md (add<mode>3): Rename to addv4hf3 RTL pattern.
	(addv8hf3_neon): Define RTL pattern.
	* config/arm/vec-common.md (add<mode>3): Modify standard add RTL
	pattern to support MVE.
	(addv8hf3): Define standard RTL pattern for MVE and Neon.
	(add<mode>3): Modify existing standard add RTL pattern for Neon
	and IWMMXT.

gcc/testsuite/ChangeLog:

2020-03-19  Srinath Parvathaneni
	    Andre Vieira
	    Mihail Ionescu

	* gcc.target/arm/mve/intrinsics/vaddq_f16.c: New test.
	* gcc.target/arm/mve/intrinsics/vaddq_f32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vaddq_s16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vaddq_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vaddq_s8.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vaddq_u16.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vaddq_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vaddq_u8.c: Likewise.

############### Attachment also inlined for ease of reply ###############

diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h
index 5ea42bd6a5bd98d5c77a0e7da3464ba6b431770b..55c256910bb7f4c616ea592be699f7f4fc3f17f7 100644
--- a/gcc/config/arm/arm_mve.h
+++ b/gcc/config/arm/arm_mve.h
@@ -1898,6 +1898,14 @@ typedef struct { uint8x16_t val[4]; } uint8x16x4_t;
 #define vstrwq_scatter_shifted_offset_p_u32(__base, __offset, __value, __p) __arm_vstrwq_scatter_shifted_offset_p_u32(__base, __offset, __value, __p)
 #define vstrwq_scatter_shifted_offset_s32(__base, __offset, __value) __arm_vstrwq_scatter_shifted_offset_s32(__base, __offset, __value)
 #define vstrwq_scatter_shifted_offset_u32(__base, __offset, __value) __arm_vstrwq_scatter_shifted_offset_u32(__base, __offset, __value)
+#define vaddq_s8(__a, __b) __arm_vaddq_s8(__a, __b)
+#define vaddq_s16(__a, __b) __arm_vaddq_s16(__a, __b)
+#define vaddq_s32(__a, __b) __arm_vaddq_s32(__a, __b)
+#define vaddq_u8(__a, __b) __arm_vaddq_u8(__a, __b)
+#define vaddq_u16(__a, __b) __arm_vaddq_u16(__a, __b)
+#define vaddq_u32(__a, __b) __arm_vaddq_u32(__a, __b)
+#define vaddq_f16(__a, __b) __arm_vaddq_f16(__a, __b)
+#define vaddq_f32(__a, __b) __arm_vaddq_f32(__a, __b)
 #endif
 
 __extension__ extern __inline void
@@ -12341,6 +12349,48 @@ __arm_vstrwq_scatter_shifted_offset_u32 (uint32_t * __base, uint32x4_t __offset,
   __builtin_mve_vstrwq_scatter_shifted_offset_uv4si ((__builtin_neon_si *) __base, __offset, __value);
 }
 
+__extension__ extern __inline int8x16_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vaddq_s8 (int8x16_t __a, int8x16_t __b)
+{
+  return __a + __b;
+}
+
+__extension__ extern __inline int16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vaddq_s16 (int16x8_t __a, int16x8_t __b)
+{
+  return __a + __b;
+}
+
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vaddq_s32 (int32x4_t __a, int32x4_t __b)
+{
+  return __a + __b;
+}
+
+__extension__ extern __inline uint8x16_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vaddq_u8 (uint8x16_t __a, uint8x16_t __b)
+{
+  return __a + __b;
+}
+
+__extension__ extern __inline uint16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vaddq_u16 (uint16x8_t __a, uint16x8_t __b)
+{
+  return __a + __b;
+}
+
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vaddq_u32 (uint32x4_t __a, uint32x4_t __b)
+{
+  return __a + __b;
+}
+
 #if (__ARM_FEATURE_MVE & 2) /* MVE Floating point.  */
 
 __extension__ extern __inline void
@@ -14707,6 +14757,20 @@ __arm_vstrwq_scatter_shifted_offset_p_f32 (float32_t * __base, uint32x4_t __offs
   __builtin_mve_vstrwq_scatter_shifted_offset_p_fv4sf (__base, __offset, __value, __p);
 }
 
+__extension__ extern __inline float16x8_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vaddq_f16 (float16x8_t __a, float16x8_t __b)
+{
+  return __a + __b;
+}
+
+__extension__ extern __inline float32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+__arm_vaddq_f32 (float32x4_t __a, float32x4_t __b)
+{
+  return __a + __b;
+}
+
 #endif
 
 enum {
@@ -15186,6 +15250,8 @@ extern void *__ARM_undef;
   int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vaddq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \
   int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vaddq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \
   int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vaddq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)), \
+  int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vaddq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \
+  int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vaddq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)), \
   int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8_t]: __arm_vaddq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8_t)), \
   int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16_t]: __arm_vaddq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16_t)), \
   int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32_t]: __arm_vaddq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32_t)), \
diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md
index 5c1a11bf7dee7590d668e7ec5e3b068789b3b3db..f3cbc0d03564ef8866226f836a27ed6051353f5d 100644
--- a/gcc/config/arm/iterators.md
+++ b/gcc/config/arm/iterators.md
@@ -66,6 +66,14 @@
 ;; Integer and float modes supported by Neon and IWMMXT.
 (define_mode_iterator VALL [V2DI V2SI V4HI V8QI V2SF V4SI V8HI V16QI V4SF])
 
+;; Integer and float modes supported by Neon, IWMMXT and MVE, used by
+;; arithmetic expand patterns.
+(define_mode_iterator VNIM [V16QI V8HI V4SI V4SF])
+
+;; Integer and float modes supported by Neon and IWMMXT but not MVE, used by
+;; arithmetic expand patterns.
+(define_mode_iterator VNINOTM [V2SI V4HI V8QI V2SF V2DI])
+
 ;; Integer and float modes supported by Neon, IWMMXT and MVE.
 (define_mode_iterator VNIM1 [V16QI V8HI V4SI V4SF V2DI])
diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md
index 5667882e941bac30d5e89b0ff866948d06bd3d5a..7578b8070282a3633d1e6f5fde5ba855ff8e553c 100644
--- a/gcc/config/arm/mve.md
+++ b/gcc/config/arm/mve.md
@@ -9643,3 +9643,31 @@
   return "";
 }
   [(set_attr "length" "4")])
+
+;;
+;; [vaddq_s, vaddq_u])
+;;
+(define_insn "mve_vaddq<mode>"
+  [
+   (set (match_operand:MVE_2 0 "s_register_operand" "=w")
+	(plus:MVE_2 (match_operand:MVE_2 1 "s_register_operand" "w")
+		    (match_operand:MVE_2 2 "s_register_operand" "w")))
+  ]
+  "TARGET_HAVE_MVE"
+  "vadd.i%#<V_sz_elem> %q0, %q1, %q2"
+  [(set_attr "type" "mve_move")
+])
+
+;;
+;; [vaddq_f])
+;;
+(define_insn "mve_vaddq_f<mode>"
+  [
+   (set (match_operand:MVE_0 0 "s_register_operand" "=w")
+	(plus:MVE_0 (match_operand:MVE_0 1 "s_register_operand" "w")
+		    (match_operand:MVE_0 2 "s_register_operand" "w")))
+  ]
+  "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT"
+  "vadd.f%#<V_sz_elem> %q0, %q1, %q2"
+  [(set_attr "type" "mve_move")
+])
diff --git a/gcc/config/arm/neon.md b/gcc/config/arm/neon.md
index fbfeef233f38831a5cb256622625879d15209431..272e6c1e7cfc4c42065d1d50131ef49d89052d91 100644
--- a/gcc/config/arm/neon.md
+++ b/gcc/config/arm/neon.md
@@ -519,18 +519,30 @@
 ;; As with SFmode, full support for HFmode vector arithmetic is only available
 ;; when flag-unsafe-math-optimizations is enabled.
 
-(define_insn "add<mode>3"
+;; Add pattern with modes V8HF and V4HF is split into separate patterns to add
+;; support for standard pattern addv8hf3 in MVE.  Following pattern is called
+;; from "addv8hf3" standard pattern inside vec-common.md file.
+
+(define_insn "addv8hf3_neon"
   [(set
-    (match_operand:VH 0 "s_register_operand" "=w")
-    (plus:VH
-     (match_operand:VH 1 "s_register_operand" "w")
-     (match_operand:VH 2 "s_register_operand" "w")))]
+    (match_operand:V8HF 0 "s_register_operand" "=w")
+    (plus:V8HF
+     (match_operand:V8HF 1 "s_register_operand" "w")
+     (match_operand:V8HF 2 "s_register_operand" "w")))]
  "TARGET_NEON_FP16INST && flag_unsafe_math_optimizations"
- "vadd.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "type")
-   (if_then_else (match_test "<Is_float_mode>")
-    (const_string "neon_fp_addsub_s<q>")
-    (const_string "neon_add<q>")))]
+ "vadd.f16\t%0, %1, %2"
+ [(set_attr "type" "neon_fp_addsub_s_q")]
+)
+
+(define_insn "addv4hf3"
+  [(set
+    (match_operand:V4HF 0 "s_register_operand" "=w")
+    (plus:V4HF
+     (match_operand:V4HF 1 "s_register_operand" "w")
+     (match_operand:V4HF 2 "s_register_operand" "w")))]
+ "TARGET_NEON_FP16INST && flag_unsafe_math_optimizations"
+ "vadd.f16\t%0, %1, %2"
+ [(set_attr "type" "neon_fp_addsub_s_q")]
 )
 
 (define_insn "add<mode>3_fp16"
diff --git a/gcc/config/arm/vec-common.md b/gcc/config/arm/vec-common.md
index 916e4914a6267f928c3d3229cb9907e6fb79b222..786daa628510a5def50530c5b459bece45a0007c 100644
--- a/gcc/config/arm/vec-common.md
+++ b/gcc/config/arm/vec-common.md
@@ -77,19 +77,51 @@
     }
 })
 
+;; Vector arithmetic.  Expanders are blank, then unnamed insns implement
+;; patterns separately for Neon, IWMMXT and MVE.
+
+(define_expand "add<mode>3"
+  [(set (match_operand:VNIM 0 "s_register_operand")
+	(plus:VNIM (match_operand:VNIM 1 "s_register_operand")
+		   (match_operand:VNIM 2 "s_register_operand")))]
+  "(TARGET_NEON && ((<MODE>mode != V2SFmode && <MODE>mode != V4SFmode)
+		    || flag_unsafe_math_optimizations))
+   || (TARGET_REALLY_IWMMXT && VALID_IWMMXT_REG_MODE (<MODE>mode))
+   || (TARGET_HAVE_MVE && VALID_MVE_SI_MODE(<MODE>mode))
+   || (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE(<MODE>mode))"
+{
+})
+
+;; Vector arithmetic.  Expanders are blank, then unnamed insns implement
+;; patterns separately for Neon and MVE.
+
+(define_expand "addv8hf3"
+  [(set (match_operand:V8HF 0 "s_register_operand")
+	(plus:V8HF (match_operand:V8HF 1 "s_register_operand")
+		   (match_operand:V8HF 2 "s_register_operand")))]
+  "(TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE(V8HFmode))
+   || (TARGET_NEON_FP16INST && flag_unsafe_math_optimizations)"
+{
+  if (TARGET_NEON_FP16INST && flag_unsafe_math_optimizations)
+    emit_insn (gen_addv8hf3_neon (operands[0], operands[1], operands[2]));
+})
+
+;; Vector arithmetic.  Expanders are blank, then unnamed insns implement
+;; patterns separately for Neon and IWMMXT.
+
+(define_expand "add<mode>3"
+  [(set (match_operand:VNINOTM 0 "s_register_operand")
+	(plus:VNINOTM (match_operand:VNINOTM 1 "s_register_operand")
+		      (match_operand:VNINOTM 2 "s_register_operand")))]
+  "(TARGET_NEON && ((<MODE>mode != V2SFmode && <MODE>mode != V4SFmode)
+		    || flag_unsafe_math_optimizations))
+   || (TARGET_REALLY_IWMMXT && VALID_IWMMXT_REG_MODE (<MODE>mode))"
+{
+})
+
 ;; Vector arithmetic.  Expanders are blank, then unnamed insns implement
 ;; patterns separately for IWMMXT and Neon.
 
-(define_expand "add<mode>3"
-  [(set (match_operand:VALL 0 "s_register_operand")
-	(plus:VALL (match_operand:VALL 1 "s_register_operand")
-		   (match_operand:VALL 2 "s_register_operand")))]
-  "(TARGET_NEON && ((<MODE>mode != V2SFmode && <MODE>mode != V4SFmode)
-		    || flag_unsafe_math_optimizations))
-   || (TARGET_REALLY_IWMMXT && VALID_IWMMXT_REG_MODE (<MODE>mode))"
-{
-})
-
 (define_expand "sub<mode>3"
   [(set (match_operand:VALL 0 "s_register_operand")
 	(minus:VALL (match_operand:VALL 1 "s_register_operand")
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f16.c
new file mode 100644
index 0000000000000000000000000000000000000000..53b84d59f85ca359df68e906fc4c1e3599698a2e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f16.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */
+/* { dg-add-options arm_v8_1m_mve_fp } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+float16x8_t
+foo (float16x8_t a, float16x8_t b)
+{
+  return vaddq_f16 (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.f16" } } */
+
+float16x8_t
+foo1 (float16x8_t a, float16x8_t b)
+{
+  return vaddq (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.f16" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f32.c
new file mode 100644
index 0000000000000000000000000000000000000000..9bb7d1c0ecaf4c22303a2a89a41dd61c9fe6352e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f32.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */
+/* { dg-add-options arm_v8_1m_mve_fp } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+float32x4_t
+foo (float32x4_t a, float32x4_t b)
+{
+  return vaddq_f32 (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.f32" } } */
+
+float32x4_t
+foo1 (float32x4_t a, float32x4_t b)
+{
+  return vaddq (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.f32" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s16.c
new file mode 100644
index 0000000000000000000000000000000000000000..885473c9dfe6bf92e167cb64bd582b8f0f7b3a6a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s16.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int16x8_t
+foo (int16x8_t a, int16x8_t b)
+{
+  return vaddq_s16 (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i16" } } */
+
+int16x8_t
+foo1 (int16x8_t a, int16x8_t b)
+{
+  return vaddq (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i16" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s32.c
new file mode 100644
index 0000000000000000000000000000000000000000..90ea50198176334b73a459a8a5ae1fc6db558cb0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s32.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int32x4_t
+foo (int32x4_t a, int32x4_t b)
+{
+  return vaddq_s32 (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i32" } } */
+
+int32x4_t
+foo1 (int32x4_t a, int32x4_t b)
+{
+  return vaddq (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i32" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s8.c
new file mode 100644
index 0000000000000000000000000000000000000000..dbde92affe54d33939208a81b5f5edd4502dd5bd
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s8.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+int8x16_t
+foo (int8x16_t a, int8x16_t b)
+{
+  return vaddq_s8 (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i8" } } */
+
+int8x16_t
+foo1 (int8x16_t a, int8x16_t b)
+{
+  return vaddq (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i8" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u16.c
new file mode 100644
index 0000000000000000000000000000000000000000..bc966732cdd6481d5a4cef83cc4cea2b6e91e4f5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u16.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint16x8_t
+foo (uint16x8_t a, uint16x8_t b)
+{
+  return vaddq_u16 (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i16" } } */
+
+uint16x8_t
+foo1 (uint16x8_t a, uint16x8_t b)
+{
+  return vaddq (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i16" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u32.c
new file mode 100644
index 0000000000000000000000000000000000000000..ed262c29406ab01f60f7e171b27af3ae3f5c2f93
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u32.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-add-options arm_v8_1m_mve } */
+/* { dg-additional-options "-O2" } */
+
+#include "arm_mve.h"
+
+uint32x4_t
+foo (uint32x4_t a, uint32x4_t b)
+{
+  return vaddq_u32 (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i32" } } */
+
+uint32x4_t
+foo1 (uint32x4_t a, uint32x4_t b)
+{
+  return vaddq (a, b);
+}
+
+/* { dg-final { scan-assembler "vadd.i32" } } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u8.c
b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u8.c new file mode 100644 index 0000000000000000000000000000000000000000..b12e657b7af2f2ed947eb28a6d0e5dcdfde862b0 --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u8.c @@ -0,0 +1,22 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target arm_v8_1m_mve_ok } */ +/* { dg-add-options arm_v8_1m_mve } */ +/* { dg-additional-options "-O2" } */ + +#include "arm_mve.h" + +uint8x16_t +foo (uint8x16_t a, uint8x16_t b) +{ + return vaddq_u8 (a, b); +} + +/* { dg-final { scan-assembler "vadd.i8" } } */ + +uint8x16_t +foo1 (uint8x16_t a, uint8x16_t b) +{ + return vaddq (a, b); +} + +/* { dg-final { scan-assembler "vadd.i8" } } */ diff --git a/gcc/config/arm/arm_mve.h b/gcc/config/arm/arm_mve.h index 5ea42bd6a5bd98d5c77a0e7da3464ba6b431770b..55c256910bb7f4c616ea592be699f7f4fc3f17f7 100644 --- a/gcc/config/arm/arm_mve.h +++ b/gcc/config/arm/arm_mve.h @@ -1898,6 +1898,14 @@ typedef struct { uint8x16_t val[4]; } uint8x16x4_t; #define vstrwq_scatter_shifted_offset_p_u32(__base, __offset, __value, __p) __arm_vstrwq_scatter_shifted_offset_p_u32(__base, __offset, __value, __p) #define vstrwq_scatter_shifted_offset_s32(__base, __offset, __value) __arm_vstrwq_scatter_shifted_offset_s32(__base, __offset, __value) #define vstrwq_scatter_shifted_offset_u32(__base, __offset, __value) __arm_vstrwq_scatter_shifted_offset_u32(__base, __offset, __value) +#define vaddq_s8(__a, __b) __arm_vaddq_s8(__a, __b) +#define vaddq_s16(__a, __b) __arm_vaddq_s16(__a, __b) +#define vaddq_s32(__a, __b) __arm_vaddq_s32(__a, __b) +#define vaddq_u8(__a, __b) __arm_vaddq_u8(__a, __b) +#define vaddq_u16(__a, __b) __arm_vaddq_u16(__a, __b) +#define vaddq_u32(__a, __b) __arm_vaddq_u32(__a, __b) +#define vaddq_f16(__a, __b) __arm_vaddq_f16(__a, __b) +#define vaddq_f32(__a, __b) __arm_vaddq_f32(__a, __b) #endif __extension__ extern __inline void @@ -12341,6 +12349,48 @@ __arm_vstrwq_scatter_shifted_offset_u32 (uint32_t * __base, uint32x4_t 
__offset, __builtin_mve_vstrwq_scatter_shifted_offset_uv4si ((__builtin_neon_si *) __base, __offset, __value); } +__extension__ extern __inline int8x16_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +__arm_vaddq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __a + __b; +} + +__extension__ extern __inline int16x8_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +__arm_vaddq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __a + __b; +} + +__extension__ extern __inline int32x4_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +__arm_vaddq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __a + __b; +} + +__extension__ extern __inline uint8x16_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +__arm_vaddq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return __a + __b; +} + +__extension__ extern __inline uint16x8_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +__arm_vaddq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return __a + __b; +} + +__extension__ extern __inline uint32x4_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +__arm_vaddq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return __a + __b; +} + #if (__ARM_FEATURE_MVE & 2) /* MVE Floating point. 
*/ __extension__ extern __inline void @@ -14707,6 +14757,20 @@ __arm_vstrwq_scatter_shifted_offset_p_f32 (float32_t * __base, uint32x4_t __offs __builtin_mve_vstrwq_scatter_shifted_offset_p_fv4sf (__base, __offset, __value, __p); } +__extension__ extern __inline float16x8_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +__arm_vaddq_f16 (float16x8_t __a, float16x8_t __b) +{ + return __a + __b; +} + +__extension__ extern __inline float32x4_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +__arm_vaddq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __a + __b; +} + #endif enum { @@ -15186,6 +15250,8 @@ extern void *__ARM_undef; int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8x16_t]: __arm_vaddq_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8x16_t)), \ int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16x8_t]: __arm_vaddq_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16x8_t)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32x4_t]: __arm_vaddq_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32x4_t)), \ + int (*)[__ARM_mve_type_float16x8_t][__ARM_mve_type_float16x8_t]: __arm_vaddq_f16 (__ARM_mve_coerce(__p0, float16x8_t), __ARM_mve_coerce(__p1, float16x8_t)), \ + int (*)[__ARM_mve_type_float32x4_t][__ARM_mve_type_float32x4_t]: __arm_vaddq_f32 (__ARM_mve_coerce(__p0, float32x4_t), __ARM_mve_coerce(__p1, float32x4_t)), \ int (*)[__ARM_mve_type_uint8x16_t][__ARM_mve_type_uint8_t]: __arm_vaddq_n_u8 (__ARM_mve_coerce(__p0, uint8x16_t), __ARM_mve_coerce(__p1, uint8_t)), \ int (*)[__ARM_mve_type_uint16x8_t][__ARM_mve_type_uint16_t]: __arm_vaddq_n_u16 (__ARM_mve_coerce(__p0, uint16x8_t), __ARM_mve_coerce(__p1, uint16_t)), \ int (*)[__ARM_mve_type_uint32x4_t][__ARM_mve_type_uint32_t]: __arm_vaddq_n_u32 (__ARM_mve_coerce(__p0, uint32x4_t), __ARM_mve_coerce(__p1, uint32_t)), \ diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md index
5c1a11bf7dee7590d668e7ec5e3b068789b3b3db..f3cbc0d03564ef8866226f836a27ed6051353f5d 100644 --- a/gcc/config/arm/iterators.md +++ b/gcc/config/arm/iterators.md @@ -66,6 +66,14 @@ ;; Integer and float modes supported by Neon and IWMMXT. (define_mode_iterator VALL [V2DI V2SI V4HI V8QI V2SF V4SI V8HI V16QI V4SF]) +;; Integer and float modes supported by Neon, IWMMXT and MVE, used by +;; arithmetic expand patterns. +(define_mode_iterator VNIM [V16QI V8HI V4SI V4SF]) + +;; Integer and float modes supported by Neon and IWMMXT but not MVE, used by +;; arithmetic expand patterns. +(define_mode_iterator VNINOTM [V2SI V4HI V8QI V2SF V2DI]) + ;; Integer and float modes supported by Neon, IWMMXT and MVE. (define_mode_iterator VNIM1 [V16QI V8HI V4SI V4SF V2DI]) diff --git a/gcc/config/arm/mve.md b/gcc/config/arm/mve.md index 5667882e941bac30d5e89b0ff866948d06bd3d5a..7578b8070282a3633d1e6f5fde5ba855ff8e553c 100644 --- a/gcc/config/arm/mve.md +++ b/gcc/config/arm/mve.md @@ -9643,3 +9643,31 @@ return ""; } [(set_attr "length" "4")]) + +;; +;; [vaddq_s, vaddq_u]) +;; +(define_insn "mve_vaddq<mode>" + [ + (set (match_operand:MVE_2 0 "s_register_operand" "=w") + (plus:MVE_2 (match_operand:MVE_2 1 "s_register_operand" "w") + (match_operand:MVE_2 2 "s_register_operand" "w"))) + ] + "TARGET_HAVE_MVE" + "vadd.i%#<V_sz_elem> %q0, %q1, %q2" + [(set_attr "type" "mve_move") +]) + +;; +;; [vaddq_f]) +;; +(define_insn "mve_vaddq_f<mode>" + [ + (set (match_operand:MVE_0 0 "s_register_operand" "=w") + (plus:MVE_0 (match_operand:MVE_0 1 "s_register_operand" "w") + (match_operand:MVE_0 2 "s_register_operand" "w"))) + ] + "TARGET_HAVE_MVE && TARGET_HAVE_MVE_FLOAT" + "vadd.f%#<V_sz_elem> %q0, %q1, %q2" + [(set_attr "type" "mve_move") +]) diff --git a/gcc/config/arm/neon.md b/gcc/config/arm/neon.md index fbfeef233f38831a5cb256622625879d15209431..272e6c1e7cfc4c42065d1d50131ef49d89052d91 100644 --- a/gcc/config/arm/neon.md +++ b/gcc/config/arm/neon.md @@ -519,18 +519,30 @@ ;; As with SFmode, full support for HFmode vector arithmetic is
only available ;; when flag-unsafe-math-optimizations is enabled. -(define_insn "add<mode>3" +;; The add pattern with modes V8HF and V4HF is split into separate patterns to +;; add support for the standard pattern addv8hf3 in MVE. The following pattern +;; is called from the "addv8hf3" standard pattern inside vec-common.md. + +(define_insn "addv8hf3_neon" [(set - (match_operand:VH 0 "s_register_operand" "=w") - (plus:VH - (match_operand:VH 1 "s_register_operand" "w") - (match_operand:VH 2 "s_register_operand" "w")))] + (match_operand:V8HF 0 "s_register_operand" "=w") + (plus:V8HF + (match_operand:V8HF 1 "s_register_operand" "w") + (match_operand:V8HF 2 "s_register_operand" "w")))] "TARGET_NEON_FP16INST && flag_unsafe_math_optimizations" - "vadd.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2" - [(set (attr "type") - (if_then_else (match_test "<Is_float_mode>") - (const_string "neon_fp_addsub_s<q>") - (const_string "neon_add<q>")))] + "vadd.f16\t%<V_reg>0, %<V_reg>1, %<V_reg>2" + [(set_attr "type" "neon_fp_addsub_s_q")] +) + +(define_insn "addv4hf3" + [(set + (match_operand:V4HF 0 "s_register_operand" "=w") + (plus:V4HF + (match_operand:V4HF 1 "s_register_operand" "w") + (match_operand:V4HF 2 "s_register_operand" "w")))] + "TARGET_NEON_FP16INST && flag_unsafe_math_optimizations" + "vadd.f16\t%<V_reg>0, %<V_reg>1, %<V_reg>2" + [(set_attr "type" "neon_fp_addsub_s_q")] ) (define_insn "add<mode>3_fp16" diff --git a/gcc/config/arm/vec-common.md b/gcc/config/arm/vec-common.md index 916e4914a6267f928c3d3229cb9907e6fb79b222..786daa628510a5def50530c5b459bece45a0007c 100644 --- a/gcc/config/arm/vec-common.md +++ b/gcc/config/arm/vec-common.md @@ -77,19 +77,51 @@ } }) +;; Vector arithmetic. Expanders are blank, then unnamed insns implement +;; patterns separately for Neon, IWMMXT and MVE.
+ +(define_expand "add<mode>3" + [(set (match_operand:VNIM 0 "s_register_operand") + (plus:VNIM (match_operand:VNIM 1 "s_register_operand") + (match_operand:VNIM 2 "s_register_operand")))] + "(TARGET_NEON && ((<MODE>mode != V2SFmode && <MODE>mode != V4SFmode) + || flag_unsafe_math_optimizations)) + || (TARGET_REALLY_IWMMXT && VALID_IWMMXT_REG_MODE (<MODE>mode)) + || (TARGET_HAVE_MVE && VALID_MVE_SI_MODE(<MODE>mode)) + || (TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE(<MODE>mode))" +{ +}) + +;; Vector arithmetic. Expanders are blank, then unnamed insns implement +;; patterns separately for Neon and MVE. + +(define_expand "addv8hf3" + [(set (match_operand:V8HF 0 "s_register_operand") + (plus:V8HF (match_operand:V8HF 1 "s_register_operand") + (match_operand:V8HF 2 "s_register_operand")))] + "(TARGET_HAVE_MVE_FLOAT && VALID_MVE_SF_MODE(V8HFmode)) + || (TARGET_NEON_FP16INST && flag_unsafe_math_optimizations)" +{ + if (TARGET_NEON_FP16INST && flag_unsafe_math_optimizations) + emit_insn (gen_addv8hf3_neon (operands[0], operands[1], operands[2])); +}) + +;; Vector arithmetic. Expanders are blank, then unnamed insns implement +;; patterns separately for Neon and IWMMXT. + +(define_expand "add<mode>3" + [(set (match_operand:VNINOTM 0 "s_register_operand") + (plus:VNINOTM (match_operand:VNINOTM 1 "s_register_operand") + (match_operand:VNINOTM 2 "s_register_operand")))] + "(TARGET_NEON && ((<MODE>mode != V2SFmode && <MODE>mode != V4SFmode) + || flag_unsafe_math_optimizations)) + || (TARGET_REALLY_IWMMXT && VALID_IWMMXT_REG_MODE (<MODE>mode))" +{ +}) + ;; Vector arithmetic. Expanders are blank, then unnamed insns implement ;; patterns separately for IWMMXT and Neon.
-(define_expand "add<mode>3" - [(set (match_operand:VALL 0 "s_register_operand") - (plus:VALL (match_operand:VALL 1 "s_register_operand") - (match_operand:VALL 2 "s_register_operand")))] - "(TARGET_NEON && ((<MODE>mode != V2SFmode && <MODE>mode != V4SFmode) - || flag_unsafe_math_optimizations)) - || (TARGET_REALLY_IWMMXT && VALID_IWMMXT_REG_MODE (<MODE>mode))" -{ -}) - (define_expand "sub<mode>3" [(set (match_operand:VALL 0 "s_register_operand") (minus:VALL (match_operand:VALL 1 "s_register_operand") diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f16.c new file mode 100644 index 0000000000000000000000000000000000000000..53b84d59f85ca359df68e906fc4c1e3599698a2e --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f16.c @@ -0,0 +1,22 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */ +/* { dg-add-options arm_v8_1m_mve_fp } */ +/* { dg-additional-options "-O2" } */ + +#include "arm_mve.h" + +float16x8_t +foo (float16x8_t a, float16x8_t b) +{ + return vaddq_f16 (a, b); +} + +/* { dg-final { scan-assembler "vadd.f16" } } */ + +float16x8_t +foo1 (float16x8_t a, float16x8_t b) +{ + return vaddq (a, b); +} + +/* { dg-final { scan-assembler "vadd.f16" } } */ diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f32.c new file mode 100644 index 0000000000000000000000000000000000000000..9bb7d1c0ecaf4c22303a2a89a41dd61c9fe6352e --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_f32.c @@ -0,0 +1,22 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target arm_v8_1m_mve_fp_ok } */ +/* { dg-add-options arm_v8_1m_mve_fp } */ +/* { dg-additional-options "-O2" } */ + +#include "arm_mve.h" + +float32x4_t +foo (float32x4_t a, float32x4_t b) +{ + return vaddq_f32 (a, b); +} + +/* { dg-final { scan-assembler "vadd.f32" } } */ + +float32x4_t +foo1 (float32x4_t a, float32x4_t b) +{ + return vaddq (a, b);
+} + +/* { dg-final { scan-assembler "vadd.f32" } } */ diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s16.c new file mode 100644 index 0000000000000000000000000000000000000000..885473c9dfe6bf92e167cb64bd582b8f0f7b3a6a --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s16.c @@ -0,0 +1,22 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target arm_v8_1m_mve_ok } */ +/* { dg-add-options arm_v8_1m_mve } */ +/* { dg-additional-options "-O2" } */ + +#include "arm_mve.h" + +int16x8_t +foo (int16x8_t a, int16x8_t b) +{ + return vaddq_s16 (a, b); +} + +/* { dg-final { scan-assembler "vadd.i16" } } */ + +int16x8_t +foo1 (int16x8_t a, int16x8_t b) +{ + return vaddq (a, b); +} + +/* { dg-final { scan-assembler "vadd.i16" } } */ diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s32.c new file mode 100644 index 0000000000000000000000000000000000000000..90ea50198176334b73a459a8a5ae1fc6db558cb0 --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s32.c @@ -0,0 +1,22 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target arm_v8_1m_mve_ok } */ +/* { dg-add-options arm_v8_1m_mve } */ +/* { dg-additional-options "-O2" } */ + +#include "arm_mve.h" + +int32x4_t +foo (int32x4_t a, int32x4_t b) +{ + return vaddq_s32 (a, b); +} + +/* { dg-final { scan-assembler "vadd.i32" } } */ + +int32x4_t +foo1 (int32x4_t a, int32x4_t b) +{ + return vaddq (a, b); +} + +/* { dg-final { scan-assembler "vadd.i32" } } */ diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s8.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s8.c new file mode 100644 index 0000000000000000000000000000000000000000..dbde92affe54d33939208a81b5f5edd4502dd5bd --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_s8.c @@ -0,0 +1,22 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target arm_v8_1m_mve_ok 
} */ +/* { dg-add-options arm_v8_1m_mve } */ +/* { dg-additional-options "-O2" } */ + +#include "arm_mve.h" + +int8x16_t +foo (int8x16_t a, int8x16_t b) +{ + return vaddq_s8 (a, b); +} + +/* { dg-final { scan-assembler "vadd.i8" } } */ + +int8x16_t +foo1 (int8x16_t a, int8x16_t b) +{ + return vaddq (a, b); +} + +/* { dg-final { scan-assembler "vadd.i8" } } */ diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u16.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u16.c new file mode 100644 index 0000000000000000000000000000000000000000..bc966732cdd6481d5a4cef83cc4cea2b6e91e4f5 --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u16.c @@ -0,0 +1,22 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target arm_v8_1m_mve_ok } */ +/* { dg-add-options arm_v8_1m_mve } */ +/* { dg-additional-options "-O2" } */ + +#include "arm_mve.h" + +uint16x8_t +foo (uint16x8_t a, uint16x8_t b) +{ + return vaddq_u16 (a, b); +} + +/* { dg-final { scan-assembler "vadd.i16" } } */ + +uint16x8_t +foo1 (uint16x8_t a, uint16x8_t b) +{ + return vaddq (a, b); +} + +/* { dg-final { scan-assembler "vadd.i16" } } */ diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u32.c b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u32.c new file mode 100644 index 0000000000000000000000000000000000000000..ed262c29406ab01f60f7e171b27af3ae3f5c2f93 --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u32.c @@ -0,0 +1,22 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target arm_v8_1m_mve_ok } */ +/* { dg-add-options arm_v8_1m_mve } */ +/* { dg-additional-options "-O2" } */ + +#include "arm_mve.h" + +uint32x4_t +foo (uint32x4_t a, uint32x4_t b) +{ + return vaddq_u32 (a, b); +} + +/* { dg-final { scan-assembler "vadd.i32" } } */ + +uint32x4_t +foo1 (uint32x4_t a, uint32x4_t b) +{ + return vaddq (a, b); +} + +/* { dg-final { scan-assembler "vadd.i32" } } */ diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u8.c 
b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u8.c new file mode 100644 index 0000000000000000000000000000000000000000..b12e657b7af2f2ed947eb28a6d0e5dcdfde862b0 --- /dev/null +++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/vaddq_u8.c @@ -0,0 +1,22 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target arm_v8_1m_mve_ok } */ +/* { dg-add-options arm_v8_1m_mve } */ +/* { dg-additional-options "-O2" } */ + +#include "arm_mve.h" + +uint8x16_t +foo (uint8x16_t a, uint8x16_t b) +{ + return vaddq_u8 (a, b); +} + +/* { dg-final { scan-assembler "vadd.i8" } } */ + +uint8x16_t +foo1 (uint8x16_t a, uint8x16_t b) +{ + return vaddq (a, b); +} + +/* { dg-final { scan-assembler "vadd.i8" } } */