From patchwork Fri Mar 6 15:03:00 2020
X-Patchwork-Submitter: Wilco Dijkstra
X-Patchwork-Id: 1250376
From: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
To: GCC Patches <gcc-patches@gcc.gnu.org>
CC: Kyrylo Tkachov, Richard Sandiford, Richard Earnshaw
Subject: [PATCH][AArch64] Use intrinsics for widening multiplies (PR91598)
Date: Fri, 6 Mar 2020 15:03:00 +0000
Message-ID:

Inline assembler instructions have no latency information, and the scheduler
does not attempt to schedule them at all - it does not even honor the
latencies of their source operands.  As a result, SIMD intrinsics that are
implemented using inline assembler perform very poorly, particularly on
in-order cores.  Fix this by adding new patterns and intrinsics for widening
multiplies, which gives a 63% speedup for the example in the PR and fixes
the performance regression.

Passes regress & bootstrap.
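For illustration only (a sketch in the spirit of the loop in the PR, not the
exact testcase - the function and its name are hypothetical), consider a
widening multiply-accumulate kernel.  Previously each vmlal_lane_s16
expanded to an opaque asm block that the scheduler could not move; with this
patch it expands to a normal smlal pattern with latency information:

#include <arm_neon.h>

/* Hypothetical example: accumulate a[i..i+3] * coeff[0] with widening.
   n is assumed to be a multiple of 4.  */
int32x4_t
dot_lane (int32x4_t acc, const int16_t *a, int16x4_t coeff, int n)
{
  for (int i = 0; i < n; i += 4)
    acc = vmlal_lane_s16 (acc, vld1_s16 (a + i), coeff, 0);
  return acc;
}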
ChangeLog:
2020-03-06  Wilco Dijkstra  <Wilco.Dijkstra@arm.com>

	PR target/91598
	* config/aarch64/aarch64-builtins.c (TYPES_TERNOPU_LANE): Add define.
	* config/aarch64/aarch64-simd.md (aarch64_vec_<su>mult_lane<Qlane>):
	Add new insn for widening lane mul.
	(aarch64_vec_<su>mlal_lane<Qlane>): Likewise.
	* config/aarch64/aarch64-simd-builtins.def: Add intrinsics.
	* config/aarch64/arm_neon.h (vmlal_lane_s16): Expand using intrinsics
	rather than inline asm.
	(vmlal_lane_u16): Likewise.
	(vmlal_lane_s32): Likewise.
	(vmlal_lane_u32): Likewise.
	(vmlal_laneq_s16): Likewise.
	(vmlal_laneq_u16): Likewise.
	(vmlal_laneq_s32): Likewise.
	(vmlal_laneq_u32): Likewise.
	(vmull_lane_s16): Likewise.
	(vmull_lane_u16): Likewise.
	(vmull_lane_s32): Likewise.
	(vmull_lane_u32): Likewise.
	(vmull_laneq_s16): Likewise.
	(vmull_laneq_u16): Likewise.
	(vmull_laneq_s32): Likewise.
	(vmull_laneq_u32): Likewise.
	* config/aarch64/iterators.md (Vtype2): Add new iterator for lane mul.
	(Qlane): Likewise.

diff --git a/gcc/config/aarch64/aarch64-builtins.c b/gcc/config/aarch64/aarch64-builtins.c
index 9c9c6d86ae29fcbcf42e84408c5e94990fed8348..5744e68ea08722dcc387254f44408eb0fd3ffe6e 100644
--- a/gcc/config/aarch64/aarch64-builtins.c
+++ b/gcc/config/aarch64/aarch64-builtins.c
@@ -175,6 +175,11 @@ aarch64_types_ternopu_qualifiers[SIMD_MAX_BUILTIN_ARGS]
       qualifier_unsigned, qualifier_unsigned };
 #define TYPES_TERNOPU (aarch64_types_ternopu_qualifiers)
 static enum aarch64_type_qualifiers
+aarch64_types_ternopu_lane_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_unsigned, qualifier_unsigned,
+      qualifier_unsigned, qualifier_lane_index };
+#define TYPES_TERNOPU_LANE (aarch64_types_ternopu_lane_qualifiers)
+static enum aarch64_type_qualifiers
 aarch64_types_ternopu_imm_qualifiers[SIMD_MAX_BUILTIN_ARGS]
   = { qualifier_unsigned, qualifier_unsigned,
       qualifier_unsigned, qualifier_immediate };
diff --git a/gcc/config/aarch64/aarch64-simd-builtins.def b/gcc/config/aarch64/aarch64-simd-builtins.def
index d8bb96f8ed60648477f952ea6b88eae67cc9c921..e256e9c2086b48dfb1d95ce8391651ec9e86b696 100644
--- a/gcc/config/aarch64/aarch64-simd-builtins.def
+++ b/gcc/config/aarch64/aarch64-simd-builtins.def
@@ -191,6 +191,15 @@
   BUILTIN_VQW (BINOP, vec_widen_smult_hi_, 10)
   BUILTIN_VQW (BINOPU, vec_widen_umult_hi_, 10)
 
+  BUILTIN_VD_HSI (TERNOP_LANE, vec_smult_lane_, 0)
+  BUILTIN_VD_HSI (QUADOP_LANE, vec_smlal_lane_, 0)
+  BUILTIN_VD_HSI (TERNOP_LANE, vec_smult_laneq_, 0)
+  BUILTIN_VD_HSI (QUADOP_LANE, vec_smlal_laneq_, 0)
+  BUILTIN_VD_HSI (TERNOPU_LANE, vec_umult_lane_, 0)
+  BUILTIN_VD_HSI (QUADOPU_LANE, vec_umlal_lane_, 0)
+  BUILTIN_VD_HSI (TERNOPU_LANE, vec_umult_laneq_, 0)
+  BUILTIN_VD_HSI (QUADOPU_LANE, vec_umlal_laneq_, 0)
+
   BUILTIN_VSD_HSI (BINOP, sqdmull, 0)
   BUILTIN_VSD_HSI (TERNOP_LANE, sqdmull_lane, 0)
   BUILTIN_VSD_HSI (TERNOP_LANE, sqdmull_laneq, 0)
diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 999d80667b7cf06040515958c747d8bca0728acc..ccf4e394c1f6aa7d0adb23cfcd8da1b6d40d7ebf 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1892,6 +1892,45 @@ (define_expand "vec_widen_<su>mult_hi_<mode>"
 }
 )
 
+;; vmull_lane_s16 intrinsics
+(define_insn "aarch64_vec_<su>mult_lane<Qlane>"
+  [(set (match_operand:<VWIDE> 0 "register_operand" "=w")
+	(mult:<VWIDE>
+	  (ANY_EXTEND:<VWIDE>
+	    (match_operand:<VCOND> 1 "register_operand" "w"))
+	  (ANY_EXTEND:<VWIDE>
+	    (vec_duplicate:<VCOND>
+	      (vec_select:<VEL>
+		(match_operand:VDQHS 2 "register_operand" "<vwx>")
+		(parallel [(match_operand:SI 3 "immediate_operand" "i")]))))))]
+  "TARGET_SIMD"
+  {
+    operands[3] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[3]));
+    return "<su>mull\\t%0.<Vwtype>, %1.<Vtype2>, %2.<Vetype>[%3]";
+  }
+  [(set_attr "type" "neon_mul_<Vetype>_scalar_long")]
+)
+
+;; vmlal_lane_s16 intrinsics
+(define_insn "aarch64_vec_<su>mlal_lane<Qlane>"
+  [(set (match_operand:<VWIDE> 0 "register_operand" "=w")
+	(plus:<VWIDE> (match_operand:<VWIDE> 1 "register_operand" "0")
+	  (mult:<VWIDE>
+	    (ANY_EXTEND:<VWIDE>
+	      (match_operand:<VCOND> 2 "register_operand" "w"))
+	    (ANY_EXTEND:<VWIDE>
+	      (vec_duplicate:<VCOND>
+		(vec_select:<VEL>
+		  (match_operand:VDQHS 3 "register_operand" "<vwx>")
+		  (parallel [(match_operand:SI 4 "immediate_operand" "i")])))))))]
+  "TARGET_SIMD"
+  {
+    operands[4] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[4]));
+    return "<su>mlal\\t%0.<Vwtype>, %2.<Vtype2>, %3.<Vetype>[%4]";
+  }
+  [(set_attr "type" "neon_mla_<Vetype>_scalar_long")]
+)
+
 ;; FP vector operations.
 ;; AArch64 AdvSIMD supports single-precision (32-bit) and
 ;; double-precision (64-bit) floating-point data types and arithmetic as
diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
index b6f42ac630295d9b827e2763cf487ccfb5bfe64b..700dd57ccd1b7ced731a92e43bc71911ad1c93cb 100644
--- a/gcc/config/aarch64/arm_neon.h
+++ b/gcc/config/aarch64/arm_neon.h
@@ -7700,117 +7700,61 @@ vmlal_high_u32 (uint64x2_t __a, uint32x4_t __b, uint32x4_t __c)
   return __result;
 }
 
-#define vmlal_lane_s16(a, b, c, d) \
-  __extension__ \
-    ({ \
-       int16x4_t c_ = (c); \
-       int16x4_t b_ = (b); \
-       int32x4_t a_ = (a); \
-       int32x4_t result; \
-       __asm__ ("smlal %0.4s,%2.4h,%3.h[%4]" \
-                : "=w"(result) \
-                : "0"(a_), "w"(b_), "x"(c_), "i"(d) \
-                : /* No clobbers */); \
-       result; \
-     })
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_lane_s16 (int32x4_t __acc, int16x4_t __a, int16x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smlal_lane_v4hi (__acc, __a, __b, __c);
+}
 
-#define vmlal_lane_s32(a, b, c, d) \
-  __extension__ \
-    ({ \
-       int32x2_t c_ = (c); \
-       int32x2_t b_ = (b); \
-       int64x2_t a_ = (a); \
-       int64x2_t result; \
-       __asm__ ("smlal %0.2d,%2.2s,%3.s[%4]" \
-                : "=w"(result) \
-                : "0"(a_), "w"(b_), "w"(c_), "i"(d) \
-                : /* No clobbers */); \
-       result; \
-     })
+__extension__ extern __inline int64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_lane_s32 (int64x2_t __acc, int32x2_t __a, int32x2_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smlal_lane_v2si (__acc, __a, __b, __c);
+}
 
-#define vmlal_lane_u16(a, b, c, d) \
-  __extension__ \
-    ({ \
-       uint16x4_t c_ = (c); \
-       uint16x4_t b_ = (b); \
-       uint32x4_t a_ = (a); \
-       uint32x4_t result; \
-       __asm__ ("umlal %0.4s,%2.4h,%3.h[%4]" \
-                : "=w"(result) \
-                : "0"(a_), "w"(b_), "x"(c_), "i"(d) \
-                : /* No clobbers */); \
-       result; \
-     })
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_lane_u16 (uint32x4_t __acc, uint16x4_t __a, uint16x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umlal_lane_v4hi_uuuus (__acc, __a, __b, __c);
+}
 
-#define vmlal_lane_u32(a, b, c, d) \
-  __extension__ \
-    ({ \
-       uint32x2_t c_ = (c); \
-       uint32x2_t b_ = (b); \
-       uint64x2_t a_ = (a); \
-       uint64x2_t result; \
-       __asm__ ("umlal %0.2d, %2.2s, %3.s[%4]" \
-                : "=w"(result) \
-                : "0"(a_), "w"(b_), "w"(c_), "i"(d) \
-                : /* No clobbers */); \
-       result; \
-     })
+__extension__ extern __inline uint64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_lane_u32 (uint64x2_t __acc, uint32x2_t __a, uint32x2_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umlal_lane_v2si_uuuus (__acc, __a, __b, __c);
+}
 
-#define vmlal_laneq_s16(a, b, c, d) \
-  __extension__ \
-    ({ \
-       int16x8_t c_ = (c); \
-       int16x4_t b_ = (b); \
-       int32x4_t a_ = (a); \
-       int32x4_t result; \
-       __asm__ ("smlal %0.4s, %2.4h, %3.h[%4]" \
-                : "=w"(result) \
-                : "0"(a_), "w"(b_), "x"(c_), "i"(d) \
-                : /* No clobbers */); \
result; \ - }) +__extension__ extern __inline int32x4_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +vmlal_laneq_s16 (int32x4_t __acc, int16x4_t __a, int16x8_t __b, const int __c) +{ + return __builtin_aarch64_vec_smlal_laneq_v4hi (__acc, __a, __b, __c); +} -#define vmlal_laneq_s32(a, b, c, d) \ - __extension__ \ - ({ \ - int32x4_t c_ = (c); \ - int32x2_t b_ = (b); \ - int64x2_t a_ = (a); \ - int64x2_t result; \ - __asm__ ("smlal %0.2d, %2.2s, %3.s[%4]" \ - : "=w"(result) \ - : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ - : /* No clobbers */); \ - result; \ - }) +__extension__ extern __inline int64x2_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +vmlal_laneq_s32 (int64x2_t __acc, int32x2_t __a, int32x4_t __b, const int __c) +{ + return __builtin_aarch64_vec_smlal_laneq_v2si (__acc, __a, __b, __c); +} -#define vmlal_laneq_u16(a, b, c, d) \ - __extension__ \ - ({ \ - uint16x8_t c_ = (c); \ - uint16x4_t b_ = (b); \ - uint32x4_t a_ = (a); \ - uint32x4_t result; \ - __asm__ ("umlal %0.4s, %2.4h, %3.h[%4]" \ - : "=w"(result) \ - : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ - : /* No clobbers */); \ - result; \ - }) +__extension__ extern __inline uint32x4_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +vmlal_laneq_u16 (uint32x4_t __acc, uint16x4_t __a, uint16x8_t __b, const int __c) +{ + return __builtin_aarch64_vec_umlal_laneq_v4hi_uuuus (__acc, __a, __b, __c); +} -#define vmlal_laneq_u32(a, b, c, d) \ - __extension__ \ - ({ \ - uint32x4_t c_ = (c); \ - uint32x2_t b_ = (b); \ - uint64x2_t a_ = (a); \ - uint64x2_t result; \ - __asm__ ("umlal %0.2d, %2.2s, %3.s[%4]" \ - : "=w"(result) \ - : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ - : /* No clobbers */); \ - result; \ - }) +__extension__ extern __inline uint64x2_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +vmlal_laneq_u32 (uint64x2_t __acc, uint32x2_t __a, uint32x4_t __b, const int __c) +{ + return __builtin_aarch64_vec_umlal_laneq_v2si_uuuus (__acc, __a, __b, __c); +} __extension__ extern __inline int32x4_t __attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) @@ -9289,109 +9233,61 @@ vmull_high_u32 (uint32x4_t __a, uint32x4_t __b) return __builtin_aarch64_vec_widen_umult_hi_v4si_uuu (__a, __b); } -#define vmull_lane_s16(a, b, c) \ - __extension__ \ - ({ \ - int16x4_t b_ = (b); \ - int16x4_t a_ = (a); \ - int32x4_t result; \ - __asm__ ("smull %0.4s,%1.4h,%2.h[%3]" \ - : "=w"(result) \ - : "w"(a_), "x"(b_), "i"(c) \ - : /* No clobbers */); \ - result; \ - }) +__extension__ extern __inline int32x4_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +vmull_lane_s16 (int16x4_t __a, int16x4_t __b, const int __c) +{ + return __builtin_aarch64_vec_smult_lane_v4hi (__a, __b, __c); +} -#define vmull_lane_s32(a, b, c) \ - __extension__ \ - ({ \ - int32x2_t b_ = (b); \ - int32x2_t a_ = (a); \ - int64x2_t result; \ - __asm__ ("smull %0.2d,%1.2s,%2.s[%3]" \ - : "=w"(result) \ - : "w"(a_), "w"(b_), "i"(c) \ - : /* No clobbers */); \ - result; \ - }) +__extension__ extern __inline int64x2_t +__attribute__ ((__always_inline__, __gnu_inline__, __artificial__)) +vmull_lane_s32 (int32x2_t __a, int32x2_t __b, const int __c) +{ + return __builtin_aarch64_vec_smult_lane_v2si (__a, __b, __c); +} -#define vmull_lane_u16(a, b, c) \ - __extension__ \ - ({ \ - uint16x4_t b_ = (b); \ - uint16x4_t a_ = (a); \ - uint32x4_t result; \ - __asm__ ("umull %0.4s,%1.4h,%2.h[%3]" \ - : "=w"(result) \ - : "w"(a_), "x"(b_), "i"(c) \ - : /* No clobbers */); \ - result; \ - 
-     })
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_lane_u16 (uint16x4_t __a, uint16x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umult_lane_v4hi_uuus (__a, __b, __c);
+}
 
-#define vmull_lane_u32(a, b, c) \
-  __extension__ \
-    ({ \
-       uint32x2_t b_ = (b); \
-       uint32x2_t a_ = (a); \
-       uint64x2_t result; \
-       __asm__ ("umull %0.2d, %1.2s, %2.s[%3]" \
-                : "=w"(result) \
-                : "w"(a_), "w"(b_), "i"(c) \
-                : /* No clobbers */); \
-       result; \
-     })
+__extension__ extern __inline uint64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_lane_u32 (uint32x2_t __a, uint32x2_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umult_lane_v2si_uuus (__a, __b, __c);
+}
 
-#define vmull_laneq_s16(a, b, c) \
-  __extension__ \
-    ({ \
-       int16x8_t b_ = (b); \
-       int16x4_t a_ = (a); \
-       int32x4_t result; \
-       __asm__ ("smull %0.4s, %1.4h, %2.h[%3]" \
-                : "=w"(result) \
-                : "w"(a_), "x"(b_), "i"(c) \
-                : /* No clobbers */); \
-       result; \
-     })
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_laneq_s16 (int16x4_t __a, int16x8_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smult_laneq_v4hi (__a, __b, __c);
+}
 
-#define vmull_laneq_s32(a, b, c) \
-  __extension__ \
-    ({ \
-       int32x4_t b_ = (b); \
-       int32x2_t a_ = (a); \
-       int64x2_t result; \
-       __asm__ ("smull %0.2d, %1.2s, %2.s[%3]" \
-                : "=w"(result) \
-                : "w"(a_), "w"(b_), "i"(c) \
-                : /* No clobbers */); \
-       result; \
-     })
+__extension__ extern __inline int64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_laneq_s32 (int32x2_t __a, int32x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smult_laneq_v2si (__a, __b, __c);
+}
 
-#define vmull_laneq_u16(a, b, c) \
-  __extension__ \
-    ({ \
-       uint16x8_t b_ = (b); \
-       uint16x4_t a_ = (a); \
-       uint32x4_t result; \
-       __asm__ ("umull %0.4s, %1.4h, %2.h[%3]" \
-                : "=w"(result) \
-                : "w"(a_), "x"(b_), "i"(c) \
-                : /* No clobbers */); \
-       result; \
-     })
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_laneq_u16 (uint16x4_t __a, uint16x8_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umult_laneq_v4hi_uuus (__a, __b, __c);
+}
 
-#define vmull_laneq_u32(a, b, c) \
-  __extension__ \
-    ({ \
-       uint32x4_t b_ = (b); \
-       uint32x2_t a_ = (a); \
-       uint64x2_t result; \
-       __asm__ ("umull %0.2d, %1.2s, %2.s[%3]" \
-                : "=w"(result) \
-                : "w"(a_), "w"(b_), "i"(c) \
-                : /* No clobbers */); \
-       result; \
-     })
+__extension__ extern __inline uint64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_laneq_u32 (uint32x2_t __a, uint32x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umult_laneq_v2si_uuus (__a, __b, __c);
+}
 
 __extension__ extern __inline int32x4_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index ec1b92c5379f7c33446d0ac3556f6358fb7433d3..2f4b553a9a433773b222ce9f0bede3630ff0624c 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -980,6 +980,13 @@ (define_mode_attr Vtype [(V8QI "8b") (V16QI "16b")
 			    (V4SF "4s") (V2DF "2d")
 			    (V4HF "4h") (V8HF "8h")])
 
+;; Map mode to type used in widening multiplies.
+(define_mode_attr Vtype2 [(V4HI "4h") (V8HI "4h") (V2SI "2s") (V4SI "2s")])
+
+;; Map lane mode to name
+(define_mode_attr Qlane [(V4HI "_v4hi") (V8HI "q_v4hi")
+			 (V2SI "_v2si") (V4SI "q_v2si")])
+
 (define_mode_attr Vrevsuff [(V4HI "16") (V8HI "16") (V2SI "32")
 			    (V4SI "32") (V2DI "64")])
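As a quick sanity check of the new expansion (an illustrative snippet, not
part of the patch or its testsuite), a single widening multiply by a lane
should now compile to one smull instruction instead of an asm black box:

#include <arm_neon.h>

/* Hypothetical example: with this patch, GCC expands the builtin behind
   vmull_lane_s16 to something like "smull v0.4s, v0.4h, v1.h[3]".  */
int32x4_t
mull_lane3 (int16x4_t a, int16x4_t b)
{
  return vmull_lane_s16 (a, b, 3);
}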