From: Richard Sandiford <richard.sandiford@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [AArch64] Handle arguments and return types with partial SVE modes
Date: Thu, 19 Dec 2019 13:26:36 +0000

Partial SVE modes can be picked up and used by the vector_size(N)
attribute.[*]  This means that we need to cope with arguments and
return values with partial SVE modes, which previously triggered
asserts like:

  /* Generic vectors that map to SVE modes with -msve-vector-bits=N
     are passed by reference, not by value.  */
  gcc_assert (!aarch64_sve_mode_p (mode));

The ABI for these types is fixed from pre-SVE days, and must in any
case be the same for all -msve-vector-bits=N values.  All we need to
do is ensure that the vectors are passed and returned in the
traditional way.

[*] Advanced SIMD always wins for 64-bit and 128-bit vectors though.
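To make this concrete, here is a minimal example; it is not part of the
patch, and the mode named in the comment is an assumption about what the
compiler picks for these options rather than something the patch relies on:

  /* With -msve-vector-bits=256, this 4-byte GNU vector type can end up
     with a partial SVE mode (something like VNx2QI) rather than a plain
     integer mode.  Its ABI identity must not change as a result.  */
  typedef unsigned char int8x4_t __attribute__((vector_size (4)));

  /* The argument must still arrive in w0 and the result must still be
     returned in w0, exactly as before SVE existed.  */
  int8x4_t
  passthru (int8x4_t x0)
  {
    return x0;
  }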
Tested on aarch64-linux-gnu, applied as r279571.

Richard


2019-12-19  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* config/aarch64/aarch64.c (aarch64_function_value_1): New function,
	split out from...
	(aarch64_function_value): ...here.  Handle partial SVE modes by
	pretending that they have the associated/traditional integer mode,
	then wrap the result in the real mode.
	(aarch64_layout_arg): Take an orig_mode argument and pass it to
	aarch64_function_arg_alignment.  Handle partial SVE modes
	analogously to aarch64_function_value.
	(aarch64_function_arg): Update call accordingly.
	(aarch64_function_arg_advance): Likewise.

gcc/testsuite/
	* gcc.target/aarch64/sve/pcs/gnu_vectors_3.c: New test.

Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c	2019-12-13 10:21:19.000000000 +0000
+++ gcc/config/aarch64/aarch64.c	2019-12-19 13:24:47.977362907 +0000
@@ -4948,22 +4948,12 @@ aarch64_return_in_msb (const_tree valtyp
     return true;
 }
 
-/* Implement TARGET_FUNCTION_VALUE.
-   Define how to find the value returned by a function.  */
-
+/* Subroutine of aarch64_function_value.  MODE is the mode of the argument
+   after promotion, and after partial SVE types have been replaced by
+   their integer equivalents.  */
 static rtx
-aarch64_function_value (const_tree type, const_tree func,
-			bool outgoing ATTRIBUTE_UNUSED)
+aarch64_function_value_1 (const_tree type, machine_mode mode)
 {
-  machine_mode mode;
-  int unsignedp;
-  int count;
-  machine_mode ag_mode;
-
-  mode = TYPE_MODE (type);
-  if (INTEGRAL_TYPE_P (type))
-    mode = promote_function_mode (type, mode, &unsignedp, func, 1);
-
   unsigned int num_zr, num_pr;
   if (type && aarch64_sve_argument_p (type, &num_zr, &num_pr))
     {
@@ -4998,6 +4988,8 @@ aarch64_function_value (const_tree type,
 	}
     }
 
+  int count;
+  machine_mode ag_mode;
   if (aarch64_vfp_is_call_or_return_candidate (mode, type,
 					       &ag_mode, &count, NULL))
     {
@@ -5026,6 +5018,42 @@ aarch64_function_value (const_tree type,
     return gen_rtx_REG (mode, R0_REGNUM);
 }
 
+/* Implement TARGET_FUNCTION_VALUE.
+   Define how to find the value returned by a function.  */
+
+static rtx
+aarch64_function_value (const_tree type, const_tree func,
+			bool outgoing ATTRIBUTE_UNUSED)
+{
+  machine_mode mode;
+  int unsignedp;
+
+  mode = TYPE_MODE (type);
+  if (INTEGRAL_TYPE_P (type))
+    mode = promote_function_mode (type, mode, &unsignedp, func, 1);
+
+  /* Vector types can acquire a partial SVE mode using things like
+     __attribute__((vector_size(N))), and this is potentially useful.
+     However, the choice of mode doesn't affect the type's ABI identity,
+     so we should treat the types as though they had the associated
+     integer mode, just like they did before SVE was introduced.
+
+     We know that the vector must be 128 bits or smaller, otherwise we'd
+     have returned it in memory instead.  */
+  unsigned int vec_flags = aarch64_classify_vector_mode (mode);
+  if ((vec_flags & VEC_ANY_SVE) && (vec_flags & VEC_PARTIAL))
+    {
+      scalar_int_mode int_mode = int_mode_for_mode (mode).require ();
+      rtx reg = aarch64_function_value_1 (type, int_mode);
+      /* Vector types are never returned in the MSB and are never split.  */
+      gcc_assert (REG_P (reg) && GET_MODE (reg) == int_mode);
+      rtx pair = gen_rtx_EXPR_LIST (VOIDmode, reg, const0_rtx);
+      return gen_rtx_PARALLEL (VOIDmode, gen_rtvec (1, pair));
+    }
+
+  return aarch64_function_value_1 (type, mode);
+}
+
 /* Implements TARGET_FUNCTION_VALUE_REGNO_P.
    Return true if REGNO is the number of a hard register in which the
    values of called function may come back.  */
@@ -5151,10 +5179,14 @@ aarch64_function_arg_alignment (machine_
 }
 
 /* Layout a function argument according to the AAPCS64 rules.  The rule
-   numbers refer to the rule numbers in the AAPCS64.  */
+   numbers refer to the rule numbers in the AAPCS64.  ORIG_MODE is the
+   mode that was originally given to us by the target hook, whereas the
+   mode in ARG might be the result of replacing partial SVE modes with
+   the equivalent integer mode.  */
 
 static void
-aarch64_layout_arg (cumulative_args_t pcum_v, const function_arg_info &arg)
+aarch64_layout_arg (cumulative_args_t pcum_v, const function_arg_info &arg,
+		    machine_mode orig_mode)
 {
   CUMULATIVE_ARGS *pcum = get_cumulative_args (pcum_v);
   tree type = arg.type;
@@ -5168,6 +5200,29 @@ aarch64_layout_arg (cumulative_args_t pc
   if (pcum->aapcs_arg_processed)
     return;
 
+  /* Vector types can acquire a partial SVE mode using things like
+     __attribute__((vector_size(N))), and this is potentially useful.
+     However, the choice of mode doesn't affect the type's ABI identity,
+     so we should treat the types as though they had the associated
+     integer mode, just like they did before SVE was introduced.
+
+     We know that the vector must be 128 bits or smaller, otherwise we'd
+     have passed it by reference instead.  */
+  unsigned int vec_flags = aarch64_classify_vector_mode (mode);
+  if ((vec_flags & VEC_ANY_SVE) && (vec_flags & VEC_PARTIAL))
+    {
+      function_arg_info tmp_arg = arg;
+      tmp_arg.mode = int_mode_for_mode (mode).require ();
+      aarch64_layout_arg (pcum_v, tmp_arg, orig_mode);
+      if (rtx reg = pcum->aapcs_reg)
+	{
+	  gcc_assert (REG_P (reg) && GET_MODE (reg) == tmp_arg.mode);
+	  rtx pair = gen_rtx_EXPR_LIST (VOIDmode, reg, const0_rtx);
+	  pcum->aapcs_reg = gen_rtx_PARALLEL (mode, gen_rtvec (1, pair));
+	}
+      return;
+    }
+
   pcum->aapcs_arg_processed = true;
 
   unsigned int num_zr, num_pr;
@@ -5289,7 +5344,7 @@ aarch64_layout_arg (cumulative_args_t pc
	 comparison is there because for > 16 * BITS_PER_UNIT
	 alignment nregs should be > 2 and therefore it should be
	 passed by reference rather than value.  */
-      && (aarch64_function_arg_alignment (mode, type, &abi_break)
+      && (aarch64_function_arg_alignment (orig_mode, type, &abi_break)
	  == 16 * BITS_PER_UNIT))
     {
       if (abi_break && warn_psabi && currently_expanding_gimple_stmt)
@@ -5332,7 +5387,7 @@ aarch64_layout_arg (cumulative_args_t pc
 on_stack:
   pcum->aapcs_stack_words = size / UNITS_PER_WORD;
 
-  if (aarch64_function_arg_alignment (mode, type, &abi_break)
+  if (aarch64_function_arg_alignment (orig_mode, type, &abi_break)
       == 16 * BITS_PER_UNIT)
     {
       int new_size = ROUND_UP (pcum->aapcs_stack_size, 16 / UNITS_PER_WORD);
@@ -5360,7 +5415,7 @@ aarch64_function_arg (cumulative_args_t
   if (arg.end_marker_p ())
     return gen_int_mode (pcum->pcs_variant, DImode);
 
-  aarch64_layout_arg (pcum_v, arg);
+  aarch64_layout_arg (pcum_v, arg, arg.mode);
   return pcum->aapcs_reg;
 }
 
@@ -5425,7 +5480,7 @@ aarch64_function_arg_advance (cumulative
       || pcum->pcs_variant == ARM_PCS_SIMD
       || pcum->pcs_variant == ARM_PCS_SVE)
     {
-      aarch64_layout_arg (pcum_v, arg);
+      aarch64_layout_arg (pcum_v, arg, arg.mode);
       gcc_assert ((pcum->aapcs_reg != NULL_RTX)
		   != (pcum->aapcs_stack_words != 0));
       pcum->aapcs_arg_processed = false;

Index: gcc/testsuite/gcc.target/aarch64/sve/pcs/gnu_vectors_3.c
===================================================================
--- /dev/null	2019-09-17 11:41:18.176664108 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/pcs/gnu_vectors_3.c	2019-12-19 13:24:47.997362774 +0000
@@ -0,0 +1,58 @@
+/* { dg-options "-O -msve-vector-bits=256" } */
+
+typedef unsigned char int8x4_t __attribute__((vector_size (4)));
+
+/*
+** passthru_x0:
+**	ret
+*/
+int8x4_t passthru_x0 (int8x4_t x0) { return x0; }
+
+/*
+** passthru_x1:
+**	mov	w0, w1
+**	ret
+*/
+int8x4_t passthru_x1 (int8x4_t x0, int8x4_t x1) { return x1; }
+
+int8x4_t load (int8x4_t *x0) { return *x0; }
+
+void store (int8x4_t *x0, int8x4_t x1) { *x0 = x1; }
+
+/*
+** stack_callee:
+**	ptrue	(p[0-7])\.b, vl32
+**	ld1b	(z[0-9]+\.d), \1/z, \[sp\]
+**	st1b	\2, \1, \[x0\]
+**	ret
+*/
+__attribute__((noipa))
+void stack_callee (int8x4_t *x0, int8x4_t x1, int8x4_t x2, int8x4_t x3,
+		   int8x4_t x4, int8x4_t x5, int8x4_t x6, int8x4_t x7,
+		   int8x4_t stack0)
+{
+  *x0 = stack0;
+}
+
+/*
+** stack_caller:
+**	...
+**	ptrue	(p[0-7])\.b, vl32
+**	...
+**	ld1b	(z[0-9]+\.d), \1/z, \[x0\]
+**	...
+**	st1b	\2, \1, \[sp\]
+**	...
+**	ret
+*/
+void stack_caller (int8x4_t *x0, int8x4_t x1)
+{
+  stack_callee (x0, x1, x1, x1, x1, x1, x1, x1, *x0);
+}
+
+/* { dg-final { scan-assembler {\tmov\tw2, w} } } */
+/* { dg-final { scan-assembler {\tmov\tw3, w} } } */
+/* { dg-final { scan-assembler {\tmov\tw4, w} } } */
+/* { dg-final { scan-assembler {\tmov\tw5, w} } } */
+/* { dg-final { scan-assembler {\tmov\tw6, w} } } */
+/* { dg-final { scan-assembler {\tmov\tw7, w} } } */
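Since the ABI for these GNU vector types predates SVE and must be the
same for every -msve-vector-bits=N setting, translation units built with
different settings have to interoperate.  The following two-file sketch
is not part of the patch (the file names, function names and option
choices are illustrative only); it shows the property the patch
preserves:

  /* add.c: built without SVE at all, e.g. with -march=armv8-a.
     int8x4_t is passed and returned in w registers, as it always was.  */
  typedef unsigned char int8x4_t __attribute__((vector_size (4)));

  int8x4_t
  vec_add (int8x4_t x, int8x4_t y)
  {
    return x + y;
  }

  /* caller.c: built with -march=armv8.2-a+sve -msve-vector-bits=256,
     where int8x4_t may acquire a partial SVE mode.  The patch ensures
     that the arguments and return value still use w registers, so
     linking caller.c against add.c remains ABI-safe.  */
  typedef unsigned char int8x4_t __attribute__((vector_size (4)));

  extern int8x4_t vec_add (int8x4_t, int8x4_t);

  int8x4_t
  call_add (int8x4_t a, int8x4_t b)
  {
    return vec_add (a, b);
  }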