From patchwork Tue Aug 4 11:13:15 2015
X-Patchwork-Submitter: Alan Lawrence
X-Patchwork-Id: 503544
Message-ID: <55C09E4B.3000308@arm.com>
Date: Tue, 04 Aug 2015 12:13:15 +0100
From: Alan Lawrence
To: "gcc-patches@gcc.gnu.org"
CC: James Greenhalgh
Subject: Re: [PATCH 8/15][AArch64] Add support for float16x{4, 8}_t vectors/builtins
References: <55B765DF.4040706@arm.com> <55B766B4.2030305@arm.com> <20150729102409.GC5656@arm.com> <55C09B8F.6020700@arm.com>
In-Reply-To: <55C09B8F.6020700@arm.com>

Sorry, attached the wrong file. Here!

--Alan

Alan Lawrence wrote:
> James Greenhalgh wrote:
>>> -;; All modes.
>>> +;; All vector modes on which we support any arithmetic operations.
>>>  (define_mode_iterator VALL [V8QI V16QI V4HI V8HI V2SI V4SI V2DI V2SF V4SF V2DF])
>>>
>>> -;; All vector modes and DI.
>>> +;; All vector modes, including HF modes on which we cannot operate
>>
>> The wording here is a bit off, we can operate on them - for a limited set
>> of operations (and you are missing a full stop). How about something like:
>>
>>   All vector modes suitable for moving, loading and storing.
>>
>>> +(define_mode_iterator VALL_F16 [V8QI V16QI V4HI V8HI V2SI V4SI V2DI
>>> +				V4HF V8HF V2SF V4SF V2DF])
>>> +
>>> +;; All vector modes barring F16, plus DI.
>> "barring HF modes" for consistency with the above comment.
>>
>>>  (define_mode_iterator VALLDI [V8QI V16QI V4HI V8HI V2SI V4SI V2DI V2SF V4SF V2DF DI])
>>>
>>> +;; All vector modes and DI.
>>> +(define_mode_iterator VALLDI_F16 [V8QI V16QI V4HI V8HI V2SI V4SI V2DI
>>> +				  V4HF V8HF V2SF V4SF V2DF DI])
>>> +
>>>  ;; All vector modes and DI and DF.
>>
>> Except HF modes.
>
> Here's a new version, updating the comments much as you suggest, dropping the
> unrelated testsuite changes (already pushed), and adding VRL2/3/4 iterator
> values only for V4HF.
>
> Bootstrapped + check-gcc on aarch64-none-linux-gnu.
>
> gcc/ChangeLog:
>
>     * config/aarch64/aarch64.c (aarch64_vector_mode_supported_p): Support
>     V4HFmode and V8HFmode.
>     (aarch64_split_simd_move): Add case for V8HFmode.
>     * config/aarch64/aarch64-builtins.c (v4hf_UP, v8hf_UP): Define.
>     (aarch64_simd_builtin_std_type): Handle HFmode.
>     (aarch64_init_simd_builtin_types): Include Float16x4_t and Float16x8_t.
>
>     * config/aarch64/aarch64-simd.md (mov<mode>, aarch64_get_lane<mode>,
>     aarch64_ld1<VALL_F16:mode>, aarch64_st1<VALL_F16:mode>): Use VALL_F16
>     iterator.
>     (aarch64_be_ld1<mode>, aarch64_be_st1<mode>): Use VALLDI_F16 iterator.
>
>     * config/aarch64/aarch64-simd-builtin-types.def: Add Float16x4_t,
>     Float16x8_t.
>
>     * config/aarch64/aarch64-simd-builtins.def (ld1, st1): Use VALL_F16.
>     * config/aarch64/arm_neon.h (float16x4_t, float16x8_t, float16_t):
>     New typedefs.
>     (vget_lane_f16, vgetq_lane_f16, vset_lane_f16, vsetq_lane_f16,
>     vld1_f16, vld1q_f16, vst1_f16, vst1q_f16, vst1_lane_f16,
>     vst1q_lane_f16): New.
>     * config/aarch64/iterators.md (VD, VQ, VQ_NO2E): Add vectors of HFmode.
>     (VALLDI_F16, VALL_F16): New.
>     (Vmtype, VEL, VCONQ, VHALF, V_TWO_ELEM, V_THREE_ELEM, V_FOUR_ELEM, q):
>     Add cases for V4HF and V8HF.
>     (VDBL, VRL2, VRL3, VRL4): Add V4HF case.
>
> gcc/testsuite/ChangeLog:
>
>     * g++.dg/abi/mangle-neon-aarch64.C: Add cases for float16x4_t and
>     float16x8_t.
>     * gcc.target/aarch64/vset_lane_1.c: Likewise.
>     * gcc.target/aarch64/vld1-vst1_1.c: Likewise.
>     * gcc.target/aarch64/vld1_lane.c: Likewise.
>

diff --git a/gcc/config/aarch64/aarch64-builtins.c b/gcc/config/aarch64/aarch64-builtins.c
index 800f6e1ffcd358aa22ceecbc460bc1dcac4acd9e..2394efdb483e1128d2990852871ab4abfed8bdfc 100644
--- a/gcc/config/aarch64/aarch64-builtins.c
+++ b/gcc/config/aarch64/aarch64-builtins.c
@@ -61,6 +61,7 @@
 #define v8qi_UP  V8QImode
 #define v4hi_UP  V4HImode
+#define v4hf_UP  V4HFmode
 #define v2si_UP  V2SImode
 #define v2sf_UP  V2SFmode
 #define v1df_UP  V1DFmode
@@ -68,6 +69,7 @@
 #define df_UP    DFmode
 #define v16qi_UP V16QImode
 #define v8hi_UP  V8HImode
+#define v8hf_UP  V8HFmode
 #define v4si_UP  V4SImode
 #define v4sf_UP  V4SFmode
 #define v2di_UP  V2DImode
@@ -520,6 +522,8 @@ aarch64_simd_builtin_std_type (enum machine_mode mode,
       return aarch64_simd_intCI_type_node;
     case XImode:
       return aarch64_simd_intXI_type_node;
+    case HFmode:
+      return aarch64_fp16_type_node;
     case SFmode:
       return float_type_node;
     case DFmode:
@@ -604,6 +608,8 @@ aarch64_init_simd_builtin_types (void)
   aarch64_simd_types[Poly64x2_t].eltype = aarch64_simd_types[Poly64_t].itype;

   /* Continue with standard types.  */
+  aarch64_simd_types[Float16x4_t].eltype = aarch64_fp16_type_node;
+  aarch64_simd_types[Float16x8_t].eltype = aarch64_fp16_type_node;
   aarch64_simd_types[Float32x2_t].eltype = float_type_node;
   aarch64_simd_types[Float32x4_t].eltype = float_type_node;
   aarch64_simd_types[Float64x1_t].eltype = double_type_node;
diff --git a/gcc/config/aarch64/aarch64-simd-builtin-types.def b/gcc/config/aarch64/aarch64-simd-builtin-types.def
index bb54e56ce63c040dbfe69e2249e642d2c43fd0af..ea219b72ff9ac406c2439cda002617e710b2966c 100644
--- a/gcc/config/aarch64/aarch64-simd-builtin-types.def
+++ b/gcc/config/aarch64/aarch64-simd-builtin-types.def
@@ -44,6 +44,8 @@
   ENTRY (Poly16x8_t, V8HI, poly, 12)
   ENTRY (Poly64x1_t, DI, poly, 12)
   ENTRY (Poly64x2_t, V2DI, poly, 12)
+  ENTRY (Float16x4_t, V4HF, none, 13)
+  ENTRY (Float16x8_t, V8HF, none, 13)
   ENTRY (Float32x2_t, V2SF, none, 13)
   ENTRY (Float32x4_t, V4SF, none, 13)
   ENTRY (Float64x1_t, V1DF, none, 13)
diff --git a/gcc/config/aarch64/aarch64-simd-builtins.def b/gcc/config/aarch64/aarch64-simd-builtins.def
index d0f298a1f075f51d4d47c6f364860dd1d0a545e0..39ff34e16d8bb79bcd44a4f40d214963996968af 100644
--- a/gcc/config/aarch64/aarch64-simd-builtins.def
+++ b/gcc/config/aarch64/aarch64-simd-builtins.def
@@ -367,11 +367,11 @@
   VAR1 (UNOP, float_extend_lo_, 0, v2df)
   VAR1 (UNOP, float_truncate_lo_, 0, v2sf)

-  /* Implemented by aarch64_ld1<VALL:mode>.  */
-  BUILTIN_VALL (LOAD1, ld1, 0)
+  /* Implemented by aarch64_ld1<VALL_F16:mode>.  */
+  BUILTIN_VALL_F16 (LOAD1, ld1, 0)

-  /* Implemented by aarch64_st1<VALL:mode>.  */
-  BUILTIN_VALL (STORE1, st1, 0)
+  /* Implemented by aarch64_st1<VALL_F16:mode>.  */
+  BUILTIN_VALL_F16 (STORE1, st1, 0)

   /* Implemented by fma<mode>4.  */
   BUILTIN_VDQF (TERNOP, fma, 4)
diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 97774181fab11b846d40c3981e2d1f9ea4891337..cab712d7d18dc8a9bebf2b25608b5b4490a07b45 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -19,8 +19,8 @@
 ;; <http://www.gnu.org/licenses/>.
 (define_expand "mov<mode>"
-  [(set (match_operand:VALL 0 "nonimmediate_operand" "")
-	(match_operand:VALL 1 "general_operand" ""))]
+  [(set (match_operand:VALL_F16 0 "nonimmediate_operand" "")
+	(match_operand:VALL_F16 1 "general_operand" ""))]
   "TARGET_SIMD"
   "
     if (GET_CODE (operands[0]) == MEM)
@@ -2450,7 +2450,7 @@
 (define_insn "aarch64_get_lane<mode>"
   [(set (match_operand:<VEL> 0 "aarch64_simd_nonimmediate_operand" "=r, w, Utv")
	(vec_select:<VEL>
-	  (match_operand:VALL 1 "register_operand" "w, w, w")
+	  (match_operand:VALL_F16 1 "register_operand" "w, w, w")
	  (parallel [(match_operand:SI 2 "immediate_operand" "i, i, i")])))]
   "TARGET_SIMD"
   {
@@ -4243,8 +4243,9 @@
 )

 (define_insn "aarch64_be_ld1<mode>"
-  [(set (match_operand:VALLDI 0 "register_operand" "=w")
-	(unspec:VALLDI [(match_operand:VALLDI 1 "aarch64_simd_struct_operand" "Utv")]
+  [(set (match_operand:VALLDI_F16 0 "register_operand" "=w")
+	(unspec:VALLDI_F16 [(match_operand:VALLDI_F16 1
+			     "aarch64_simd_struct_operand" "Utv")]
	UNSPEC_LD1))]
   "TARGET_SIMD"
   "ld1\\t{%0<Vmtype>}, %1"
@@ -4252,8 +4253,8 @@
 )

 (define_insn "aarch64_be_st1<mode>"
-  [(set (match_operand:VALLDI 0 "aarch64_simd_struct_operand" "=Utv")
-	(unspec:VALLDI [(match_operand:VALLDI 1 "register_operand" "w")]
+  [(set (match_operand:VALLDI_F16 0 "aarch64_simd_struct_operand" "=Utv")
+	(unspec:VALLDI_F16 [(match_operand:VALLDI_F16 1 "register_operand" "w")]
	UNSPEC_ST1))]
   "TARGET_SIMD"
   "st1\\t{%1<Vmtype>}, %0"
@@ -4542,16 +4543,16 @@
   DONE;
 })

-(define_expand "aarch64_ld1<VALL:mode>"
-  [(match_operand:VALL 0 "register_operand")
+(define_expand "aarch64_ld1<VALL_F16:mode>"
+  [(match_operand:VALL_F16 0 "register_operand")
   (match_operand:DI 1 "register_operand")]
   "TARGET_SIMD"
 {
-  machine_mode mode = <VALL:MODE>mode;
+  machine_mode mode = <VALL_F16:MODE>mode;
   rtx mem = gen_rtx_MEM (mode, operands[1]);

   if (BYTES_BIG_ENDIAN)
-    emit_insn (gen_aarch64_be_ld1<VALL:mode> (operands[0], mem));
+    emit_insn (gen_aarch64_be_ld1<VALL_F16:mode> (operands[0], mem));
   else
     emit_move_insn (operands[0], mem);
   DONE;
 })
@@ -4895,16 +4896,16 @@
   DONE;
 })

-(define_expand "aarch64_st1<VALL:mode>"
+(define_expand "aarch64_st1<VALL_F16:mode>"
   [(match_operand:DI 0 "register_operand")
-   (match_operand:VALL 1 "register_operand")]
+   (match_operand:VALL_F16 1 "register_operand")]
   "TARGET_SIMD"
 {
-  machine_mode mode = <VALL:MODE>mode;
+  machine_mode mode = <VALL_F16:MODE>mode;
   rtx mem = gen_rtx_MEM (mode, operands[0]);

   if (BYTES_BIG_ENDIAN)
-    emit_insn (gen_aarch64_be_st1<VALL:mode> (mem, operands[1]));
+    emit_insn (gen_aarch64_be_st1<VALL_F16:mode> (mem, operands[1]));
   else
     emit_move_insn (mem, operands[1]);
   DONE;
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index 535695c4f450cf4d362e631a2864379283ae1fc6..2bcb7cc6e1487f4b0b18c49dd7255427ecaa2809 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -920,7 +920,8 @@ extern enum aarch64_code_model aarch64_cmodel;
 /* Modes valid for AdvSIMD Q registers.  */
 #define AARCH64_VALID_SIMD_QREG_MODE(MODE) \
   ((MODE) == V4SImode || (MODE) == V8HImode || (MODE) == V16QImode \
-   || (MODE) == V4SFmode || (MODE) == V2DImode || mode == V2DFmode)
+   || (MODE) == V4SFmode || (MODE) == V8HFmode || (MODE) == V2DImode \
+   || (MODE) == V2DFmode)

 #define ENDIAN_LANE_N(mode, n)  \
   (BYTES_BIG_ENDIAN ? GET_MODE_NUNITS (mode) - 1 - n : n)
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 2b1ae36f7f079d6b64ecd8a139a9dffce2edf727..0c40e8c6e42a3685e4865ab54f26a4883821d9d5 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -1267,6 +1267,9 @@ aarch64_split_simd_move (rtx dst, rtx src)
     case V2DImode:
       gen = gen_aarch64_split_simd_movv2di;
       break;
+    case V8HFmode:
+      gen = gen_aarch64_split_simd_movv8hf;
+      break;
     case V4SFmode:
       gen = gen_aarch64_split_simd_movv4sf;
       break;
@@ -8625,6 +8628,7 @@ aarch64_vector_mode_supported_p (machine_mode mode)
	  || mode == V2SImode  || mode == V4HImode
	  || mode == V8QImode || mode == V2SFmode
	  || mode == V4SFmode || mode == V2DFmode
+	  || mode == V4HFmode || mode == V8HFmode
	  || mode == V1DFmode))
     return true;
diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
index fce557779c2f8ebe46a0eb7a29092b1b8597729e..9654584f966b192119839d8cdd30513b0d4f8f4a 100644
--- a/gcc/config/aarch64/arm_neon.h
+++ b/gcc/config/aarch64/arm_neon.h
@@ -40,6 +40,7 @@
 typedef __Int8x8_t int8x8_t;
 typedef __Int16x4_t int16x4_t;
 typedef __Int32x2_t int32x2_t;
 typedef __Int64x1_t int64x1_t;
+typedef __Float16x4_t float16x4_t;
 typedef __Float32x2_t float32x2_t;
 typedef __Poly8x8_t poly8x8_t;
 typedef __Poly16x4_t poly16x4_t;
@@ -52,6 +53,7 @@
 typedef __Int8x16_t int8x16_t;
 typedef __Int16x8_t int16x8_t;
 typedef __Int32x4_t int32x4_t;
 typedef __Int64x2_t int64x2_t;
+typedef __Float16x8_t float16x8_t;
 typedef __Float32x4_t float32x4_t;
 typedef __Float64x2_t float64x2_t;
 typedef __Poly8x16_t poly8x16_t;
@@ -67,6 +69,7 @@
 typedef __Poly16_t poly16_t;
 typedef __Poly64_t poly64_t;
 typedef __Poly128_t poly128_t;

+typedef __fp16 float16_t;
 typedef float float32_t;
 typedef double float64_t;
@@ -2691,6 +2694,12 @@ vcreate_p16 (uint64_t __a)

 /* vget_lane  */

+__extension__ static __inline float16_t __attribute__ ((__always_inline__))
+vget_lane_f16 (float16x4_t __a, const int __b)
+{
+  return __aarch64_vget_lane_any (__a, __b);
+}
+
 __extension__ static __inline float32_t __attribute__ ((__always_inline__))
 vget_lane_f32 (float32x2_t __a, const int __b)
 {
@@ -2765,6 +2774,12 @@ vget_lane_u64 (uint64x1_t __a, const int __b)

 /* vgetq_lane  */

+__extension__ static __inline float16_t __attribute__ ((__always_inline__))
+vgetq_lane_f16 (float16x8_t __a, const int __b)
+{
+  return __aarch64_vget_lane_any (__a, __b);
+}
+
 __extension__ static __inline float32_t __attribute__ ((__always_inline__))
 vgetq_lane_f32 (float32x4_t __a, const int __b)
 {
@@ -4425,6 +4440,12 @@ vreinterpretq_u32_p16 (poly16x8_t __a)

 /* vset_lane  */

+__extension__ static __inline float16x4_t __attribute__ ((__always_inline__))
+vset_lane_f16 (float16_t __elem, float16x4_t __vec, const int __index)
+{
+  return __aarch64_vset_lane_any (__elem, __vec, __index);
+}
+
 __extension__ static __inline float32x2_t __attribute__ ((__always_inline__))
 vset_lane_f32 (float32_t __elem, float32x2_t __vec, const int __index)
 {
@@ -4499,6 +4520,12 @@ vset_lane_u64 (uint64_t __elem, uint64x1_t __vec, const int __index)

 /* vsetq_lane  */

+__extension__ static __inline float16x8_t __attribute__ ((__always_inline__))
+vsetq_lane_f16 (float16_t __elem, float16x8_t __vec, const int __index)
+{
+  return __aarch64_vset_lane_any (__elem, __vec, __index);
+}
+
 __extension__ static __inline float32x4_t __attribute__ ((__always_inline__))
 vsetq_lane_f32 (float32_t __elem, float32x4_t __vec, const int __index)
 {
@@ -14630,6 +14657,12 @@ vfmsq_laneq_f64 (float64x2_t __a, float64x2_t __b,

 /* vld1 */

+__extension__ static __inline float16x4_t __attribute__ ((__always_inline__))
+vld1_f16 (const float16_t *__a)
+{
+  return __builtin_aarch64_ld1v4hf (__a);
+}
+
 __extension__ static __inline float32x2_t __attribute__ ((__always_inline__))
 vld1_f32 (const float32_t *a)
 {
@@ -14709,6 +14742,12 @@ vld1_u64 (const uint64_t *a)

 /* vld1q */

+__extension__ static __inline float16x8_t __attribute__ ((__always_inline__))
+vld1q_f16 (const float16_t *__a)
+{
+  return __builtin_aarch64_ld1v8hf (__a);
+}
+
 __extension__ static __inline float32x4_t __attribute__ ((__always_inline__))
 vld1q_f32 (const float32_t *a)
 {
@@ -14937,6 +14976,12 @@ vld1q_dup_u64 (const uint64_t* __a)

 /* vld1_lane */

+__extension__ static __inline float16x4_t __attribute__ ((__always_inline__))
+vld1_lane_f16 (const float16_t *__src, float16x4_t __vec, const int __lane)
+{
+  return __aarch64_vset_lane_any (*__src, __vec, __lane);
+}
+
 __extension__ static __inline float32x2_t __attribute__ ((__always_inline__))
 vld1_lane_f32 (const float32_t *__src, float32x2_t __vec, const int __lane)
 {
@@ -15011,6 +15056,12 @@ vld1_lane_u64 (const uint64_t *__src, uint64x1_t __vec, const int __lane)

 /* vld1q_lane */

+__extension__ static __inline float16x8_t __attribute__ ((__always_inline__))
+vld1q_lane_f16 (const float16_t *__src, float16x8_t __vec, const int __lane)
+{
+  return __aarch64_vset_lane_any (*__src, __vec, __lane);
+}
+
 __extension__ static __inline float32x4_t __attribute__ ((__always_inline__))
 vld1q_lane_f32 (const float32_t *__src, float32x4_t __vec, const int __lane)
 {
@@ -21978,6 +22029,12 @@ vsrid_n_u64 (uint64_t __a, uint64_t __b, const int __c)

 /* vst1 */

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst1_f16 (float16_t *__a, float16x4_t __b)
+{
+  __builtin_aarch64_st1v4hf (__a, __b);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst1_f32 (float32_t *a, float32x2_t b)
 {
   __builtin_aarch64_st1v2sf ((__builtin_aarch64_simd_sf *) a, b);
@@ -22057,6 +22114,12 @@ vst1_u64 (uint64_t *a, uint64x1_t b)

 /* vst1q */

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst1q_f16 (float16_t *__a, float16x8_t __b)
+{
+  __builtin_aarch64_st1v8hf (__a, __b);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst1q_f32 (float32_t *a, float32x4_t b)
 {
   __builtin_aarch64_st1v4sf ((__builtin_aarch64_simd_sf *) a, b);
@@ -22137,6 +22200,12 @@ vst1q_u64 (uint64_t *a, uint64x2_t b)

 /* vst1_lane */

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst1_lane_f16 (float16_t *__a, float16x4_t __b, const int __lane)
+{
+  *__a = __aarch64_vget_lane_any (__b, __lane);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst1_lane_f32 (float32_t *__a, float32x2_t __b, const int __lane)
 {
   *__a = __aarch64_vget_lane_any (__b, __lane);
@@ -22211,6 +22280,12 @@ vst1_lane_u64 (uint64_t *__a, uint64x1_t __b, const int __lane)

 /* vst1q_lane */

 __extension__ static __inline void __attribute__ ((__always_inline__))
+vst1q_lane_f16 (float16_t *__a, float16x8_t __b, const int __lane)
+{
+  *__a = __aarch64_vget_lane_any (__b, __lane);
+}
+
+__extension__ static __inline void __attribute__ ((__always_inline__))
 vst1q_lane_f32 (float32_t *__a, float32x4_t __b, const int __lane)
 {
   *__a = __aarch64_vget_lane_any (__b, __lane);
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index 5d7966d7adf49a1824ddc41cd34b04c6f179b09a..da9bbace3817a837db73fb4b413507cde21b4997 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -52,7 +52,7 @@
 (define_mode_iterator VSDQ_I_DI [V8QI V16QI V4HI V8HI V2SI V4SI V2DI DI])

 ;; Double vector modes.
-(define_mode_iterator VD [V8QI V4HI V2SI V2SF])
+(define_mode_iterator VD [V8QI V4HI V4HF V2SI V2SF])

 ;; vector, 64-bit container, all integer modes
 (define_mode_iterator VD_BHSI [V8QI V4HI V2SI])
@@ -61,10 +61,10 @@
 (define_mode_iterator VDQ_BHSI [V8QI V16QI V4HI V8HI V2SI V4SI])

 ;; Quad vector modes.
-(define_mode_iterator VQ [V16QI V8HI V4SI V2DI V4SF V2DF])
+(define_mode_iterator VQ [V16QI V8HI V4SI V2DI V8HF V4SF V2DF])

 ;; VQ without 2 element modes.
-(define_mode_iterator VQ_NO2E [V16QI V8HI V4SI V4SF])
+(define_mode_iterator VQ_NO2E [V16QI V8HI V4SI V8HF V4SF])

 ;; Quad vector with only 2 element modes.
 (define_mode_iterator VQ_2E [V2DI V2DF])
@@ -97,13 +97,21 @@
 ;; Vector Float modes with 2 elements.
 (define_mode_iterator V2F [V2SF V2DF])

-;; All modes.
+;; All vector modes on which we support any arithmetic operations.
 (define_mode_iterator VALL [V8QI V16QI V4HI V8HI V2SI V4SI V2DI V2SF V4SF V2DF])

-;; All vector modes and DI.
+;; All vector modes suitable for moving, loading, and storing.
+(define_mode_iterator VALL_F16 [V8QI V16QI V4HI V8HI V2SI V4SI V2DI
+				V4HF V8HF V2SF V4SF V2DF])
+
+;; All vector modes barring HF modes, plus DI.
 (define_mode_iterator VALLDI [V8QI V16QI V4HI V8HI V2SI V4SI V2DI V2SF V4SF V2DF DI])

-;; All vector modes and DI and DF.
+;; All vector modes and DI.
+(define_mode_iterator VALLDI_F16 [V8QI V16QI V4HI V8HI V2SI V4SI V2DI
+				  V4HF V8HF V2SF V4SF V2DF DI])
+
+;; All vector modes barring HF modes, plus DI and DF.
 (define_mode_iterator VALLDIF [V8QI V16QI V4HI V8HI V2SI V4SI V2DI V2SF V4SF V2DF DI DF])
@@ -364,7 +372,8 @@
 (define_mode_attr Vmtype [(V8QI ".8b") (V16QI ".16b")
			  (V4HI ".4h") (V8HI  ".8h")
			  (V2SI ".2s") (V4SI  ".4s")
-			  (V2DI ".2d") (V2SF ".2s")
+			  (V2DI ".2d") (V4HF ".4h")
+			  (V8HF ".8h") (V2SF ".2s")
			  (V4SF ".4s") (V2DF ".2d")
			  (DI   "")    (SI   "")
			  (HI   "")    (QI   "")
@@ -401,6 +410,7 @@
			 (V4HI "HI") (V8HI "HI")
			 (V2SI "SI") (V4SI "SI")
			 (DI "DI")   (V2DI "DI")
+			 (V4HF "HF") (V8HF "HF")
			 (V2SF "SF") (V4SF "SF")
			 (V2DF "DF") (DF "DF")
			 (SI "SI") (HI "HI")
@@ -419,6 +429,7 @@
			  (V4HI "V8HI") (V8HI "V8HI")
			  (V2SI "V4SI") (V4SI "V4SI")
			  (DI "V2DI")   (V2DI "V2DI")
+			  (V4HF "V8HF") (V8HF "V8HF")
			  (V2SF "V2SF") (V4SF "V4SF")
			  (V2DF "V2DF") (SI "V4SI")
			  (HI "V8HI")   (QI "V16QI")])
@@ -428,10 +439,12 @@
			  (V4HI "V2HI") (V8HI "V4HI")
			  (V2SI "SI")   (V4SI "V2SI")
			  (V2DI "DI")   (V2SF "SF")
-			  (V4SF "V2SF") (V2DF "DF")])
+			  (V4SF "V2SF") (V4HF "V2HF")
+			  (V8HF "V4HF") (V2DF "DF")])

 ;; Double modes of vector modes.
 (define_mode_attr VDBL [(V8QI "V16QI") (V4HI "V8HI")
+			(V4HF "V8HF")
			(V2SI "V4SI") (V2SF "V4SF")
			(SI   "V2SI") (DI   "V2DI")
			(DF   "V2DF")])
@@ -542,14 +555,17 @@
 (define_mode_attr nregs [(OI "2") (CI "3") (XI "4")])

 (define_mode_attr VRL2 [(V8QI "V32QI") (V4HI "V16HI")
+			(V4HF "V16HF")
			(V2SI "V8SI") (V2SF "V8SF")
			(DI "V4DI") (DF "V4DF")])

 (define_mode_attr VRL3 [(V8QI "V48QI") (V4HI "V24HI")
+			(V4HF "V24HF")
			(V2SI "V12SI") (V2SF "V12SF")
			(DI "V6DI") (DF "V6DF")])

 (define_mode_attr VRL4 [(V8QI "V64QI") (V4HI "V32HI")
+			(V4HF "V32HF")
			(V2SI "V16SI") (V2SF "V16SF")
			(DI "V8DI") (DF "V8DF")])
@@ -562,6 +578,7 @@
			      (V2SI "V2SI") (V4SI "V2SI")
			      (DI "V2DI")   (V2DI "V2DI")
			      (V2SF "V2SF") (V4SF "V2SF")
+			      (V4HF "SF")   (V8HF "SF")
			      (DF "V2DI")   (V2DF "V2DI")])

 ;; Similar, for three elements.
@@ -570,6 +587,7 @@
				(V2SI "BLK") (V4SI "BLK")
				(DI "EI")    (V2DI "EI")
				(V2SF "BLK") (V4SF "BLK")
+				(V4HF "BLK") (V8HF "BLK")
				(DF "EI")    (V2DF "EI")])

 ;; Similar, for four elements.
@@ -578,6 +596,7 @@
			       (V2SI "V4SI") (V4SI "V4SI")
			       (DI "OI")     (V2DI "OI")
			       (V2SF "V4SF") (V4SF "V4SF")
+			       (V4HF "V4HF") (V8HF "V4HF")
			       (DF "OI")     (V2DF "OI")])
@@ -636,6 +655,7 @@
		     (V4HI "") (V8HI "_q")
		     (V2SI "") (V4SI  "_q")
		     (DI "")   (V2DI  "_q")
+		     (V4HF "") (V8HF "_q")
		     (V2SF "") (V4SF  "_q")
		     (V2DF "_q")
		     (QI "") (HI "") (SI "") (DI "") (SF "") (DF "")])
diff --git a/gcc/testsuite/g++.dg/abi/mangle-neon-aarch64.C b/gcc/testsuite/g++.dg/abi/mangle-neon-aarch64.C
index 09a20dc985ef04314e3435b5eb899035429400c4..5740c0281b2fdf8bbc11d9428ca2f6ba8f1760a0 100644
--- a/gcc/testsuite/g++.dg/abi/mangle-neon-aarch64.C
+++ b/gcc/testsuite/g++.dg/abi/mangle-neon-aarch64.C
@@ -13,6 +13,7 @@
 void f3 (uint8x8_t a) {}
 void f4 (uint16x4_t a) {}
 void f5 (uint32x2_t a) {}
 void f23 (uint64x1_t a) {}
+void f61 (float16x4_t a) {}
 void f6 (float32x2_t a) {}
 void f7 (poly8x8_t a) {}
 void f8 (poly16x4_t a) {}
@@ -25,6 +26,7 @@
 void f13 (uint8x16_t a) {}
 void f14 (uint16x8_t a) {}
 void f15 (uint32x4_t a) {}
 void f16 (uint64x2_t a) {}
+void f171 (float16x8_t a) {}
 void f17 (float32x4_t a) {}
 void f18 (float64x2_t a) {}
 void f19 (poly8x16_t a) {}
@@ -42,6 +44,7 @@ void g1 (int8x16_t, int8x16_t) {}
 // { dg-final { scan-assembler "_Z2f412__Uint16x4_t:" } }
 // { dg-final { scan-assembler "_Z2f512__Uint32x2_t:" } }
 // { dg-final { scan-assembler "_Z3f2312__Uint64x1_t:" } }
+// { dg-final { scan-assembler "_Z3f6113__Float16x4_t:" } }
 // { dg-final { scan-assembler "_Z2f613__Float32x2_t:" } }
 // { dg-final { scan-assembler "_Z2f711__Poly8x8_t:" } }
 // { dg-final { scan-assembler "_Z2f812__Poly16x4_t:" } }
@@ -53,6 +56,7 @@
 // { dg-final { scan-assembler "_Z3f1412__Uint16x8_t:" } }
 // { dg-final { scan-assembler "_Z3f1512__Uint32x4_t:" } }
 // { dg-final { scan-assembler "_Z3f1612__Uint64x2_t:" } }
+// { dg-final { scan-assembler "_Z4f17113__Float16x8_t:" } }
 // { dg-final { scan-assembler "_Z3f1713__Float32x4_t:" } }
 // { dg-final { scan-assembler "_Z3f1813__Float64x2_t:" } }
 // { dg-final { scan-assembler "_Z3f1912__Poly8x16_t:" } }
diff --git a/gcc/testsuite/gcc.target/aarch64/vld1-vst1_1.c b/gcc/testsuite/gcc.target/aarch64/vld1-vst1_1.c
index f8c6edb3bcf4e9c7f640b3be51129000f43b509f..fa9ef0f4e438b45cd7f316b18ba462573fe0e035 100644
--- a/gcc/testsuite/gcc.target/aarch64/vld1-vst1_1.c
+++ b/gcc/testsuite/gcc.target/aarch64/vld1-vst1_1.c
@@ -31,6 +31,7 @@
 THING (int8x8_t, 8, int8_t, _s8)	\
 THING (uint8x8_t, 8, uint8_t, _u8)	\
 THING (int16x4_t, 4, int16_t, _s16)	\
 THING (uint16x4_t, 4, uint16_t, _u16)	\
+THING (float16x4_t, 4, float16_t, _f16)	\
 THING (int32x2_t, 2, int32_t, _s32)	\
 THING (uint32x2_t, 2, uint32_t, _u32)	\
 THING (float32x2_t, 2, float32_t, _f32)	\
@@ -38,6 +39,7 @@
 THING (int8x16_t, 16, int8_t, q_s8)	\
 THING (uint8x16_t, 16, uint8_t, q_u8)	\
 THING (int16x8_t, 8, int16_t, q_s16)	\
 THING (uint16x8_t, 8, uint16_t, q_u16)	\
+THING (float16x8_t, 8, float16_t, q_f16)\
 THING (int32x4_t, 4, int32_t, q_s32)	\
 THING (uint32x4_t, 4, uint32_t, q_u32)	\
 THING (float32x4_t, 4, float32_t, q_f32)\
diff --git a/gcc/testsuite/gcc.target/aarch64/vld1_lane.c b/gcc/testsuite/gcc.target/aarch64/vld1_lane.c
index 463c88c0a5f89120d39f5cbc991fac709e89e3c3..c70df7135c1f32714d87f0c21cae41650354ffb6 100644
--- a/gcc/testsuite/gcc.target/aarch64/vld1_lane.c
+++ b/gcc/testsuite/gcc.target/aarch64/vld1_lane.c
@@ -16,6 +16,7 @@
 VARIANT (int32, , 2, _s32, 0)	\
 VARIANT (int64, , 1, _s64, 0)	\
 VARIANT (poly8, , 8, _p8, 7)	\
 VARIANT (poly16, , 4, _p16, 2)	\
+VARIANT (float16, , 4, _f16, 3)	\
 VARIANT (float32, , 2, _f32, 1)	\
 VARIANT (float64, , 1, _f64, 0)	\
 VARIANT (uint8, q, 16, _u8, 13)	\
@@ -28,6 +29,7 @@
 VARIANT (int32, q, 4, _s32, 1)	\
 VARIANT (int64, q, 2, _s64, 1)	\
 VARIANT (poly8, q, 16, _p8, 7)	\
 VARIANT (poly16, q, 8, _p16, 4)	\
+VARIANT (float16, q, 8, _f16, 3)\
 VARIANT (float32, q, 4, _f32, 2)\
 VARIANT (float64, q, 2, _f64, 1)
@@ -76,6 +78,7 @@ main (int argc, char **argv)
   int64_t int64_data = 0x1234567890abcdefLL;
   poly8_t poly8_data = 13;
   poly16_t poly16_data = 11111;
+  float16_t float16_data = 8.75;
   float32_t float32_data = 3.14159;
   float64_t float64_data = 1.010010001;
diff --git a/gcc/testsuite/gcc.target/aarch64/vset_lane_1.c b/gcc/testsuite/gcc.target/aarch64/vset_lane_1.c
index 5fb11399f202df7bc9a67c3d8ffb78f71c87e5c6..bc0132c20a7b8150b81491eaaf9b76ce448b2410 100644
--- a/gcc/testsuite/gcc.target/aarch64/vset_lane_1.c
+++ b/gcc/testsuite/gcc.target/aarch64/vset_lane_1.c
@@ -16,6 +16,7 @@
 VARIANT (int32_t, , 2, int32x2_t, _s32, 0)	\
 VARIANT (int64_t, , 1, int64x1_t, _s64, 0)	\
 VARIANT (poly8_t, , 8, poly8x8_t, _p8, 6)	\
 VARIANT (poly16_t, , 4, poly16x4_t, _p16, 2)	\
+VARIANT (float16_t, , 4, float16x4_t, _f16, 3)	\
 VARIANT (float32_t, , 2, float32x2_t, _f32, 1)	\
 VARIANT (float64_t, , 1, float64x1_t, _f64, 0)	\
 VARIANT (uint8_t, q, 16, uint8x16_t, _u8, 11)	\
@@ -28,6 +29,7 @@
 VARIANT (int32_t, q, 4, int32x4_t, _s32, 3)	\
 VARIANT (int64_t, q, 2, int64x2_t, _s64, 0)	\
 VARIANT (poly8_t, q, 16, poly8x16_t, _p8, 14)	\
 VARIANT (poly16_t, q, 8, poly16x8_t, _p16, 6)	\
+VARIANT (float16_t, q, 8, float16x8_t, _f16, 6)	\
 VARIANT (float32_t, q, 4, float32x4_t, _f32, 2)	\
 VARIANT (float64_t, q, 2, float64x2_t, _f64, 1)
@@ -76,6 +78,9 @@ main (int argc, char **argv)
   poly8_t poly8_t_data[16] =
     { 0, 7, 13, 18, 22, 25, 27, 28, 29, 31, 34, 38, 43, 49, 56, 64 };
   poly16_t poly16_t_data[8] = { 11111, 2222, 333, 44, 5, 65432, 54321, 43210 };
+  float16_t float16_t_data[8] = { 1.25, 4.5, 7.875, 2.3125, 5.675, 8.875,
+				  3.6875, 6.75 };
+
   float32_t float32_t_data[4] = { 3.14159, 2.718, 1.414, 100.0 };
   float64_t float64_t_data[2] = { 1.01001000100001, 12345.6789 };
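
For reviewers less familiar with the intrinsics being added: the new f16 lane accessors are thin wrappers around __aarch64_vget_lane_any / __aarch64_vset_lane_any, i.e. they read or replace a single element of the vector (with big-endian lane reversal handled inside those macros via ENDIAN_LANE_N). As a rough scalar model of the intended semantics - not the patch itself, and buildable on any host - the following sketch uses invented model_* names and plain float standing in for the real __fp16 element type:

```c
/* Scalar model of the vget_lane_f16/vset_lane_f16 semantics added by this
   patch.  The model_f16x4 type and model_* functions are hypothetical
   stand-ins; the real float16x4_t is a 64-bit AdvSIMD register whose
   element type is __fp16.  */

typedef struct { float val[4]; } model_f16x4;

/* Models vget_lane_f16: return element LANE of V.  */
static float
model_vget_lane (model_f16x4 v, int lane)
{
  return v.val[lane];
}

/* Models vset_lane_f16: return a copy of V with element LANE
   replaced by ELEM (the intrinsic does not modify its argument).  */
static model_f16x4
model_vset_lane (float elem, model_f16x4 v, int lane)
{
  v.val[lane] = elem;
  return v;
}
```

The literals used in vset_lane_1.c above (1.25, 4.5, 7.875, ...) are chosen to be exactly representable in IEEE binary16, so such round-trips compare exactly.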