From patchwork Thu Oct 12 06:02:08 2023
X-Patchwork-Submitter: liuhongt
X-Patchwork-Id: 1847233
From: liuhongt
To: gcc-patches@gcc.gnu.org
Cc: crazylht@gmail.com, hjl.tools@gmail.com
Subject: [PATCH 1/2] Enable vectorization for V2HF/V4HF rounding operations and sqrt.
Date: Thu, 12 Oct 2023 14:02:08 +0800
Message-Id: <20231012060209.4130200-1-hongtao.liu@intel.com>
X-Mailer: git-send-email 2.31.1

lrint/lround/lceil/lfloor are not vectorized due to a vectorization
restriction: when the input element size differs from the output element
size, vectorization relies on the old
TARGET_VECTORIZE_BUILTIN_VECTORIZED_FUNCTION hook instead of the modern
standard pattern names.  This patch only adds the standard pattern names
and does not update ix86_builtin_vectorized_function.

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
Ready to push to trunk.

gcc/ChangeLog:

	* config/i386/i386-expand.cc (ix86_sse_copysign_to_positive):
	Handle HFmode.
	(ix86_expand_round_sse4): Ditto.
	* config/i386/i386.md (roundhf2): New expander.
	(lroundhf2): Ditto.
	(lrinthf2): Ditto.
	(lhf2): Ditto.
	* config/i386/mmx.md (sqrt2): Ditto.
	(btrunc2): Ditto.
	(nearbyint2): Ditto.
	(rint2): Ditto.
	(lrint2): Ditto.
	(floor2): Ditto.
	(lfloor2): Ditto.
	(ceil2): Ditto.
	(lceil2): Ditto.
	(round2): Ditto.
	(lround2): Ditto.
	* config/i386/sse.md (lrint2): Ditto.
	(lfloor2): Ditto.
	(lceil2): Ditto.
	(lround2): Ditto.
	(sse4_1_round): Extend to V8HF.
	(round2): Extend to V8HF/V16HF/V32HF.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/part-vect-roundhf.c: New test.
	* gcc.target/i386/part-vect-sqrtph-1.c: New test.
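As a rough illustration only (this mirrors the floor_64 case in the new
part-vect-roundhf.c test and is not part of the patch; the function name
and flags below are made up for the example), a 64-bit partial-vector HF
rounding kernel like the following is expected to be SLP-vectorized once
the standard pattern names are in place:

/* Assumed flags: -Ofast -mavx512fp16 -mavx512vl.  */
void
floor_hf4 (_Float16 *__restrict r, const _Float16 *__restrict b)
{
  /* Four HF lanes (V4HF); expected to match the "vectorized using
     8 byte vectors" scan in the new test and end up as a single
     vrndscaleph on the low 64 bits of an xmm register.  */
  r[0] = __builtin_floorf16 (b[0]);
  r[1] = __builtin_floorf16 (b[1]);
  r[2] = __builtin_floorf16 (b[2]);
  r[3] = __builtin_floorf16 (b[3]);
}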
--- gcc/config/i386/i386-expand.cc | 6 + gcc/config/i386/i386.md | 38 +++ gcc/config/i386/mmx.md | 191 ++++++++++++++- gcc/config/i386/sse.md | 60 ++++- .../gcc.target/i386/part-vect-roundhf.c | 217 ++++++++++++++++++ .../gcc.target/i386/part-vect-sqrtph-1.c | 20 ++ 6 files changed, 521 insertions(+), 11 deletions(-) create mode 100644 gcc/testsuite/gcc.target/i386/part-vect-roundhf.c create mode 100644 gcc/testsuite/gcc.target/i386/part-vect-sqrtph-1.c diff --git a/gcc/config/i386/i386-expand.cc b/gcc/config/i386/i386-expand.cc index 425f3531862..b81b5cc030c 100644 --- a/gcc/config/i386/i386-expand.cc +++ b/gcc/config/i386/i386-expand.cc @@ -18434,6 +18434,8 @@ ix86_sse_copysign_to_positive (rtx result, rtx abs_value, rtx sign, rtx mask) vmode = V4SFmode; else if (mode == DFmode) vmode = V2DFmode; + else if (mode == HFmode) + vmode = V8HFmode; else vmode = mode; @@ -18970,6 +18972,10 @@ ix86_expand_round_sse4 (rtx op0, rtx op1) switch (mode) { + case E_HFmode: + gen_copysign = gen_copysignhf3; + gen_round = gen_sse4_1_roundhf2; + break; case E_SFmode: gen_copysign = gen_copysignsf3; gen_round = gen_sse4_1_roundsf2; diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md index 65a0dd025c7..41173cb3452 100644 --- a/gcc/config/i386/i386.md +++ b/gcc/config/i386/i386.md @@ -21741,6 +21741,15 @@ (define_expand "nearbyint2" DONE; }) +(define_expand "roundhf2" + [(match_operand:HF 0 "register_operand") + (match_operand:HF 1 "register_operand")] + "TARGET_AVX512FP16 && !flag_trapping_math && !flag_rounding_math" +{ + ix86_expand_round_sse4 (operands[0], operands[1]); + DONE; +}) + (define_expand "round2" [(match_operand:X87MODEF 0 "register_operand") (match_operand:X87MODEF 1 "nonimmediate_operand")] @@ -21792,6 +21801,22 @@ (define_insn "lrintxf2" [(set_attr "type" "fpspc") (set_attr "mode" "")]) +(define_expand "lroundhf2" + [(set (match_operand:SWI248 0 "register_operand") + (unspec:SWI248 [(match_operand:HF 1 "nonimmediate_operand")] + UNSPEC_FIX_NOTRUNC))] + "TARGET_AVX512FP16 && !flag_trapping_math && !flag_rounding_math" +{ + ix86_expand_lround (operands[0], operands[1]); + DONE; +}) + +(define_expand "lrinthf2" + [(set (match_operand:SWI48 0 "register_operand") + (unspec:SWI48 [(match_operand:HF 1 "nonimmediate_operand")] + UNSPEC_FIX_NOTRUNC))] + "TARGET_AVX512FP16") + (define_expand "lrint2" [(set (match_operand:SWI48 0 "register_operand") (unspec:SWI48 [(match_operand:MODEF 1 "nonimmediate_operand")] @@ -22034,6 +22059,19 @@ (define_expand "lxf2" && (!TARGET_SSE_MATH || TARGET_MIX_SSE_I387) && flag_unsafe_math_optimizations") +(define_expand "lhf2" + [(set (match_operand:SWI48 0 "nonimmediate_operand") + (unspec:SWI48 [(match_operand:HF 1 "register_operand")] + FIST_ROUNDING))] + "TARGET_AVX512FP16" +{ + rtx tmp = gen_reg_rtx (HFmode); + emit_insn (gen_sse4_1_roundhf2 (tmp, operands[1], + GEN_INT (ROUND_ | ROUND_NO_EXC))); + emit_insn (gen_fix_trunchf2 (operands[0], tmp)); + DONE; +}) + (define_expand "l2" [(parallel [(set (match_operand:SWI48 0 "nonimmediate_operand") (unspec:SWI48 [(match_operand:MODEF 1 "register_operand")] diff --git a/gcc/config/i386/mmx.md b/gcc/config/i386/mmx.md index c84a37a8444..8375100d4bf 100644 --- a/gcc/config/i386/mmx.md +++ b/gcc/config/i386/mmx.md @@ -103,7 +103,8 @@ (define_mode_attr mmxintvecmode (V4HF "V4HF") (V2HF "V2HI")]) (define_mode_attr mmxintvecmodelower - [(V2SF "v2si") (V2SI "v2si") (V4HI "v4hi") (V8QI "v8qi")]) + [(V2SF "v2si") (V2SI "v2si") (V4HI "v4hi") (V8QI "v8qi") + (V4HF "v4hi") (V2HF "v2hi")]) ;; Mapping of vector modes to a 
vector mode of double size (define_mode_attr mmxdoublevecmode @@ -2053,6 +2054,21 @@ (define_expand "3" DONE; }) +(define_expand "sqrt2" + [(set (match_operand:VHF_32_64 0 "register_operand") + (sqrt:VHF_32_64 + (match_operand:VHF_32_64 1 "nonimmediate_operand")))] + "TARGET_AVX512FP16 && TARGET_AVX512VL && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_sqrtv8hf2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + DONE; +}) + (define_expand "2" [(set (match_operand:VHF_32_64 0 "register_operand") (absneg:VHF_32_64 @@ -2088,6 +2104,179 @@ (define_insn_and_split "*mmx_nabs2" [(set (match_dup 0) (ior: (match_dup 1) (match_dup 2)))]) +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; Parallel half-precision floating point rounding operations. +;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +(define_expand "btrunc2" + [(match_operand:VHF_32_64 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math + && !flag_trapping_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_btruncv8hf2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + +(define_expand "nearbyint2" + [(match_operand:VHF_32_64 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_nearbyintv8hf2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + +(define_expand "rint2" + [(match_operand:VHF_32_64 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_rintv8hf2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + +(define_expand "lrint2" + [(match_operand: 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_lrintv8hfv8hi2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + +(define_expand "floor2" + [(match_operand:VHF_32_64 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math + && !flag_trapping_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_floorv8hf2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + +(define_expand "lfloor2" + [(match_operand: 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math + && !flag_trapping_math" +{ + 
rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_lfloorv8hfv8hi2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + +(define_expand "ceil2" + [(match_operand:VHF_32_64 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math + && !flag_trapping_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_ceilv8hf2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + +(define_expand "lceil2" + [(match_operand: 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math + && !flag_trapping_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_lceilv8hfv8hi2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + +(define_expand "round2" + [(match_operand:VHF_32_64 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math + && !flag_trapping_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_roundv8hf2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + +(define_expand "lround2" + [(match_operand: 0 "register_operand") + (match_operand:VHF_32_64 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && ix86_partial_vec_fp_math + && !flag_trapping_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + emit_insn (gen_lroundv8hfv8hi2 (op0, op1)); + emit_move_insn (operands[0], lowpart_subreg (mode, op0, V8HFmode)); + + DONE; +}) + ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;; ;; Parallel half-precision floating point logical operations diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md index 22e43eb3f92..4602edf2374 100644 --- a/gcc/config/i386/sse.md +++ b/gcc/config/i386/sse.md @@ -7092,6 +7092,13 @@ (define_expand "vec_unpacks_hi_" DONE; }) +(define_expand "lrint2" + [(set (match_operand: 0 "register_operand") + (unspec: + [(match_operand:VHF_AVX512VL 1 "register_operand")] + UNSPEC_FIX_NOTRUNC))] + "TARGET_AVX512FP16") + (define_insn "avx512fp16_vcvtph2_" [(set (match_operand:VI248_AVX512VL 0 "register_operand" "=v") (unspec:VI248_AVX512VL @@ -24183,13 +24190,13 @@ (define_expand "_round_vec_pack_sfix" }) (define_insn "sse4_1_round" - [(set (match_operand:VF_128 0 "register_operand" "=Yr,*x,x,v") - (vec_merge:VF_128 - (unspec:VF_128 - [(match_operand:VF_128 2 "nonimmediate_operand" "Yrjm,*xjm,xjm,vm") + [(set (match_operand:VFH_128 0 "register_operand" "=Yr,*x,x,v") + (vec_merge:VFH_128 + (unspec:VFH_128 + [(match_operand:VFH_128 2 "nonimmediate_operand" "Yrjm,*xjm,xjm,vm") (match_operand:SI 3 "const_0_to_15_operand")] UNSPEC_ROUND) - (match_operand:VF_128 1 "register_operand" "0,0,x,v") + (match_operand:VFH_128 1 "register_operand" "0,0,x,v") (const_int 1)))] "TARGET_SSE4_1" { @@ -24201,7 +24208,7 @@ (define_insn "sse4_1_round" case 2: return "vround\t{%3, 
%2, %1, %0|%0, %1, %2, %3}"; case 3: - if (x86_evex_reg_mentioned_p (operands, 3)) + if (x86_evex_reg_mentioned_p (operands, 3) || mode == V8HFmode) return "vrndscale\t{%3, %2, %1, %0|%0, %1, %2, %3}"; else return "vround\t{%3, %2, %1, %0|%0, %1, %2, %3}"; @@ -24264,6 +24271,17 @@ (define_expand "floor2" "TARGET_SSE4_1 && !flag_trapping_math" "operands[2] = GEN_INT (ROUND_FLOOR | ROUND_NO_EXC);") +(define_expand "lfloor2" + [(match_operand: 0 "register_operand") + (match_operand:VHF_AVX512VL 1 "nonimmediate_operand")] + "TARGET_AVX512FP16 && !flag_trapping_math" +{ + rtx tmp = gen_reg_rtx (mode); + emit_insn (gen_floor2 (tmp, operands[1])); + emit_insn (gen_fix_trunc2 (operands[0], tmp)); + DONE; +}) + (define_expand "lfloor2" [(match_operand: 0 "register_operand") (match_operand:VF1_VF2_AVX512DQ 1 "register_operand")] @@ -24284,6 +24302,17 @@ (define_expand "ceil2" "TARGET_SSE4_1 && !flag_trapping_math" "operands[2] = GEN_INT (ROUND_CEIL | ROUND_NO_EXC);") +(define_expand "lceil2" + [(match_operand: 0 "register_operand") + (match_operand:VHF_AVX512VL 1 "register_operand")] + "TARGET_AVX512FP16 && !flag_trapping_math" +{ + rtx tmp = gen_reg_rtx (mode); + emit_insn (gen_ceil2 (tmp, operands[1])); + emit_insn (gen_fix_trunc2 (operands[0], tmp)); + DONE; +}) + (define_expand "lceil2" [(match_operand: 0 "register_operand") (match_operand:VF1_VF2_AVX512DQ 1 "register_operand")] @@ -24306,11 +24335,11 @@ (define_expand "btrunc2" (define_expand "round2" [(set (match_dup 3) - (plus:VF - (match_operand:VF 1 "register_operand") + (plus:VFH + (match_operand:VFH 1 "register_operand") (match_dup 2))) - (set (match_operand:VF 0 "register_operand") - (unspec:VF + (set (match_operand:VFH 0 "register_operand") + (unspec:VFH [(match_dup 3) (match_dup 4)] UNSPEC_ROUND))] "TARGET_SSE4_1 && !flag_trapping_math" @@ -24338,6 +24367,17 @@ (define_expand "round2" operands[4] = GEN_INT (ROUND_TRUNC); }) +(define_expand "lround2" + [(match_operand: 0 "register_operand") + (match_operand:VHF_AVX512VL 1 "register_operand")] + "TARGET_AVX512FP16 && !flag_trapping_math" +{ + rtx tmp = gen_reg_rtx (mode); + emit_insn (gen_round2 (tmp, operands[1])); + emit_insn (gen_fix_trunc2 (operands[0], tmp)); + DONE; +}) + (define_expand "lround2" [(match_operand: 0 "register_operand") (match_operand:VF1_VF2_AVX512DQ 1 "register_operand")] diff --git a/gcc/testsuite/gcc.target/i386/part-vect-roundhf.c b/gcc/testsuite/gcc.target/i386/part-vect-roundhf.c new file mode 100644 index 00000000000..38235c157b2 --- /dev/null +++ b/gcc/testsuite/gcc.target/i386/part-vect-roundhf.c @@ -0,0 +1,217 @@ +/* { dg-do run { target avx512fp16 } } */ +/* { dg-options "-O1 -mavx512fp16 -mavx512vl -fdump-tree-slp-details -fdump-tree-optimized" } */ + +extern void abort (); + +static void do_test (void); + +#define DO_TEST do_test +#define AVX512FP16 +#include "avx512-check.h" + +#define N 16 +_Float16 b[N] = {-1.2f, 3.4f, -5.6f, 7.8f, + -9.0f, 1.0f, -2.0f, 3.0f, + -4.0f, -5.0f, 6.0f, 7.0f, + -8.0f, -9.0f, 10.0f, 11.0f}; +_Float16 r[N]; + +void +__attribute__((noipa,noinline,optimize("Ofast"))) +round_32 (void) +{ + r[0] = __builtin_roundf16 (b[0]); + r[1] = __builtin_roundf16 (b[1]); +} + +void +__attribute__((noipa,noinline,optimize("Ofast"))) +round_64 (void) +{ + r[0] = __builtin_roundf16 (b[0]); + r[1] = __builtin_roundf16 (b[1]); + r[2] = __builtin_roundf16 (b[2]); + r[3] = __builtin_roundf16 (b[3]); +} + +void +__attribute__((noipa,noinline,optimize("O2"))) +rint_32 (void) +{ + r[0] = __builtin_rintf16 (b[0]); + r[1] = __builtin_rintf16 (b[1]); +} 
+ +void +__attribute__((noipa,noinline,optimize("O2"))) +rint_64 (void) +{ + r[0] = __builtin_rintf16 (b[0]); + r[1] = __builtin_rintf16 (b[1]); + r[2] = __builtin_rintf16 (b[2]); + r[3] = __builtin_rintf16 (b[3]); +} + +void +__attribute__((noipa,noinline,optimize("O2"))) +nearbyint_32 (void) +{ + r[0] = __builtin_nearbyintf16 (b[0]); + r[1] = __builtin_nearbyintf16 (b[1]); +} + +void +__attribute__((noipa,noinline,optimize("O2"))) +nearbyint_64 (void) +{ + r[0] = __builtin_nearbyintf16 (b[0]); + r[1] = __builtin_nearbyintf16 (b[1]); + r[2] = __builtin_nearbyintf16 (b[2]); + r[3] = __builtin_nearbyintf16 (b[3]); +} + +void +__attribute__((noipa,noinline,optimize("Ofast"))) +trunc_32 (void) +{ + r[0] = __builtin_truncf16 (b[0]); + r[1] = __builtin_truncf16 (b[1]); +} + +void +__attribute__((noipa,noinline,optimize("Ofast"))) +trunc_64 (void) +{ + r[0] = __builtin_truncf16 (b[0]); + r[1] = __builtin_truncf16 (b[1]); + r[2] = __builtin_truncf16 (b[2]); + r[3] = __builtin_truncf16 (b[3]); +} + +void +__attribute__((noipa,noinline,optimize("Ofast"))) +floor_32 (void) +{ + r[0] = __builtin_floorf16 (b[0]); + r[1] = __builtin_floorf16 (b[1]); +} + +void +__attribute__((noipa,noinline,optimize("Ofast"))) +floor_64 (void) +{ + r[0] = __builtin_floorf16 (b[0]); + r[1] = __builtin_floorf16 (b[1]); + r[2] = __builtin_floorf16 (b[2]); + r[3] = __builtin_floorf16 (b[3]); +} + +void +__attribute__((noipa,noinline,optimize("Ofast"))) +ceil_32 (void) +{ + r[0] = __builtin_ceilf16 (b[0]); + r[1] = __builtin_ceilf16 (b[1]); +} + +void +__attribute__((noipa,noinline,optimize("Ofast"))) +ceil_64 (void) +{ + r[0] = __builtin_ceilf16 (b[0]); + r[1] = __builtin_ceilf16 (b[1]); + r[2] = __builtin_ceilf16 (b[2]); + r[3] = __builtin_ceilf16 (b[3]); +} + +_Float16 +__attribute__((noipa,noinline,optimize("Ofast"))) +dummy_roundf16 (_Float16 a) +{ + return __builtin_roundf16 (a); +} +static void +__attribute__ ((noinline, noclone)) +do_test (void) +{ + round_32 (); + /* check results: */ + for (int i = 0; i != 2; i++) + if (r[i] != dummy_roundf16 (b[i])) + abort (); + + round_64 (); + /* check results: */ + for (int i = 0; i != 4; i++) + if (r[i] != dummy_roundf16 (b[i])) + abort (); + + rint_32 (); + /* check results: */ + for (int i = 0; i != 2; i++) + if (r[i] != __builtin_rintf16 (b[i])) + abort (); + + rint_64 (); + /* check results: */ + for (int i = 0; i != 4; i++) + if (r[i] != __builtin_rintf16 (b[i])) + abort (); + + nearbyint_32 (); + /* check results: */ + for (int i = 0; i != 2; i++) + if (r[i] != __builtin_nearbyintf16 (b[i])) + abort (); + + nearbyint_64 (); + /* check results: */ + for (int i = 0; i != 4; i++) + if (r[i] != __builtin_nearbyintf16 (b[i])) + abort (); + + trunc_32 (); + /* check results: */ + for (int i = 0; i != 2; i++) + if (r[i] != __builtin_truncf16 (b[i])) + abort (); + + trunc_64 (); + /* check results: */ + for (int i = 0; i != 4; i++) + if (r[i] != __builtin_truncf16 (b[i])) + abort (); + + floor_32 (); + /* check results: */ + for (int i = 0; i != 2; i++) + if (r[i] != __builtin_floorf16 (b[i])) + abort (); + + floor_64 (); + /* check results: */ + for (int i = 0; i != 4; i++) + if (r[i] != __builtin_floorf16 (b[i])) + abort (); + + ceil_32 (); + /* check results: */ + for (int i = 0; i != 2; i++) + if (r[i] != __builtin_ceilf16 (b[i])) + abort (); + + ceil_64 (); + /* check results: */ + for (int i = 0; i != 4; i++) + if (r[i] != __builtin_ceilf16 (b[i])) + abort (); +} + +/* { dg-final { scan-tree-dump-times "vectorized using 8 byte vectors" 6 "slp2" { target { ! 
ia32 } } } } */
+/* { dg-final { scan-tree-dump-times "vectorized using 4 byte vectors" 6 "slp2" { target { ! ia32 } } } } */
+/* { dg-final { scan-tree-dump-times {(?n).CEIL \(vect} 2 "optimized" { target { ! ia32 } } } } */
+/* { dg-final { scan-tree-dump-times {(?n).FLOOR \(vect} 2 "optimized" { target { ! ia32 } } } } */
+/* { dg-final { scan-tree-dump-times {(?n).ROUND \(vect} 2 "optimized" { target { ! ia32 } } } } */
+/* { dg-final { scan-tree-dump-times {(?n).RINT \(vect} 2 "optimized" { target { ! ia32 } } } } */
+/* { dg-final { scan-tree-dump-times {(?n).NEARBYINT \(vect} 2 "optimized" { target { ! ia32 } } } } */
+/* { dg-final { scan-tree-dump-times {(?n).TRUNC \(vect} 2 "optimized" { target { ! ia32 } } } } */
diff --git a/gcc/testsuite/gcc.target/i386/part-vect-sqrtph-1.c b/gcc/testsuite/gcc.target/i386/part-vect-sqrtph-1.c
new file mode 100644
index 00000000000..b7f9e7fb9b2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/part-vect-sqrtph-1.c
@@ -0,0 +1,20 @@
+/* { dg-do compile } */
+/* { dg-options "-mavx512fp16 -mavx512vl -Ofast" } */
+/* { dg-final { scan-assembler-times {(?n)vsqrtph[ \t].*%xmm[0-9]} 2 { target { ! ia32 } } } } */
+/* { dg-final { scan-assembler-times {(?n)vsqrtph[ \t].*%xmm[0-9]} 2 { target { ! ia32 } } } } */
+
+void
+foo16_sqrt (_Float16* a, _Float16* __restrict c)
+{
+  c[0] = __builtin_sqrtf16 (a[0]);
+  c[1] = __builtin_sqrtf16 (a[1]);
+}
+
+void
+foo32_sqrt(_Float16* a, _Float16* __restrict c)
+{
+  c[0] = __builtin_sqrtf16 (a[0]);
+  c[1] = __builtin_sqrtf16 (a[1]);
+  c[2] = __builtin_sqrtf16 (a[2]);
+  c[3] = __builtin_sqrtf16 (a[3]);
+}

From patchwork Thu Oct 12 06:02:09 2023
X-Patchwork-Submitter: liuhongt
X-Patchwork-Id: 1847232
From: liuhongt
To: gcc-patches@gcc.gnu.org
Cc: crazylht@gmail.com, hjl.tools@gmail.com
Subject: [PATCH 2/2] Support 32/64-bit vectorization for conversion between _Float16 and integer/float.
Date: Thu, 12 Oct 2023 14:02:09 +0800
Message-Id: <20231012060209.4130200-2-hongtao.liu@intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20231012060209.4130200-1-hongtao.liu@intel.com>
References: <20231012060209.4130200-1-hongtao.liu@intel.com>

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
Ready to push to trunk.

gcc/ChangeLog:

	* config/i386/mmx.md (V2FI_32): New mode iterator.
	(movd_v2hf_to_sse): Rename to ..
	(movd__to_sse): .. this.
	(movd_v2hf_to_sse_reg): Rename to ..
	(movd__to_sse_reg): .. this.
	(fix_trunc2): New expander.
	(fix_truncv2hfv2si2): Ditto.
	(float2): Ditto.
	(floatv2siv2hf2): Ditto.
	(extendv2hfv2sf2): Ditto.
	(truncv2sfv2hf2): Ditto.
	* config/i386/sse.md (*vec_concatv8hf_movss): Rename to ..
	(*vec_concat_movss): .. this.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/part-vect-hf-convert-1.c: New test.
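For reference only (a sketch mirroring the fix_64 loop in the new
part-vect-hf-convert-1.c test, not part of the patch; the function name
and flags are made up for the example), this is the kind of 64-bit
partial-vector conversion the patch enables:

/* Assumed flags: -O2 -mavx512fp16 -mavx512vl.  */
void
hf_to_short4 (short *__restrict pa, const _Float16 *__restrict pb)
{
  /* Four 16-bit lanes, _Float16 -> short; expected to become a single
     vcvttph2w on a 64-bit vector instead of four scalar conversions.  */
  for (int i = 0; i != 4; i++)
    pa[i] = pb[i];
}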
--- gcc/config/i386/mmx.md | 164 ++++++++++++++++-- gcc/config/i386/sse.md | 12 +- .../gcc.target/i386/part-vect-hf-convert-1.c | 111 ++++++++++++ 3 files changed, 262 insertions(+), 25 deletions(-) create mode 100644 gcc/testsuite/gcc.target/i386/part-vect-hf-convert-1.c diff --git a/gcc/config/i386/mmx.md b/gcc/config/i386/mmx.md index 8375100d4bf..be2a9026c44 100644 --- a/gcc/config/i386/mmx.md +++ b/gcc/config/i386/mmx.md @@ -60,6 +60,7 @@ (define_mode_iterator MMXMODE248 [V4HI V2SI V1DI]) ;; All 4-byte integer/float16 vector modes (define_mode_iterator V_32 [V4QI V2HI V1SI V2HF V2BF]) +(define_mode_iterator V2FI_32 [V2HF V2BF V2HI]) ;; 4-byte integer vector modes (define_mode_iterator VI_32 [V4QI V2HI]) @@ -79,7 +80,7 @@ (define_mode_iterator V_16_32_64 ;; V2S* modes (define_mode_iterator V2FI [V2SF V2SI]) -(define_mode_iterator V2FI_V4HF [V2SF V2SI V4HF]) +(define_mode_iterator V24FI [V2SF V2SI V4HF V4HI]) ;; Mapping from integer vector mode to mnemonic suffix (define_mode_attr mmxvecsize [(V8QI "b") (V4QI "b") (V2QI "b") @@ -100,7 +101,7 @@ (define_mode_attr mmxdoublemode ;; Mapping of vector float modes to an integer mode of the same size (define_mode_attr mmxintvecmode [(V2SF "V2SI") (V2SI "V2SI") (V4HI "V4HI") (V8QI "V8QI") - (V4HF "V4HF") (V2HF "V2HI")]) + (V4HF "V4HI") (V2HF "V2HI")]) (define_mode_attr mmxintvecmodelower [(V2SF "v2si") (V2SI "v2si") (V4HI "v4hi") (V8QI "v8qi") @@ -108,7 +109,7 @@ (define_mode_attr mmxintvecmodelower ;; Mapping of vector modes to a vector mode of double size (define_mode_attr mmxdoublevecmode - [(V2SF "V4SF") (V2SI "V4SI") (V4HF "V8HF")]) + [(V2SF "V4SF") (V2SI "V4SI") (V4HF "V8HF") (V4HI "V8HI")]) ;; Mapping of vector modes back to the scalar modes (define_mode_attr mmxscalarmode @@ -600,7 +601,7 @@ (define_insn "sse_movntq" (define_expand "movq__to_sse" [(set (match_operand: 0 "register_operand") (vec_concat: - (match_operand:V2FI_V4HF 1 "nonimmediate_operand") + (match_operand:V24FI 1 "nonimmediate_operand") (match_dup 2)))] "TARGET_SSE2" { @@ -1967,31 +1968,40 @@ (define_expand "divv4hf3" DONE; }) -(define_mode_attr mov_to_sse_suffix [(V2HF "d") (V4HF "q")]) -(define_expand "movd_v2hf_to_sse" - [(set (match_operand:V8HF 0 "register_operand") - (vec_merge:V8HF - (vec_duplicate:V8HF - (match_operand:V2HF 1 "nonimmediate_operand")) +(define_mode_attr mov_to_sse_suffix + [(V2HF "d") (V4HF "q") (V2HI "d") (V4HI "q")]) + +(define_mode_attr mmxxmmmode + [(V2HF "V8HF") (V2HI "V8HI") (V2BF "V8BF")]) + +(define_mode_attr mmxxmmmodelower + [(V2HF "v8hf") (V2HI "v8hi") (V2BF "v8bf")]) + +(define_expand "movd__to_sse" + [(set (match_operand: 0 "register_operand") + (vec_merge: + (vec_duplicate: + (match_operand:V2FI_32 1 "nonimmediate_operand")) (match_dup 2) (const_int 3)))] "TARGET_SSE" { if (!flag_trapping_math) { - rtx op1 = force_reg (V2HFmode, operands[1]); - emit_move_insn (operands[0], lowpart_subreg (V8HFmode, op1, V2HFmode)); + rtx op1 = force_reg (mode, operands[1]); + emit_move_insn (operands[0], + lowpart_subreg (mode, op1, mode)); DONE; } - operands[2] = CONST0_RTX (V8HFmode); + operands[2] = CONST0_RTX (mode); }) -(define_expand "movd_v2hf_to_sse_reg" - [(set (match_operand:V8HF 0 "register_operand") - (vec_merge:V8HF - (vec_duplicate:V8HF - (match_operand:V2HF 1 "nonimmediate_operand")) - (match_operand:V8HF 2 "register_operand") +(define_expand "movd__to_sse_reg" + [(set (match_operand: 0 "register_operand") + (vec_merge: + (vec_duplicate: + (match_operand:V2FI_32 1 "nonimmediate_operand")) + (match_operand: 2 "register_operand") 
(const_int 3)))] "TARGET_SSE") @@ -2353,6 +2363,122 @@ (define_expand "signbit2" "TARGET_SSE2" "operands[2] = GEN_INT (GET_MODE_UNIT_BITSIZE (mode)-1);") +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; +;; +;; Parallel single-precision floating point conversion operations +;; +;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; + +(define_expand "fix_trunc2" + [(set (match_operand: 0 "register_operand") + (any_fix: + (match_operand:VHF_32_64 1 "nonimmediate_operand")))] + "TARGET_AVX512FP16 && TARGET_AVX512VL && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V8HImode); + + emit_insn (gen_mov__to_sse (op1, operands[1])); + + emit_insn (gen_fix_truncv8hfv8hi2 (op0, op1)); + + emit_move_insn (operands[0], + lowpart_subreg (mode, op0, V8HImode)); + DONE; +}) + +(define_expand "fix_truncv2hfv2si2" + [(set (match_operand:V2SI 0 "register_operand") + (any_fix:V2SI + (match_operand:V2HF 1 "nonimmediate_operand")))] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && TARGET_MMX_WITH_SSE && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V4SImode); + + emit_insn (gen_movd_v2hf_to_sse (op1, operands[1])); + + emit_insn (gen_avx512fp16_fix_truncv4si2 (op0, op1)); + + emit_move_insn (operands[0], lowpart_subreg (V2SImode, op0, V4SImode)); + DONE; +}) + +(define_expand "float2" + [(set (match_operand:VHF_32_64 0 "register_operand") + (any_float:VHF_32_64 + (match_operand: 1 "nonimmediate_operand")))] + "TARGET_AVX512FP16 && TARGET_AVX512VL && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V8HImode); + rtx op0 = gen_reg_rtx (V8HFmode); + + rtx (*gen_movd_sse) (rtx, rtx) + = gen_mov__to_sse; + emit_insn (gen_movd_sse (op1, operands[1])); + + emit_insn (gen_floatv8hiv8hf2 (op0, op1)); + + emit_move_insn (operands[0], + lowpart_subreg (mode, op0, V8HFmode)); + DONE; +}) + +(define_expand "floatv2siv2hf2" + [(set (match_operand:V2HF 0 "register_operand") + (any_float:V2HF + (match_operand:V2SI 1 "nonimmediate_operand")))] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && TARGET_MMX_WITH_SSE && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V4SImode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_movq_v2si_to_sse (op1, operands[1])); + + emit_insn (gen_avx512fp16_floatv4siv4hf2 (op0, op1)); + + emit_move_insn (operands[0], lowpart_subreg (V2HFmode, op0, V8HFmode)); + DONE; +}) + +(define_expand "extendv2hfv2sf2" + [(set (match_operand:V2SF 0 "register_operand") + (float_extend:V2SF + (match_operand:V2HF 1 "nonimmediate_operand")))] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && TARGET_MMX_WITH_SSE && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V8HFmode); + rtx op0 = gen_reg_rtx (V4SFmode); + + emit_insn (gen_movd_v2hf_to_sse (op1, operands[1])); + + emit_insn (gen_avx512fp16_float_extend_phv4sf2 (op0, op1)); + + emit_move_insn (operands[0], lowpart_subreg (V2SFmode, op0, V4SFmode)); + DONE; +}) + +(define_expand "truncv2sfv2hf2" + [(set (match_operand:V2HF 0 "register_operand") + (float_truncate:V2HF + (match_operand:V2SF 1 "nonimmediate_operand")))] + "TARGET_AVX512FP16 && TARGET_AVX512VL + && TARGET_MMX_WITH_SSE && ix86_partial_vec_fp_math" +{ + rtx op1 = gen_reg_rtx (V4SFmode); + rtx op0 = gen_reg_rtx (V8HFmode); + + emit_insn (gen_movq_v2sf_to_sse (op1, operands[1])); + + emit_insn (gen_avx512fp16_truncv4sfv4hf2 (op0, op1)); + + emit_move_insn (operands[0], lowpart_subreg (V2HFmode, op0, V8HFmode)); + DONE; +}) + 
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;; ;; Parallel integral arithmetic diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md index 4602edf2374..e9f947291c1 100644 --- a/gcc/config/i386/sse.md +++ b/gcc/config/i386/sse.md @@ -10975,12 +10975,12 @@ (define_insn "*vec_concat_0" (set_attr "prefix" "maybe_vex") (set_attr "mode" "DF")]) -(define_insn "*vec_concatv8hf_movss" - [(set (match_operand:V8HF 0 "register_operand" "=x,v,v") - (vec_merge:V8HF - (vec_duplicate:V8HF - (match_operand:V2HF 2 "nonimmediate_operand" "x,m,v")) - (match_operand:V8HF 1 "reg_or_0_operand" "0,C,v" ) +(define_insn "*vec_concat_movss" + [(set (match_operand: 0 "register_operand" "=x,v,v") + (vec_merge: + (vec_duplicate: + (match_operand:V2FI_32 2 "nonimmediate_operand" "x,m,v")) + (match_operand: 1 "reg_or_0_operand" "0,C,v" ) (const_int 3)))] "TARGET_SSE" "@ diff --git a/gcc/testsuite/gcc.target/i386/part-vect-hf-convert-1.c b/gcc/testsuite/gcc.target/i386/part-vect-hf-convert-1.c new file mode 100644 index 00000000000..95426015b58 --- /dev/null +++ b/gcc/testsuite/gcc.target/i386/part-vect-hf-convert-1.c @@ -0,0 +1,111 @@ +/* { dg-do compile { target { ! ia32 } } } */ +/* { dg-options "-mavx512fp16 -mavx512vl -O2" } */ +/* { dg-final { scan-assembler-times {(?n)vcvttph2w[ \t]} 2 } } */ +/* { dg-final { scan-assembler-times {(?n)vcvttph2uw[ \t]} 2 } } */ +/* { dg-final { scan-assembler-times {(?n)vcvttph2dq[ \t]} 1 } } */ +/* { dg-final { scan-assembler-times {(?n)vcvttph2udq[ \t]} 1 } } */ +/* { dg-final { scan-assembler-times {(?n)vcvtw2ph[ \t]} 2 } } */ +/* { dg-final { scan-assembler-times {(?n)vcvtuw2ph[ \t]} 2 } } */ +/* { dg-final { scan-assembler-times {(?n)vcvtdq2phx[ \t]} 1 } } */ +/* { dg-final { scan-assembler-times {(?n)vcvtudq2phx[ \t]} 1 } } */ +/* { dg-final { scan-assembler-times {(?n)vcvtph2psx[ \t]} 1 } } */ +/* { dg-final { scan-assembler-times {(?n)vcvtps2phxx[ \t]} 1 } } */ + + +void +fix_32 (short* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pa[i] = pb[i]; +} + +void +fix_64 (short* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 4; i++) + pa[i] = pb[i]; +} + +void +fixuns_32 (unsigned short* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pa[i] = pb[i]; +} + +void +fixuns_64 (unsigned short* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 4; i++) + pa[i] = pb[i]; +} + +void +float_32 (short* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pb[i] = pa[i]; +} + +void +float_64 (short* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 4; i++) + pb[i] = pa[i]; +} + +void +floatuns_32 (unsigned short* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pb[i] = pa[i]; +} + +void +floatuns_64 (unsigned short* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 4; i++) + pb[i] = pa[i]; +} + +void +fix_32si (int* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pa[i] = pb[i]; +} + +void +fix_32usi (unsigned int* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pa[i] = pb[i]; +} + +void +float_32si (int* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pb[i] = pa[i]; +} + +void +float_32usi (unsigned int* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pb[i] = pa[i]; +} + +void +float_extend (float* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pa[i] = pb[i]; +} + +void +float_truncate (float* __restrict pa, _Float16* pb) +{ + for (int i = 0; i != 2; i++) + pb[i] = pa[i]; +}