From patchwork Thu Aug 26 04:57:51 2021
X-Patchwork-Submitter: liuhongt
X-Patchwork-Id: 1520956
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] Fold more shuffle builtins to VEC_PERM_EXPR.
Date: Thu, 26 Aug 2021 12:57:51 +0800
Message-Id: <20210826045751.40630-1-hongtao.liu@intel.com>
From: liuhongt

This patch is a follow-up to [1]; it folds all of the shufps/shufpd builtins
into gimple.

Bootstrapped and regtested on x86_64-linux-gnu{-m32,}.

[1] https://gcc.gnu.org/pipermail/gcc-patches/2019-May/521983.html

gcc/

	PR target/98167
	PR target/43147
	* config/i386/i386.c (ix86_gimple_fold_builtin): Fold
	IX86_BUILTIN_SHUFPD512, IX86_BUILTIN_SHUFPS512,
	IX86_BUILTIN_SHUFPD256, IX86_BUILTIN_SHUFPS,
	IX86_BUILTIN_SHUFPS256.
	(ix86_masked_all_ones): New function.

gcc/testsuite/

	* gcc.target/i386/avx512f-vshufpd-1.c: Adjust testcase.
	* gcc.target/i386/avx512f-vshufps-1.c: Adjust testcase.
	* gcc.target/i386/pr43147.c: New test.
---
 gcc/config/i386/i386.c                        | 90 ++++++++++++++-----
 .../gcc.target/i386/avx512f-vshufpd-1.c       |  3 +-
 .../gcc.target/i386/avx512f-vshufps-1.c       |  3 +-
 gcc/testsuite/gcc.target/i386/pr43147.c       | 15 ++++
 4 files changed, 87 insertions(+), 24 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/pr43147.c

diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index ebec8668758..f3eed9f2426 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -17541,6 +17541,20 @@ ix86_vector_shift_count (tree arg1)
   return NULL_TREE;
 }
 
+/* Return true if ARG_MASK is all ones; ELEMS is the number of elements
+   in the corresponding vector.  */
+static bool
+ix86_masked_all_ones (unsigned HOST_WIDE_INT elems, tree arg_mask)
+{
+  if (TREE_CODE (arg_mask) != INTEGER_CST)
+    return false;
+
+  unsigned HOST_WIDE_INT mask = TREE_INT_CST_LOW (arg_mask);
+  if ((mask | (HOST_WIDE_INT_M1U << elems)) != HOST_WIDE_INT_M1U)
+    return false;
+
+  return true;
+}
+
 static tree
 ix86_fold_builtin (tree fndecl, int n_args,
                    tree *args, bool ignore ATTRIBUTE_UNUSED)
@@ -18026,6 +18040,7 @@ ix86_gimple_fold_builtin (gimple_stmt_iterator *gsi)
   enum tree_code tcode;
   unsigned HOST_WIDE_INT count;
   bool is_vshift;
+  unsigned HOST_WIDE_INT elems;
 
   switch (fn_code)
     {
@@ -18349,17 +18364,11 @@ ix86_gimple_fold_builtin (gimple_stmt_iterator *gsi)
       gcc_assert (n_args >= 2);
       arg0 = gimple_call_arg (stmt, 0);
       arg1 = gimple_call_arg (stmt, 1);
-      if (n_args > 2)
-        {
-          /* This is masked shift.  Only optimize if the mask is all ones.  */
-          tree argl = gimple_call_arg (stmt, n_args - 1);
-          if (!tree_fits_uhwi_p (argl))
-            break;
-          unsigned HOST_WIDE_INT mask = tree_to_uhwi (argl);
-          unsigned elems = TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0));
-          if ((mask | (HOST_WIDE_INT_M1U << elems)) != HOST_WIDE_INT_M1U)
-            break;
-        }
+      elems = TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0));
+      /* For a masked shift, only optimize if the mask is all ones.  */
+      if (n_args > 2
+          && !ix86_masked_all_ones (elems, gimple_call_arg (stmt, n_args - 1)))
+        break;
       if (is_vshift)
         {
           if (TREE_CODE (arg1) != VECTOR_CST)
@@ -18408,25 +18417,62 @@ ix86_gimple_fold_builtin (gimple_stmt_iterator *gsi)
         }
       break;
 
+    case IX86_BUILTIN_SHUFPD512:
+    case IX86_BUILTIN_SHUFPS512:
     case IX86_BUILTIN_SHUFPD:
+    case IX86_BUILTIN_SHUFPD256:
+    case IX86_BUILTIN_SHUFPS:
+    case IX86_BUILTIN_SHUFPS256:
+      arg0 = gimple_call_arg (stmt, 0);
+      elems = TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg0));
+      /* This is a masked shuffle.  Only optimize if the mask is all ones.  */
+      if (n_args > 3
+          && !ix86_masked_all_ones (elems,
+                                    gimple_call_arg (stmt, n_args - 1)))
+        break;
       arg2 = gimple_call_arg (stmt, 2);
       if (TREE_CODE (arg2) == INTEGER_CST)
         {
+          unsigned HOST_WIDE_INT shuffle_mask = TREE_INT_CST_LOW (arg2);
+          /* Check for a valid imm, refer to gcc.target/i386/testimm-10.c.  */
+          if (shuffle_mask > 255)
+            return false;
+
+          machine_mode imode = GET_MODE_INNER (TYPE_MODE (TREE_TYPE (arg0)));
           location_t loc = gimple_location (stmt);
-          unsigned HOST_WIDE_INT imask = TREE_INT_CST_LOW (arg2);
-          arg0 = gimple_call_arg (stmt, 0);
+          tree itype = (imode == E_DFmode
+                        ? long_long_integer_type_node : integer_type_node);
+          tree vtype = build_vector_type (itype, elems);
+          tree_vector_builder elts (vtype, elems, 1);
+
+          /* Transform the integer shuffle_mask into the vector perm_mask used
+             by vec_perm_expr, refer to shufp[sd]256/512 in sse.md.  */
+          for (unsigned i = 0; i != elems; i++)
+            {
+              unsigned sel_idx;
+              /* Imm[1:0] (and, if VL > 128, Imm[3:2], Imm[5:4], Imm[7:6])
+                 provides 2 select controls for each element of the
+                 destination.  */
+              if (imode == E_DFmode)
+                sel_idx = (i & 1) * elems + (i & ~1)
+                          + ((shuffle_mask >> i) & 1);
+              else
+                {
+                  /* Imm[7:0] (and, if VL > 128, again Imm[7:0]) provides 4
+                     select controls for each element of the destination.  */
+                  unsigned j = i % 4;
+                  sel_idx = ((i >> 1) & 1) * elems + (i & ~3)
+                            + ((shuffle_mask >> 2 * j) & 3);
+                }
+              elts.quick_push (build_int_cst (itype, sel_idx));
+            }
+
+          tree perm_mask = elts.build ();
           arg1 = gimple_call_arg (stmt, 1);
-          tree itype = long_long_integer_type_node;
-          tree vtype = build_vector_type (itype, 2); /* V2DI */
-          tree_vector_builder elts (vtype, 2, 1);
-          /* Ignore bits other than the lowest 2.  */
-          elts.quick_push (build_int_cst (itype, imask & 1));
-          imask >>= 1;
-          elts.quick_push (build_int_cst (itype, 2 + (imask & 1)));
-          tree omask = elts.build ();
           gimple *g = gimple_build_assign (gimple_call_lhs (stmt),
                                            VEC_PERM_EXPR,
-                                           arg0, arg1, omask);
+                                           arg0, arg1, perm_mask);
           gimple_set_location (g, loc);
           gsi_replace (gsi, g, false);
           return true;
diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vshufpd-1.c b/gcc/testsuite/gcc.target/i386/avx512f-vshufpd-1.c
index d1ac01e1c88..8df5b9d4441 100644
--- a/gcc/testsuite/gcc.target/i386/avx512f-vshufpd-1.c
+++ b/gcc/testsuite/gcc.target/i386/avx512f-vshufpd-1.c
@@ -7,11 +7,12 @@
 #include <immintrin.h>
 
 __m512d x;
+__m512d y;
 
 void extern
 avx512f_test (void)
 {
-  x = _mm512_shuffle_pd (x, x, 56);
+  x = _mm512_shuffle_pd (x, y, 56);
   x = _mm512_mask_shuffle_pd (x, 2, x, x, 56);
   x = _mm512_maskz_shuffle_pd (2, x, x, 56);
 }
diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vshufps-1.c b/gcc/testsuite/gcc.target/i386/avx512f-vshufps-1.c
index 07a63fca3ff..378ae4b7101 100644
--- a/gcc/testsuite/gcc.target/i386/avx512f-vshufps-1.c
+++ b/gcc/testsuite/gcc.target/i386/avx512f-vshufps-1.c
@@ -7,11 +7,12 @@
 #include <immintrin.h>
 
 __m512 x;
+__m512 y;
 
 void extern
 avx512f_test (void)
 {
-  x = _mm512_shuffle_ps (x, x, 56);
+  x = _mm512_shuffle_ps (x, y, 56);
   x = _mm512_mask_shuffle_ps (x, 2, x, x, 56);
   x = _mm512_maskz_shuffle_ps (2, x, x, 56);
 }
diff --git a/gcc/testsuite/gcc.target/i386/pr43147.c b/gcc/testsuite/gcc.target/i386/pr43147.c
new file mode 100644
index 00000000000..3c30f917c06
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr43147.c
@@ -0,0 +1,15 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -msse2" } */
+/* { dg-final { scan-assembler "movaps" } } */
+/* { dg-final { scan-assembler-not "shufps" } } */
+
+#include
+
+__m128
+foo (void)
+{
+  __m128 m = _mm_set_ps(1.0f, 2.0f, 3.0f, 4.0f);
+  m = _mm_shuffle_ps(m, m, 0xC9);
+  m = _mm_shuffle_ps(m, m, 0x2D);
+  return m;
+}
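
As a cross-check for reviewers, the following standalone C sketch reproduces
the imm-to-selector mapping that the fold feeds to VEC_PERM_EXPR.  It mirrors
the sel_idx formulas in the hunk above; the helper names perm_for_shufpd and
perm_for_shufps are invented here for illustration and are not part of the
patch or of GCC.

/* Illustrative only: recompute the VEC_PERM_EXPR selectors that the new
   ix86_gimple_fold_builtin code builds for shufpd/shufps immediates.  */
#include <stdio.h>

/* vshufpd: imm bit i picks the low/high element of the i-th source pair;
   even destination elements come from operand 0, odd ones from operand 1.  */
static void
perm_for_shufpd (unsigned elems, unsigned imm, unsigned *sel)
{
  for (unsigned i = 0; i < elems; i++)
    sel[i] = (i & 1) * elems + (i & ~1u) + ((imm >> i) & 1);
}

/* vshufps: in each 128-bit lane, imm[1:0] and imm[3:2] pick two elements
   from operand 0, imm[5:4] and imm[7:6] pick two from operand 1.  */
static void
perm_for_shufps (unsigned elems, unsigned imm, unsigned *sel)
{
  for (unsigned i = 0; i < elems; i++)
    {
      unsigned j = i % 4;
      sel[i] = ((i >> 1) & 1) * elems + (i & ~3u) + ((imm >> (2 * j)) & 3);
    }
}

int
main (void)
{
  unsigned sel[16];

  /* _mm512_shuffle_pd (x, y, 56) from avx512f-vshufpd-1.c.  */
  perm_for_shufpd (8, 56, sel);
  for (unsigned i = 0; i < 8; i++)
    printf ("%u ", sel[i]);      /* prints: 0 8 2 11 5 13 6 14 */
  printf ("\n");

  /* _mm_shuffle_ps (m, m, 0xC9) from pr43147.c.  */
  perm_for_shufps (4, 0xC9, sel);
  for (unsigned i = 0; i < 4; i++)
    printf ("%u ", sel[i]);      /* prints: 1 2 4 7 */
  printf ("\n");
  return 0;
}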
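
A note on the new pr43147.c test: its #include line lost the header name in
the archived copy (any SSE intrinsics header, e.g. xmmintrin.h, would supply
__m128, _mm_set_ps and _mm_shuffle_ps).  The movaps/no-shufps expectation
follows because, once both _mm_shuffle_ps calls are folded to VEC_PERM_EXPR,
the fully constant input lets GIMPLE constant folding evaluate the
permutations at compile time, leaving only a constant load.  Below is a rough
standalone illustration of that evaluation, again with invented names and not
actual compiler output.

/* Illustrative only: what constant folding effectively computes for
   pr43147.c once the two shuffles are VEC_PERM_EXPRs on a constant.  */
#include <stdio.h>

static void
apply_perm (const float *in, const unsigned *sel, float *out)
{
  /* Both VEC_PERM_EXPR operands are the same vector here, so selector
     values 0-3 and 4-7 both index IN.  */
  for (unsigned i = 0; i < 4; i++)
    out[i] = in[sel[i] & 3];
}

int
main (void)
{
  /* _mm_set_ps (1.0f, 2.0f, 3.0f, 4.0f): element 0 is 4.0f.  */
  float m[4] = { 4.0f, 3.0f, 2.0f, 1.0f };
  /* Selectors produced by the fold for imm 0xC9 and 0x2D (see the
     perm_for_shufps sketch above).  */
  unsigned p1[4] = { 1, 2, 4, 7 };
  unsigned p2[4] = { 1, 3, 6, 4 };
  float t[4], r[4];

  apply_perm (m, p1, t);
  apply_perm (t, p2, r);
  for (unsigned i = 0; i < 4; i++)
    printf ("%g ", r[i]);        /* prints: 2 1 4 3 */
  printf ("\n");
  return 0;
}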