From: "Roger Sayle" <roger@nextmovesoftware.com>
To: "'Richard Biener'"
Subject: [PATCH take #2] Fold truncations of left shifts in match.pd
Date: Sun, 5 Jun 2022 12:12:36 +0100
Message-ID: <007e01d878cd$2a32f100$7e98d300$@nextmovesoftware.com>
Cc: 'GCC Patches'

Hi Richard,

Many thanks for taking the time to explain how vectorization is supposed
to work.  I now see that vect_recog_rotate_pattern in tree-vect-patterns.cc
is supposed to handle lowering of rotations to (vector) shifts, and I
completely agree that adding support for signed types (using appropriate
casts to unsigned_type_for and casting the result back to the original
signed type) is a better approach to avoid the regression of pr98674.c.

I've also implemented your suggestion of combining the proposed new
(convert (lshift @1 INTEGER_CST@2)) transformation with the existing one,
and at the same time added support for folding valid shifts wider than the
narrower type, such as (short)(x << 20), to constant zero.  Although this
optimization is already performed during the tree-ssa passes, it's
convenient to also catch it here during constant folding.

This revised patch has been tested on x86_64-pc-linux-gnu with make
bootstrap and make -k check, both with and without --target_board=unix{-m32},
with no new failures.  Ok for mainline?

2022-06-05  Roger Sayle
	    Richard Biener

gcc/ChangeLog
	* match.pd (convert (lshift @1 INTEGER_CST@2)): Narrow integer
	left shifts by a constant when the result is truncated, and the
	shift constant is well-defined.
	* tree-vect-patterns.cc (vect_recog_rotate_pattern): Add
	support for rotations of signed integer types, by lowering
	using unsigned vector shifts.

gcc/testsuite/ChangeLog
	* gcc.dg/fold-convlshift-4.c: New test case.
	* gcc.dg/optimize-bswaphi-1.c: Update found bswap count.
	* gcc.dg/tree-ssa/pr61839_3.c: Shift is now optimized before VRP.
	* gcc.dg/vect/vect-over-widen-1-big-array.c: Remove obsolete tests.
	* gcc.dg/vect/vect-over-widen-1.c: Likewise.
	* gcc.dg/vect/vect-over-widen-3-big-array.c: Likewise.
	* gcc.dg/vect/vect-over-widen-3.c: Likewise.
	* gcc.dg/vect/vect-over-widen-4-big-array.c: Likewise.
	* gcc.dg/vect/vect-over-widen-4.c: Likewise.

Thanks again,
Roger
--

> -----Original Message-----
> From: Richard Biener
> Sent: 02 June 2022 12:03
> To: Roger Sayle
> Cc: GCC Patches
> Subject: Re: [PATCH] Fold truncations of left shifts in match.pd
>
> On Thu, Jun 2, 2022 at 12:55 PM Roger Sayle
> wrote:
> >
> > Hi Richard,
> > > + /* RTL expansion knows how to expand rotates using shift/or.  */
> > > + if (icode == CODE_FOR_nothing
> > > +     && (code == LROTATE_EXPR || code == RROTATE_EXPR)
> > > +     && optab_handler (ior_optab, vec_mode) != CODE_FOR_nothing
> > > +     && optab_handler (ashl_optab, vec_mode) != CODE_FOR_nothing)
> > > +   icode = (int) optab_handler (lshr_optab, vec_mode);
> > >
> > > but we then get the vector costing wrong.
> >
> > The issue is that we currently get the (relative) vector costing wrong.
> > Currently for gcc.dg/vect/pr98674.c, the vectorizer thinks the scalar
> > code requires two shifts and an ior, so believes it's profitable to
> > vectorize this loop using two vector shifts and a vector ior.  But
> > once match.pd simplifies the truncate and recognizes the HImode rotate
> > we end up with:
> >
> > pr98674.c:6:16: note: ==> examining statement: _6 = _1 r>> 8;
> > pr98674.c:6:16: note: vect_is_simple_use: vectype vector(8) short int
> > pr98674.c:6:16: note: vect_is_simple_use: operand 8, type of def: constant
> > pr98674.c:6:16: missed: op not supported by target.
> > pr98674.c:8:33: missed: not vectorized: relevant stmt not supported: _6 = _1 r>> 8;
> > pr98674.c:6:16: missed: bad operation or unsupported loop bound.
> >
> > Clearly, it's a win to vectorize HImode rotates, when the backend can
> > perform 8 (or 16) rotations at a time, but using 3 vector instructions,
> > even when a scalar rotate can be performed in a single instruction.
> > Fundamentally, vectorization may still be desirable/profitable even when
> > the backend doesn't provide an optab.
>
> Yes, as said it's tree-vect-patterns.cc's job to handle this not natively
> supported rotate by re-writing it.  Can you check why
> vect_recog_rotate_pattern does not do this?  Ah, the code only handles
> !TYPE_UNSIGNED (type) - not sure why though (for rotates it should not
> matter and for the lowered sequence we can convert to desired signedness
> to get arithmetic/logical shifts)?
>
> > The current situation where the i386 backend provides expanders to
> > lower rotations (or vcond) into individual instruction sequences also
> > interferes with vector costing.  It's the vector cost function that
> > needs to be fixed, not the generated code made worse (or the backend
> > bloated performing its own RTL expansion workarounds).
> >
> > Is it instead ok to mark pr98674.c as XFAIL (a regression)?
> > The tweak to tree-vect-stmts.cc was based on the assumption that we
> > wished to continue vectorizing this loop.  Improving scalar code
> > generation really shouldn't disable vectorization like this.
>
> Yes, see above where the fix needs to be.  The pattern will then expose
> the shift and ior to the vectorizer which then are properly costed.
>
> Richard.
>
> >
> > Cheers,
> > Roger
> > --

diff --git a/gcc/match.pd b/gcc/match.pd
index 2d3ffc4..bbcf9e2 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -3621,17 +3621,18 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
    (if (integer_zerop (@2) || integer_all_onesp (@2))
     (cmp @0 @2)))))
 
-/* Both signed and unsigned lshift produce the same result, so use
-   the form that minimizes the number of conversions.  Postpone this
-   transformation until after shifts by zero have been folded.  */
+/* Narrow a lshift by constant.  */
 (simplify
- (convert (lshift:s@0 (convert:s@1 @2) INTEGER_CST@3))
+ (convert (lshift:s@0 @1 INTEGER_CST@2))
  (if (INTEGRAL_TYPE_P (type)
-      && tree_nop_conversion_p (type, TREE_TYPE (@0))
-      && INTEGRAL_TYPE_P (TREE_TYPE (@2))
-      && TYPE_PRECISION (TREE_TYPE (@2)) <= TYPE_PRECISION (type)
-      && !integer_zerop (@3))
-  (lshift (convert @2) @3)))
+      && INTEGRAL_TYPE_P (TREE_TYPE (@0))
+      && !integer_zerop (@2)
+      && TYPE_PRECISION (type) <= TYPE_PRECISION (TREE_TYPE (@0)))
+  (if (TYPE_PRECISION (type) == TYPE_PRECISION (TREE_TYPE (@0))
+       || wi::ltu_p (wi::to_wide (@2), TYPE_PRECISION (type)))
+   (lshift (convert @1) @2)
+   (if (wi::ltu_p (wi::to_wide (@2), TYPE_PRECISION (TREE_TYPE (@0))))
+    { build_zero_cst (type); }))))
 
 /* Simplifications of conversions.  */
diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
index 0fad4db..8f62486 100644
--- a/gcc/tree-vect-patterns.cc
+++ b/gcc/tree-vect-patterns.cc
@@ -2614,8 +2614,7 @@ vect_recog_rotate_pattern (vec_info *vinfo,
 	  || TYPE_PRECISION (TREE_TYPE (lhs)) != 16
 	  || TYPE_PRECISION (type) <= 16
 	  || TREE_CODE (oprnd0) != SSA_NAME
-	  || BITS_PER_UNIT != 8
-	  || !TYPE_UNSIGNED (TREE_TYPE (lhs)))
+	  || BITS_PER_UNIT != 8)
 	return NULL;
 
       stmt_vec_info def_stmt_info;
@@ -2688,8 +2687,7 @@ vect_recog_rotate_pattern (vec_info *vinfo,
 
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TYPE_PRECISION (TREE_TYPE (lhs)) != TYPE_PRECISION (type)
-      || !INTEGRAL_TYPE_P (type)
-      || !TYPE_UNSIGNED (type))
+      || !INTEGRAL_TYPE_P (type))
     return NULL;
 
   stmt_vec_info def_stmt_info;
@@ -2745,31 +2743,36 @@ vect_recog_rotate_pattern (vec_info *vinfo,
       goto use_rotate;
     }
 
+  tree utype = unsigned_type_for (type);
+  tree uvectype = get_vectype_for_scalar_type (vinfo, utype);
+  if (!uvectype)
+    return NULL;
+
   /* If vector/vector or vector/scalar shifts aren't supported by the target,
      don't do anything here either.  */
-  optab1 = optab_for_tree_code (LSHIFT_EXPR, vectype, optab_vector);
-  optab2 = optab_for_tree_code (RSHIFT_EXPR, vectype, optab_vector);
+  optab1 = optab_for_tree_code (LSHIFT_EXPR, uvectype, optab_vector);
+  optab2 = optab_for_tree_code (RSHIFT_EXPR, uvectype, optab_vector);
   if (!optab1
-      || optab_handler (optab1, TYPE_MODE (vectype)) == CODE_FOR_nothing
+      || optab_handler (optab1, TYPE_MODE (uvectype)) == CODE_FOR_nothing
       || !optab2
-      || optab_handler (optab2, TYPE_MODE (vectype)) == CODE_FOR_nothing)
+      || optab_handler (optab2, TYPE_MODE (uvectype)) == CODE_FOR_nothing)
     {
       if (! is_a <bb_vec_info> (vinfo) && dt == vect_internal_def)
 	return NULL;
-      optab1 = optab_for_tree_code (LSHIFT_EXPR, vectype, optab_scalar);
-      optab2 = optab_for_tree_code (RSHIFT_EXPR, vectype, optab_scalar);
+      optab1 = optab_for_tree_code (LSHIFT_EXPR, uvectype, optab_scalar);
+      optab2 = optab_for_tree_code (RSHIFT_EXPR, uvectype, optab_scalar);
       if (!optab1
-	  || optab_handler (optab1, TYPE_MODE (vectype)) == CODE_FOR_nothing
+	  || optab_handler (optab1, TYPE_MODE (uvectype)) == CODE_FOR_nothing
 	  || !optab2
-	  || optab_handler (optab2, TYPE_MODE (vectype)) == CODE_FOR_nothing)
+	  || optab_handler (optab2, TYPE_MODE (uvectype)) == CODE_FOR_nothing)
 	return NULL;
     }
 
   *type_out = vectype;
 
-  if (bswap16_p && !useless_type_conversion_p (type, TREE_TYPE (oprnd0)))
+  if (!useless_type_conversion_p (utype, TREE_TYPE (oprnd0)))
     {
-      def = vect_recog_temp_ssa_var (type, NULL);
+      def = vect_recog_temp_ssa_var (utype, NULL);
       def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd0);
       append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);
       oprnd0 = def;
@@ -2779,7 +2782,7 @@ vect_recog_rotate_pattern (vec_info *vinfo,
     ext_def = vect_get_external_def_edge (vinfo, oprnd1);
 
   def = NULL_TREE;
-  scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
+  scalar_int_mode mode = SCALAR_INT_TYPE_MODE (utype);
   if (dt != vect_internal_def || TYPE_MODE (TREE_TYPE (oprnd1)) == mode)
     def = oprnd1;
   else if (def_stmt && gimple_assign_cast_p (def_stmt))
@@ -2793,7 +2796,7 @@ vect_recog_rotate_pattern (vec_info *vinfo,
 
   if (def == NULL_TREE)
     {
-      def = vect_recog_temp_ssa_var (type, NULL);
+      def = vect_recog_temp_ssa_var (utype, NULL);
       def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd1);
       append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);
     }
@@ -2839,13 +2842,13 @@ vect_recog_rotate_pattern (vec_info *vinfo,
       append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt, vecstype);
     }
 
-  var1 = vect_recog_temp_ssa_var (type, NULL);
+  var1 = vect_recog_temp_ssa_var (utype, NULL);
   def_stmt = gimple_build_assign (var1, rhs_code == LROTATE_EXPR
 					? LSHIFT_EXPR : RSHIFT_EXPR,
 				  oprnd0, def);
   append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);
 
-  var2 = vect_recog_temp_ssa_var (type, NULL);
+  var2 = vect_recog_temp_ssa_var (utype, NULL);
   def_stmt = gimple_build_assign (var2, rhs_code == LROTATE_EXPR
 					? RSHIFT_EXPR : LSHIFT_EXPR,
 				  oprnd0, def2);
@@ -2855,9 +2858,15 @@ vect_recog_rotate_pattern (vec_info *vinfo,
   vect_pattern_detected ("vect_recog_rotate_pattern", last_stmt);
 
   /* Pattern supported.  Create a stmt to be used to replace the pattern.  */
-  var = vect_recog_temp_ssa_var (type, NULL);
+  var = vect_recog_temp_ssa_var (utype, NULL);
   pattern_stmt = gimple_build_assign (var, BIT_IOR_EXPR, var1, var2);
+  if (!useless_type_conversion_p (type, utype))
+    {
+      append_pattern_def_seq (vinfo, stmt_vinfo, pattern_stmt);
+      tree result = vect_recog_temp_ssa_var (type, NULL);
+      pattern_stmt = gimple_build_assign (result, NOP_EXPR, var);
+    }
 
   return pattern_stmt;
 }
diff --git a/gcc/testsuite/gcc.dg/fold-convlshift-4.c b/gcc/testsuite/gcc.dg/fold-convlshift-4.c
new file mode 100644
index 0000000..001627f
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/fold-convlshift-4.c
@@ -0,0 +1,9 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+short foo(short x)
+{
+  return x << 5;
+}
+
+/* { dg-final { scan-tree-dump-not "\\(int\\)" "optimized" } } */
+/* { dg-final { scan-tree-dump-not "\\(short int\\)" "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c b/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
index d045da9..a5d8bfd 100644
--- a/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
+++ b/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
@@ -68,4 +68,4 @@ get_unaligned_16_be (unsigned char *p)
 
 /* { dg-final { scan-tree-dump-times "16 bit load in target endianness found at" 4 "bswap" } } */
-/* { dg-final { scan-tree-dump-times "16 bit bswap implementation found at" 5 "bswap" } } */
+/* { dg-final { scan-tree-dump-times "16 bit bswap implementation found at" 4 "bswap" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c b/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c
index bc2126f..38cf792 100644
--- a/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c
@@ -1,6 +1,6 @@
 /* PR tree-optimization/61839.  */
 /* { dg-do run } */
-/* { dg-options "-O2 -fdump-tree-vrp -fdump-tree-optimized -fdisable-tree-ethread -fdisable-tree-threadfull1" } */
+/* { dg-options "-O2 -fdump-tree-optimized -fdisable-tree-ethread -fdisable-tree-threadfull1" } */
 
 __attribute__ ((noinline)) int
 foo (int a, unsigned b)
@@ -21,6 +21,4 @@ int main ()
   foo (-1, b);
 }
 
-/* Scan for c [12, 13] << 8 in function foo.  */
-/* { dg-final { scan-tree-dump-times "3072 : 3328" 1 "vrp1" } } */
 /* { dg-final { scan-tree-dump-times "3072" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c
index 9e5f464..9a5141ee 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c
@@ -58,9 +58,7 @@ int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c
index c2d0797..f2d284c 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c
@@ -62,9 +62,7 @@ int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c
index 37da7c9..6f89aac 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c
@@ -59,9 +59,7 @@ int main (void)
   return 0;
 }
 
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 8} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 9} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c
index 4138480..a1e1182 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c
@@ -57,9 +57,7 @@ int main (void)
   return 0;
 }
 
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 8} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 9} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c
index 514337c..03a6e67 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c
@@ -62,9 +62,7 @@ int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c
index 3d536d5..0ef377f 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c
@@ -66,9 +66,7 @@ int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */