From patchwork Fri Aug 2 13:15:10 2019
X-Patchwork-Submitter: Eric Botcazou
X-Patchwork-Id: 1141208
From: Eric Botcazou
To: gcc-patches@gcc.gnu.org
Subject: [patch] Fix minor SLSR pessimization
Date: Fri, 02 Aug 2019 15:15:10 +0200
Message-ID: <2513721.BpYKv4EoCR@arcturus.home>

Hi,

a user reported that, for pairs of consecutive memory accesses, the SLSR
pass can slightly pessimize the generated code at -O2 on the x86
architecture:

struct x { int a[16]; int b[16]; };

void set (struct x *p, unsigned int n, int i)
{
  p->a[n] = i;
  p->b[n] = i;
}

is compiled with SLSR enabled into:

	leaq	(%rdi,%rsi,4), %rax
	movl	%edx, (%rax)
	movl	%edx, 64(%rax)

which is slightly worse than the expected:

	movl	%edx, (%rdi,%rsi,4)
	movl	%edx, 64(%rdi,%rsi,4)

The attached patch is a tentative fix which doesn't seem to break anything.

Tested on x86_64-suse-linux, OK for the mainline?


2019-08-02  Eric Botcazou

	* gimple-ssa-strength-reduction.c (valid_mem_ref_cand_p): New function.
	(replace_refs): Do not replace a chain of only two candidates which are
	valid memory references.


2019-08-02  Eric Botcazou

	* gcc.dg/tree-ssa/slsr-42.c: New test.


Index: gimple-ssa-strength-reduction.c
===================================================================
--- gimple-ssa-strength-reduction.c	(revision 273907)
+++ gimple-ssa-strength-reduction.c	(working copy)
@@ -1999,6 +1999,23 @@ replace_ref (tree *expr, slsr_cand_t c)
   update_stmt (c->cand_stmt);
 }
 
+/* Return true if CAND_REF candidate C is a valid memory reference.  */
+
+static bool
+valid_mem_ref_cand_p (slsr_cand_t c)
+{
+  if (TREE_CODE (TREE_OPERAND (c->stride, 1)) != INTEGER_CST)
+    return false;
+
+  struct mem_address addr
+    = { NULL_TREE, c->base_expr, TREE_OPERAND (c->stride, 0),
+	TREE_OPERAND (c->stride, 1), wide_int_to_tree (sizetype, c->index) };
+
+  return
+    valid_mem_ref_p (TYPE_MODE (c->cand_type), TYPE_ADDR_SPACE (c->cand_type),
+		     &addr);
+}
+
 /* Replace CAND_REF candidate C, each sibling of candidate C, and each
    dependent of candidate C with an equivalent strength-reduced data
    reference.  */
@@ -2006,6 +2023,16 @@ replace_ref (tree *expr, slsr_cand_t c)
 static void
 replace_refs (slsr_cand_t c)
 {
+  /* Replacing a chain of only 2 candidates which are valid memory references
+     is generally counter-productive because you cannot recoup the additional
+     calculation added in front of them.  */
+  if (c->basis == 0
+      && c->dependent
+      && !lookup_cand (c->dependent)->dependent
+      && valid_mem_ref_cand_p (c)
+      && valid_mem_ref_cand_p (lookup_cand (c->dependent)))
+    return;
+
   if (dump_file && (dump_flags & TDF_DETAILS))
     {
       fputs ("Replacing reference: ", dump_file);