From patchwork Mon Apr 20 23:00:49 2020
X-Patchwork-Submitter: Jakub Jelinek
X-Patchwork-Id: 1273787
Date: Tue, 21 Apr 2020 01:00:49 +0200
From: Jakub Jelinek
To: Jeff Law
Cc: gcc-patches@gcc.gnu.org
Subject: [PATCH] csa, postreload: Improve csa after recent cselib changes [PR94516]
Message-ID: <20200420230049.GT2424@tucnak>
Hi!

This patch addresses a missed optimization caused by the cselib changes.
Already in the past postreload could replace sp = sp + const_int with
sp = regxy if regxy already has the right value, but with the cselib changes
it happens several times more often.  It can result in smaller code, so it
seems undesirable to prevent such optimizations, but unfortunately it can get
in the way of stack adjustment coalescing: e.g. if we used to have
sp = sp + 32; sp = sp - 8;, previously we'd turn that into sp = sp + 24;,
but now postreload optimizes it into sp = r12; sp = sp - 8; and csa gives up.
(A small illustrative sketch of this kind of sequence follows after the patch.)

The patch just adds a REG_EQUAL note when changing sp = sp + const into
sp = reg, where we remember that it was actually a stack adjustment by a
certain constant, and the combine-stack-adj changes then make use of those
REG_EQUAL notes, together with LR tracking (csa did enable the note problem,
it just didn't simulate each insn), so that we can add the needed clobbers
etc. (taken from the other stack adjustment insn).

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2020-04-20  Jakub Jelinek

	PR rtl-optimization/94516
	* postreload.c (reload_cse_simplify): When replacing sp = sp + const
	with sp = reg, add REG_EQUAL note with sp + const.
	* combine-stack-adj.c (try_apply_stack_adjustment): Change return
	type from int to bool.  Add LIVE and OTHER_INSN arguments.  Undo
	postreload sp = sp + const to sp = reg optimization if needed and
	possible.
	(combine_stack_adjustments_for_block): Add LIVE argument.  Handle
	reg = sp insn with sp + const REG_EQUAL note.  Adjust
	try_apply_stack_adjustment caller, call
	df_simulate_initialize_forwards and df_simulate_one_insn_forwards.
	(combine_stack_adjustments): Allocate and free LIVE bitmap, adjust
	combine_stack_adjustments_for_block caller.

	Jakub

--- gcc/postreload.c.jj	2020-04-19 12:10:19.042877054 +0200
+++ gcc/postreload.c	2020-04-20 12:16:01.672848537 +0200
@@ -97,6 +97,16 @@ reload_cse_simplify (rtx_insn *insn, rtx
   if (NO_FUNCTION_CSE && CALL_P (insn))
     return false;
 
+  /* Remember if this insn has been sp += const_int.  */
+  rtx sp_set = set_for_reg_notes (insn);
+  rtx sp_addend = NULL_RTX;
+  if (sp_set
+      && SET_DEST (sp_set) == stack_pointer_rtx
+      && GET_CODE (SET_SRC (sp_set)) == PLUS
+      && XEXP (SET_SRC (sp_set), 0) == stack_pointer_rtx
+      && CONST_INT_P (XEXP (SET_SRC (sp_set), 1)))
+    sp_addend = XEXP (SET_SRC (sp_set), 1);
+
   if (GET_CODE (body) == SET)
     {
       int count = 0;
@@ -180,6 +190,15 @@ reload_cse_simplify (rtx_insn *insn, rtx
 	reload_cse_simplify_operands (insn, testreg);
     }
 
+  /* If sp += const_int insn is changed into sp = reg;, add REG_EQUAL
+     note so that the stack_adjustments pass can undo it if beneficial.  */
+  if (sp_addend
+      && SET_DEST (sp_set) == stack_pointer_rtx
+      && REG_P (SET_SRC (sp_set)))
+    set_dst_reg_note (insn, REG_EQUAL,
+		      gen_rtx_PLUS (Pmode, stack_pointer_rtx,
+				    sp_addend), stack_pointer_rtx);
+
 done:
   return (EDGE_COUNT (insn_bb->succs) != insn_bb_succs);
 }
--- gcc/combine-stack-adj.c.jj	2020-01-12 11:54:36.221416305 +0100
+++ gcc/combine-stack-adj.c	2020-04-20 14:52:47.294323991 +0200
@@ -70,9 +70,10 @@ static rtx single_set_for_csa (rtx_insn
 static void free_csa_reflist (struct csa_reflist *);
 static struct csa_reflist *record_one_stack_ref (rtx_insn *, rtx *,
						  struct csa_reflist *);
-static int try_apply_stack_adjustment (rtx_insn *, struct csa_reflist *,
-					HOST_WIDE_INT, HOST_WIDE_INT);
-static void combine_stack_adjustments_for_block (basic_block);
+static bool try_apply_stack_adjustment (rtx_insn *, struct csa_reflist *,
+					HOST_WIDE_INT, HOST_WIDE_INT,
+					bitmap, rtx_insn *);
+static void combine_stack_adjustments_for_block (basic_block, bitmap);
 
 
 /* Main entry point for stack adjustment combination.  */
@@ -81,9 +82,12 @@ static void
 combine_stack_adjustments (void)
 {
   basic_block bb;
+  bitmap live = BITMAP_ALLOC (&reg_obstack);
 
   FOR_EACH_BB_FN (bb, cfun)
-    combine_stack_adjustments_for_block (bb);
+    combine_stack_adjustments_for_block (bb, live);
+
+  BITMAP_FREE (live);
 }
 
 /* Recognize a MEM of the form (sp) or (plus sp const).  */
@@ -219,18 +223,60 @@ no_unhandled_cfa (rtx_insn *insn)
    as each of the memories and stack references in REFLIST.  Return true
    on success.  */
 
-static int
+static bool
 try_apply_stack_adjustment (rtx_insn *insn, struct csa_reflist *reflist,
-			    HOST_WIDE_INT new_adjust, HOST_WIDE_INT delta)
+			    HOST_WIDE_INT new_adjust, HOST_WIDE_INT delta,
+			    bitmap live, rtx_insn *other_insn)
 {
   struct csa_reflist *ml;
   rtx set;
+  bool remove_equal = false;
 
   set = single_set_for_csa (insn);
   if (MEM_P (SET_DEST (set)))
     validate_change (insn, &SET_DEST (set),
		      replace_equiv_address (SET_DEST (set), stack_pointer_rtx),
		      1);
+  else if (REG_P (SET_SRC (set)))
+    {
+      if (other_insn == NULL_RTX || live == NULL)
+	return false;
+      rtx other_set = single_set_for_csa (other_insn);
+      if (SET_DEST (other_set) != stack_pointer_rtx
+	  || GET_CODE (SET_SRC (other_set)) != PLUS
+	  || XEXP (SET_SRC (other_set), 0) != stack_pointer_rtx
+	  || !CONST_INT_P (XEXP (SET_SRC (other_set), 1)))
+	return false;
+      if (PATTERN (other_insn) != other_set)
+	{
+	  if (GET_CODE (PATTERN (other_insn)) != PARALLEL)
+	    return false;
+	  int i;
+	  rtx p = PATTERN (other_insn);
+	  for (i = 0; i < XVECLEN (p, 0); ++i)
+	    {
+	      rtx this_rtx = XVECEXP (p, 0, i);
+	      if (this_rtx == other_set)
+		continue;
+	      if (GET_CODE (this_rtx) != CLOBBER)
+		return false;
+	      if (!REG_P (XEXP (this_rtx, 0))
+		  || !HARD_REGISTER_P (XEXP (this_rtx, 0)))
+		return false;
+	      unsigned int end_regno = END_REGNO (XEXP (this_rtx, 0));
+	      for (unsigned int regno = REGNO (XEXP (this_rtx, 0));
+		   regno < end_regno; ++regno)
+		if (bitmap_bit_p (live, regno))
+		  return false;
+	    }
+	}
+      validate_change (insn, &PATTERN (insn), copy_rtx (PATTERN (other_insn)),
+		       1);
+      set = single_set_for_csa (insn);
+      validate_change (insn, &XEXP (SET_SRC (set), 1), GEN_INT (new_adjust),
+		       1);
+      remove_equal = true;
+    }
   else
     validate_change (insn, &XEXP (SET_SRC (set), 1), GEN_INT (new_adjust), 1);
 
@@ -256,10 +302,12 @@ try_apply_stack_adjustment (rtx_insn *in
       for (ml = reflist; ml ; ml = ml->next)
	 ml->sp_offset -= delta;
 
-      return 1;
+      if (remove_equal)
+	remove_reg_equal_equiv_notes (insn);
+      return true;
     }
   else
-    return 0;
+    return false;
 }
 
 /* For non-debug insns, record all stack memory references in INSN
@@ -489,16 +537,21 @@ force_move_args_size_note (basic_block b
 /* Subroutine of combine_stack_adjustments, called for each basic block.  */
 
 static void
-combine_stack_adjustments_for_block (basic_block bb)
+combine_stack_adjustments_for_block (basic_block bb, bitmap live)
 {
   HOST_WIDE_INT last_sp_adjust = 0;
   rtx_insn *last_sp_set = NULL;
   rtx_insn *last2_sp_set = NULL;
+  bitmap last_sp_live = NULL;
   struct csa_reflist *reflist = NULL;
+  bitmap copy = NULL;
   rtx_insn *insn, *next;
   rtx set;
   bool end_of_block = false;
 
+  bitmap_copy (live, DF_LR_IN (bb));
+  df_simulate_initialize_forwards (bb, live);
+
   for (insn = BB_HEAD (bb); !end_of_block ; insn = next)
     {
       end_of_block = insn == BB_END (bb);
@@ -514,21 +567,43 @@ combine_stack_adjustments_for_block (bas
 	{
 	  rtx dest = SET_DEST (set);
 	  rtx src = SET_SRC (set);
+	  HOST_WIDE_INT this_adjust = 0;
 
 	  /* Find constant additions to the stack pointer.  */
 	  if (dest == stack_pointer_rtx
 	      && GET_CODE (src) == PLUS
 	      && XEXP (src, 0) == stack_pointer_rtx
 	      && CONST_INT_P (XEXP (src, 1)))
-	    {
-	      HOST_WIDE_INT this_adjust = INTVAL (XEXP (src, 1));
+	    this_adjust = INTVAL (XEXP (src, 1));
+	  /* Or such additions turned by postreload into a store of
+	     equivalent register.  */
+	  else if (dest == stack_pointer_rtx
+		   && REG_P (src)
+		   && REGNO (src) != STACK_POINTER_REGNUM)
+	    if (rtx equal = find_reg_note (insn, REG_EQUAL, NULL_RTX))
+	      if (GET_CODE (XEXP (equal, 0)) == PLUS
+		  && XEXP (XEXP (equal, 0), 0) == stack_pointer_rtx
+		  && CONST_INT_P (XEXP (XEXP (equal, 0), 1)))
+		this_adjust = INTVAL (XEXP (XEXP (equal, 0), 1));
 
+	  if (this_adjust)
+	    {
 	      /* If we've not seen an adjustment previously, record
		  it now and continue.  */
 	      if (! last_sp_set)
		 {
		   last_sp_set = insn;
		   last_sp_adjust = this_adjust;
+		  if (REG_P (src))
+		    {
+		      if (copy == NULL)
+			copy = BITMAP_ALLOC (&reg_obstack);
+		      last_sp_live = copy;
+		      bitmap_copy (last_sp_live, live);
+		    }
+		  else
+		    last_sp_live = NULL;
+		  df_simulate_one_insn_forwards (bb, insn, live);
		   continue;
		 }
 
@@ -560,13 +635,16 @@ combine_stack_adjustments_for_block (bas
		   && try_apply_stack_adjustment (last_sp_set, reflist,
						  last_sp_adjust
						  + this_adjust,
-						 this_adjust))
+						 this_adjust,
+						 last_sp_live,
+						 insn))
		 {
		   /* It worked!  */
		   maybe_move_args_size_note (last_sp_set, insn, false);
		   maybe_merge_cfa_adjust (last_sp_set, insn, false);
		   delete_insn (insn);
		   last_sp_adjust += this_adjust;
+		  last_sp_live = NULL;
		   continue;
		 }
	     }
@@ -577,10 +655,12 @@ combine_stack_adjustments_for_block (bas
			? last_sp_adjust >= 0 : last_sp_adjust <= 0)
		 {
		   if (no_unhandled_cfa (last_sp_set)
+		      && !REG_P (src)
		       && try_apply_stack_adjustment (insn, reflist,
						      last_sp_adjust
						      + this_adjust,
-						     -last_sp_adjust))
+						     -last_sp_adjust,
+						     NULL, NULL))
		     {
		       /* It worked!  */
		       maybe_move_args_size_note (insn, last_sp_set, true);
@@ -588,8 +668,10 @@ combine_stack_adjustments_for_block (bas
		       delete_insn (last_sp_set);
		       last_sp_set = insn;
		       last_sp_adjust += this_adjust;
+		      last_sp_live = NULL;
		       free_csa_reflist (reflist);
		       reflist = NULL;
+		      df_simulate_one_insn_forwards (bb, insn, live);
		       continue;
		     }
		 }
@@ -612,6 +694,16 @@ combine_stack_adjustments_for_block (bas
	       reflist = NULL;
	       last_sp_set = insn;
	       last_sp_adjust = this_adjust;
+	      if (REG_P (src))
+		{
+		  if (copy == NULL)
+		    copy = BITMAP_ALLOC (&reg_obstack);
+		  last_sp_live = copy;
+		  bitmap_copy (last_sp_live, live);
+		}
+	      else
+		last_sp_live = NULL;
+	      df_simulate_one_insn_forwards (bb, insn, live);
	       continue;
	     }
 
@@ -641,7 +733,8 @@ combine_stack_adjustments_for_block (bas
	       && !reg_mentioned_p (stack_pointer_rtx, src)
	       && memory_address_p (GET_MODE (dest), stack_pointer_rtx)
	       && try_apply_stack_adjustment (insn, reflist, 0,
-					     -last_sp_adjust))
+					     -last_sp_adjust,
+					     NULL, NULL))
	     {
	       if (last2_sp_set)
		 maybe_move_args_size_note (last2_sp_set, last_sp_set, false);
@@ -652,13 +745,17 @@ combine_stack_adjustments_for_block (bas
	       reflist = NULL;
	       last_sp_set = NULL;
	       last_sp_adjust = 0;
+	      last_sp_live = NULL;
+	      df_simulate_one_insn_forwards (bb, insn, live);
	       continue;
	     }
	 }
 
-      if (!CALL_P (insn) && last_sp_set
-	  && record_stack_refs (insn, &reflist))
-	continue;
+      if (!CALL_P (insn) && last_sp_set && record_stack_refs (insn, &reflist))
+	{
+	  df_simulate_one_insn_forwards (bb, insn, live);
+	  continue;
+	}
 
       /* Otherwise, we were not able to process the instruction.
	  Do not continue collecting data across such a one.  */
@@ -676,7 +773,10 @@ combine_stack_adjustments_for_block (bas
	   last2_sp_set = NULL;
	   last_sp_set = NULL;
	   last_sp_adjust = 0;
+	  last_sp_live = NULL;
	 }
+
+      df_simulate_one_insn_forwards (bb, insn, live);
     }
 
   if (last_sp_set && last_sp_adjust == 0)
@@ -687,6 +787,8 @@ combine_stack_adjustments_for_block (bas
 
   if (reflist)
     free_csa_reflist (reflist);
+  if (copy)
+    BITMAP_FREE (copy);
 }
 
 static unsigned int
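
P.S. The illustrative sketch mentioned above, for readers less familiar with the
pass: a purely hypothetical C testcase (not the PR94516 reproducer and not part
of the patch) showing the kind of code where two back-to-back stack pointer
adjustments can arise and where csa would like to coalesce them.  Whether the
exact sp = sp + 32; sp = sp - 8; sequence from the description materializes
depends on the target, ABI and options; the function names and argument counts
below are made up.

/* Hypothetical illustration only.  With a calling convention that passes
   some arguments on the stack and without accumulated outgoing arguments,
   the argument area released after the first call (sp = sp + const) may be
   re-allocated, slightly smaller, for the second call (sp = sp - const).
   Those two adjustments are the kind of pair combine-stack-adjustments
   tries to merge, and which postreload can now instead rewrite as
   sp = reg; followed by sp = sp - const.  */

extern void take_many (int, int, int, int, int, int, int, int, int, int);
extern void take_some (int, int, int, int, int, int, int, int);

void
example (int x)
{
  take_many (x, x, x, x, x, x, x, x, x, x);  /* needs a larger argument area */
  take_some (x, x, x, x, x, x, x, x);        /* needs a smaller one right after */
}

Compiling such a function with -O2 and inspecting the -fdump-rtl-postreload and
-fdump-rtl-csa dumps is the easiest way to check whether the sp = reg
replacement and the coalescing described above actually happen for a given
target.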