From patchwork Mon Dec 12 14:46:01 2011
X-Patchwork-Submitter: Michael Matz
X-Patchwork-Id: 130756
Date: Mon, 12 Dec 2011 15:46:01 +0100 (CET)
From: Michael Matz
To: Jakub Jelinek
Cc: Richard Guenther, Richard Henderson, gcc-patches@gcc.gnu.org
Subject: Re: [PATCH] Sink clobbers if EH block contains just clobbers
 (PR tree-optimization/51117)
In-Reply-To: <20111209194939.GC1957@tyan-ft48-01.lab.bos.redhat.com>
References: <20111209194939.GC1957@tyan-ft48-01.lab.bos.redhat.com>

Hello,

On Fri, 9 Dec 2011, Jakub Jelinek wrote:

> I had to tweak a little bit the expander conflict checking, because
> if we have a BB with two incoming EH edges and clobber stmts from both
> sunk into its beginning, then it would consider both variables (a and b
> above) to be live at the same time, while there is no insn on which they
> can actually live at the same time, the PHIs don't mention either of them
> (and after all, PHIs aren't memory loads), and after the PHIs we have
> immediately the clobbers.

The idea is sound; the implementation can be tidied with the observation
that only the first real instruction (rather than the start of the BB) is
the point at which everything currently live needs to be made conflicting,
as in the patch below (only the cfgexpand.c part changed).  I.e. moving
the existing code from add_scope_conflicts_1 down a bit is enough.

I'm putting this through regstrapping on x86_64-linux and will commit if
that succeeds, given rth's approval for the other parts.  I wonder how
best to test this.


Ciao,
Michael.

	PR tree-optimization/51117
	* tree-eh.c (sink_clobbers): New function.
	(execute_lower_eh_dispatch): Call it for BBs ending with
	internally throwing RESX.
	* cfgexpand.c (add_scope_conflicts_1): Add all conflicts only
	at the first real instruction.

Index: cfgexpand.c
===================================================================
--- cfgexpand.c	(revision 182241)
+++ cfgexpand.c	(working copy)
@@ -456,34 +456,14 @@ add_scope_conflicts_1 (basic_block bb, b
   FOR_EACH_EDGE (e, ei, bb->preds)
     bitmap_ior_into (work, (bitmap)e->src->aux);

-  if (for_conflict)
-    {
-      /* We need to add conflicts for everything life at the start of
-	 this block.  Unlike classical lifeness for named objects we can't
-	 rely on seeing a def/use of the names we're interested in.
-	 There might merely be indirect loads/stores.  We'd not add any
-	 conflicts for such partitions.  */
-      bitmap_iterator bi;
-      unsigned i;
-      EXECUTE_IF_SET_IN_BITMAP (work, 0, i, bi)
-	{
-	  unsigned j;
-	  bitmap_iterator bj;
-	  EXECUTE_IF_SET_IN_BITMAP (work, i, j, bj)
-	    add_stack_var_conflict (i, j);
-	}
-      visit = visit_conflict;
-    }
-  else
-    visit = visit_op;
+  visit = visit_op;

   for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
     {
       gimple stmt = gsi_stmt (gsi);
-      if (!is_gimple_debug (stmt))
-	walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
+      walk_stmt_load_store_addr_ops (stmt, work, NULL, NULL, visit);
     }
-  for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
+  for (gsi = gsi_after_labels (bb); !gsi_end_p (gsi); gsi_next (&gsi))
     {
       gimple stmt = gsi_stmt (gsi);

@@ -501,7 +481,29 @@ add_scope_conflicts_1 (basic_block bb, b
	  bitmap_clear_bit (work, *v);
	}
       else if (!is_gimple_debug (stmt))
-	walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
+	{
+	  if (for_conflict
+	      && visit == visit_op)
+	    {
+	      /* If this is the first real instruction in this BB we need
+		 to add conflicts for everything life at this point now.
+		 Unlike classical lifeness for named objects we can't
+		 rely on seeing a def/use of the names we're interested in.
+		 There might merely be indirect loads/stores.  We'd not add any
+		 conflicts for such partitions.  */
+	      bitmap_iterator bi;
+	      unsigned i;
+	      EXECUTE_IF_SET_IN_BITMAP (work, 0, i, bi)
+		{
+		  unsigned j;
+		  bitmap_iterator bj;
+		  EXECUTE_IF_SET_IN_BITMAP (work, i, j, bj)
+		    add_stack_var_conflict (i, j);
+		}
+	      visit = visit_conflict;
+	    }
+	  walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
+	}
     }
 }

Index: tree-eh.c
===================================================================
--- tree-eh.c	(revision 182241)
+++ tree-eh.c	(working copy)
@@ -3194,6 +3194,76 @@ optimize_clobbers (basic_block bb)
     }
 }

+/* Try to sink var = {v} {CLOBBER} stmts followed just by
+   internal throw to successor BB.  */
+
+static int
+sink_clobbers (basic_block bb)
+{
+  edge e;
+  edge_iterator ei;
+  gimple_stmt_iterator gsi, dgsi;
+  basic_block succbb;
+  bool any_clobbers = false;
+
+  /* Only optimize if BB has a single EH successor and
+     all predecessor edges are EH too.  */
+  if (!single_succ_p (bb)
+      || (single_succ_edge (bb)->flags & EDGE_EH) == 0)
+    return 0;
+
+  FOR_EACH_EDGE (e, ei, bb->preds)
+    {
+      if ((e->flags & EDGE_EH) == 0)
+	return 0;
+    }
+
+  /* And BB contains only CLOBBER stmts before the final
+     RESX.  */
+  gsi = gsi_last_bb (bb);
+  for (gsi_prev (&gsi); !gsi_end_p (gsi); gsi_prev (&gsi))
+    {
+      gimple stmt = gsi_stmt (gsi);
+      if (is_gimple_debug (stmt))
+	continue;
+      if (gimple_code (stmt) == GIMPLE_LABEL)
+	break;
+      if (!gimple_clobber_p (stmt)
+	  || TREE_CODE (gimple_assign_lhs (stmt)) == SSA_NAME)
+	return 0;
+      any_clobbers = true;
+    }
+  if (!any_clobbers)
+    return 0;
+
+  succbb = single_succ (bb);
+  dgsi = gsi_after_labels (succbb);
+  gsi = gsi_last_bb (bb);
+  for (gsi_prev (&gsi); !gsi_end_p (gsi); gsi_prev (&gsi))
+    {
+      gimple stmt = gsi_stmt (gsi);
+      tree vdef;
+      if (is_gimple_debug (stmt))
+	continue;
+      if (gimple_code (stmt) == GIMPLE_LABEL)
+	break;
+      unlink_stmt_vdef (stmt);
+      gsi_remove (&gsi, false);
+      vdef = gimple_vdef (stmt);
+      if (vdef && TREE_CODE (vdef) == SSA_NAME)
+	{
+	  vdef = SSA_NAME_VAR (vdef);
+	  mark_sym_for_renaming (vdef);
+	  gimple_set_vdef (stmt, vdef);
+	  gimple_set_vuse (stmt, vdef);
+	}
+      release_defs (stmt);
+      gsi_insert_before (&dgsi, stmt, GSI_SAME_STMT);
+    }
+
+  return TODO_update_ssa_only_virtuals;
+}
+
 /* At the end of inlining, we can lower EH_DISPATCH.  Return true
    when we have found some duplicate labels and removed some edges.  */

@@ -3349,7 +3419,7 @@ static unsigned
 execute_lower_eh_dispatch (void)
 {
   basic_block bb;
-  bool any_rewritten = false;
+  int flags = 0;
   bool redirected = false;

   assign_filter_values ();
@@ -3362,16 +3432,20 @@ execute_lower_eh_dispatch (void)
       if (gimple_code (last) == GIMPLE_EH_DISPATCH)
	{
	  redirected |= lower_eh_dispatch (bb, last);
-	  any_rewritten = true;
+	  flags |= TODO_update_ssa_only_virtuals;
+	}
+      else if (gimple_code (last) == GIMPLE_RESX)
+	{
+	  if (stmt_can_throw_external (last))
+	    optimize_clobbers (bb);
+	  else
+	    flags |= sink_clobbers (bb);
	}
-      else if (gimple_code (last) == GIMPLE_RESX
-	       && stmt_can_throw_external (last))
-	optimize_clobbers (bb);
     }

   if (redirected)
     delete_unreachable_blocks ();
-  return any_rewritten ? TODO_update_ssa_only_virtuals : 0;
+  return flags;
 }

 static bool