From patchwork Tue Nov 26 14:46:04 2019
From: Richard Biener
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] Fix PR92674
Date: Tue, 26 Nov 2019 15:46:04 +0100 (CET)

The following delays all edge pruning until inlining into a function
is complete.  This avoids crashes where stmt folding walks the SSA
use->def chain and arrives at PHIs that have not been given arguments
yet.

Bootstrapped on x86_64-unknown-linux-gnu, testing in progress.

Richard.

2019-11-26  Richard Biener

	PR middle-end/92674
	* tree-inline.c (expand_call_inline): Delay purging EH/abnormal
	edges and instead record blocks in a bitmap.
	(gimple_expand_calls_inline): Adjust.
	(fold_marked_statements): Delay EH cleanup until all folding is
	done.
	(optimize_inline_calls): Do EH/abnormal cleanup for calls after
	inlining has finished.
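For illustration, a minimal standalone sketch of the record-now,
purge-later scheme the patch introduces.  All names here (bb_set,
purge_block, inline_one_call) are made up and merely stand in for
GCC's auto_bitmap, gimple_purge_dead_eh_edges /
gimple_purge_dead_abnormal_call_edges and expand_call_inline; this is
not GCC code:

/* Sketch of the record-then-purge scheme; hypothetical names,
   not GCC internals.  */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_BLOCKS 64

/* Stand-in for GCC's auto_bitmap keyed by basic-block index.  */
typedef struct { bool bit[MAX_BLOCKS]; } bb_set;

/* Stand-in for gimple_purge_dead_eh_edges plus
   gimple_purge_dead_abnormal_call_edges on one block.  */
static void
purge_block (int index)
{
  printf ("purging dead EH/abnormal edges from block %d\n", index);
}

/* Stand-in for expand_call_inline: instead of purging the return
   block's edges immediately, only record the block index.  */
static void
inline_one_call (int return_block, bb_set *to_purge)
{
  /* ...copy the callee body, redirect edges, etc...  */
  to_purge->bit[return_block] = true;  /* record, do not purge yet */
}

int
main (void)
{
  bb_set to_purge;
  memset (&to_purge, 0, sizeof to_purge);

  /* Inline several calls; the CFG keeps all its edges for now.  */
  inline_one_call (3, &to_purge);
  inline_one_call (7, &to_purge);

  /* ...fold_marked_statements would run here, free to walk the
     still-intact SSA use->def chains...  */

  /* One cleanup pass once all inlining and folding is done.  */
  for (int i = 0; i < MAX_BLOCKS; i++)
    if (to_purge.bit[i])
      purge_block (i);

  return 0;
}

The point is the ordering: no edge is removed while statements are
still being inlined and folded, so a use->def walk can never reach a
PHI whose incoming edges (and with them its arguments) were already
deleted.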
Index: gcc/tree-inline.c
===================================================================
--- gcc/tree-inline.c	(revision 278722)
+++ gcc/tree-inline.c	(working copy)
@@ -4623,7 +4623,8 @@ reset_debug_bindings (copy_body_data *id
 /* If STMT is a GIMPLE_CALL, replace it with its inline expansion.  */
 
 static bool
-expand_call_inline (basic_block bb, gimple *stmt, copy_body_data *id)
+expand_call_inline (basic_block bb, gimple *stmt, copy_body_data *id,
+                    bitmap to_purge)
 {
   tree use_retvar;
   tree fn;
@@ -4768,7 +4769,7 @@ expand_call_inline (basic_block bb, gimp
               gimple_call_set_fndecl (stmt, edge->callee->decl);
               update_stmt (stmt);
               id->src_node->remove ();
-              expand_call_inline (bb, stmt, id);
+              expand_call_inline (bb, stmt, id, to_purge);
               maybe_remove_unused_call_args (cfun, stmt);
               return true;
             }
@@ -5156,10 +5157,7 @@ expand_call_inline (basic_block bb, gimp
     }
 
   if (purge_dead_abnormal_edges)
-    {
-      gimple_purge_dead_eh_edges (return_block);
-      gimple_purge_dead_abnormal_call_edges (return_block);
-    }
+    bitmap_set_bit (to_purge, return_block->index);
 
   /* If the value of the new expression is ignored, that's OK.  We
      don't warn about this for CALL_EXPRs, so we shouldn't warn about
@@ -5197,7 +5195,8 @@ expand_call_inline (basic_block bb, gimp
    in a MODIFY_EXPR.  */
 
 static bool
-gimple_expand_calls_inline (basic_block bb, copy_body_data *id)
+gimple_expand_calls_inline (basic_block bb, copy_body_data *id,
+                            bitmap to_purge)
 {
   gimple_stmt_iterator gsi;
   bool inlined = false;
@@ -5209,7 +5208,7 @@ gimple_expand_calls_inline (basic_block
 
       if (is_gimple_call (stmt)
           && !gimple_call_internal_p (stmt))
-        inlined |= expand_call_inline (bb, stmt, id);
+        inlined |= expand_call_inline (bb, stmt, id, to_purge);
     }
 
   return inlined;
@@ -5222,6 +5221,7 @@ gimple_expand_calls_inline (basic_block
 static void
 fold_marked_statements (int first, hash_set<gimple *> *statements)
 {
+  auto_bitmap to_purge;
   for (; first < last_basic_block_for_fn (cfun); first++)
     if (BASIC_BLOCK_FOR_FN (cfun, first))
       {
@@ -5233,7 +5233,8 @@ fold_marked_statements (int first, hash_
          if (statements->contains (gsi_stmt (gsi)))
            {
              gimple *old_stmt = gsi_stmt (gsi);
-             tree old_decl = is_gimple_call (old_stmt) ? gimple_call_fndecl (old_stmt) : 0;
+             tree old_decl
+               = is_gimple_call (old_stmt) ? gimple_call_fndecl (old_stmt) : 0;
 
              if (old_decl && fndecl_built_in_p (old_decl))
                {
@@ -5277,8 +5278,7 @@ fold_marked_statements (int first, hash_
                               is mood anyway.  */
                            if (maybe_clean_or_replace_eh_stmt (old_stmt,
                                                                new_stmt))
-                             gimple_purge_dead_eh_edges (
-                                 BASIC_BLOCK_FOR_FN (cfun, first));
+                             bitmap_set_bit (to_purge, first);
                            break;
                          }
                        gsi_next (&i2);
@@ -5298,11 +5298,11 @@ fold_marked_statements (int first, hash_
                                                       new_stmt);
 
                    if (maybe_clean_or_replace_eh_stmt (old_stmt, new_stmt))
-                     gimple_purge_dead_eh_edges (BASIC_BLOCK_FOR_FN (cfun,
-                                                                     first));
+                     bitmap_set_bit (to_purge, first);
                  }
              }
          }
+  gimple_purge_all_dead_eh_edges (to_purge);
 }
 
 /* Expand calls to inline functions in the body of FN.  */
@@ -5348,8 +5348,9 @@
      will split id->current_basic_block, and the new blocks will
     follow it; we'll trudge through them, processing their CALL_EXPRs
     along the way.  */
+  auto_bitmap to_purge;
   FOR_EACH_BB_FN (bb, cfun)
-    inlined_p |= gimple_expand_calls_inline (bb, &id);
+    inlined_p |= gimple_expand_calls_inline (bb, &id, to_purge);
 
   pop_gimplify_context (NULL);
 
@@ -5369,6 +5370,21 @@
   fold_marked_statements (last, id.statements_to_fold);
   delete id.statements_to_fold;
 
+  /* Finally purge EH and abnormal edges from the call stmts we inlined.
+     We need to do this after fold_marked_statements since that may walk
+     the SSA use-def chain.  */
+  unsigned i;
+  bitmap_iterator bi;
+  EXECUTE_IF_SET_IN_BITMAP (to_purge, 0, i, bi)
+    {
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
+      if (bb)
+        {
+          gimple_purge_dead_eh_edges (bb);
+          gimple_purge_dead_abnormal_call_edges (bb);
+        }
+    }
+
   gcc_assert (!id.debug_stmts.exists ());
 
   /* If we didn't inline into the function there is nothing to do.  */