From patchwork Wed Aug 21 19:28:13 2013
X-Patchwork-Submitter: Jeff Law
X-Patchwork-Id: 268891
Message-ID: <521514CD.60201@redhat.com>
Date: Wed, 21 Aug 2013 13:28:13 -0600
From: Jeff Law
To: gcc-patches
Subject: Change API for register_jump_thread to pass in an entire path

Right now, the API to register a requested jump thread passes three edges: the incoming edge we traverse, the outgoing edge we know will be taken as a result of traversing the incoming edge, and an optional edge to allow us to find the joiner block when we thread through a join node.

Note that incoming_edge->dest does not have to reference the same block as outgoing_edge->src, as we allow the threader to thread across multiple blocks. When we thread through multiple blocks, the latter blocks must be empty (no side effects other than control transfer).

The limitations we place when threading around empty blocks have kept the updating code relatively simple, as we do not need to clone those empty blocks -- we just need to know where they're going to transfer control to. As a result, the current code to update the SSA and CFG representations after threading a jump just needs to clone a single block and its side effects and perform minimal PHI node updates.

The general form of the FSA optimization changes things a bit in that we need to clone two blocks.
This entails additional PHI node updates. To enable proper updating of the PHI nodes, we need more pieces of the threaded path. Rather than add another special argument to the register_jump_thread API, I'm changing the API to record the entire threaded path.

This patch just changes the API so the full path is available. The SSA/CFG updating code (right now) just converts the full path into the 3-edge form. This does not change the generated code in any way, shape, or form.

Bootstrapped and regression tested on x86_64-unknown-linux-gnu. Installed.

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 7162f34..ba9c7c9 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,5 +1,12 @@
 2013-08-21  Jeff Law
 
+	* tree-flow.h (register_jump_thread): Pass vector of edges
+	instead of each important edge.
+	* tree-ssa-threadedge.c (thread_across_edge): Build the jump
+	thread path into a vector and pass that to register_jump_thread.
+	* tree-ssa-threadupdate.c (register_jump_thread): Convert the
+	passed in edge vector to the current 3-edge form.
+
 	Revert:
 	2013-08-20  Alexey Makhalov

diff --git a/gcc/tree-flow.h b/gcc/tree-flow.h
index caa8d74..01e6562 100644
--- a/gcc/tree-flow.h
+++ b/gcc/tree-flow.h
@@ -750,7 +750,7 @@ bool may_be_nonaddressable_p (tree expr);
 
 /* In tree-ssa-threadupdate.c.  */
 extern bool thread_through_all_blocks (bool);
-extern void register_jump_thread (edge, edge, edge);
+extern void register_jump_thread (vec<edge>);
 
 /* In gimplify.c  */
 tree force_gimple_operand_1 (tree, gimple_seq *, gimple_predicate, tree);

diff --git a/gcc/tree-ssa-threadedge.c b/gcc/tree-ssa-threadedge.c
index 357b671..320dec5 100644
--- a/gcc/tree-ssa-threadedge.c
+++ b/gcc/tree-ssa-threadedge.c
@@ -937,10 +937,15 @@ thread_across_edge (gimple dummy_cond,
 	}
 
       remove_temporary_equivalences (stack);
-      if (!taken_edge)
-	return;
-      propagate_threaded_block_debug_into (taken_edge->dest, e->dest);
-      register_jump_thread (e, taken_edge, NULL);
+      if (taken_edge)
+	{
+	  vec<edge> path = vNULL;
+	  propagate_threaded_block_debug_into (taken_edge->dest, e->dest);
+	  path.safe_push (e);
+	  path.safe_push (taken_edge);
+	  register_jump_thread (path);
+	  path.release ();
+	}
       return;
     }
@@ -969,9 +974,12 @@ thread_across_edge (gimple dummy_cond,
      bitmap_clear (visited);
      bitmap_set_bit (visited, taken_edge->dest->index);
      bitmap_set_bit (visited, e->dest->index);
+     vec<edge> path = vNULL;
 
      /* Record whether or not we were able to thread through a successor
	of E->dest.  */
+     path.safe_push (e);
+     path.safe_push (taken_edge);
      found = false;
      e3 = taken_edge;
      do
@@ -988,6 +996,7 @@ thread_across_edge (gimple dummy_cond,
	  if (e2)
	    {
+	      path.safe_push (e2);
	      e3 = e2;
	      found = true;
	    }
@@ -1008,10 +1017,11 @@ thread_across_edge (gimple dummy_cond,
	    {
	      propagate_threaded_block_debug_into (e3->dest,
						   taken_edge->dest);
-	      register_jump_thread (e, taken_edge, e3);
+	      register_jump_thread (path);
	    }
	}
+      path.release ();
     }
   BITMAP_FREE (visited);
 }

diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
index 0e4cbc9..e84542c 100644
--- a/gcc/tree-ssa-threadupdate.c
+++ b/gcc/tree-ssa-threadupdate.c
@@ -1264,8 +1264,19 @@ thread_through_all_blocks (bool may_peel_loop_headers)
    after fixing the SSA graph.  */
 
 void
-register_jump_thread (edge e, edge e2, edge e3)
+register_jump_thread (vec<edge> path)
 {
+  /* Convert PATH into the 3 edge representation we've been using.  This
+     is temporary until we convert this file to use a path representation
+     throughout.  */
+  edge e = path[0];
+  edge e2 = path[1];
+  edge e3;
+
+  if (path.length () <= 2)
+    e3 = NULL;
+  else
+    e3 = path[path.length () - 1];
+
   /* This can occur if we're jumping to a constant address or
      something similar.  Just get out now.  */
   if (e2 == NULL)
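
For readers following along, the path-to-3-edge conversion at the top of the new register_jump_thread can be sketched standalone. This is a minimal sketch, not GCC code: it uses std::vector and a stub edge type in place of GCC's vec<edge> and edge, and the names edge_stub and path_to_three_edges are illustrative only.

```cpp
#include <cassert>
#include <vector>

/* Stub standing in for GCC's `edge' (a pointer to struct edge_def);
   only the fields needed for this sketch are present.  */
struct edge_stub
{
  int src;
  int dest;
};

/* Mirror of the temporary conversion inside register_jump_thread:
   path[0] is the incoming edge, path[1] the outgoing edge, and for
   paths longer than two edges the final edge plays the old E3 role
   of locating the joiner block.  */
static void
path_to_three_edges (const std::vector<edge_stub *> &path,
                     edge_stub *&e, edge_stub *&e2, edge_stub *&e3)
{
  assert (path.size () >= 2);
  e = path[0];
  e2 = path[1];
  e3 = path.size () <= 2 ? nullptr : path.back ();
}
```

A 2-edge path (the common single-block thread) yields a null e3, matching the old register_jump_thread (e, taken_edge, NULL) call; a longer path recovers the joiner edge from the back of the vector.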