Another jump threading improvement

Submitter Jeff Law
Date Aug. 28, 2013, 3:07 p.m.
Message ID <521E124D.9080800@redhat.com>
Permalink /patch/270546/

Comments

When the code was written to thread jumps through joiner blocks, out of 
an abundance of caution I severely restricted the form allowed for the 
successors of the joiner block.  In particular we required the successor 
of the joiner block to have just one predecessor and more than one 
successor.  This patch removes those restrictions.

By allowing the successor of the joiner to have more than one predecessor, we immediately 
expose more jump threading opportunities.  By allowing the successor to 
have a single successor, we can thread through forwarder blocks after 
the joiner (which is important for the FSA optimization).
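The relaxed check can be pictured with a toy model (this is an illustrative sketch, not GCC's basic_block API; the struct and the function name skip_forwarders are hypothetical): with the single-successor restriction lifted, the walk past the joiner may now pass through empty forwarder blocks until it reaches a block ending in a conditional.

```c
#include <assert.h>
#include <stddef.h>

/* Toy CFG node, for illustration only.  succ == NULL models a block
   that ends in a multi-way conditional; a non-NULL succ with
   empty == 1 models an empty forwarder block.  */
struct block
{
  int empty;            /* no PHIs, no side-effecting statements */
  struct block *succ;   /* the lone successor, or NULL */
};

/* Follow a chain of empty forwarder blocks.  Before this patch the
   walk stopped at any single-successor block; now such blocks are
   simply skipped until we reach one with a threadable condition.  */
static struct block *
skip_forwarders (struct block *bb)
{
  while (bb->empty && bb->succ != NULL)
    bb = bb->succ;
  return bb;
}
```

So a chain joiner -> fwd1 -> fwd2 -> cond now yields the conditional block, whereas the old predecessor/successor checks would have given up at fwd1.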

This patch also filters out jump threading requests which are not useful 
because they're subsumed by a simpler jump threading request.

Consider (A, B) (B, C) (C, D).  In this case B is a joiner block and 
will need to be copied. The outgoing edge B->C will be threaded to C->D.

If we have a jump threading request (B, C) (C, D), we get the same 
effect without copying B and thus it's preferred.  In fact, if we copy B 
it's likely B' will have a threadable jump, but we can't optimize it 
until the next iteration, which is obviously bad.
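The subsumption filter can be sketched with a simplified model of the threaded_edges vector (a flat array of edge triples; the struct, register_simple, and keep_joiner here are hypothetical stand-ins, not GCC's actual helpers): first register the requests that need no joiner copy, then drop any joiner request whose intermediate edge was already registered.

```c
#include <assert.h>
#include <stddef.h>

/* Toy edge: aux non-NULL means a thread request was registered on it,
   mirroring how mark_threaded_blocks uses e->aux.  */
struct edge_s { void *aux; };
typedef struct edge_s *edge;

/* Requests are flat triples (E, T1, T2); T2 != NULL marks a request
   that threads through a joiner block.  First pass: register only
   the requests that require no joiner block copy.  */
static void
register_simple (edge *req, size_t n)
{
  for (size_t i = 0; i < n; i += 3)
    if (req[i + 2] == NULL)
      req[i]->aux = &req[i];    /* mark E as threaded */
}

/* Second pass: a joiner request survives only if its intermediate
   edge T1 was not already threaded by a simpler request.  */
static int
keep_joiner (edge *req, size_t i)
{
  return req[i + 2] != NULL && req[i + 1]->aux == NULL;
}
```

For the example above, the joiner triple (A->B, B->C, C->D) coexists with the simple triple (B->C, C->D, NULL); after the first pass registers B->C, the second pass rejects the joiner request, so B is never copied.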


Bootstrapped and regression tested on x86_64-unknown-linux-gnu. 
Installed onto the trunk.
commit 06df8aadeebff5f4cad3082efe3c12e87d3347e5
Author: Jeff Law <law@redhat.com>
Date:   Wed Aug 28 09:04:09 2013 -0600

            * tree-ssa-threadedge.c (thread_around_empty_block): Remove
            checks for the number of predecessors and successors allowed.
            * tree-ssa-threadupdate.c (mark_threaded_blocks): Ignore requests
            which require copying a joiner block if there is a request which
            is a subpath that requires no joiner block copying.

Patch

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 746adff..17a355c 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,11 @@ 
+2013-08-28  Jeff Law  <law@redhat.com>
+
+	* tree-ssa-threadedge.c (thread_around_empty_block): Remove
+	checks for the number of predecessors and successors allowed.
+	* tree-ssa-threadupdate.c (mark_threaded_blocks): Ignore requests
+	which require copying a joiner block if there is a request which
+	is a subpath that requires no joiner block copying.
+
 2013-08-28  Jakub Jelinek  <jakub@redhat.com>
 
 	PR middle-end/58257
diff --git a/gcc/tree-ssa-threadedge.c b/gcc/tree-ssa-threadedge.c
index 320dec5..fc33647 100644
--- a/gcc/tree-ssa-threadedge.c
+++ b/gcc/tree-ssa-threadedge.c
@@ -761,14 +761,6 @@  thread_around_empty_block (edge taken_edge,
   gimple stmt;
   tree cond;
 
-  /* This block must have a single predecessor (E->dest).  */
-  if (!single_pred_p (bb))
-    return NULL;
-
-  /* This block must have more than one successor.  */
-  if (single_succ_p (bb))
-    return NULL;
-
   /* This block can have no PHI nodes.  This is overly conservative.  */
   if (!gsi_end_p (gsi_start_phis (bb)))
     return NULL;
diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
index 259a691..8a872a3 100644
--- a/gcc/tree-ssa-threadupdate.c
+++ b/gcc/tree-ssa-threadupdate.c
@@ -1146,17 +1146,56 @@  mark_threaded_blocks (bitmap threaded_blocks)
   edge e;
   edge_iterator ei;
 
+  /* It is possible to have jump threads in which one is a subpath
+     of the other.  I.e., (A, B), (B, C), (C, D) where B is a joiner
+     block and (B, C), (C, D) where no joiner block exists.
+
+     When this occurs ignore the jump thread request with the joiner
+     block.  It's totally subsumed by the simpler jump thread request.
+
+     This results in less block copying and simpler CFGs.  More importantly,
+     when we duplicate the joiner block, B, in this case we will create
+     a new threading opportunity that we wouldn't be able to optimize
+     until the next jump threading iteration. 
+
+     So first convert the jump thread requests which do not require a
+     joiner block.  */
   for (i = 0; i < threaded_edges.length (); i += 3)
     {
       edge e = threaded_edges[i];
-      edge *x = XNEWVEC (edge, 2);
 
-      e->aux = x;
-      THREAD_TARGET (e) = threaded_edges[i + 1];
-      THREAD_TARGET2 (e) = threaded_edges[i + 2];
-      bitmap_set_bit (tmp, e->dest->index);
+      if (threaded_edges[i + 2] == NULL)
+	{
+	  edge *x = XNEWVEC (edge, 2);
+
+	  e->aux = x;
+	  THREAD_TARGET (e) = threaded_edges[i + 1];
+	  THREAD_TARGET2 (e) = NULL;
+	  bitmap_set_bit (tmp, e->dest->index);
+	}
     }
 
+
+  /* Now iterate again, converting cases where we threaded through
+     a joiner block, but ignoring those where we have already
+     threaded through the joiner block.  */
+  for (i = 0; i < threaded_edges.length (); i += 3)
+    {
+      edge e = threaded_edges[i];
+
+      if (threaded_edges[i + 2] != NULL
+	  && threaded_edges[i + 1]->aux == NULL)
+	{
+	  edge *x = XNEWVEC (edge, 2);
+
+	  e->aux = x;
+	  THREAD_TARGET (e) = threaded_edges[i + 1];
+	  THREAD_TARGET2 (e) = threaded_edges[i + 2];
+	  bitmap_set_bit (tmp, e->dest->index);
+	}
+    }
+
+
   /* If optimizing for size, only thread through block if we don't have
      to duplicate it or it's an otherwise empty redirection block.  */
   if (optimize_function_for_size_p (cfun))