
[4/5] shrink-wrap: Shrink-wrapping for separate components

Message ID cd8a358e576756d1611ca2d7862d4c718ded8bf5.1474616087.git.segher@kernel.crashing.org
State New

Commit Message

Segher Boessenkool Sept. 23, 2016, 8:21 a.m. UTC
This is the main substance of this patch series.

Instead of doing all of the prologue and epilogue in one spot, it often
is better to do components of it at different places, so that they are
executed less frequently.

What exactly is a component is completely up to the target; this code
treats it all abstractly, and uses hooks for the target to handle the
more concrete things.  Commonly there is one component for each callee-
saved register, for example.

Components can be executed more than once per function execution.  This
pass makes sure that, at any point in time, a component's epilogue has
not been executed more often than the corresponding prologue; that the
prologue has been executed more often than the epilogue wherever the
prologue's effect is needed; and that when the function exits, the
epilogue has been executed exactly as often as the prologue.  It does
this by first deciding which blocks need which components "active", and
then placing prologue and epilogue components to make that exactly true.

Deciding what blocks should run with a certain component active so that
the total cost of executing the prologues (and epilogues) is optimal, is
not a computationally feasible problem.  Instead, for each basic block,
we estimate the cost of putting a prologue right before the block, and
if that is cheaper than the total cost of putting prologues optimally
(according to the estimated cost) in the dominator subtrees strictly
dominated by this block, we place it at this block instead.
This simple procedure places the components optimally for any dominator
subtree where the root node's cost does not depend on anything outside
its subtree.

The cost is the execution frequency of all edges into the block coming
from blocks that do not have this component active.  The estimated cost
is the execution frequency of the block, minus the execution frequency
of any backedges (which by definition are coming from subtrees, so if
the "head" block gets a prologue, the source block of any backedge has
that component active as well).
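
As a toy illustration of this placement rule (this is not the patch
code; all names below are made up, and it uses recursion purely for
brevity, which the real pass deliberately avoids):

#include <stdbool.h>
#include <stdio.h>

struct node
{
  long own_cost;         /* Estimated cost of a prologue here: block
			    frequency minus backedge frequencies.  */
  bool needs_component;  /* Does this block itself need the component?  */
  int n_kids;
  struct node *kids[4];  /* Children in the dominator tree.  */
  bool gets_prologue;    /* Decision made by place ().  */
};

/* Decide the placement in the subtree rooted at N and return the total
   cost of that placement.  N gets the prologue if it needs the component
   itself, or if one prologue here is cheaper than the optimal placements
   in all of its subtrees together.  */
static long
place (struct node *n)
{
  long subtree_cost = 0;
  for (int i = 0; i < n->n_kids; i++)
    subtree_cost += place (n->kids[i]);

  if (n->needs_component || subtree_cost > n->own_cost)
    {
      n->gets_prologue = true;
      return n->own_cost;
    }

  return subtree_cost;
}

int
main (void)
{
  /* A function entry (frequency 100) dominating a cold error path
     (frequency 2) and a hot path (frequency 90), both of which need the
     component; the entry block itself does not.  */
  struct node cold = { 2, true, 0, { 0 }, false };
  struct node hot = { 90, true, 0, { 0 }, false };
  struct node entry = { 100, false, 2, { &cold, &hot }, false };

  place (&entry);

  /* 2 + 90 = 92 is cheaper than 100, so the prologues stay in the two
     subtrees and the entry block stays prologue-free.  */
  printf ("entry=%d cold=%d hot=%d\n",
	  entry.gets_prologue, cold.gets_prologue, hot.gets_prologue);
  return 0;
}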

Currently, the epilogues are placed as late as possible, given the
constraints.  This does not matter for execution cost, but we could
save a little bit of code size by placing the epilogues in a smarter
way.  This is a possible future optimisation.

Now all that is left is inserting prologues and epilogues on all edges
that jump into resp. out of the "active" set of blocks.  Often we need
to insert some components' prologues (or epilogues) on all edges into
(or out of) a block.  In theory cross-jumping can unify all such, but
in practice that often fails; besides, that is a lot of work.  So in
this case we insert the prologue and epilogue components at the "head"
or "tail" of a block, instead.

As a final optimisation, if a block needs a prologue and its immediate
dominator has the block as a post-dominator, that immediate dominator
gets the prologue as well.


2016-09-23  Segher Boessenkool  <segher@kernel.crashing.org>

	* function.c (thread_prologue_and_epilogue_insns): Recompute the
	live info.  Call try_shrink_wrapping_separate.  Compute the
	prologue_seq afterwards, if it has possibly changed.  Compute the
	split_prologue_seq and epilogue_seq later, too.
	* shrink-wrap.c: #include cfgbuild.h.
	(dump_components): New function.
	(struct sw): New struct.
	(SW): New function.
	(init_separate_shrink_wrap): New function.
	(fini_separate_shrink_wrap): New function.
	(place_prologue_for_one_component): New function.
	(spread_components): New function.
	(disqualify_problematic_components): New function.
	(emit_common_heads_for_components): New function.
	(emit_common_tails_for_components): New function.
	(insert_prologue_epilogue_for_components): New function.
	(try_shrink_wrapping_separate): New function.
	* shrink-wrap.h: Declare try_shrink_wrapping_separate.

---
 gcc/function.c    |   9 +-
 gcc/shrink-wrap.c | 729 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 gcc/shrink-wrap.h |   1 +
 3 files changed, 737 insertions(+), 2 deletions(-)

Comments

Jeff Law Sept. 27, 2016, 9:14 p.m. UTC | #1
On 09/23/2016 02:21 AM, Segher Boessenkool wrote:
> Deciding what blocks should run with a certain component active so that
> the total cost of executing the prologues (and epilogues) is optimal, is
> not a computationally feasible problem.  Instead, for each basic block,
> we estimate the cost of putting a prologue right before the block, and
> if that is cheaper than the total cost of putting prologues optimally
> (according to the estimated cost) in the dominator subtrees strictly
> dominated by this first block, place it at the first block instead.
> This simple procedure places the components optimally for any dominator
> sub tree where the root node's cost does not depend on anything outside
> its subtree.
>
> The cost is the execution frequency of all edges into the block coming
> from blocks that do not have this component active.  The estimated cost
> is the execution frequency of the block, minus the execution frequency
> of any backedges (which by definition are coming from subtrees, so if
> the "head" block gets a prologue, the source block of any backedge has
> that component active as well).
>
> Currently, the epilogues are placed as late as possible, given the
> constraints.  This does not matter for execution cost, but we could
> save a little bit of code size by placing the epilogues in a smarter
> way.  This is a possible future optimisation.
>
> Now all that is left is inserting prologues and epilogues on all edges
> that jump into resp. out of the "active" set of blocks.  Often we need
> to insert some components' prologues (or epilogues) on all edges into
> (or out of) a block.  In theory cross-jumping can unify all such, but
> in practice that often fails; besides, that is a lot of work.  So in
> this case we insert the prologue and epilogue components at the "head"
> or "tail" of a block, instead.
>
> As a final optimisation, if a block needs a prologue and its immediate
> dominator has the block as a post-dominator, that immediate dominator
> gets the prologue as well.
>
>
> 2016-09-23  Segher Boessenkool  <segher@kernel.crashing.org>
>
> 	* function.c (thread_prologue_and_epilogue_insns): Recompute the
> 	live info.  Call try_shrink_wrapping_separate.  Compute the
> 	prologue_seq afterwards, if it has possibly changed.  Compute the
> 	split_prologue_seq and epilogue_seq later, too.
> 	* shrink-wrap.c: #include cfgbuild.h.
> 	(dump_components): New function.
> 	(struct sw): New struct.
> 	(SW): New function.
> 	(init_separate_shrink_wrap): New function.
> 	(fini_separate_shrink_wrap): New function.
> 	(place_prologue_for_one_component): New function.
> 	(spread_components): New function.
> 	(disqualify_problematic_components): New function.
> 	(emit_common_heads_for_components): New function.
> 	(emit_common_tails_for_components): New function.
> 	(insert_prologue_epilogue_for_components): New function.
> 	(try_shrink_wrapping_separate): New function.
> 	* shrink-wrap.h: Declare try_shrink_wrapping_separate.
>
> ---
>  gcc/function.c    |   9 +-
>  gcc/shrink-wrap.c | 729 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  gcc/shrink-wrap.h |   1 +
>  3 files changed, 737 insertions(+), 2 deletions(-)
>
> diff --git a/gcc/function.c b/gcc/function.c
> index 53bad87..79e7b4f 100644
> --- a/gcc/function.c
> +++ b/gcc/function.c
> @@ -5920,9 +5920,7 @@ thread_prologue_and_epilogue_insns (void)
>    edge entry_edge = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun));
>    edge orig_entry_edge = entry_edge;
>
> -  rtx_insn *split_prologue_seq = make_split_prologue_seq ();
>    rtx_insn *prologue_seq = make_prologue_seq ();
> -  rtx_insn *epilogue_seq = make_epilogue_seq ();
>
>    /* Try to perform a kind of shrink-wrapping, making sure the
>       prologue/epilogue is emitted only around those parts of the
> @@ -5930,6 +5928,13 @@ thread_prologue_and_epilogue_insns (void)
>
>    try_shrink_wrapping (&entry_edge, prologue_seq);
>
> +  try_shrink_wrapping_separate (entry_edge->dest);
> +
> +  if (crtl->shrink_wrapped_separate)
> +    prologue_seq = make_prologue_seq ();
Note this implies that make_prologue_seq (and consequently the target 
specific bits for prologue generation) can be safely called more than 
once.  That's probably OK, but might be worth a comment -- your decision 
whether or not to add the comment.


> diff --git a/gcc/shrink-wrap.c b/gcc/shrink-wrap.c
> index b85b1c3..0dced22 100644
> --- a/gcc/shrink-wrap.c
> +++ b/gcc/shrink-wrap.c

> +/* Place the prologue for component WHICH, in the basic blocks dominated
> +   by HEAD.  Do a DFS over the dominator tree, and set bit WHICH in the
> +   HAS_COMPONENTS of a block if either the block has that bit set in
> +   NEEDS_COMPONENTS, or it is cheaper to place the prologue here than in all
> +   dominator subtrees separately.  */
> +static void
> +place_prologue_for_one_component (unsigned int which, basic_block head)
> +{
> +  /* The block we are currently dealing with.  */
> +  basic_block bb = head;
> +  /* Is this the first time we visit this block, i.e. have we just gone
> +     down the tree.  */
> +  bool first_visit = true;
> +
> +  /* Walk the dominator tree, visit one block per iteration of this loop.
> +     Each basic block is visited twice: once before visiting any children
> +     of the block, and once after visiting all of them (leaf nodes are
> +     visited only once).  As an optimization, we do not visit subtrees
> +     that can no longer influence the prologue placement.  */
> +  for (;;)
Is there some reason you wrote this as a loop rather than recursion? 
IMHO it makes this function (and spread_components) more difficult to 
reason about than it needs to be.



> +
> +/* Mark HAS_COMPONENTS for every block dominated by at least one block with
> +   PRO_COMPONENTS set for the respective components, starting at HEAD.  */
> +static void
> +spread_components (basic_block head)
I don't see any references to PRO_COMPONENTS in the actual code.  If I'm 
reading the code correctly, you're just accumulating/propagating 
HAS_COMPONENTS from parents to children via a DFS walk of the dominator 
tree, right?





> +
> +/* Place code for prologues and epilogues for COMPONENTS where we can put
> +   that code at the end of basic blocks.  */
> +static void
> +emit_common_tails_for_components (sbitmap components)
[ Snip. ]
> +
> +      /* Put the code at the end of the BB, but before any final jump.  */
> +      if (!bitmap_empty_p (epi))
So what if the final jump uses hard registers in one way or another?   I 
don't immediately see anything that verifies it is safe to transpose the 
epilogue and the final jump.

Conceptually want the epilogue to occur on the outgoing edge(s).  But 
you want to actually transpose the epilogue and the control flow insn so 
that you can insert the epilogue in one place.

But naive transposition isn't safe.  The control flow insn may use one 
or more registers that you were restoring or you may be on a cc0 target. 
   I think you need to handle the former more cleanly.  The latter I'd 
be comfortable filtering out in try_shrink_wrapping_separate.

This has similarities to some of the problems we've had in the past with 
rtl-gcse insertion as well as output reloads on insns with control flow.

With the transposition issue addressed, the only blockers I see are some
simple testcases we can add to the suite.  They don't have to be real
extensive.  And one motivating example for the list archives, ideally
the glibc malloc case.

Jeff
Segher Boessenkool Sept. 28, 2016, 9:04 a.m. UTC | #2
On Tue, Sep 27, 2016 at 03:14:51PM -0600, Jeff Law wrote:
> On 09/23/2016 02:21 AM, Segher Boessenkool wrote:
> >--- a/gcc/function.c
> >+++ b/gcc/function.c
> >@@ -5920,9 +5920,7 @@ thread_prologue_and_epilogue_insns (void)
> >   edge entry_edge = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun));
> >   edge orig_entry_edge = entry_edge;
> >
> >-  rtx_insn *split_prologue_seq = make_split_prologue_seq ();
> >   rtx_insn *prologue_seq = make_prologue_seq ();
> >-  rtx_insn *epilogue_seq = make_epilogue_seq ();
> >
> >   /* Try to perform a kind of shrink-wrapping, making sure the
> >      prologue/epilogue is emitted only around those parts of the
> >@@ -5930,6 +5928,13 @@ thread_prologue_and_epilogue_insns (void)
> >
> >   try_shrink_wrapping (&entry_edge, prologue_seq);
> >
> >+  try_shrink_wrapping_separate (entry_edge->dest);
> >+
> >+  if (crtl->shrink_wrapped_separate)
> >+    prologue_seq = make_prologue_seq ();
> Note this implies that make_prologue_seq (and consequently the target 
> specific bits for prologue generation) can be safely called more than 
> once.  That's probably OK, but might be worth a comment -- your decision 
> whether or not to add the comment.

It is only called twice if we are separately shrink-wrapping (and that
actually did anything), so it won't change current behaviour.

I'll add a comment.

> >+static void
> >+place_prologue_for_one_component (unsigned int which, basic_block head)
> >+{
> >+  /* The block we are currently dealing with.  */
> >+  basic_block bb = head;
> >+  /* Is this the first time we visit this block, i.e. have we just gone
> >+     down the tree.  */
> >+  bool first_visit = true;
> >+
> >+  /* Walk the dominator tree, visit one block per iteration of this loop.
> >+     Each basic block is visited twice: once before visiting any children
> >+     of the block, and once after visiting all of them (leaf nodes are
> >+     visited only once).  As an optimization, we do not visit subtrees
> >+     that can no longer influence the prologue placement.  */
> >+  for (;;)
> Is there some reason you wrote this as a loop rather than recursion? 
> IMHO it makes this function (and spread_components) more difficult to 
> reason about than it needs to be.

It would recurse a few thousand frames deep on not terribly big testcases
(i.e. I've seen it happen).  This uses a lot of stack space on some ABIs,
and is very slow as well (this is the slowest function here by far).
Unlimited recursion is bad.

> >+/* Mark HAS_COMPONENTS for every block dominated by at least one block 
> >with
> >+   PRO_COMPONENTS set for the respective components, starting at HEAD.  */
> >+static void
> >+spread_components (basic_block head)
> I don't see any references to PRO_COMPONENTS in the actual code.  If I'm 
> reading the code correctly, you're just accumulating/propagating 
> HAS_COMPONENTS from parents to children via a DFS walk of the dominator 
> tree, right?

Yeah, s/PRO_COMPONENTS/HAS_COMPONENTS/.  I have renamed things a few
times but missed this one due to the ALL VARS IN ALL CAPS style :-)

> >+/* Place code for prologues and epilogues for COMPONENTS where we can put
> >+   that code at the end of basic blocks.  */
> >+static void
> >+emit_common_tails_for_components (sbitmap components)
> [ Snip. ]
> >+
> >+      /* Put the code at the end of the BB, but before any final jump.  */
> >+      if (!bitmap_empty_p (epi))
> So what if the final jump uses hard registers in one way or another?   I 
> don't immediately see anything that verifies it is safe to transpose the 
> epilogue and the final jump.

Whoops.  Thanks for catching this.

> Conceptually want the epilogue to occur on the outgoing edge(s).  But 
> you want to actually transpose the epilogue and the control flow insn so 
> that you can insert the epilogue in one place.

The same problem happens with prologues, too.

> But naive transposition isn't safe.  The control flow insn may use one 
> or more registers that you were restoring or you may be on a cc0 target. 

A cc0 target can not use separate shrink-wrapping *anyway* if any of the
components would clobber cc0, so that is no problem here.

>   I think you need to handle the former more cleanly.  The latter I'd 
> be comfortable filtering out in try_shrink_wrapping_separate.

I'm thinking to not do the common tail optimisation if BB_END is a
JUMP_INSN but not simplejump_p (or a return or a sibling call).  Do
you see any problem with that?
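
Concretely, I'm thinking of something like this near the top of the
per-block loop in emit_common_tails_for_components (untested sketch;
sibling calls are CALL_INSNs, so they fall through the JUMP_P test
and are left alone):

      /* Sketch: give up on the common-tail optimisation for this block
	 if it ends in a jump we cannot safely put code before, i.e.
	 anything other than a simple jump or a return.  */
      rtx_insn *insn = BB_END (bb);
      if (JUMP_P (insn) && !simplejump_p (insn) && !returnjump_p (insn))
	continue;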

> This has similarities to some of the problems we've had in the past with 
> rtl-gcse insertion as well as output reloads on insns with control flow.

Oh, I had so much fun recently with the latter :-/

> With transposition issue addressed, the only blocker I see are some 
> simple testcases we can add to the suite.  They don't have to be real 
> extensive.

Yeah, working on it.

> And one motivating example for the list archives, ideally 
> the glibc malloc case.

Okay.

Thanks for the reviews,


Segher
Jeff Law Sept. 28, 2016, 4:33 p.m. UTC | #3
On 09/28/2016 03:04 AM, Segher Boessenkool wrote:
>
>>> +static void
>>> +place_prologue_for_one_component (unsigned int which, basic_block head)
>>> +{
>>> +  /* The block we are currently dealing with.  */
>>> +  basic_block bb = head;
>>> +  /* Is this the first time we visit this block, i.e. have we just gone
>>> +     down the tree.  */
>>> +  bool first_visit = true;
>>> +
>>> +  /* Walk the dominator tree, visit one block per iteration of this loop.
>>> +     Each basic block is visited twice: once before visiting any children
>>> +     of the block, and once after visiting all of them (leaf nodes are
>>> +     visited only once).  As an optimization, we do not visit subtrees
>>> +     that can no longer influence the prologue placement.  */
>>> +  for (;;)
>> Is there some reason you wrote this as a loop rather than recursion?
>> IMHO it makes this function (and spread_components) more difficult to
>> reason about than it needs to be.
>
> It would recurse a few thousand frames deep on not terribly big testcases
> (i.e. I've seen it happen).  This uses a lot of stack space on some ABIs,
> and is very slow as well (this is the slowest function here by far).
> Unlimited recursion is bad.
I'm surprised the recursion was that deep.  Such is life.   Thanks for 
clarifying.  I won't object to the iterative version. :-)

>>> +/* Place code for prologues and epilogues for COMPONENTS where we can put
>>> +   that code at the end of basic blocks.  */
>>> +static void
>>> +emit_common_tails_for_components (sbitmap components)
>> [ Snip. ]
>>> +
>>> +      /* Put the code at the end of the BB, but before any final jump.  */
>>> +      if (!bitmap_empty_p (epi))
>> So what if the final jump uses hard registers in one way or another?   I
>> don't immediately see anything that verifies it is safe to transpose the
>> epilogue and the final jump.
>
> Whoops.  Thanks for catching this.
I missed it the first time through the code too.

>
>> Conceptually want the epilogue to occur on the outgoing edge(s).  But
>> you want to actually transpose the epilogue and the control flow insn so
>> that you can insert the epilogue in one place.
>
> The same problem happens with prologues, too.
Yea.  I guess if we had a jump with an embedded side effect (such as movb
or addb on the PA), then transposing the control flow insn with the
prologue would be problematical as well.

>
> A cc0 target can not use separate shrink-wrapping *anyway* if any of the
> components would clobber cc0, so that is no problem here.
True, but I'd be more comfortable if we filtered out cc0 targets explicitly.
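Something as simple as an early exit in try_shrink_wrapping_separate
would do, along these lines (just a sketch, and it assumes HAVE_cc0 can
be used as an ordinary condition here):

  /* Sketch: punt on cc0 targets, where any component could clobber cc0
     between a compare and the jump that uses it.  */
  if (HAVE_cc0)
    return;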

>
>>   I think you need to handle the former more cleanly.  The latter I'd
>> be comfortable filtering out in try_shrink_wrapping_separate.
>
> I'm thinking to not do the common tail optimisation if BB_END is a
> JUMP_INSN but not simplejump_p (or a return or a sibling call).  Do
> you see any problem with that?
Seems reasonable.


Jeff
Segher Boessenkool Sept. 30, 2016, 10:34 a.m. UTC | #4
[ whoops, message too big, resending with the attachment compressed ]

On Tue, Sep 27, 2016 at 03:14:51PM -0600, Jeff Law wrote:
> With transposition issue addressed, the only blocker I see are some 
> simple testcases we can add to the suite.  They don't have to be real 
> extensive.  And one motivating example for the list archives, ideally 
> the glibc malloc case.

And here is the malloc testcase.

A very important (for performance) function is _int_malloc, which starts
with


static void *
_int_malloc (mstate av, size_t bytes)
{
// [ variable declarations culled ]

  if (((unsigned long) (bytes) >= (unsigned long) (size_t)
       (-2 * (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))
       + ((2 *(sizeof(size_t)) < __alignof__ (long double)
	   ? __alignof__ (long double) : 2 *(sizeof(size_t))) - 1))
       & ~((2 *(sizeof(size_t)) < __alignof__ (long double)
	    ? __alignof__ (long double) : 2 *(sizeof(size_t))) - 1))))))
    { (__libc_errno = (12)); return 0; }

  if (__builtin_expect ((av == ((void *)0)), 0))
    {
      void *p = sysmalloc (nb, av);
      if (p != ((void *)0))
	alloc_perturb (p, bytes);
      return p;
    }


which without separate shrink-wrapping ends up as (reordered the blocks):


.L._int_malloc:
        mflr 0
        li 9,-65
        std 14,-144(1)
        std 15,-136(1)
        cmpld 7,4,9
        std 16,-128(1)
        std 17,-120(1)
        std 18,-112(1)
        std 19,-104(1)
        std 20,-96(1)
        std 21,-88(1)
        std 22,-80(1)
        std 23,-72(1)
        std 0,16(1)
        std 24,-64(1)
        std 25,-56(1)
        std 26,-48(1)
        std 27,-40(1)
        std 28,-32(1)
        std 29,-24(1)
        std 30,-16(1)
        std 31,-8(1)
        stdu 1,-288(1)
        bgt 7,.L768
        addi 14,4,23
        mr 15,3
        cmpldi 0,14,31
        ble 0,.L769

# ...

.L768:
        addis 27,2,__libc_errno@got@tprel@ha
        li 19,12
        ld 28,__libc_errno@got@tprel@l(27)
        li 3,0
        add 17,28,__libc_errno@tls
        stw 19,0(17)
        b .L631

# ...

.L631:
        addi 1,1,288
        ld 29,16(1)
        ld 14,-144(1)
        ld 15,-136(1)
        ld 16,-128(1)
        ld 17,-120(1)
        ld 18,-112(1)
        ld 19,-104(1)
        ld 20,-96(1)
        ld 21,-88(1)
        ld 22,-80(1)
        ld 23,-72(1)
        ld 24,-64(1)
        mtlr 29
        ld 25,-56(1)
        ld 26,-48(1)
        ld 27,-40(1)
        ld 28,-32(1)
        ld 29,-24(1)
        ld 30,-16(1)
        ld 31,-8(1)
        blr
# ...

.L769:
        cmpdi 1,3,0
        beq 1,.L715

# ...

.L715:
        li 14,32
.L635:
        li 4,0
.L762:
        addi 1,1,288
        mr 3,14
        ld 14,16(1)
        ld 15,-136(1)
        ld 16,-128(1)
        ld 17,-120(1)
        ld 18,-112(1)
        ld 19,-104(1)
        ld 20,-96(1)
        ld 21,-88(1)
        ld 22,-80(1)
        ld 23,-72(1)
        ld 24,-64(1)
        ld 25,-56(1)
        mtlr 14
        ld 26,-48(1)
        ld 14,-144(1)
        ld 27,-40(1)
        ld 28,-32(1)
        ld 29,-24(1)
        ld 30,-16(1)
        ld 31,-8(1)
        b sysmalloc


[ I see I have regrename on by default still; doesn't matter much for
this test, it's just less readable ]


With separate shrink-wrapping we get instead


.L._int_malloc:
        li 9,-65
        stdu 1,-288(1)
        cmpld 7,4,9
        bgt 7,.L811
        std 14,144(1)
        addi 14,4,23
        std 15,152(1)
        mr 15,3
        cmpldi 0,14,31
        std 25,232(1)
        std 30,272(1)
        ble 0,.L812

# ...

.L811:
        addis 6,2,__libc_errno@got@tprel@ha
        li 11,12
        ld 10,__libc_errno@got@tprel@l(6)
        li 3,0
        add 12,10,__libc_errno@tls
        stw 11,0(12)
        b .L673

# ...

.L673:
        addi 1,1,288
        blr

# ...

.L812:
        cmpdi 1,3,0
        beq 1,.L757

# ...

.L757:
        li 14,32
.L677:
        mr 3,14
        ld 15,152(1)
        ld 14,144(1)
        ld 25,232(1)
        ld 30,272(1)
        li 4,0
        addi 1,1,288
        b sysmalloc


I'm attaching the full testcase (pre-processed for powerpc64-linux).


Segher
Jeff Law Oct. 10, 2016, 9:21 p.m. UTC | #5
On 09/30/2016 04:34 AM, Segher Boessenkool wrote:
> [ whoops, message too big, resending with the attachment compressed ]
>
> On Tue, Sep 27, 2016 at 03:14:51PM -0600, Jeff Law wrote:
>> With transposition issue addressed, the only blocker I see are some
>> simple testcases we can add to the suite.  They don't have to be real
>> extensive.  And one motivating example for the list archives, ideally
>> the glibc malloc case.
>
> And here is the malloc testcase.
>
> A very important (for performance) function is _int_malloc, which starts
> with
[ ... ]
Thanks.  What I think is important to note with this example are the bits
that were pushed into the path with the sysmalloc/alloc_perturb calls.
That's an unlikely path.

We have to extrapolate a bit from the assembly provided.  In the not
separately shrink-wrapped version, we have a full prologue of stores and
two instances of a full epilogue (though only one ever executes).

With separate shrink wrapping the (presumably) very cold path where we 
error has virtually no prologue/epilogue.  That's probably a nop from a 
performance standpoint.

More interesting is the path where we call sysmalloc/alloc_perturb; it's
a cold path, but not as cold as the error path.  We save/restore 4 regs
in that case rather than a full prologue/epilogue, so there's clearly
a savings there, though again, per the expect it's a cold path.

Where we have to extrapolate is the hot path.  Presumably on the hot
path we're saving/restoring ~4 fewer registers.  I haven't verified
that, but that is kind of the whole point here.



Thanks,
Jeff
Segher Boessenkool Oct. 10, 2016, 10:23 p.m. UTC | #6
On Mon, Oct 10, 2016 at 03:21:31PM -0600, Jeff Law wrote:
> On 09/30/2016 04:34 AM, Segher Boessenkool wrote:
> >[ whoops, message too big, resending with the attachment compressed ]
> >
> >On Tue, Sep 27, 2016 at 03:14:51PM -0600, Jeff Law wrote:
> >>With transposition issue addressed, the only blocker I see are some
> >>simple testcases we can add to the suite.  They don't have to be real
> >>extensive.  And one motivating example for the list archives, ideally
> >>the glibc malloc case.
> >
> >And here is the malloc testcase.
> >
> >A very important (for performance) function is _int_malloc, which starts
> >with
> [ ... ]
> THanks.  What I think is important to note with this example is the bits 
> that were pushed into the path with the sysmalloc/alloc_perturb calls. 
> That's an unlikely path.

alloc_perturb is a no-op, and inlined as such: as nothing :-)

> We have to extrapolate a bit from the assembly provided.  In the not 
> separately shrink-wrapped version, we have a full prologue of stores and 
> two instances of a full epilogue (though only one ever executes) provided.
> 
> With separate shrink wrapping the (presumably) very cold path where we 
> error has virtually no prologue/epilogue.  That's probably a nop from a 
> performance standpoint.
> 
> More interesting is the path where we call sysmalloc/alloc_perturb, it's 
> a cold path, but not as cold as the error path.  We save/restore 4 regs 
> in that case.  Rather than a full prologue/epilogue.  So there's clearly 
> a savings there, though again, via the expect it's a cold path.
> 
> Where we have to extrapolate is the hot path.  Presumably on the hot 
> path we're saving/restoring ~4 fewer registers.   I haven't verified 
> that, but that is kindof the whole point here.

We save/restore just four registers total on the hot path.  And yes,
that is the point :-)

The hot exit is

.L683:
	ld 14,144(1)
	ld 15,152(1)
	ld 25,232(1)
	ld 30,272(1)
	addi 3,4,16
.L673:
	addi 1,1,288
	blr

so four GPR restores and no LR restore.  Without separate shrink-wrapping
this was

.L641:
	addi 3,21,16
	b .L631

[ ... ]

.L631:
	addi 1,1,288
	ld 29,16(1)
	ld 14,-144(1)
	ld 15,-136(1)
	ld 16,-128(1)
	ld 17,-120(1)
	ld 18,-112(1)
	ld 19,-104(1)
	ld 20,-96(1)
	ld 21,-88(1)
	ld 22,-80(1)
	ld 23,-72(1)
	ld 24,-64(1)
	mtlr 29
	ld 25,-56(1)
	ld 26,-48(1)
	ld 27,-40(1)
	ld 28,-32(1)
	ld 29,-24(1)
	ld 30,-16(1)
	ld 31,-8(1)
	blr

(18 GPRs as well as LR).

I didn't show this path because there is a whole bunch of branches with
inline asm in the way.

The sysmalloc path was

.L635:
	li 4,0
.L761:
	addi 1,1,288
	mr 3,14
	ld 14,16(1)
	ld 15,-136(1)
	ld 16,-128(1)
	ld 17,-120(1)
	ld 18,-112(1)
	ld 19,-104(1)
	ld 20,-96(1)
	ld 21,-88(1)
	ld 22,-80(1)
	ld 23,-72(1)
	ld 24,-64(1)
	ld 25,-56(1)
	mtlr 14
	ld 26,-48(1)
	ld 14,-144(1)
	ld 27,-40(1)
	ld 28,-32(1)
	ld 29,-24(1)
	ld 30,-16(1)
	ld 31,-8(1)
	b sysmalloc

and now is

.L677:
	mr 3,14
	ld 15,152(1)
	ld 14,144(1)
	ld 25,232(1)
	ld 30,272(1)
	li 4,0
	addi 1,1,288
	b sysmalloc

I attach malloc.s.{no,yes}; I hope you can stomach that.  Well, you
can read HP-PA, heh.


Segher

Patch

diff --git a/gcc/function.c b/gcc/function.c
index 53bad87..79e7b4f 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -5920,9 +5920,7 @@  thread_prologue_and_epilogue_insns (void)
   edge entry_edge = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   edge orig_entry_edge = entry_edge;
 
-  rtx_insn *split_prologue_seq = make_split_prologue_seq ();
   rtx_insn *prologue_seq = make_prologue_seq ();
-  rtx_insn *epilogue_seq = make_epilogue_seq ();
 
   /* Try to perform a kind of shrink-wrapping, making sure the
      prologue/epilogue is emitted only around those parts of the
@@ -5930,6 +5928,13 @@  thread_prologue_and_epilogue_insns (void)
 
   try_shrink_wrapping (&entry_edge, prologue_seq);
 
+  try_shrink_wrapping_separate (entry_edge->dest);
+
+  if (crtl->shrink_wrapped_separate)
+    prologue_seq = make_prologue_seq ();
+
+  rtx_insn *split_prologue_seq = make_split_prologue_seq ();
+  rtx_insn *epilogue_seq = make_epilogue_seq ();
 
   rtl_profile_for_bb (EXIT_BLOCK_PTR_FOR_FN (cfun));
 
diff --git a/gcc/shrink-wrap.c b/gcc/shrink-wrap.c
index b85b1c3..0dced22 100644
--- a/gcc/shrink-wrap.c
+++ b/gcc/shrink-wrap.c
@@ -34,6 +34,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "output.h"
 #include "tree-pass.h"
 #include "cfgrtl.h"
+#include "cfgbuild.h"
 #include "params.h"
 #include "bb-reorder.h"
 #include "shrink-wrap.h"
@@ -1006,3 +1007,731 @@  try_shrink_wrapping (edge *entry_edge, rtx_insn *prologue_seq)
   BITMAP_FREE (bb_with);
   free_dominance_info (CDI_DOMINATORS);
 }
+
+/* Separate shrink-wrapping
+
+   Instead of putting all of the prologue and epilogue in one spot, we
+   can put parts of it in places where those components are executed less
+   frequently.  The following code does this, for prologue and epilogue
+   components that can be put in more than one location, and where those
+   components can be executed more than once (the epilogue component will
+   always be executed before the prologue component is executed a second
+   time).
+
+   What exactly is a component is target-dependent.  The more usual
+   components are simple saves/restores to/from the frame of callee-saved
+   registers.  This code treats components abstractly (as an sbitmap),
+   letting the target handle all details.
+
+   Prologue components are placed in such a way that for every component
+   the prologue is executed as infrequently as possible.  We do this by
+   walking the dominator tree, comparing the cost of placing a prologue
+   component before a block to the sum of costs determined for all subtrees
+   of that block.
+
+   From this placement, we then determine for each component all blocks
+   where at least one of this block's dominators (including itself) will
+   get a prologue inserted.  That then is how the components are placed.
+   We could place the epilogue components a bit smarter (we can save a
+   bit of code size sometimes); this is a possible future improvement.
+
+   Prologues and epilogues are preferably placed into a block, either at
+   the beginning or end of it, if it is needed for all predecessor resp.
+   successor edges; or placed on the edge otherwise.
+
+   If the placement of any prologue/epilogue leads to a situation we cannot
+   handle (for example, an abnormal edge would need to be split, or some
+   targets want to use some specific registers that may not be available
+   where we want to put them), separate shrink-wrapping for the components
+   in that prologue/epilogue is aborted.  */
+
+
+/* Print the sbitmap COMPONENTS to the DUMP_FILE if not empty, with the
+   label LABEL.  */
+static void
+dump_components (const char *label, sbitmap components)
+{
+  if (bitmap_empty_p (components))
+    return;
+
+  fprintf (dump_file, " [%s", label);
+
+  for (unsigned int j = 0; j < components->n_bits; j++)
+    if (bitmap_bit_p (components, j))
+      fprintf (dump_file, " %u", j);
+
+  fprintf (dump_file, "]");
+}
+
+/* The data we collect for each bb.  */
+struct sw {
+  /* What components does this BB need?  */
+  sbitmap needs_components;
+
+  /* What components does this BB have?  This is the main decision this
+     pass makes.  */
+  sbitmap has_components;
+
+  /* The components for which we placed code at the start of the BB (instead
+     of on all incoming edges).  */
+  sbitmap head_components;
+
+  /* The components for which we placed code at the end of the BB (instead
+     of on all outgoing edges).  */
+  sbitmap tail_components;
+
+  /* The frequency of executing the prologue for this BB, if a prologue is
+     placed on this BB.  This is a pessimistic estimate (no prologue is
+     needed for edges from blocks that have the component under consideration
+     active already).  */
+  gcov_type own_cost;
+
+  /* The frequency of executing the prologue for this BB and all BBs
+     dominated by it.  */
+  gcov_type total_cost;
+};
+
+/* A helper function for accessing the pass-specific info.  */
+static inline struct sw *
+SW (basic_block bb)
+{
+  gcc_assert (bb->aux);
+  return (struct sw *) bb->aux;
+}
+
+/* Create the pass-specific data structures for separately shrink-wrapping
+   with components COMPONENTS.  */
+static void
+init_separate_shrink_wrap (sbitmap components)
+{
+  basic_block bb;
+  FOR_ALL_BB_FN (bb, cfun)
+    {
+      bb->aux = xcalloc (1, sizeof (struct sw));
+
+      SW (bb)->needs_components = targetm.shrink_wrap.components_for_bb (bb);
+
+      /* Mark all basic blocks without successor as needing all components.
+	 This avoids problems in at least cfgcleanup, sel-sched, and
+	 regrename (largely to do with all paths to such a block still
+	 needing the same dwarf CFI info).  */
+      if (EDGE_COUNT (bb->succs) == 0)
+	bitmap_copy (SW (bb)->needs_components, components);
+
+      if (dump_file)
+	{
+	  fprintf (dump_file, "bb %d components:", bb->index);
+	  dump_components ("has", SW (bb)->needs_components);
+	  fprintf (dump_file, "\n");
+	}
+
+      SW (bb)->has_components = sbitmap_alloc (SBITMAP_SIZE (components));
+      SW (bb)->head_components = sbitmap_alloc (SBITMAP_SIZE (components));
+      SW (bb)->tail_components = sbitmap_alloc (SBITMAP_SIZE (components));
+      bitmap_clear (SW (bb)->has_components);
+      bitmap_clear (SW (bb)->head_components);
+      bitmap_clear (SW (bb)->tail_components);
+    }
+}
+
+/* Destroy the pass-specific data.  */
+static void
+fini_separate_shrink_wrap (void)
+{
+  basic_block bb;
+  FOR_ALL_BB_FN (bb, cfun)
+    if (bb->aux)
+      {
+	sbitmap_free (SW (bb)->needs_components);
+	sbitmap_free (SW (bb)->has_components);
+	sbitmap_free (SW (bb)->head_components);
+	sbitmap_free (SW (bb)->tail_components);
+	free (bb->aux);
+	bb->aux = 0;
+      }
+}
+
+/* Place the prologue for component WHICH, in the basic blocks dominated
+   by HEAD.  Do a DFS over the dominator tree, and set bit WHICH in the
+   HAS_COMPONENTS of a block if either the block has that bit set in
+   NEEDS_COMPONENTS, or it is cheaper to place the prologue here than in all
+   dominator subtrees separately.  */
+static void
+place_prologue_for_one_component (unsigned int which, basic_block head)
+{
+  /* The block we are currently dealing with.  */
+  basic_block bb = head;
+  /* Is this the first time we visit this block, i.e. have we just gone
+     down the tree.  */
+  bool first_visit = true;
+
+  /* Walk the dominator tree, visit one block per iteration of this loop.
+     Each basic block is visited twice: once before visiting any children
+     of the block, and once after visiting all of them (leaf nodes are
+     visited only once).  As an optimization, we do not visit subtrees
+     that can no longer influence the prologue placement.  */
+  for (;;)
+    {
+      /* First visit of a block: set the (children) cost accumulator to zero;
+	 if the block does not have the component itself, walk down.  */
+      if (first_visit)
+	{
+	  /* Initialize the cost.  The cost is the block execution frequency
+	     that does not come from backedges.  Calculating this by simply
+	     adding the cost of all edges that aren't backedges does not
+	     work: this does not always add up to the block frequency at
+	     all, and even if it does, rounding error makes for bad
+	     decisions.  */
+	  SW (bb)->own_cost = bb->frequency;
+
+	  edge e;
+	  edge_iterator ei;
+	  FOR_EACH_EDGE (e, ei, bb->preds)
+	    if (dominated_by_p (CDI_DOMINATORS, e->src, bb))
+	      {
+		if (SW (bb)->own_cost > EDGE_FREQUENCY (e))
+		  SW (bb)->own_cost -= EDGE_FREQUENCY (e);
+		else
+		  SW (bb)->own_cost = 0;
+	      }
+
+	  SW (bb)->total_cost = 0;
+
+	  if (!bitmap_bit_p (SW (bb)->needs_components, which)
+	      && first_dom_son (CDI_DOMINATORS, bb))
+	    {
+	      bb = first_dom_son (CDI_DOMINATORS, bb);
+	      continue;
+	    }
+	}
+
+      /* If this block does need the component itself, or it is cheaper to
+	 put the prologue here than in all the descendants that need it,
+	 mark it so.  If this block's immediate post-dominator is dominated
+	 by this block, and that needs the prologue, we can put it on this
+	 block as well (earlier is better).  */
+      if (bitmap_bit_p (SW (bb)->needs_components, which)
+	  || SW (bb)->total_cost > SW (bb)->own_cost)
+	{
+	  SW (bb)->total_cost = SW (bb)->own_cost;
+	  bitmap_set_bit (SW (bb)->has_components, which);
+	}
+      else
+	{
+	  basic_block kid = get_immediate_dominator (CDI_POST_DOMINATORS, bb);
+	  if (dominated_by_p (CDI_DOMINATORS, kid, bb)
+	      && bitmap_bit_p (SW (kid)->has_components, which))
+	    {
+	      SW (bb)->total_cost = SW (bb)->own_cost;
+	      bitmap_set_bit (SW (bb)->has_components, which);
+	    }
+	}
+
+      /* We are back where we started, so we are done now.  */
+      if (bb == head)
+	return;
+
+      /* We now know the cost of the subtree rooted at the current block.
+	 Accumulate this cost in the parent.  */
+      basic_block parent = get_immediate_dominator (CDI_DOMINATORS, bb);
+      SW (parent)->total_cost += SW (bb)->total_cost;
+
+      /* Don't walk the tree down unless necessary.  */
+      if (next_dom_son (CDI_DOMINATORS, bb)
+          && SW (parent)->total_cost <= SW (parent)->own_cost)
+	{
+	  bb = next_dom_son (CDI_DOMINATORS, bb);
+	  first_visit = true;
+	}
+      else
+	{
+	  bb = parent;
+	  first_visit = false;
+	}
+    }
+}
+
+/* Mark HAS_COMPONENTS for every block dominated by at least one block with
+   PRO_COMPONENTS set for the respective components, starting at HEAD.  */
+static void
+spread_components (basic_block head)
+{
+  basic_block bb = head;
+  bool first_visit = true;
+  /* This keeps a tally of all components active.  */
+  sbitmap components = SW (head)->has_components;
+
+  for (;;)
+    {
+      if (first_visit)
+	{
+	  bitmap_ior (SW (bb)->has_components, SW (bb)->has_components,
+		      components);
+
+	  if (first_dom_son (CDI_DOMINATORS, bb))
+	    {
+	      components = SW (bb)->has_components;
+	      bb = first_dom_son (CDI_DOMINATORS, bb);
+	      continue;
+	    }
+	}
+
+      components = SW (bb)->has_components;
+
+      if (next_dom_son (CDI_DOMINATORS, bb))
+	{
+	  bb = next_dom_son (CDI_DOMINATORS, bb);
+	  basic_block parent = get_immediate_dominator (CDI_DOMINATORS, bb);
+	  components = SW (parent)->has_components;
+	  first_visit = true;
+	}
+      else
+	{
+	  if (bb == head)
+	    return;
+	  bb = get_immediate_dominator (CDI_DOMINATORS, bb);
+	  first_visit = false;
+	}
+    }
+}
+
+/* If we cannot handle placing some component's prologues or epilogues where
+   we decided we should place them, unmark that component in COMPONENTS so
+   that it is not wrapped separately.  */
+static void
+disqualify_problematic_components (sbitmap components)
+{
+  sbitmap pro = sbitmap_alloc (SBITMAP_SIZE (components));
+  sbitmap epi = sbitmap_alloc (SBITMAP_SIZE (components));
+
+  basic_block bb;
+  FOR_EACH_BB_FN (bb, cfun)
+    {
+      edge e;
+      edge_iterator ei;
+      FOR_EACH_EDGE (e, ei, bb->succs)
+	{
+	  /* Find which components we want pro/epilogues for here.  */
+	  bitmap_and_compl (epi, SW (e->src)->has_components,
+			    SW (e->dest)->has_components);
+	  bitmap_and_compl (pro, SW (e->dest)->has_components,
+			    SW (e->src)->has_components);
+
+	  /* Ask the target what it thinks about things.  */
+	  if (!bitmap_empty_p (epi))
+	    targetm.shrink_wrap.disqualify_components (components, e, epi,
+						       false);
+	  if (!bitmap_empty_p (pro))
+	    targetm.shrink_wrap.disqualify_components (components, e, pro,
+						       true);
+
+	  /* If this edge doesn't need splitting, we're fine.  */
+	  if (single_pred_p (e->dest)
+	      && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
+	    continue;
+
+	  /* If the edge can be split, that is fine too.  */
+	  if ((e->flags & EDGE_ABNORMAL) == 0)
+	    continue;
+
+	  /* We also can handle sibcalls.  */
+	  if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
+	    {
+	      gcc_assert (e->flags & EDGE_SIBCALL);
+	      continue;
+	    }
+
+	  /* Remove from consideration those components we would need
+	     pro/epilogues for on edges where we cannot insert them.  */
+	  bitmap_and_compl (components, components, epi);
+	  bitmap_and_compl (components, components, pro);
+
+	  if (dump_file && !bitmap_subset_p (epi, components))
+	    {
+	      fprintf (dump_file, "  BAD epi %d->%d", e->src->index,
+		       e->dest->index);
+	      if (e->flags & EDGE_EH)
+		fprintf (dump_file, " for EH");
+	      dump_components ("epi", epi);
+	      fprintf (dump_file, "\n");
+	    }
+
+	  if (dump_file && !bitmap_subset_p (pro, components))
+	    {
+	      fprintf (dump_file, "  BAD pro %d->%d", e->src->index,
+		       e->dest->index);
+	      if (e->flags & EDGE_EH)
+		fprintf (dump_file, " for EH");
+	      dump_components ("pro", pro);
+	      fprintf (dump_file, "\n");
+	    }
+	}
+    }
+
+  sbitmap_free (pro);
+  sbitmap_free (epi);
+}
+
+/* Place code for prologues and epilogues for COMPONENTS where we can put
+   that code at the start of basic blocks.  */
+static void
+emit_common_heads_for_components (sbitmap components)
+{
+  sbitmap pro = sbitmap_alloc (SBITMAP_SIZE (components));
+  sbitmap epi = sbitmap_alloc (SBITMAP_SIZE (components));
+  sbitmap tmp = sbitmap_alloc (SBITMAP_SIZE (components));
+
+  basic_block bb;
+  FOR_EACH_BB_FN (bb, cfun)
+    {
+      /* Find which prologue resp. epilogue components are needed for all
+	 predecessor edges to this block.  */
+
+      /* First, select all possible components.  */
+      bitmap_copy (epi, components);
+      bitmap_copy (pro, components);
+
+      edge e;
+      edge_iterator ei;
+      FOR_EACH_EDGE (e, ei, bb->preds)
+	{
+	  if (e->flags & EDGE_ABNORMAL)
+	    {
+	      bitmap_clear (epi);
+	      bitmap_clear (pro);
+	      break;
+	    }
+
+	  /* Deselect those epilogue components that should not be inserted
+	     for this edge.  */
+	  bitmap_and_compl (tmp, SW (e->src)->has_components,
+			    SW (e->dest)->has_components);
+	  bitmap_and (epi, epi, tmp);
+
+	  /* Similar, for the prologue.  */
+	  bitmap_and_compl (tmp, SW (e->dest)->has_components,
+			    SW (e->src)->has_components);
+	  bitmap_and (pro, pro, tmp);
+	}
+
+      if (dump_file && !(bitmap_empty_p (epi) && bitmap_empty_p (pro)))
+	fprintf (dump_file, "  bb %d", bb->index);
+
+      if (dump_file && !bitmap_empty_p (epi))
+	dump_components ("epi", epi);
+      if (dump_file && !bitmap_empty_p (pro))
+	dump_components ("pro", pro);
+
+      if (dump_file && !(bitmap_empty_p (epi) && bitmap_empty_p (pro)))
+	fprintf (dump_file, "\n");
+
+      /* Place code after the BB note.  */
+      if (!bitmap_empty_p (pro))
+	{
+	  start_sequence ();
+	  targetm.shrink_wrap.emit_prologue_components (pro);
+	  rtx_insn *seq = get_insns ();
+	  end_sequence ();
+
+	  emit_insn_after (seq, bb_note (bb));
+
+	  bitmap_ior (SW (bb)->head_components, SW (bb)->head_components, pro);
+	}
+
+      if (!bitmap_empty_p (epi))
+	{
+	  start_sequence ();
+	  targetm.shrink_wrap.emit_epilogue_components (epi);
+	  rtx_insn *seq = get_insns ();
+	  end_sequence ();
+
+	  emit_insn_after (seq, bb_note (bb));
+
+	  bitmap_ior (SW (bb)->head_components, SW (bb)->head_components, epi);
+	}
+    }
+
+  sbitmap_free (pro);
+  sbitmap_free (epi);
+  sbitmap_free (tmp);
+}
+
+/* Place code for prologues and epilogues for COMPONENTS where we can put
+   that code at the end of basic blocks.  */
+static void
+emit_common_tails_for_components (sbitmap components)
+{
+  sbitmap pro = sbitmap_alloc (SBITMAP_SIZE (components));
+  sbitmap epi = sbitmap_alloc (SBITMAP_SIZE (components));
+  sbitmap tmp = sbitmap_alloc (SBITMAP_SIZE (components));
+
+  basic_block bb;
+  FOR_EACH_BB_FN (bb, cfun)
+    {
+      /* Find which prologue resp. epilogue components are needed for all
+	 successor edges from this block.  */
+      if (EDGE_COUNT (bb->succs) == 0)
+	continue;
+
+      /* First, select all possible components.  */
+      bitmap_copy (epi, components);
+      bitmap_copy (pro, components);
+
+      edge e;
+      edge_iterator ei;
+      FOR_EACH_EDGE (e, ei, bb->succs)
+	{
+	  if (e->flags & EDGE_ABNORMAL)
+	    {
+	      bitmap_clear (epi);
+	      bitmap_clear (pro);
+	      break;
+	    }
+
+	  /* Deselect those epilogue components that should not be inserted
+	     for this edge, and also those that are already put at the head
+	     of the successor block.  */
+	  bitmap_and_compl (tmp, SW (e->src)->has_components,
+			    SW (e->dest)->has_components);
+	  bitmap_and_compl (tmp, tmp, SW (e->dest)->head_components);
+	  bitmap_and (epi, epi, tmp);
+
+	  /* Similarly, for the prologue.  */
+	  bitmap_and_compl (tmp, SW (e->dest)->has_components,
+			    SW (e->src)->has_components);
+	  bitmap_and_compl (tmp, tmp, SW (e->dest)->head_components);
+	  bitmap_and (pro, pro, tmp);
+	}
+
+      if (dump_file && !(bitmap_empty_p (epi) && bitmap_empty_p (pro)))
+	fprintf (dump_file, "  bb %d", bb->index);
+
+      if (dump_file && !bitmap_empty_p (epi))
+	dump_components ("epi", epi);
+      if (dump_file && !bitmap_empty_p (pro))
+	dump_components ("pro", pro);
+
+      if (dump_file && !(bitmap_empty_p (epi) && bitmap_empty_p (pro)))
+	fprintf (dump_file, "\n");
+
+      /* Put the code at the end of the BB, but before any final jump.  */
+      if (!bitmap_empty_p (epi))
+	{
+	  start_sequence ();
+	  targetm.shrink_wrap.emit_epilogue_components (epi);
+	  rtx_insn *seq = get_insns ();
+	  end_sequence ();
+
+	  rtx_insn *insn = BB_END (bb);
+	  if (control_flow_insn_p (insn))
+	    emit_insn_before (seq, insn);
+	  else
+	    emit_insn_after (seq, insn);
+
+	  bitmap_ior (SW (bb)->tail_components, SW (bb)->tail_components, epi);
+	}
+
+      if (!bitmap_empty_p (pro))
+	{
+	  start_sequence ();
+	  targetm.shrink_wrap.emit_prologue_components (pro);
+	  rtx_insn *seq = get_insns ();
+	  end_sequence ();
+
+	  rtx_insn *insn = BB_END (bb);
+	  if (JUMP_P (insn) || (CALL_P (insn) && SIBLING_CALL_P (insn)))
+	    emit_insn_before (seq, insn);
+	  else
+	    emit_insn_after (seq, insn);
+
+	  bitmap_ior (SW (bb)->tail_components, SW (bb)->tail_components, pro);
+	}
+    }
+
+  sbitmap_free (pro);
+  sbitmap_free (epi);
+  sbitmap_free (tmp);
+}
+
+/* Place prologues and epilogues for COMPONENTS on edges, if we haven't already
+   placed them inside blocks directly.  */
+static void
+insert_prologue_epilogue_for_components (sbitmap components)
+{
+  sbitmap pro = sbitmap_alloc (SBITMAP_SIZE (components));
+  sbitmap epi = sbitmap_alloc (SBITMAP_SIZE (components));
+
+  basic_block bb;
+  FOR_EACH_BB_FN (bb, cfun)
+    {
+      if (!bb->aux)
+	continue;
+
+      edge e;
+      edge_iterator ei;
+      FOR_EACH_EDGE (e, ei, bb->succs)
+	{
+	  /* Find which pro/epilogue components are needed on this edge.  */
+	  bitmap_and_compl (epi, SW (e->src)->has_components,
+			    SW (e->dest)->has_components);
+	  bitmap_and_compl (pro, SW (e->dest)->has_components,
+			    SW (e->src)->has_components);
+	  bitmap_and (epi, epi, components);
+	  bitmap_and (pro, pro, components);
+
+	  /* Deselect those we already have put at the head or tail of the
+	     edge's dest resp. src.  */
+	  bitmap_and_compl (epi, epi, SW (e->dest)->head_components);
+	  bitmap_and_compl (pro, pro, SW (e->dest)->head_components);
+	  bitmap_and_compl (epi, epi, SW (e->src)->tail_components);
+	  bitmap_and_compl (pro, pro, SW (e->src)->tail_components);
+
+	  if (!bitmap_empty_p (epi) || !bitmap_empty_p (pro))
+	    {
+	      if (dump_file)
+		{
+		  fprintf (dump_file, "  %d->%d", e->src->index,
+			   e->dest->index);
+		  dump_components ("epi", epi);
+		  dump_components ("pro", pro);
+		  fprintf (dump_file, "\n");
+		}
+
+	      /* Put the epilogue components in place.  */
+	      start_sequence ();
+	      targetm.shrink_wrap.emit_epilogue_components (epi);
+	      rtx_insn *seq = get_insns ();
+	      end_sequence ();
+
+	      if (e->flags & EDGE_SIBCALL)
+		{
+		  gcc_assert (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun));
+
+		  rtx_insn *insn = BB_END (e->src);
+		  gcc_assert (CALL_P (insn) && SIBLING_CALL_P (insn));
+		  emit_insn_before (seq, insn);
+		}
+	      else if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
+		{
+		  gcc_assert (e->flags & EDGE_FALLTHRU);
+		  basic_block new_bb = split_edge (e);
+		  emit_insn_after (seq, BB_END (new_bb));
+		}
+	      else
+		insert_insn_on_edge (seq, e);
+
+	      /* Put the prologue components in place.  */
+	      start_sequence ();
+	      targetm.shrink_wrap.emit_prologue_components (pro);
+	      seq = get_insns ();
+	      end_sequence ();
+
+	      insert_insn_on_edge (seq, e);
+	    }
+	}
+    }
+
+  sbitmap_free (pro);
+  sbitmap_free (epi);
+
+  commit_edge_insertions ();
+}
+
+/* The main entry point to this subpass.  FIRST_BB is where the prologue
+   would be normally put.  */
+void
+try_shrink_wrapping_separate (basic_block first_bb)
+{
+  if (!(SHRINK_WRAPPING_ENABLED
+	&& flag_shrink_wrap_separate
+	&& optimize_function_for_speed_p (cfun)
+	&& targetm.shrink_wrap.get_separate_components))
+    return;
+
+  /* We don't handle "strange" functions.  */
+  if (cfun->calls_alloca
+      || cfun->calls_setjmp
+      || cfun->can_throw_non_call_exceptions
+      || crtl->calls_eh_return
+      || crtl->has_nonlocal_goto
+      || crtl->saves_all_registers)
+    return;
+
+  /* Ask the target what components there are.  If it returns NULL, don't
+     do anything.  */
+  sbitmap components = targetm.shrink_wrap.get_separate_components ();
+  if (!components)
+    return;
+
+  /* We need LIVE info.  */
+  df_live_add_problem ();
+  df_live_set_all_dirty ();
+  df_analyze ();
+
+  calculate_dominance_info (CDI_DOMINATORS);
+  calculate_dominance_info (CDI_POST_DOMINATORS);
+
+  init_separate_shrink_wrap (components);
+
+  sbitmap_iterator sbi;
+  unsigned int j;
+  EXECUTE_IF_SET_IN_BITMAP (components, 0, j, sbi)
+    place_prologue_for_one_component (j, first_bb);
+
+  spread_components (first_bb);
+
+  disqualify_problematic_components (components);
+
+  /* Don't separately shrink-wrap anything where the "main" prologue will
+     go; the target code can often optimize things if it is presented with
+     all components together (say, if it generates store-multiple insns).  */
+  bitmap_and_compl (components, components, SW (first_bb)->has_components);
+
+  if (bitmap_empty_p (components))
+    {
+      if (dump_file)
+	fprintf (dump_file, "Not wrapping anything separately.\n");
+    }
+  else
+    {
+      if (dump_file)
+	{
+	  fprintf (dump_file, "The components we wrap separately are");
+	  dump_components ("sep", components);
+	  fprintf (dump_file, "\n");
+
+	  fprintf (dump_file, "... Inserting common heads...\n");
+	}
+
+      emit_common_heads_for_components (components);
+
+      if (dump_file)
+	fprintf (dump_file, "... Inserting common tails...\n");
+
+      emit_common_tails_for_components (components);
+
+      if (dump_file)
+	fprintf (dump_file, "... Inserting the more difficult ones...\n");
+
+      insert_prologue_epilogue_for_components (components);
+
+      if (dump_file)
+	fprintf (dump_file, "... Done.\n");
+
+      targetm.shrink_wrap.set_handled_components (components);
+
+      crtl->shrink_wrapped_separate = true;
+    }
+
+  fini_separate_shrink_wrap ();
+
+  sbitmap_free (components);
+  free_dominance_info (CDI_DOMINATORS);
+  free_dominance_info (CDI_POST_DOMINATORS);
+
+  if (crtl->shrink_wrapped_separate)
+    {
+      df_live_set_all_dirty ();
+      df_analyze ();
+    }
+}
diff --git a/gcc/shrink-wrap.h b/gcc/shrink-wrap.h
index e06ab37..05fcb41 100644
--- a/gcc/shrink-wrap.h
+++ b/gcc/shrink-wrap.h
@@ -25,6 +25,7 @@  along with GCC; see the file COPYING3.  If not see
 /* In shrink-wrap.c.  */
 extern bool requires_stack_frame_p (rtx_insn *, HARD_REG_SET, HARD_REG_SET);
 extern void try_shrink_wrapping (edge *entry_edge, rtx_insn *prologue_seq);
+extern void try_shrink_wrapping_separate (basic_block first_bb);
 #define SHRINK_WRAPPING_ENABLED \
   (flag_shrink_wrap && targetm.have_simple_return ())