[patch/committed] PR middle-end/65233 make walk-ssa_copies handle empty PHIs

Message ID 54F3F648.8090400@redhat.com

Commit Message

Aldy Hernandez March 2, 2015, 5:34 a.m. UTC
As I mention in the PR...

What's happening here is that the ipa_polymorphic_call_context 
constructor is calling walk_ssa_copies on a PHI node that has no 
arguments.  This happens because finalize_jump_threads eventually 
removes some PHI arguments as it's redirecting some edges, leaving a PHI 
with no arguments:

SR.33_23 = PHI <>

This should get cleaned up later, but the IPA polymorphic code gets 
called during the actual CFG clean-up, and walk_ssa_copies cannot handle 
an empty PHI.
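As a toy illustration (hypothetical names and a stand-in struct, not GCC's real GIMPLE data structures), the shape of the walk, the failure mode, and the guard from the patch look roughly like this:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for a GIMPLE PHI node: only its argument list.
struct phi_node { std::vector<int> args; };

// Models the walk_ssa_copies PHI case: a one-argument PHI is an SSA
// copy and can be looked through; anything else stops the walk.
int
walk_copy (const phi_node &phi, int op)
{
  // Guard from the patch: during CFG cleanup a PHI can temporarily
  // have zero arguments (SR.33_23 = PHI <>).  Without this check the
  // one-argument case below would read args[0] out of bounds, which
  // is the ICE in the PR.
  if (phi.args.size () > 2 || phi.args.empty ())
    return op;                  /* "goto done" in the real code */
  if (phi.args.size () == 1)
    return phi.args[0];         /* follow the copy */
  return op;
}
```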

Approved by Honza.

Fully tested on x86-64 Linux and verified that the patch fixes the ICE 
on an x86-64 Linux cross aarch64-linux-gnu cc1plus.

Committed to mainline.
commit cdb5a8f26178f61d6a2fdb2543f6c8b4c7136c94
Author: Aldy Hernandez <aldyh@redhat.com>
Date:   Sun Mar 1 21:21:37 2015 -0800

    	PR middle-end/65233
    	* ipa-polymorphic-call.c (walk_ssa_copies): Handle empty PHIs.

Comments

Richard Biener March 2, 2015, 8:38 a.m. UTC | #1
On Mon, Mar 2, 2015 at 6:34 AM, Aldy Hernandez <aldyh@redhat.com> wrote:
> [...]

I think the real issue is that the walking code is executed via fold_stmt when
called with an API that tells you not to walk SSA use-def chains.

Richard.
Jan Hubicka March 3, 2015, 8:18 p.m. UTC | #2
> On Mon, Mar 2, 2015 at 6:34 AM, Aldy Hernandez <aldyh@redhat.com> wrote:
> > [...]
> 
> I think the real issue is that the walking code is executed via fold_stmt when
> called with an API that tells you not to walk SSA use-def chains.

OK, adding an argument to ipa_polymorphic_call_context disabling use-def walks on
request is easy.  How does one say which uses of fold_stmt are not supposed to walk
use-def chains?

Honza
Jeff Law March 4, 2015, 5:27 a.m. UTC | #3
On 03/02/15 01:38, Richard Biener wrote:
> On Mon, Mar 2, 2015 at 6:34 AM, Aldy Hernandez <aldyh@redhat.com> wrote:
>> [...]
>
> I think the real issue is that the walking code is executed via fold_stmt when
> called with an API that tells you not to walk SSA use-def chains.
?  We have something that tells us not to walk the chains?  I don't see 
it in an API for fold_stmt.  How is the ipa-polymorphic code supposed to 
know when it can't follow the chains?

The restrictions on what we can do while we're in the inconsistent state 
prior to updating the ssa graph aren't defined anywhere and I doubt 
anyone really knows what they are.  That's obviously concerning.

We might consider trying to narrow the window in which these 
inconsistencies are allowed.  To do that I think we need to split 
cfgcleanup into two distinct parts.  First is unreachable block removal 
(which is needed so that we can compute the dominators).  Second is 
everything else.

The order of operations would be something like

remove unreachable blocks
ssa graph update
rest of cfg_cleanup

That just feels too intrusive to try at this stage though.

jeff
Richard Biener March 4, 2015, 12:41 p.m. UTC | #4
On Wed, Mar 4, 2015 at 6:27 AM, Jeff Law <law@redhat.com> wrote:
> On 03/02/15 01:38, Richard Biener wrote:
>>
>> On Mon, Mar 2, 2015 at 6:34 AM, Aldy Hernandez <aldyh@redhat.com> wrote:
>>> [...]
>>
>>
>> I think the real issue is that the walking code is executed via fold_stmt
>> when
>> called with an API that tells you not to walk SSA use-def chains.
>
> ?  We have something that tells us not to walk the chains?  I don't see it
> in an API for fold_stmt.  How is the ipa-polymorphic code supposed to know
> when it can't follow the chains?

It gets passed the valueize callback now which returns NULL_TREE for
SSA names we can't follow.
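A sketch of that protocol (hypothetical names, not fold_stmt's actual signature): before looking through any SSA name the folder consults the caller's hook, and a NULL result stops the walk.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// copy_of models use-def chains of SSA copies; "blocked" models a
// valueize hook that returns NULL_TREE for names the caller does not
// want walked (e.g. names pending SSA update during CFG cleanup).
using ssa_name = std::string;

ssa_name
walk_chain (ssa_name op,
            const std::map<ssa_name, ssa_name> &copy_of,
            const std::set<ssa_name> &blocked)
{
  for (;;)
    {
      if (blocked.count (op))
        return op;              /* valueize returned NULL_TREE: stop */
      auto it = copy_of.find (op);
      if (it == copy_of.end ())
        return op;              /* not a copy: the walk bottoms out */
      op = it->second;
    }
}
```

With an empty blocked set the chain is followed to its end; blocking any intermediate name stops the walk there.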

> The restrictions on what we can do while we're in the inconsistent state
> prior to updating the ssa graph aren't defined anywhere and I doubt anyone
> really knows what they are.  That's obviously concerning.
>
> We might consider trying to narrow the window in which these inconsistencies
> are allowed.  To do that I think we need to split cfgcleanup into two
> distinct parts.  First is unreachable block removal (which is needed so that
> we can compute the dominators).  Second is everything else.
>
> The order of operations would be something like
>
> remove unreachable blocks
> ssa graph update
> rest of cfg_cleanup
>
> That just feels too intrusive to try at this stage though.

Well, not folding statements from cfg-cleanup would be better.

I'll have a look at the testcase in the PR and will come back with a
suggestion on what to do for GCC 5.

Richard.

>
> jeff
Richard Biener March 4, 2015, 1:30 p.m. UTC | #5
On Wed, Mar 4, 2015 at 1:41 PM, Richard Biener
<richard.guenther@gmail.com> wrote:
> On Wed, Mar 4, 2015 at 6:27 AM, Jeff Law <law@redhat.com> wrote:
>> On 03/02/15 01:38, Richard Biener wrote:
>>>
>>> On Mon, Mar 2, 2015 at 6:34 AM, Aldy Hernandez <aldyh@redhat.com> wrote:
>>>> [...]
>>>
>>>
>>> I think the real issue is that the walking code is executed via fold_stmt
>>> when
>>> called with an API that tells you not to walk SSA use-def chains.
>>
>> ?  We have something that tells us not to walk the chains?  I don't see it
>> in an API for fold_stmt.  How is the ipa-polymorphic code supposed to know
>> when it can't follow the chains?
>
> It gets passed the valueize callback now which returns NULL_TREE for
> SSA names we can't follow.

Btw, for match-and-simplify I had to use that as default for fold_stmt
_exactly_ because of the call to fold_stmt from replace_uses_by
via merge-blocks from cfgcleanup.  This is because replace-uses-by
doesn't have all uses replaced before it folds the stmt!

We also have the "weaker" in-place flag.

>> The restrictions on what we can do while we're in the inconsistent state
>> prior to updating the ssa graph aren't defined anywhere and I doubt anyone
>> really knows what they are.  That's obviously concerning.
>>
>> We might consider trying to narrow the window in which these inconsistencies
>> are allowed.  To do that I think we need to split cfgcleanup into two
>> distinct parts.  First is unreachable block removal (which is needed so that
>> we can compute the dominators).  Second is everything else.
>>
>> The order of operations would be something like
>>
>> remove unreachable blocks
>> ssa graph update
>> rest of cfg_cleanup
>>
>> That just feels too intrusive to try at this stage though.
>
> Well, not folding statements from cfg-cleanup would be better.
>
> I'll have a look at the testcase in the PR and will come back with a
> suggestion on what to do for GCC 5.

I'd say that the devirtualization code is quite a heavy thing to do from
fold_stmt.  Yes - it wants to catch all cases where a stmt is modified
(after which passes should fold it).

So I am testing the following on x86_64 (verified it fixes the testcase
with an aarch64 cross).

Richard.

2015-03-04  Richard Biener  <rguenther@suse.de>

        PR middle-end/65233
        * ipa-polymorphic-call.c: Include tree-ssa-operands.h and
        tree-into-ssa.h.
        (walk_ssa_copies): Revert last change.  Instead do not walk
        SSA names registered for SSA update.
Jan Hubicka March 4, 2015, 9:22 p.m. UTC | #6
> 
> I'd say that the devirtualization code is quite a heavy thing to do from
> fold_stmt.  Yes - it wants to catch all cases where a stmt is modified
> (after which passes should fold it).

Yep, I have no problem doing the heavy part just at a specified point.
(it usually converges quickly, but there are no guarantees)

In fact one of the reasons why I fully separated the context lattice operations
from ipa-prop is that I think it may make sense to have a special purpose
intraprocedural pass to do the propagation, probably twice - once in the early
passes and once late.  Did not get to that for GCC 5 though.

Honza
Jan Hubicka March 5, 2015, 12:54 a.m. UTC | #7
> [...]
> 
> 2015-03-04  Richard Biener  <rguenther@suse.de>
> 
>         PR middle-end/65233
>         * ipa-polymorphic-call.c: Include tree-ssa-operands.h and
>         tree-into-ssa.h.
>         (walk_ssa_copies): Revert last change.  Instead do not walk
>         SSA names registered for SSA update.

Maybe include the patch?  It should not be a problem to make the function
valueize everything it looks into.

Honza
Richard Biener March 5, 2015, 8:47 a.m. UTC | #8
On Thu, Mar 5, 2015 at 1:54 AM, Jan Hubicka <hubicka@ucw.cz> wrote:
>> [...]
>>
>> 2015-03-04  Richard Biener  <rguenther@suse.de>
>>
>>         PR middle-end/65233
>>         * ipa-polymorphic-call.c: Include tree-ssa-operands.h and
>>         tree-into-ssa.h.
>>         (walk_ssa_copies): Revert last change.  Instead do not walk
>>         SSA names registered for SSA update.
>
> Maybe include the patch?  It should not be a problem to make the function
> valueize everything it looks into.

I attached it.

Well, I think for stage1 the fix is to not call fold_stmt from CFG hooks or
CFG cleanup.  Merge-blocks can just demote PHIs to assignments and
leave propagation to followup cleanups (we can of course propagate
virtual operands).
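The demotion described above can be sketched as follows (a toy statement representation with hypothetical names, only to show the rewrite, not GCC's merge-blocks code):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal model of a statement: "lhs = PHI <args>" or "lhs = rhs".
struct stmt
{
  std::string lhs;
  std::vector<std::string> args;
  bool is_phi;
};

// When merging blocks leaves a single-argument PHI, rewrite
// "x_1 = PHI <y_2>" as the plain copy "x_1 = y_2" instead of
// propagating y_2 into every use (the propagation is what forces the
// fold_stmt calls from replace_uses_by); a later cleanup pass can
// propagate the copy once the IL is consistent again.
stmt
demote_phi (const stmt &phi)
{
  return stmt { phi.lhs, { phi.args.at (0) }, /*is_phi=*/false };
}
```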

I can try to do it for this stage (I can only find merge-blocks doing this)
as well.  Opinions?

Richard.

> Honza
Jeff Law March 5, 2015, 4:10 p.m. UTC | #9
On 03/05/15 01:47, Richard Biener wrote:
> [...]
>
> I attached it.
>
> Well, I think for stage1 the fix is to not call fold_stmt from CFG hooks or
> CFG cleanup.  Merge-blocks can just demote PHIs to assignments and
> leave propagation to followup cleanups (we can of course propagate
> virtual operands).
Seems reasonable.  Though I'd also like to see us narrow the window in
which things are in this odd state.  It's just asking for long-term
maintenance headaches.

Removal of unreachable blocks so that we can compute dominators should, 
in theory, never need to look at this stuff and that's a much smaller 
window than all of tree_cleanup_cfg.


Along the same lines I want to tackle the ssa name manager issues that 
are loosely related.

Jeff

Patch

diff --git a/gcc/ipa-polymorphic-call.c b/gcc/ipa-polymorphic-call.c
index aaa549e..13cc7f6 100644
--- a/gcc/ipa-polymorphic-call.c
+++ b/gcc/ipa-polymorphic-call.c
@@ -835,7 +835,10 @@  walk_ssa_copies (tree op, hash_set<tree> **global_visited = NULL)
 	{
 	  gimple phi = SSA_NAME_DEF_STMT (op);
 
-	  if (gimple_phi_num_args (phi) > 2)
+	  if (gimple_phi_num_args (phi) > 2
+	      /* We can be called while cleaning up the CFG and can
+		 have empty PHIs about to be removed.  */
+	      || gimple_phi_num_args (phi) == 0)
 	    goto done;
 	  if (gimple_phi_num_args (phi) == 1)
 	    op = gimple_phi_arg_def (phi, 0);