
Fix two more memory leaks in threader

Message ID 555CB809.7020308@redhat.com
State New

Commit Message

Jeff Law May 20, 2015, 4:36 p.m. UTC
These fix the remaining leaks in the threader that I'm aware of.  We 
failed to properly clean up when we had to cancel certain jump threading
opportunities.  So thankfully this wasn't a big leak.

Bootstrapped and regression tested on x86_64-unknown-linux-gnu. 
Installed on the trunk.

Jeff

Comments

Jakub Jelinek May 20, 2015, 4:41 p.m. UTC | #1
On Wed, May 20, 2015 at 10:36:25AM -0600, Jeff Law wrote:
> 
> These fix the remaining leaks in the threader that I'm aware of.  We failed
> to properly clean up when we had to cancel certain jump threading
> opportunities.  So thankfully this wasn't a big leak.
> 
> Bootstrapped and regression tested on x86_64-unknown-linux-gnu. Installed on
> the trunk.
> 
> Jeff

> diff --git a/gcc/ChangeLog b/gcc/ChangeLog
> index fe4dfc4..27435c6 100644
> --- a/gcc/ChangeLog
> +++ b/gcc/ChangeLog
> @@ -1,3 +1,9 @@
> +2015-05-20  Jeff Law  <law@redhat.com>
> +
> +	* tree-ssa-threadupdate.c (mark_threaded_blocks): Properly
> +	dispose of the jump thread path when the jump threading
> +	opportunity is cancelled.
> +
>  2015-05-20  Manuel López-Ibáñez  <manu@gcc.gnu.org>
>  
>  	* diagnostic.c (diagnostic_print_caret_line): Fix off-by-one error
> diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
> index c5b78a4..4bccad0 100644
> --- a/gcc/tree-ssa-threadupdate.c
> +++ b/gcc/tree-ssa-threadupdate.c
> @@ -2159,9 +2159,16 @@ mark_threaded_blocks (bitmap threaded_blocks)
>          {
>  	  /* Attach the path to the starting edge if none is yet recorded.  */
>            if ((*path)[0]->e->aux == NULL)
> -            (*path)[0]->e->aux = path;
> -	  else if (dump_file && (dump_flags & TDF_DETAILS))
> -	    dump_jump_thread_path (dump_file, *path, false);
> +	    {
> +              (*path)[0]->e->aux = path;
> +	    }

Why the braces around single stmt if body?
Also, the indentation seems to be weird.

	Jakub
Jeff Law May 22, 2015, 9:39 p.m. UTC | #2
On 05/20/2015 10:41 AM, Jakub Jelinek wrote:
> On Wed, May 20, 2015 at 10:36:25AM -0600, Jeff Law wrote:
>>
>> These fix the remaining leaks in the threader that I'm aware of.  We failed
>> to properly clean up when we had to cancel certain jump threading
>> opportunities.  So thankfully this wasn't a big leak.
>>
>> Bootstrapped and regression tested on x86_64-unknown-linux-gnu. Installed on
>> the trunk.
>>
>> Jeff
>
>> diff --git a/gcc/ChangeLog b/gcc/ChangeLog
>> index fe4dfc4..27435c6 100644
>> --- a/gcc/ChangeLog
>> +++ b/gcc/ChangeLog
>> @@ -1,3 +1,9 @@
>> +2015-05-20  Jeff Law  <law@redhat.com>
>> +
>> +	* tree-ssa-threadupdate.c (mark_threaded_blocks): Properly
>> +	dispose of the jump thread path when the jump threading
>> +	opportunity is cancelled.
>> +
>>   2015-05-20  Manuel López-Ibáñez  <manu@gcc.gnu.org>
>>
>>   	* diagnostic.c (diagnostic_print_caret_line): Fix off-by-one error
>> diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
>> index c5b78a4..4bccad0 100644
>> --- a/gcc/tree-ssa-threadupdate.c
>> +++ b/gcc/tree-ssa-threadupdate.c
>> @@ -2159,9 +2159,16 @@ mark_threaded_blocks (bitmap threaded_blocks)
>>           {
>>   	  /* Attach the path to the starting edge if none is yet recorded.  */
>>             if ((*path)[0]->e->aux == NULL)
>> -            (*path)[0]->e->aux = path;
>> -	  else if (dump_file && (dump_flags & TDF_DETAILS))
>> -	    dump_jump_thread_path (dump_file, *path, false);
>> +	    {
>> +              (*path)[0]->e->aux = path;
>> +	    }
>
> Why the braces around single stmt if body?
To match what was happening in the else clause.  I always find

if (fubar)
   something
else
   {
     more stuff
     even more stuff
   }

more painful to parse because of the mismatched indentation than

if (fubar)
   {
     something
   }
else
   {
     more stuff
     even more stuff
   }

It's purely a style thing, and if you'd prefer not to have the extra
braces, I wouldn't lose any sleep over removing them.

> Also, the indentation seems to be weird.
Looks like a spaces-vs-tabs issue.  I'll do a pass over
tree-ssa-threadedge.c and fix them all up at once.

jeff
James Greenhalgh July 20, 2015, 2:19 p.m. UTC | #3
On Wed, May 20, 2015 at 05:36:25PM +0100, Jeff Law wrote:
> 
> These fix the remaining leaks in the threader that I'm aware of.  We 
> failed to properly clean up when we had to cancel certain jump threading
> opportunities.  So thankfully this wasn't a big leak.

Hi Jeff,

I don't have a reduced testcase to go on, but by inspection this patch
looks wrong to me... 

> diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
> index c5b78a4..4bccad0 100644
> --- a/gcc/tree-ssa-threadupdate.c
> +++ b/gcc/tree-ssa-threadupdate.c
> @@ -2159,9 +2159,16 @@ mark_threaded_blocks (bitmap threaded_blocks)
>          {
>  	  /* Attach the path to the starting edge if none is yet recorded.  */
>            if ((*path)[0]->e->aux == NULL)
> -            (*path)[0]->e->aux = path;
> -	  else if (dump_file && (dump_flags & TDF_DETAILS))
> -	    dump_jump_thread_path (dump_file, *path, false);
> +	    {
> +              (*path)[0]->e->aux = path;
> +	    }
> +	  else
> +	    {
> +	      paths.unordered_remove (i);

Here we are part-way through iterating through PATHS. With unordered_remove
we are going to move the end element of the vector to position 'i'. We'll
then move on and look at 'i + 1' and so never look at the element we just
moved.

This manifests as a lower number of cancelled jump-threads, and in
turn some extra jumps threaded - some of which may no longer be safe.

For a particular workload we've talked about before in relation to
jump-threading, dom1 ends up cancelling and threading these edges with
your patch applied:

  Cancelling jump thread: (28, 32) incoming edge;  (32, 36) joiner;  (36, 61) normal;
  Cancelling jump thread: (31, 7) incoming edge;  (7, 92) joiner;  (92, 8) normal;
  Cancelling jump thread: (63, 39) incoming edge;  (39, 89) joiner;  (89, 40) normal;
  Cancelling jump thread: (67, 68) incoming edge;  (68, 69) joiner;  (69, 93) normal;
  Cancelling jump thread: (4, 32) incoming edge;  (32, 36) joiner;  (36, 64) normal;
  Threaded jump 30 --> 28 to 299
  Threaded jump 91 --> 28 to 300
  Threaded jump 35 --> 36 to 302
  Threaded jump 88 --> 60 to 305
  Threaded jump 62 --> 60 to 306
  Threaded jump 32 --> 36 to 304

Reverting the patch we get these edges cancelled and threaded:

  Cancelling jump thread: (28, 32) incoming edge;  (32, 36) joiner;  (36, 61) normal;
  Cancelling jump thread: (31, 7) incoming edge;  (7, 92) joiner;  (92, 8) normal;
  Cancelling jump thread: (63, 39) incoming edge;  (39, 89) joiner;  (89, 40) normal;
  Cancelling jump thread: (67, 68) incoming edge;  (68, 69) joiner;  (69, 93) normal;
  Cancelling jump thread: (4, 32) incoming edge;  (32, 36) joiner;  (36, 64) normal;
  Cancelling jump thread: (4, 29) incoming edge;  (29, 91) joiner;  (91, 30) normal; (30, 31) nocopy;
  Cancelling jump thread: (32, 36) incoming edge;  (36, 64) joiner;  (64, 68) normal;
  Cancelling jump thread: (36, 61) incoming edge;  (61, 88) joiner;  (88, 62) normal; (62, 63) nocopy;
  Cancelling jump thread: (64, 68) incoming edge;  (68, 69) joiner;  (69, 93) normal;
  Threaded jump 30 --> 28 to 299
  Threaded jump 91 --> 28 to 300
  Threaded jump 35 --> 36 to 302
  Threaded jump 88 --> 60 to 303
  Threaded jump 62 --> 60 to 304

Note the extra thread of 32 --> 36 to 304 with this patch applied.

I think we either want to defer the unordered_remove until we're done
processing all the vector elements, or make sure to look at element 'i'
again after we've moved something new into it.

A testcase would need to expose at least two threads which we later want
to cancel, one of which ends up at the end of the vector of threading
opportunities. We should then see only the first of those threads
actually get cancelled, and the second skipped over. Reproducing these
conditions is quite tough, which has stopped me finding a useful example
for the list; I'll be sure to follow up in this thread if I do get to one.

> +	      if (dump_file && (dump_flags & TDF_DETAILS))
> +	        dump_jump_thread_path (dump_file, *path, false);
> +	      delete_jump_thread_path (path);
> +	    }
>          }
>      }
>    /* Second, look for paths that have any other jump thread attached to
> @@ -2185,8 +2192,10 @@ mark_threaded_blocks (bitmap threaded_blocks)
>  	  else
>  	    {
>  	      e->aux = NULL;
> +	      paths.unordered_remove (i);

Likewise here.

Thanks,
James

>  	      if (dump_file && (dump_flags & TDF_DETAILS))
>  	        dump_jump_thread_path (dump_file, *path, false);
> +	      delete_jump_thread_path (path);
>  	    }
>  	}
>      }
Marek Polacek July 20, 2015, 2:30 p.m. UTC | #4
On Mon, Jul 20, 2015 at 03:19:06PM +0100, James Greenhalgh wrote:
> On Wed, May 20, 2015 at 05:36:25PM +0100, Jeff Law wrote:
> > 
> > These fix the remaining leaks in the threader that I'm aware of.  We 
> > failed to properly clean up when we had to cancel certain jump threading
> > opportunities.  So thankfully this wasn't a big leak.
> 
> Hi Jeff,
> 
> I don't have a reduced testcase to go on, but by inspection this patch
> looks wrong to me... 
> 
> > diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
> > index c5b78a4..4bccad0 100644
> > --- a/gcc/tree-ssa-threadupdate.c
> > +++ b/gcc/tree-ssa-threadupdate.c
> > @@ -2159,9 +2159,16 @@ mark_threaded_blocks (bitmap threaded_blocks)
> >          {
> >  	  /* Attach the path to the starting edge if none is yet recorded.  */
> >            if ((*path)[0]->e->aux == NULL)
> > -            (*path)[0]->e->aux = path;
> > -	  else if (dump_file && (dump_flags & TDF_DETAILS))
> > -	    dump_jump_thread_path (dump_file, *path, false);
> > +	    {
> > +              (*path)[0]->e->aux = path;
> > +	    }
> > +	  else
> > +	    {
> > +	      paths.unordered_remove (i);
> 
> Here we are part-way through iterating through PATHS. With unordered_remove
> we are going to move the end element of the vector to position 'i'. We'll
> then move on and look at 'i + 1' and so never look at the element we just
> moved.
> 
> This manifests as a lower number of cancelled jump-threads, and in
> turn some extra jumps threaded - some of which may no longer be safe.
> 
> For a particular workload we've talked about before in relation to
> jump-threading, dom1 ends up cancelling and threading these edges with
> your patch applied:
> 
>   Cancelling jump thread: (28, 32) incoming edge;  (32, 36) joiner;  (36, 61) normal;
>   Cancelling jump thread: (31, 7) incoming edge;  (7, 92) joiner;  (92, 8) normal;
>   Cancelling jump thread: (63, 39) incoming edge;  (39, 89) joiner;  (89, 40) normal;
>   Cancelling jump thread: (67, 68) incoming edge;  (68, 69) joiner;  (69, 93) normal;
>   Cancelling jump thread: (4, 32) incoming edge;  (32, 36) joiner;  (36, 64) normal;
>   Threaded jump 30 --> 28 to 299
>   Threaded jump 91 --> 28 to 300
>   Threaded jump 35 --> 36 to 302
>   Threaded jump 88 --> 60 to 305
>   Threaded jump 62 --> 60 to 306
>   Threaded jump 32 --> 36 to 304
> 
> Reverting the patch we get these edges cancelled and threaded:
> 
>   Cancelling jump thread: (28, 32) incoming edge;  (32, 36) joiner;  (36, 61) normal;
>   Cancelling jump thread: (31, 7) incoming edge;  (7, 92) joiner;  (92, 8) normal;
>   Cancelling jump thread: (63, 39) incoming edge;  (39, 89) joiner;  (89, 40) normal;
>   Cancelling jump thread: (67, 68) incoming edge;  (68, 69) joiner;  (69, 93) normal;
>   Cancelling jump thread: (4, 32) incoming edge;  (32, 36) joiner;  (36, 64) normal;
>   Cancelling jump thread: (4, 29) incoming edge;  (29, 91) joiner;  (91, 30) normal; (30, 31) nocopy;
>   Cancelling jump thread: (32, 36) incoming edge;  (36, 64) joiner;  (64, 68) normal;
>   Cancelling jump thread: (36, 61) incoming edge;  (61, 88) joiner;  (88, 62) normal; (62, 63) nocopy;
>   Cancelling jump thread: (64, 68) incoming edge;  (68, 69) joiner;  (69, 93) normal;
>   Threaded jump 30 --> 28 to 299
>   Threaded jump 91 --> 28 to 300
>   Threaded jump 35 --> 36 to 302
>   Threaded jump 88 --> 60 to 303
>   Threaded jump 62 --> 60 to 304
> 
> Note the extra thread of 32 --> 36 to 304 with this patch applied.
> 
> I think we either want to defer the unordered_remove until we're done
> processing all the vector elements, or make sure to look at element 'i'
> again after we've moved something new into it.
> 
> A testcase would need to expose at least two threads which we later want
> to cancel, one of which ends up at the end of the vector of threading
> opportunities. We should then see only the first of those threads
> actually get cancelled, and the second skipped over. Reproducing these
> conditions is quite tough, which has stopped me finding a useful example
> for the list; I'll be sure to follow up in this thread if I do get to one.
 
Yes, there's something wrong with this patch; see PR66372.
I guess what you wrote above is the problem reported in that PR.

	Marek
Jeff Law July 24, 2015, 10:44 p.m. UTC | #5
On 07/20/2015 08:19 AM, James Greenhalgh wrote:
> On Wed, May 20, 2015 at 05:36:25PM +0100, Jeff Law wrote:
>>
>> These fix the remaining leaks in the threader that I'm aware of.  We
>> failed to properly clean up when we had to cancel certain jump threading
>> opportunities.  So thankfully this wasn't a big leak.
>
> Hi Jeff,
>
> I don't have a reduced testcase to go on, but by inspection this patch
> looks wrong to me...
>
>> diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
>> index c5b78a4..4bccad0 100644
>> --- a/gcc/tree-ssa-threadupdate.c
>> +++ b/gcc/tree-ssa-threadupdate.c
>> @@ -2159,9 +2159,16 @@ mark_threaded_blocks (bitmap threaded_blocks)
>>           {
>>   	  /* Attach the path to the starting edge if none is yet recorded.  */
>>             if ((*path)[0]->e->aux == NULL)
>> -            (*path)[0]->e->aux = path;
>> -	  else if (dump_file && (dump_flags & TDF_DETAILS))
>> -	    dump_jump_thread_path (dump_file, *path, false);
>> +	    {
>> +              (*path)[0]->e->aux = path;
>> +	    }
>> +	  else
>> +	    {
>> +	      paths.unordered_remove (i);
>
> Here we are part-way through iterating through PATHS. With unordered_remove
> we are going to move the end element of the vector to position 'i'. We'll
> then move on and look at 'i + 1' and so never look at the element we just
> moved.
>
> This manifests as a lower number of cancelled jump-threads, and in
> turn some extra jumps threaded - some of which may no longer be safe.
>
> For a particular workload we've talked about before in relation to
> jump-threading, dom1 ends up cancelling and threading these edges with
> your patch applied:
>
>    Cancelling jump thread: (28, 32) incoming edge;  (32, 36) joiner;  (36, 61) normal;
>    Cancelling jump thread: (31, 7) incoming edge;  (7, 92) joiner;  (92, 8) normal;
>    Cancelling jump thread: (63, 39) incoming edge;  (39, 89) joiner;  (89, 40) normal;
>    Cancelling jump thread: (67, 68) incoming edge;  (68, 69) joiner;  (69, 93) normal;
>    Cancelling jump thread: (4, 32) incoming edge;  (32, 36) joiner;  (36, 64) normal;
>    Threaded jump 30 --> 28 to 299
>    Threaded jump 91 --> 28 to 300
>    Threaded jump 35 --> 36 to 302
>    Threaded jump 88 --> 60 to 305
>    Threaded jump 62 --> 60 to 306
>    Threaded jump 32 --> 36 to 304
>
> Reverting the patch we get these edges cancelled and threaded:
>
>    Cancelling jump thread: (28, 32) incoming edge;  (32, 36) joiner;  (36, 61) normal;
>    Cancelling jump thread: (31, 7) incoming edge;  (7, 92) joiner;  (92, 8) normal;
>    Cancelling jump thread: (63, 39) incoming edge;  (39, 89) joiner;  (89, 40) normal;
>    Cancelling jump thread: (67, 68) incoming edge;  (68, 69) joiner;  (69, 93) normal;
>    Cancelling jump thread: (4, 32) incoming edge;  (32, 36) joiner;  (36, 64) normal;
>    Cancelling jump thread: (4, 29) incoming edge;  (29, 91) joiner;  (91, 30) normal; (30, 31) nocopy;
>    Cancelling jump thread: (32, 36) incoming edge;  (36, 64) joiner;  (64, 68) normal;
>    Cancelling jump thread: (36, 61) incoming edge;  (61, 88) joiner;  (88, 62) normal; (62, 63) nocopy;
>    Cancelling jump thread: (64, 68) incoming edge;  (68, 69) joiner;  (69, 93) normal;
>    Threaded jump 30 --> 28 to 299
>    Threaded jump 91 --> 28 to 300
>    Threaded jump 35 --> 36 to 302
>    Threaded jump 88 --> 60 to 303
>    Threaded jump 62 --> 60 to 304
>
> Note the extra thread of 32 --> 36 to 304 with this patch applied.
>
> I think we either want to defer the unordered_remove until we're done
> processing all the vector elements, or make sure to look at element 'i'
> again after we've moved something new into it.
>
> A testcase would need to expose at least two threads which we later want
> to cancel, one of which ends up at the end of the vector of threading
> opportunities. We should then see only the first of those threads
> actually get cancelled, and the second skipped over. Reproducing these
> conditions is quite tough, which has stopped me finding a useful example
> for the list; I'll be sure to follow up in this thread if I do get to one.
I think you're right.  All the uses of unordered_remove ought to be 
audited.  There's also a bug in BZ that I believe has been bisected to 
this change.  Hopefully that'll have a reasonable testcase.

jeff

Patch

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index fe4dfc4..27435c6 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,9 @@ 
+2015-05-20  Jeff Law  <law@redhat.com>
+
+	* tree-ssa-threadupdate.c (mark_threaded_blocks): Properly
+	dispose of the jump thread path when the jump threading
+	opportunity is cancelled.
+
 2015-05-20  Manuel López-Ibáñez  <manu@gcc.gnu.org>
 
 	* diagnostic.c (diagnostic_print_caret_line): Fix off-by-one error
diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
index c5b78a4..4bccad0 100644
--- a/gcc/tree-ssa-threadupdate.c
+++ b/gcc/tree-ssa-threadupdate.c
@@ -2159,9 +2159,16 @@  mark_threaded_blocks (bitmap threaded_blocks)
         {
 	  /* Attach the path to the starting edge if none is yet recorded.  */
           if ((*path)[0]->e->aux == NULL)
-            (*path)[0]->e->aux = path;
-	  else if (dump_file && (dump_flags & TDF_DETAILS))
-	    dump_jump_thread_path (dump_file, *path, false);
+	    {
+              (*path)[0]->e->aux = path;
+	    }
+	  else
+	    {
+	      paths.unordered_remove (i);
+	      if (dump_file && (dump_flags & TDF_DETAILS))
+	        dump_jump_thread_path (dump_file, *path, false);
+	      delete_jump_thread_path (path);
+	    }
         }
     }
   /* Second, look for paths that have any other jump thread attached to
@@ -2185,8 +2192,10 @@  mark_threaded_blocks (bitmap threaded_blocks)
 	  else
 	    {
 	      e->aux = NULL;
+	      paths.unordered_remove (i);
 	      if (dump_file && (dump_flags & TDF_DETAILS))
 	        dump_jump_thread_path (dump_file, *path, false);
+	      delete_jump_thread_path (path);
 	    }
 	}
     }