
RFC: LRA for x86/x86-64 [0/9]

Message ID CABu31nP-19ZPkW6twoWZbPqThsY6zhyvhiW445Co+Gr_CykL=A@mail.gmail.com
State New

Commit Message

Steven Bosscher Sept. 29, 2012, 8:26 p.m. UTC
Hi Vlad,

Thanks for the testing and the logs. You must have good hardware, your
timings are all ~3 times faster than mine :-)

On Sat, Sep 29, 2012 at 3:01 AM, Vladimir Makarov <vmakarov@redhat.com> wrote:
> ----------------------------------32-bit------------------------------------
> Reload:
> 581.85user 29.91system 27:15.18elapsed 37%CPU (0avgtext+0avgdata
> LRA:
> 629.67user 24.16system 24:31.08elapsed 44%CPU (0avgtext+0avgdata

This is a ~8% slowdown.


> ----------------------------------64-bit:-----------------------------------
> Reload:
> 503.26user 36.54system 30:16.62elapsed 29%CPU (0avgtext+0avgdata
> LRA:
> 598.70user 30.90system 27:26.92elapsed 38%CPU (0avgtext+0avgdata

This is a ~19% slowdown.


> Here are the numbers for PR54146 on the same machine, with -O1 only, for
> 64-bit (the compiler reports an error for -m32).

Right, the test case is for 64-bits only, I think it's preprocessed
code for AMD64.

> Reload:
> 350.40user 21.59system 17:09.75elapsed 36%CPU (0avgtext+0avgdata
> LRA:
> 468.29user 21.35system 15:47.76elapsed 51%CPU (0avgtext+0avgdata

This is a ~34% slowdown.

To put it in another perspective, here are my timings of trunk vs lra
(both checkouts done today):

trunk:
 integrated RA           : 181.68 (24%) usr   1.68 (11%) sys 183.43
(24%) wall  643564 kB (20%) ggc
 reload                  :  11.00 ( 1%) usr   0.18 ( 1%) sys  11.17 (
1%) wall   32394 kB ( 1%) ggc
 TOTAL                 : 741.64            14.76           756.41
      3216164 kB

lra branch:
 integrated RA           : 174.65 (16%) usr   1.33 ( 8%) sys 176.33
(16%) wall  643560 kB (20%) ggc
 reload                  : 399.69 (36%) usr   2.48 (15%) sys 402.69
(36%) wall   41852 kB ( 1%) ggc
 TOTAL                 :1102.06            16.05          1120.83
      3231738 kB

That's a 49% slowdown. The difference is completely accounted for by
the timing difference between reload and LRA.
(Timings done on gcc17, which is AMD Opteron(tm) Processor 8354 with
15GB ram, so swapping is no issue.)

It looks like the reload timevar is used for LRA. Why not have
multiple timevars, one per phase of LRA? Something like the patch below
would be nice. This gives me the following timings:

 integrated RA           : 189.34 (16%) usr   1.84 (11%) sys 191.18
(16%) wall  643560 kB (20%) ggc
 LRA non-specific        :  59.82 ( 5%) usr   0.22 ( 1%) sys  60.12 (
5%) wall   18202 kB ( 1%) ggc
 LRA virtuals eliminatenon:  56.79 ( 5%) usr   0.03 ( 0%) sys  56.80 (
5%) wall   19223 kB ( 1%) ggc
 LRA reload inheritance  :   6.41 ( 1%) usr   0.01 ( 0%) sys   6.42 (
1%) wall    1665 kB ( 0%) ggc
 LRA create live ranges  : 175.30 (15%) usr   2.14 (13%) sys 177.44
(15%) wall    2761 kB ( 0%) ggc
 LRA hard reg assignment : 130.85 (11%) usr   0.20 ( 1%) sys 131.17
(11%) wall       0 kB ( 0%) ggc
 LRA coalesce pseudo regs:   2.54 ( 0%) usr   0.00 ( 0%) sys   2.55 (
0%) wall       0 kB ( 0%) ggc
 reload                  :   6.73 ( 1%) usr   0.20 ( 1%) sys   6.92 (
1%) wall       0 kB ( 0%) ggc

so the LRA "slowness" (for lack of a better word) appears to be due to
scalability problems in all sub-passes.

The code size changes are impressive, but I think that this kind of
slowdown should be addressed before making LRA the default for any
target.

Ciao!
Steven

Comments

Richard Biener Sept. 30, 2012, 4:01 p.m. UTC | #1
On Sat, Sep 29, 2012 at 10:26 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
> Hi Vlad,
>
> Thanks for the testing and the logs. You must have good hardware, your
> timings are all ~3 times faster than mine :-)
>
> On Sat, Sep 29, 2012 at 3:01 AM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>> ----------------------------------32-bit------------------------------------
>> Reload:
>> 581.85user 29.91system 27:15.18elapsed 37%CPU (0avgtext+0avgdata
>> LRA:
>> 629.67user 24.16system 24:31.08elapsed 44%CPU (0avgtext+0avgdata
>
> This is a ~8% slowdown.
>
>
>> ----------------------------------64-bit:-----------------------------------
>> Reload:
>> 503.26user 36.54system 30:16.62elapsed 29%CPU (0avgtext+0avgdata
>> LRA:
>> 598.70user 30.90system 27:26.92elapsed 38%CPU (0avgtext+0avgdata
>
> This is a ~19% slowdown

I think both measurements run into swap (low CPU utilization); from the LRA
numbers I'd say that LRA uses less memory, but the timings are somewhat
useless with the swapping.

>> Here are the numbers for PR54146 on the same machine, with -O1 only, for
>> 64-bit (the compiler reports an error for -m32).
>
> Right, the test case is for 64-bits only, I think it's preprocessed
> code for AMD64.
>
>> Reload:
>> 350.40user 21.59system 17:09.75elapsed 36%CPU (0avgtext+0avgdata
>> LRA:
>> 468.29user 21.35system 15:47.76elapsed 51%CPU (0avgtext+0avgdata
>
> This is a ~34% slowdown.
>
> To put it in another perspective, here are my timings of trunk vs lra
> (both checkouts done today):
>
> trunk:
>  integrated RA           : 181.68 (24%) usr   1.68 (11%) sys 183.43
> (24%) wall  643564 kB (20%) ggc
>  reload                  :  11.00 ( 1%) usr   0.18 ( 1%) sys  11.17 (
> 1%) wall   32394 kB ( 1%) ggc
>  TOTAL                 : 741.64            14.76           756.41
>       3216164 kB
>
> lra branch:
>  integrated RA           : 174.65 (16%) usr   1.33 ( 8%) sys 176.33
> (16%) wall  643560 kB (20%) ggc
>  reload                  : 399.69 (36%) usr   2.48 (15%) sys 402.69
> (36%) wall   41852 kB ( 1%) ggc
>  TOTAL                 :1102.06            16.05          1120.83
>       3231738 kB
>
> That's a 49% slowdown. The difference is completely accounted for by
> the timing difference between reload and LRA.
> (Timings done on gcc17, which is AMD Opteron(tm) Processor 8354 with
> 15GB ram, so swapping is no issue.)
>
> It looks like the reload timevar is used for LRA. Why not have
> multiple timevars, one per phase of LRA? Sth like the patch below
> would be nice. This gives me the following timings:
>
>  integrated RA           : 189.34 (16%) usr   1.84 (11%) sys 191.18
> (16%) wall  643560 kB (20%) ggc
>  LRA non-specific        :  59.82 ( 5%) usr   0.22 ( 1%) sys  60.12 (
> 5%) wall   18202 kB ( 1%) ggc
>  LRA virtuals eliminatenon:  56.79 ( 5%) usr   0.03 ( 0%) sys  56.80 (
> 5%) wall   19223 kB ( 1%) ggc
>  LRA reload inheritance  :   6.41 ( 1%) usr   0.01 ( 0%) sys   6.42 (
> 1%) wall    1665 kB ( 0%) ggc
>  LRA create live ranges  : 175.30 (15%) usr   2.14 (13%) sys 177.44
> (15%) wall    2761 kB ( 0%) ggc
>  LRA hard reg assignment : 130.85 (11%) usr   0.20 ( 1%) sys 131.17
> (11%) wall       0 kB ( 0%) ggc
>  LRA coalesce pseudo regs:   2.54 ( 0%) usr   0.00 ( 0%) sys   2.55 (
> 0%) wall       0 kB ( 0%) ggc
>  reload                  :   6.73 ( 1%) usr   0.20 ( 1%) sys   6.92 (
> 1%) wall       0 kB ( 0%) ggc
>
> so the LRA "slowness" (for lack of a better word) appears to be due to
> scalability problems in all sub-passes.

It would be nice to see if LRA just has a larger constant cost factor
compared to reload or if it has bigger complexity.

> The code size changes are impressive, but I think that this kind of
> slowdown should be addressed before making LRA the default for any
> target.

Certainly if it shows bigger complexity; I'm not sure about the constant
factor (but improvements are certainly welcome).

I suppose there is the option to revert to reload by default for
x86_64 as well for 4.8, right?  That is, do both reload and LRA
co-exist for each target, or is it a definite decision target by target?

Thanks,
Richard.

> Ciao!
> Steven
>
>
>
>
> Index: lra-assigns.c
> ===================================================================
> --- lra-assigns.c       (revision 191858)
> +++ lra-assigns.c       (working copy)
> @@ -1261,6 +1261,8 @@ lra_assign (void)
>    bitmap_head insns_to_process;
>    bool no_spills_p;
>
> +  timevar_push (TV_LRA_ASSIGN);
> +
>    init_lives ();
>    sorted_pseudos = (int *) xmalloc (sizeof (int) * max_reg_num ());
>    sorted_reload_pseudos = (int *) xmalloc (sizeof (int) * max_reg_num ());
> @@ -1312,5 +1314,6 @@ lra_assign (void)
>    free (sorted_pseudos);
>    free (sorted_reload_pseudos);
>    finish_lives ();
> +  timevar_pop (TV_LRA_ASSIGN);
>    return no_spills_p;
>  }
> Index: lra.c
> ===================================================================
> --- lra.c       (revision 191858)
> +++ lra.c       (working copy)
> @@ -2193,6 +2193,7 @@ lra (FILE *f)
>
>    lra_dump_file = f;
>
> +  timevar_push (TV_LRA);
>
>    init_insn_recog_data ();
>
> @@ -2271,6 +2272,7 @@ lra (FILE *f)
>              to use a constant pool.  */
>           lra_eliminate (false);
>           lra_inheritance ();
> +
>           /* We need live ranges for lra_assign -- so build them.  */
>           lra_create_live_ranges (true);
>           live_p = true;
> @@ -2343,6 +2345,8 @@ lra (FILE *f)
>  #ifdef ENABLE_CHECKING
>    check_rtl (true);
>  #endif
> +
> +  timevar_pop (TV_LRA);
>  }
>
>  /* Called once per compiler to initialize LRA data once.  */
> Index: lra-eliminations.c
> ===================================================================
> --- lra-eliminations.c  (revision 191858)
> +++ lra-eliminations.c  (working copy)
> @@ -1297,6 +1297,8 @@ lra_eliminate (bool final_p)
>    struct elim_table *ep;
>    int regs_num = max_reg_num ();
>
> +  timevar_push (TV_LRA_ELIMINATE);
> +
>    bitmap_initialize (&insns_with_changed_offsets, &reg_obstack);
>    if (final_p)
>      {
> @@ -1317,7 +1319,7 @@ lra_eliminate (bool final_p)
>      {
>        update_reg_eliminate (&insns_with_changed_offsets);
>        if (bitmap_empty_p (&insns_with_changed_offsets))
> -       return;
> +       goto lra_eliminate_done;
>      }
>    if (lra_dump_file != NULL)
>      {
> @@ -1349,4 +1351,7 @@ lra_eliminate (bool final_p)
>           process_insn_for_elimination (insn, final_p);
>        }
>    bitmap_clear (&insns_with_changed_offsets);
> +
> +lra_eliminate_done:
> +  timevar_pop (TV_LRA_ELIMINATE);
>  }
> Index: lra-lives.c
> ===================================================================
> --- lra-lives.c (revision 191858)
> +++ lra-lives.c (working copy)
> @@ -962,6 +962,8 @@ lra_create_live_ranges (bool all_p)
>    basic_block bb;
>    int i, hard_regno, max_regno = max_reg_num ();
>
> +  timevar_push (TV_LRA_CREATE_LIVE_RANGES);
> +
>    complete_info_p = all_p;
>    if (lra_dump_file != NULL)
>      fprintf (lra_dump_file,
> @@ -1016,6 +1018,7 @@ lra_create_live_ranges (bool all_p)
>    sparseset_free (pseudos_live_through_setjumps);
>    sparseset_free (pseudos_live);
>    compress_live_ranges ();
> +  timevar_pop (TV_LRA_CREATE_LIVE_RANGES);
>  }
>
>  /* Finish all live ranges.  */
> Index: timevar.def
> ===================================================================
> --- timevar.def (revision 191858)
> +++ timevar.def (working copy)
> @@ -223,10 +223,16 @@ DEFTIMEVAR (TV_REGMOVE               , "
>  DEFTIMEVAR (TV_MODE_SWITCH           , "mode switching")
>  DEFTIMEVAR (TV_SMS                  , "sms modulo scheduling")
>  DEFTIMEVAR (TV_SCHED                 , "scheduling")
> -DEFTIMEVAR (TV_IRA                  , "integrated RA")
> -DEFTIMEVAR (TV_RELOAD               , "reload")
> +DEFTIMEVAR (TV_IRA                  , "integrated RA")
> +DEFTIMEVAR (TV_LRA                  , "LRA non-specific")
> +DEFTIMEVAR (TV_LRA_ELIMINATE        , "LRA virtuals eliminatenon")
> +DEFTIMEVAR (TV_LRA_INHERITANCE      , "LRA reload inheritance")
> +DEFTIMEVAR (TV_LRA_CREATE_LIVE_RANGES, "LRA create live ranges")
> +DEFTIMEVAR (TV_LRA_ASSIGN           , "LRA hard reg assignment")
> +DEFTIMEVAR (TV_LRA_COALESCE         , "LRA coalesce pseudo regs")
> +DEFTIMEVAR (TV_RELOAD               , "reload")
>  DEFTIMEVAR (TV_RELOAD_CSE_REGS       , "reload CSE regs")
> -DEFTIMEVAR (TV_GCSE_AFTER_RELOAD      , "load CSE after reload")
> +DEFTIMEVAR (TV_GCSE_AFTER_RELOAD     , "load CSE after reload")
>  DEFTIMEVAR (TV_REE                  , "ree")
>  DEFTIMEVAR (TV_THREAD_PROLOGUE_AND_EPILOGUE, "thread pro- & epilogue")
>  DEFTIMEVAR (TV_IFCVT2               , "if-conversion 2")
> Index: lra-coalesce.c
> ===================================================================
> --- lra-coalesce.c      (revision 191858)
> +++ lra-coalesce.c      (working copy)
> @@ -221,6 +221,8 @@ lra_coalesce (void)
>    bitmap_head involved_insns_bitmap, split_origin_bitmap;
>    bitmap_iterator bi;
>
> +  timevar_push (TV_LRA_COALESCE);
> +
>    if (lra_dump_file != NULL)
>      fprintf (lra_dump_file,
>              "\n********** Pseudos coalescing #%d: **********\n\n",
> @@ -371,5 +373,6 @@ lra_coalesce (void)
>    free (sorted_moves);
>    free (next_coalesced_pseudo);
>    free (first_coalesced_pseudo);
> +  timevar_pop (TV_LRA_COALESCE);
>    return coalesced_moves != 0;
>  }
> Index: lra-constraints.c
> ===================================================================
> --- lra-constraints.c   (revision 191858)
> +++ lra-constraints.c   (working copy)
> @@ -4859,6 +4859,8 @@ lra_inheritance (void)
>    basic_block bb, start_bb;
>    edge e;
>
> +  timevar_push (TV_LRA_INHERITANCE);
> +
>    lra_inheritance_iter++;
>    if (lra_dump_file != NULL)
>      fprintf (lra_dump_file, "\n********** Inheritance #%d: **********\n\n",
> @@ -4907,6 +4909,8 @@ lra_inheritance (void)
>    bitmap_clear (&live_regs);
>    bitmap_clear (&check_only_regs);
>    free (usage_insns);
> +
> +  timevar_pop (TV_LRA_INHERITANCE);
>  }
>
>  ^L
Steven Bosscher Sept. 30, 2012, 4:47 p.m. UTC | #2
On Sun, Sep 30, 2012 at 6:01 PM, Richard Guenther
<richard.guenther@gmail.com> wrote:
>>> ----------------------------------64-bit:-----------------------------------
>>> Reload:
>>> 503.26user 36.54system 30:16.62elapsed 29%CPU (0avgtext+0avgdata
>>> LRA:
>>> 598.70user 30.90system 27:26.92elapsed 38%CPU (0avgtext+0avgdata
>>
>> This is a ~19% slowdown
>
> I think both measurements run into swap (low CPU utilization), from the LRA
> numbers I'd say that LRA uses less memory but the timings are somewhat
> useless with the swapping.

Not on gcc17. It has almost no swap to begin with, and the max.
resident size is less than half of the machine's RAM (~7GB max.
resident vs 16GB machine RAM). It obviously has to do with memory
behavior, but it's probably more a matter of sheer size (>200,000 basic
blocks, >600,000 pseudos, basic blocks with livein/liveout sets
with a cardinality in the 10,000s, etc.), not swapping.


> It would be nice to see if LRA just has a larger constant cost factor
> compared to reload or if it has bigger complexity.

It is complexity in all typical measures of size (number of basic
blocks, number of insns, number of pseudos); that's easily verified
with artificial test cases.


>> The code size changes are impressive, but I think that this kind of
>> slowdown should be addressed before making LRA the default for any
>> target.
>
> Certainly if it shows bigger complexity, not sure for the constant factor
> (but for sure improvements are welcome).
>
> I suppose there is the option to revert back to reload by default for
> x86_64 as well for 4.8, right?  That is, do both reload and LRA
> co-exist for each target or is it a definite decision target by target?

Do you really want to have two such bug-sensitive paths through the compiler?

Ciao!
Steven
Andi Kleen Sept. 30, 2012, 4:50 p.m. UTC | #3
Richard Guenther <richard.guenther@gmail.com> writes:
>
> I think both measurements run into swap (low CPU utilization), from the LRA
> numbers I'd say that LRA uses less memory but the timings are somewhat
> useless with the swapping.

On Linux I would normally recommend using

/usr/bin/time -f 'real=%e user=%U system=%S share=%P%% maxrss=%M ins=%I
outs=%O mfaults=%R waits=%w'

instead of plain time. It gives you much more information
(especially maxrss and waits), so it's easier to tell whether
you have a memory problem or not.

-Andi
Steven Bosscher Sept. 30, 2012, 4:52 p.m. UTC | #4
Hi,


To look at it in yet another way:

>  integrated RA           : 189.34 (16%) usr
>  LRA non-specific        :  59.82 ( 5%) usr
>  LRA virtuals eliminatenon:  56.79 ( 5%) usr
>  LRA create live ranges  : 175.30 (15%) usr
>  LRA hard reg assignment : 130.85 (11%) usr

The IRA pass is slower than the next-slowest pass (tree PRE) by almost
a factor of 2.5.  Each of the individually-measured *phases* of LRA is
slower than the complete IRA *pass*. These 5 timevars together account
for 52% of all compile time.

IRA already has scalability problems; let's not add more of that with LRA.

Ciao!
Steven
Richard Biener Sept. 30, 2012, 5:03 p.m. UTC | #5
On Sun, Sep 30, 2012 at 6:52 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
> Hi,
>
>
> To look at it in yet another way:
>
>>  integrated RA           : 189.34 (16%) usr
>>  LRA non-specific        :  59.82 ( 5%) usr
>>  LRA virtuals eliminatenon:  56.79 ( 5%) usr
>>  LRA create live ranges  : 175.30 (15%) usr
>>  LRA hard reg assignment : 130.85 (11%) usr
>
> The IRA pass is slower than the next-slowest pass (tree PRA) by almost
> a factor 2.5.  Each of the individually-measured *phases* of LRA is
> slower than the complete IRA *pass*. These 5 timevars together make up
> for 52% of all compile time.

That figure indeed makes IRA + LRA look bad.  Did you by chance identify
anything obvious that can be done to improve the situation?

Thanks,
Richard.

> IRA already has scalability problems, let's not add more of that with LRA.
>
> Ciao!
> Steven
Steven Bosscher Sept. 30, 2012, 8:42 p.m. UTC | #6
On Sun, Sep 30, 2012 at 7:03 PM, Richard Guenther
<richard.guenther@gmail.com> wrote:
> On Sun, Sep 30, 2012 at 6:52 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
>> Hi,
>>
>>
>> To look at it in yet another way:
>>
>>>  integrated RA           : 189.34 (16%) usr
>>>  LRA non-specific        :  59.82 ( 5%) usr
>>>  LRA virtuals eliminatenon:  56.79 ( 5%) usr
>>>  LRA create live ranges  : 175.30 (15%) usr
>>>  LRA hard reg assignment : 130.85 (11%) usr
>>
>> The IRA pass is slower than the next-slowest pass (tree PRA) by almost
>> a factor 2.5.  Each of the individually-measured *phases* of LRA is
>> slower than the complete IRA *pass*. These 5 timevars together make up
>> for 52% of all compile time.
>
> That figure indeed makes IRA + LRA look bad.  Did you by chance identify
> anything obvious that can be done to improve the situation?

Not really. It was what I was looking/hoping for with the multiple
timevars, but no cheese.

Ciao!
Steven
Vladimir Makarov Sept. 30, 2012, 10:50 p.m. UTC | #7
On 12-09-30 1:03 PM, Richard Guenther wrote:
> On Sun, Sep 30, 2012 at 6:52 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
>> Hi,
>>
>>
>> To look at it in yet another way:
>>
>>>   integrated RA           : 189.34 (16%) usr
>>>   LRA non-specific        :  59.82 ( 5%) usr
>>>   LRA virtuals eliminatenon:  56.79 ( 5%) usr
>>>   LRA create live ranges  : 175.30 (15%) usr
>>>   LRA hard reg assignment : 130.85 (11%) usr
>> The IRA pass is slower than the next-slowest pass (tree PRA) by almost
>> a factor 2.5.  Each of the individually-measured *phases* of LRA is
>> slower than the complete IRA *pass*. These 5 timevars together make up
>> for 52% of all compile time.
> That figure indeed makes IRA + LRA look bad.  Did you by chance identify
> anything obvious that can be done to improve the situation?
>
>
   As I wrote, I don't see that LRA has a problem right now, because even 
on an 8GB machine GCC with LRA is 10% faster than GCC with reload from a 
real-time point of view (not to mention that LRA generates 15% smaller 
code).  And real time is what really matters for users.

   But I think that LRA's cpu time problem for this test can be fixed, 
though I don't think I can fix it within 2 weeks.  So if people believe that 
the current LRA behaviour on this PR is a stopper to including it into gcc4.8, 
then we should postpone its inclusion until gcc4.9, when I hope to fix it.

   As for IRA, IRA uses the Chaitin-Briggs algorithm, which scales worse than 
most other optimizations.  So the bigger the test, the bigger the share of IRA 
in compilation time.  I don't believe that somebody can achieve better 
code using other, faster RA algorithms.  LLVM has no such problem because 
even its new RA (a big improvement for llvm3.0) is not based on the CB 
algorithm.  It is still based on a modification of linear-scan RA.  It 
would be interesting to check how other compilers behave on this test.  
The Intel one is particularly interesting (but I have doubts that it 
will do well, because I have seen programs where the Intel compiler with 
optimizations struggled for more than 40 mins on a file compilation).

   Still, we can improve IRA behaviour with simple solutions, like using a 
fast algorithm (the one currently used for -O0) for huge functions, or by 
dividing a function into smaller regions (but that is hard to 
implement, and it will not work well for tests where most pseudos have 
very long live ranges).  I will work on it when I am less busy.

   About 14 years ago, in my Cygnus time, I worked on a problem from a 
customer.  GCC was not able to compile a big program at that time. 
Fixing GCC would have required a lot of effort.  In the end, the customer 
modified its code to generate smaller functions and the problem was gone.  I 
mean that we could spend a lot of effort fixing corner cases while ignoring 
improvements for the majority of users.  But it seems to me that I'll have 
to work on these PRs.
Vladimir Makarov Sept. 30, 2012, 10:55 p.m. UTC | #8
On 12-09-30 4:42 PM, Steven Bosscher wrote:
> On Sun, Sep 30, 2012 at 7:03 PM, Richard Guenther
> <richard.guenther@gmail.com> wrote:
>> On Sun, Sep 30, 2012 at 6:52 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
>>> Hi,
>>>
>>>
>>> To look at it in yet another way:
>>>
>>>>   integrated RA           : 189.34 (16%) usr
>>>>   LRA non-specific        :  59.82 ( 5%) usr
>>>>   LRA virtuals eliminatenon:  56.79 ( 5%) usr
>>>>   LRA create live ranges  : 175.30 (15%) usr
>>>>   LRA hard reg assignment : 130.85 (11%) usr
>>> The IRA pass is slower than the next-slowest pass (tree PRA) by almost
>>> a factor 2.5.  Each of the individually-measured *phases* of LRA is
>>> slower than the complete IRA *pass*. These 5 timevars together make up
>>> for 52% of all compile time.
>> That figure indeed makes IRA + LRA look bad.  Did you by chance identify
>> anything obvious that can be done to improve the situation?
> Not really. It was what I was looking/hoping for with the multiple
> timevars, but no cheese.
>
I spent a lot of time speeding up the LRA code, so I don't think there is a 
simple solution.  The problem could be solved by using simpler 
algorithms, at the cost of generating worse code.  It is not one 
week's work, even for me.
Steven Bosscher Sept. 30, 2012, 11:11 p.m. UTC | #9
On Mon, Oct 1, 2012 at 12:44 AM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>   Actually, I don't see that there is a problem with LRA right now.  I think we
> should first solve the whole-compiler memory footprint problem for this
> test, because cpu utilization is very small for this test.  On my machine
> with 8GB, the maximal resident space reaches almost 8GB.

Sure. But note that up to IRA, the max. resident memory size of the
test case is "only" 3.6 GB. IRA/reload allocate more than 4GB,
doubling the footprint. If you want to solve that first, that'd be
great of course...

Ciao!
Steven
Steven Bosscher Sept. 30, 2012, 11:15 p.m. UTC | #10
On Mon, Oct 1, 2012 at 12:50 AM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>   As I wrote, I don't see that LRA has a problem right now because even on
> 8GB machine, GCC with LRA is 10% faster than GCC with reload with real time
> point of view (not saying that LRA generates 15% smaller code).  And real
> time is what really matters for users.

For me, those compile times I reported *are* real times.

But you are right that the test case is a bit extreme. Before GCC 4.8
other parts of the compiler also choked on it. Still, the test case
comes from real user code (a combination of the Eigen library with MPFR),
and it shows scalability problems in LRA (and IRA) that one can't just
"explain away" with an "RA is just expensive" claim. The test case for
PR26854 is Brad Lucier's Scheme interpreter, which is also real user
code.

FWIW, I had actually expected IRA to do extremely well on this test case,
because IRA is supposed to be a regional allocator and I had expected
that would help for scalability. But most of the region data
structures in IRA are designed to hold whole functions (e.g. several
per-region arrays of size max_reg_num / max_insn_uid / ...) and that
appears to be a problem for IRA's memory footprint. Perhaps something
similar is going on with LRA?

Ciao!
Steven
Vladimir Makarov Oct. 1, 2012, 12:13 a.m. UTC | #11
On 12-09-30 7:15 PM, Steven Bosscher wrote:
> On Mon, Oct 1, 2012 at 12:50 AM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>>    As I wrote, I don't see that LRA has a problem right now because even on
>> 8GB machine, GCC with LRA is 10% faster than GCC with reload with real time
>> point of view (not saying that LRA generates 15% smaller code).  And real
>> time is what really matters for users.
> For me, those compile times I reported *are* real times.
Sorry, I missed your data (it was buried in calculations of percentages 
from my data).  I saw that on my machine maxrss was 8GB, with a lot of 
page faults and low CPU utilization (about 30%).  I guess you used 
a 16GB machine, and 16GB is enough for this test.  Ok, I'll work on this 
problem, although I think it will take some time to solve it or make it 
more tolerable.  Still, I think it is not right to pay attention only 
to compilation time; see my reasons below.
> But you are right that the test case is a bit extreme. Before GCC 4.8
> other parts of the compiler also choked on it. Still, the test case
> comes from real user's code (combination of Eigen library with MPFR),
> and it shows scalability problems in LRA (and IRA) that one can't just
> "explain away" with an "RA is just expensive" claim. The test case for
> PR26854 is Brad Lucier's Scheme interpreter, that is also real user's
> code.
>
>
    I have written a few interpreters myself, so I looked at the code of the
Scheme interpreter.

    It seems to me it is computer-generated code.  So the first 
solution would be to generate a few functions instead of one.  Generating a 
huge function is not wise for performance-critical applications, because 
for these corner cases compilers fall back to simpler, faster optimization 
algorithms that generate worse code.  By the way, I could solve the 
compilation time problem by using simpler algorithms, harming 
performance.  The author would be happy with the compilation speed but 
disappointed by, say, a 10% slower interpreter.  I don't think that solves 
the problem; it creates a bigger one.  It seems to me I 
have to do this :)  Or, if I told him that by waiting 40% more time he could 
get 15% smaller code, I guess he would prefer that.  Of course it is an 
interesting problem to speed up the compiler, but we are not looking at the 
whole picture when we solve compilation time by hurting performance.

   Scalability problems are mostly a problem of computer-generated programs, 
and usually there is a simpler and better solution: generating 
smaller functions.

   By the way, I also found that the author uses label values.  It is 
not the best solution, although there are still a lot of articles 
recommending it.  One switch is faster on modern computers.  Anton Ertl 
proposed using several switches (one switch after each interpreter 
insn) for better branch prediction, but I found this works worse than the 
one-switch solution, at least for my interpreters.
Jakub Jelinek Oct. 1, 2012, 5:48 a.m. UTC | #12
On Sun, Sep 30, 2012 at 06:50:50PM -0400, Vladimir Makarov wrote:
>   But I think that LRA cpu time problem for this test can be fixed.
> But I don't think I can fix it for 2 weeks.  So if people believe
> that current LRA behaviour on this PR is a stopper to include it
> into gcc4.8 than we should postpone its inclusion until gcc4.9 when
> I hope to fix it.

I think this testcase shouldn't be a show stopper for LRA inclusion into
4.8, but something to look at for stage3.

I think a lot of GCC passes have scalability issues on that testcase,
that is why it must be compiled with -O1 and not higher optimization
options, so perhaps it would be enough to choose a faster algorithm
generating worse code for the huge functions and -O1.

And I agree it is primarily a bug in the generator that it creates such huge
functions, which can't perform very well.

	Jakub
Jakub Jelinek Oct. 1, 2012, 7:16 a.m. UTC | #13
On Mon, Oct 01, 2012 at 08:47:13AM +0200, Steven Bosscher wrote:
> The test case compiles just fine at -O2, only VRP has trouble with it.
> Let's try to stick with facts, not speculation.

I was talking about the other PR, PR26854, which, from what I remember when
trying it myself, and from the latest -O3 time reports for the reduced
testcase, shows that IRA/reload aren't very significant there (for -O3, IRA
takes ~6% and reload ~1%).

> I've put a lot of hard work into it to fix almost all scalability problems
> on this PR for gcc 4.8. LRA undoes all of that work. I understand it is
> painful for some people to hear, but I remain of opinion that LRA cannot be
> considered "ready" if it scales so much worse than everything else in the
> compiler.

Judging the whole implementation from just these corner cases, and not from
how it performs on other testcases (SPEC, rebuilding a distro, ...), is IMHO
not the right thing. If Vlad thinks the corner cases are fixable during
stage3, we should allow LRA in; worst case, it can be disabled by default
even for i?86/x86_64.

	Jakub
Steven Bosscher Oct. 1, 2012, 8:18 a.m. UTC | #14
[ Sorry for re-send, it seems that mobile gmail sends text/html and
the sourceware mailer daemon rejects that. ]

On Monday, October 1, 2012, Jakub Jelinek <jakub@redhat.com> wrote:
> On Sun, Sep 30, 2012 at 06:50:50PM -0400, Vladimir Makarov wrote:
>
> I think this testcase shouldn't be a show stopper for LRA inclusion into
> 4.8, but something to look at for stage3.
>
> I think a lot of GCC passes have scalability issues on that testcase,
> that is why it must be compiled with -O1 and not higher optimization
> options,

The test case compiles just fine at -O2, only VRP has trouble with it. Let's
try to stick with facts, not speculation.

And the test case is not generated, it is the Eigen template library applied
to mpfr.

I've put a lot of hard work into fixing almost all scalability problems
on this PR for gcc 4.8. LRA undoes all of that work. I understand it is
painful for some people to hear, but I remain of the opinion that LRA cannot
be considered "ready" if it scales so much worse than everything else in the
compiler.

Ciao!
Steven
Richard Biener Oct. 1, 2012, 9:52 a.m. UTC | #15
On Mon, Oct 1, 2012 at 7:48 AM, Jakub Jelinek <jakub@redhat.com> wrote:
> On Sun, Sep 30, 2012 at 06:50:50PM -0400, Vladimir Makarov wrote:
>>   But I think that LRA cpu time problem for this test can be fixed.
>> But I don't think I can fix it for 2 weeks.  So if people believe
>> that current LRA behaviour on this PR is a stopper to include it
>> into gcc4.8 than we should postpone its inclusion until gcc4.9 when
>> I hope to fix it.
>
> I think this testcase shouldn't be a show stopper for LRA inclusion into
> 4.8, but something to look at for stage3.

I agree here.

> I think a lot of GCC passes have scalability issues on that testcase,
> that is why it must be compiled with -O1 and not higher optimization
> options, so perhaps it would be enough to choose a faster algorithm
> generating worse code for the huge functions and -O1.

Yes, we spent quite some time making basic optimization work for
insane testcases (basically avoiding quadratic or bigger complexity in any
IL size variable (number of basic-blocks, edges, instructions, pseudos, etc.)).

And indeed if you use -O2 we do have issues with existing passes (and even
at -O1 points-to analysis can wreck things, or even profile guessing - see
existing bugs for that).  Basically I would tune -O1 towards being able to
compile and optimize insane testcases with memory and compile-time
requirements that are linear in any of the above complexity measures.

Thus, falling back to the -O0 register allocation strategy at certain
thresholds for the above complexity measures is fine (existing IRA,
for example, has really bad scaling in the number of loops in the function,
but you can tweak flags to make it not consider that).

> And I agree it is primarily a bug in the generator that it creates such huge
> functions, which can't perform very well.

Well, not for -O2+, yes, but at least we should try(!) hard.

Thanks,
Richard.

>         Jakub
Steven Bosscher Oct. 1, 2012, 9:55 a.m. UTC | #16
On Mon, Oct 1, 2012 at 9:16 AM, Jakub Jelinek <jakub@redhat.com> wrote:
> On Mon, Oct 01, 2012 at 08:47:13AM +0200, Steven Bosscher wrote:
>> The test case compiles just fine at -O2, only VRP has trouble with it.
>> Let's try to stick with facts, not speculation.
>
> I was talking about the other PR, PR26854, where, from what I remember of
> trying it myself and from even the latest -O3 time reports on the reduced
> testcase, IRA/reload aren't very significant (for -O3 IRA
> takes ~6% and reload ~1%).

OK, but what does LRA take? Vlad's numbers for 64-bit, looking at user time:

Reload: 503.26user
LRA: 598.70user

So if reload is ~1% of 503s then that'd be ~5s. And the only
difference between the two timings is LRA instead of reload, so LRA
takes ~100s, or 20%.


>> I've put a lot of hard work into fixing almost all scalability problems
>> on this PR for gcc 4.8. LRA undoes all of that work. I understand it is
>> painful for some people to hear, but I remain of the opinion that LRA cannot be
>> considered "ready" if it scales so much worse than everything else in the
>> compiler.
>
> Judging the whole implementation from just these corner cases and not from how it
> performs on other testcases (SPEC, rebuilding a distro, ...) is IMHO not the
> right thing. If Vlad thinks the corner cases are fixable during stage3, IMHO
> we should allow LRA in; worst case, it can be disabled by default even for
> i?86/x86_64.

If I were asked to give a guest lecture on compiler construction (to be
clear: I'd be highly surprised if anyone asked me to, but for the sake
of argument, bear with me ;-) ), then I'd start by stating that
algorithms should be designed for the corner cases, because the
devil is always in the details.

But more to the point regarding stage3: it will already be a busy
stage3 if the other, probably even more significant, scalability
issues have to be fixed, i.e. var-tracking and macro expansion. And
there's also the symtab work, which is bound to cause some interesting
bugs still to be shaken out. With all due respect to Vlad (and,
seriously, hats off to him for tackling reload and coming up with a
much easier to understand and nicely phase-split replacement), I just
don't believe that these scalability issues can be addressed in
stage3.

It's now very late stage1, and LRA was originally scheduled for GCC
4.9. Why the sudden hurry? Did I miss the 2 minute warning?

Ciao!
Steven
Steven Bosscher Oct. 1, 2012, 10:01 a.m. UTC | #17
On Mon, Oct 1, 2012 at 11:52 AM, Richard Guenther
<richard.guenther@gmail.com> wrote:
>> I think this testcase shouldn't be a show stopper for LRA inclusion into
>> 4.8, but something to look at for stage3.
>
> I agree here.

I would also agree if it were not for the fact that IRA is already a
scalability bottle-neck and that has been known for a long time, too.
I have no confidence at all that if LRA goes in now, these scalability
problems will be solved in stage3 or at any next release cycle. It's
always the same thing with GCC: Once a patch is in, everyone moves on
to the next fancy new thing dropping the not-quite-broken but also
not-quite-working things on the floor.

Ciao!
Steven
Steven Bosscher Oct. 1, 2012, 10:10 a.m. UTC | #18
On Sun, Sep 30, 2012 at 7:03 PM, Richard Guenther
<richard.guenther@gmail.com> wrote:
> On Sun, Sep 30, 2012 at 6:52 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
>> Hi,
>>
>>
>> To look at it in yet another way:
>>
>>>  integrated RA           : 189.34 (16%) usr
>>>  LRA non-specific        :  59.82 ( 5%) usr
>>>  LRA virtuals elimination :  56.79 ( 5%) usr
>>>  LRA create live ranges  : 175.30 (15%) usr
>>>  LRA hard reg assignment : 130.85 (11%) usr
>>
>> The IRA pass is slower than the next-slowest pass (tree PRE) by almost
>> a factor of 2.5.  Each of the individually-measured *phases* of LRA is
>> slower than the complete IRA *pass*. These 5 timevars together make up
>> 52% of all compile time.
>
> That figure indeed makes IRA + LRA look bad.  Did you by chance identify
> anything obvious that can be done to improve the situation?

The "LRA create live ranges" time is mostly spent in merge_live_ranges
walking lists. Perhaps the live ranges could be represented better with
a sorted VEC, so that the start and finish points can be looked up in
log time instead of linear time.

Ciao!
Steven
Jakub Jelinek Oct. 1, 2012, 10:14 a.m. UTC | #19
On Mon, Oct 01, 2012 at 12:01:36PM +0200, Steven Bosscher wrote:
> I would also agree if it were not for the fact that IRA is already a
> scalability bottle-neck and that has been known for a long time, too.
> I have no confidence at all that if LRA goes in now, these scalability
> problems will be solved in stage3 or at any next release cycle. It's
> always the same thing with GCC: Once a patch is in, everyone moves on
> to the next fancy new thing dropping the not-quite-broken but also
> not-quite-working things on the floor.

If we open a P1 bug for it for 4.8, then it will need to be resolved some
way before branching.  I think Vlad is committed to bugfixing LRA, after
all the intent is for 4.9 to enable it on more (all?) targets, and all the
bugfixing and scalability work on LRA is needed for that anyway.

	Jakub
Steven Bosscher Oct. 1, 2012, 10:30 a.m. UTC | #20
On Mon, Oct 1, 2012 at 12:14 PM, Jakub Jelinek <jakub@redhat.com> wrote:
> On Mon, Oct 01, 2012 at 12:01:36PM +0200, Steven Bosscher wrote:
>> I would also agree if it were not for the fact that IRA is already a
>> scalability bottle-neck and that has been known for a long time, too.
>> I have no confidence at all that if LRA goes in now, these scalability
>> problems will be solved in stage3 or at any next release cycle. It's
>> always the same thing with GCC: Once a patch is in, everyone moves on
>> to the next fancy new thing dropping the not-quite-broken but also
>> not-quite-working things on the floor.
>
> If we open a P1 bug for it for 4.8, then it will need to be resolved some
> way before branching.  I think Vlad is committed to bugfixing LRA, after
> all the intent is for 4.9 to enable it on more (all?) targets, and all the
> bugfixing and scalability work on LRA is needed for that anyway.

I don't question Vlad's commitment, but the last time I met him, he
only had two hands just like everyone else.

But I've made my point and it seems that I'm not voting with the majority.

Ciao!
Steven
Bernd Schmidt Oct. 1, 2012, 10:30 a.m. UTC | #21
On 10/01/2012 12:14 PM, Jakub Jelinek wrote:
> On Mon, Oct 01, 2012 at 12:01:36PM +0200, Steven Bosscher wrote:
>> I would also agree if it were not for the fact that IRA is already a
>> scalability bottle-neck and that has been known for a long time, too.
>> I have no confidence at all that if LRA goes in now, these scalability
>> problems will be solved in stage3 or at any next release cycle. It's
>> always the same thing with GCC: Once a patch is in, everyone moves on
>> to the next fancy new thing dropping the not-quite-broken but also
>> not-quite-working things on the floor.
> 
> If we open a P1 bug for it for 4.8, then it will need to be resolved some
> way before branching.  I think Vlad is committed to bugfixing LRA, after
> all the intent is for 4.9 to enable it on more (all?) targets, and all the
> bugfixing and scalability work on LRA is needed for that anyway.

Why can't this be done on the branch? We've made the mistake of rushing
things into mainline too early a few times before; we should have
learned by now. And adding more half-transitions is not something we
really want either.


Bernd
Steven Bosscher Oct. 1, 2012, 11:19 a.m. UTC | #22
On Mon, Oct 1, 2012 at 12:10 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
> The " LRA create live range" time is mostly spent in merge_live_ranges
> walking lists.

Hmm no, that's just gcc17's ancient debugger telling me lies.
lra_live_range_in_p is not even used.

/me upgrades to something newer than gdb 6.8...

Ciao!
Steven
Steven Bosscher Oct. 1, 2012, 1:03 p.m. UTC | #23
On Sat, Sep 29, 2012 at 10:26 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
>  LRA create live ranges  : 175.30 (15%) usr   2.14 (13%) sys 177.44
> (15%) wall    2761 kB ( 0%) ggc

I've tried to split this up a bit more:

process_bb_lives ~50%
create_start_finish_chains ~25%
remove_some_program_points_and_update_live_ranges ~25%

The latter two have a common structure with loops that look like this:

  for (i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
    {
      for (r = lra_reg_info[i].live_ranges; r != NULL; r = r->next)

Perhaps it's possible to do some of the work of compress_live_ranges
during process_bb_lives, to create shorter live_ranges chains.

Also, maybe doing something other than a linked list of live_ranges
would help (unfortunately that's not a trivial change, it seems,
because the lra_live_range_t type is used everywhere and there are no
iterators or anything abstracted out like that -- just chain
walks...).

Still it does seem to me that a sorted VEC of lra_live_range objects
probably would speed things up. The question is of course how much... :-)

Ciao!
Steven
Vladimir Makarov Oct. 1, 2012, 5:51 p.m. UTC | #24
On 12-10-01 6:30 AM, Bernd Schmidt wrote:
> On 10/01/2012 12:14 PM, Jakub Jelinek wrote:
>> On Mon, Oct 01, 2012 at 12:01:36PM +0200, Steven Bosscher wrote:
>>> I would also agree if it were not for the fact that IRA is already a
>>> scalability bottle-neck and that has been known for a long time, too.
>>> I have no confidence at all that if LRA goes in now, these scalability
>>> problems will be solved in stage3 or at any next release cycle. It's
>>> always the same thing with GCC: Once a patch is in, everyone moves on
>>> to the next fancy new thing dropping the not-quite-broken but also
>>> not-quite-working things on the floor.
>> If we open a P1 bug for it for 4.8, then it will need to be resolved some
>> way before branching.  I think Vlad is committed to bugfixing LRA, after
>> all the intent is for 4.9 to enable it on more (all?) targets, and all the
>> bugfixing and scalability work on LRA is needed for that anyway.
> Why can't this be done on the branch? We've made the mistake of rushing
> things into mainline too early a few times before, we should have
> learned by now. And adding more half transitions is not something we
> really want either.
>
I should clearly state that the transition will not happen in a short
time because of the complexity of the task.  I believe that LRA will
coexist with reload for 1-2 releases.  I have only ported LRA to 9 major
targets so far.  Completing the transition will also depend on secondary
target maintainers, because I alone cannot port LRA to all supported
targets.  This was discussed with a lot of people at the 2012 GNU Tools
Cauldron.  Maintaining LRA on the branch is a big burden; even x86-64 is
sometimes broken after a merge with the trunk.

When I proposed merging LRA into gcc 4.8, I had in mind that:
   o moving most changes from the LRA branch would ease LRA maintenance on
the branch and I'd have more time to work on other targets and problems.
   o the earlier we start the transition, the better it will be for LRA
because LRA on the trunk will have more feedback and better testing.

I chose x86/x86-64 for this because I am confident in this port.  On the
majority of tests, LRA generates faster, smaller code (even for these two
extreme tests it generates 15% smaller code) in less time.  IMO, the slow
compilation of the extreme tests is much less important than what I've
just mentioned.

But because I got clear objections from at least two people and no clear
support for the LRA inclusion (there were just no objections to including
it), I will not insist on the LRA merge now.

I believe in the importance of this work, as LLVM is catching up with GCC
on the RA front by implementing a new RA for LLVM 3.0.  I believe we
should get rid of reload as outdated, hard to maintain, and an obstacle
to implementing new RA optimizations.

In any case, submitting the patches was a good thing to do because I got
a lot of feedback.  I still appreciate any comments on the patches.
Ian Lance Taylor Oct. 1, 2012, 6:55 p.m. UTC | #25
On Mon, Oct 1, 2012 at 10:51 AM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>
> When I proposed merging LRA into gcc 4.8, I had in mind that:
>   o moving most changes from the LRA branch would ease LRA maintenance on the
> branch and I'd have more time to work on other targets and problems.
>   o the earlier we start the transition, the better it will be for LRA
> because LRA on the trunk will have more feedback and better testing.
>
> I chose x86/x86-64 for this because I am confident in this port.  On the
> majority of tests, LRA generates faster, smaller code (even for these two
> extreme tests it generates 15% smaller code) in less time.  IMO, the slow
> compilation of the extreme tests is much less important than what I've just
> mentioned.
>
> But because I got clear objections from at least two people and no clear
> support for the LRA inclusion (there were just no objections to including it),
> I will not insist on the LRA merge now.

I believe that we should proceed with the LRA merge as Vlad has
proposed, and treat the compilation time slowdowns on specific test
cases as bugs to be addressed.

Clearly these slowdowns are not good.  However, requiring significant
work like LRA to be better or equal to the current code in every
single case is making the perfect the enemy of the good.  We must
weigh the benefits and drawbacks, not require that there be no
drawbacks at all.  In this case I believe that the benefits of LRA
significantly outweigh the drawbacks.

Steven is correct in saying that there is a tendency to move on and
never address GCC bugs.  However, there is also a countervailing
tendency to fix GCC bugs.  Anyhow I'm certainly not saying that in all
cases it's OK to accept a merge with regressions; I'm saying that in
this specific case it is OK.

(I say all this based on Vlad's descriptions, I have not actually
looked at the patches.)

Ian
David Miller Oct. 1, 2012, 7:19 p.m. UTC | #26
From: Ian Lance Taylor <iant@google.com>
Date: Mon, 1 Oct 2012 11:55:56 -0700

> Steven is correct in saying that there is a tendency to move on and
> never address GCC bugs.  However, there is also a countervailing
> tendency to fix GCC bugs.  Anyhow I'm certainly not saying that in all
> cases it's OK to accept a merge with regressions; I'm saying that in
> this specific case it is OK.

I think it's more important in this case to recognize Steven's real
point, which is that for an identical situation (IRA), and with an
identical patch author, we had similar bugs.  They were promised to be
worked on, and yet some of those regressions are still very much with
us.

The likelihood of a repeat is therefore very real.

I really don't have a lot of confidence given what has happened in
the past.  I also don't understand what's so evil about sorting this
out on a branch.  It's the perfect carrot to get the compile time
regressions fixed.
Vladimir Makarov Oct. 1, 2012, 7:51 p.m. UTC | #27
On 10/01/2012 03:19 PM, David Miller wrote:
> From: Ian Lance Taylor <iant@google.com>
> Date: Mon, 1 Oct 2012 11:55:56 -0700
>
>> Steven is correct in saying that there is a tendency to move on and
>> never address GCC bugs.  However, there is also a countervailing
>> tendency to fix GCC bugs.  Anyhow I'm certainly not saying that in all
>> cases it's OK to accept a merge with regressions; I'm saying that in
>> this specific case it is OK.
> I think it's more important in this case to recognize Steven's real
> point, which is that for an identical situation (IRA), and with an
> identical patch author, we had similar bugs.  They were promised to be
> worked on, and yet some of those regressions are still very much with
> us.
That is not true.  I have worked on many compile-time regression bugs.  I
remember one serious degradation of compilation time on
all_cp2k_gfortran.f90.  I solved the problem and made IRA work faster
and generate much better code than the old RA.

http://blog.gmane.org/gmane.comp.gcc.patches/month=20080501/page=15

About the other two PRs mentioned by Steven:

PR26854: I worked on this bug back when IRA was still on the branch, and
again made GCC with IRA 5% faster on this test than GCC with the old RA.

PR 54146 is 3 months old.  There was a lot of work on other optimizations
before IRA became important; that happened only 2 months ago.  I have had
no time to work on it yet, but I am going to.

People sometimes see that RA takes a lot of compilation time, but that is
in the nature of RA.  I'd recommend first checking how the old RA
behaves before calling it a degradation.

And please, don't listen just one side.
> The likelihood of a repeat is therefore very real.
>
> I really don't have a lot of confidence given what has happened in
> the past.  I also don't understand what's so evil about sorting this
> out on a branch.  It's the perfect carrot to get the compile time
> regressions fixed.
Wrong assumptions result in wrong conclusions.
Steven Bosscher Oct. 1, 2012, 8:03 p.m. UTC | #28
On Mon, Oct 1, 2012 at 9:19 PM, David Miller <davem@davemloft.net> wrote:
> From: Ian Lance Taylor <iant@google.com>
> Date: Mon, 1 Oct 2012 11:55:56 -0700
>
>> Steven is correct in saying that there is a tendency to move on and
>> never address GCC bugs.  However, there is also a countervailing
>> tendency to fix GCC bugs.  Anyhow I'm certainly not saying that in all
>> cases it's OK to accept a merge with regressions; I'm saying that in
>> this specific case it is OK.
>
> I think it's more important in this case to recognize Steven's real
> point, which is that for an identical situation (IRA), and with an
> identical patch author, we had similar bugs.  They were promised to be
> worked on, and yet some of those regressions are still very much with
> us.

My point is not to single out Vlad here! I don't think this patch
author is any worse or better than the next one. There are enough other
examples, e.g. VRP is from other contributors and it has had a
few horrible pieces of code from the start that just don't get
addressed, or var-tracking, for which cleaning up a few serious compile
time problems will be a Big Job for stage3. It's the general pattern
that worries me.

Ciao!
Steven
Steven Bosscher Oct. 1, 2012, 8:24 p.m. UTC | #29
On Mon, Oct 1, 2012 at 9:51 PM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>> I think it's more important in this case to recognize Steven's real
>> point, which is that for an identical situation (IRA), and with an
>> identical patch author, we had similar bugs.  They were promised to be
>> worked on, and yet some of those regressions are still very much with
>> us.
>
> That is not true.  I have worked on many compile-time regression bugs.  I
> remember one serious degradation of compilation time on all_cp2k_gfortran.f90.
> I solved the problem and made IRA work faster and generate much better
> code than the old RA.
>
> http://blog.gmane.org/gmane.comp.gcc.patches/month=20080501/page=15
>
> About the other two PRs mentioned by Steven:
>
> PR26854: I worked on this bug back when IRA was still on the branch, and
> again made GCC with IRA 5% faster on this test than GCC with the old RA.
>
> PR 54146 is 3 months old.  There was a lot of work on other optimizations
> before IRA became important; that happened only 2 months ago.  I have had
> no time to work on it yet, but I am going to.

This is also not quite true; see PR37448, which shows the same problems
as the test case for PR54146.

I just think scalability is a very important issue. If some pass or
algorithm scales badly on some measure, then users _will_ run into that
at some point and report bugs about it (if you're lucky enough to have
a user patient enough to sit out the long compile time :-) ). Also,
good scalability opens up opportunities. For example, historically GCC
has been conservative in its inlining heuristics to avoid compile time
explosions. I think it's better to address the causes of such
explosions and to avoid introducing new potential bottlenecks.


> People sometimes see that RA takes a lot of compilation time but it is in
> the nature of RA.  I'd recommend first to check how the old RA behaves and
> then call it a degradation.

There's no question that RA is one of the hardest problems the
compiler has to solve, being NP-complete and all that. I like LRA's
iterative approach, but if you know you're going to solve a hard
problem with a number of potentially expensive iterations, there's even
more reason to make scalability a design goal!

As I said earlier in this thread, I was really looking forward to IRA
at the time you worked on it, because it is supposed to be a regional
allocator and I had expected that to mean it could, well, allocate
per-region which is usually very helpful for scalability (partition
your function and insert compensation code on strategically picked
region boundaries). But that's not what IRA has turned out to be.
(Instead, its regional nature is one of the reasons for its
scalability problems.)  IRA is certainly not worse than old global.c
in very many ways, and LRA looks like a well thought-through and
welcome replacement of old reload. But scalability is an issue in the
design of IRA and LRA looks to be the same in that regard.

Ciao!
Steven
Vladimir Makarov Oct. 2, 2012, 1:11 a.m. UTC | #30
On 12-10-01 4:24 PM, Steven Bosscher wrote:
> On Mon, Oct 1, 2012 at 9:51 PM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>>> I think it's more important in this case to recognize Steven's real
>>> point, which is that for an identical situation (IRA), and with an
>>> identical patch author, we had similar bugs.  They were promised to be
>>> worked on, and yet some of those regressions are still very much with
>>> us.
>> That is not true.  I have worked on many compile-time regression bugs.  I
>> remember one serious degradation of compilation time on all_cp2k_gfortran.f90.
>> I solved the problem and made IRA work faster and generate much better
>> code than the old RA.
>>
>> http://blog.gmane.org/gmane.comp.gcc.patches/month=20080501/page=15
>>
>> About the other two PRs mentioned by Steven:
>>
>> PR26854: I worked on this bug back when IRA was still on the branch, and
>> again made GCC with IRA 5% faster on this test than GCC with the old RA.
>>
>> PR 54146 is 3 months old.  There was a lot of work on other optimizations
>> before IRA became important; that happened only 2 months ago.  I have had
>> no time to work on it yet, but I am going to.
> This is also not quite true, see PR37448, which shows the problems as
> the test case for PR54146.
I worked on this PR too and made some progress, but it was much less
than on the other PRs.

Sometimes I think that I should have maintained the old RA in order to
compare against it and show that IRA is still a step ahead.
Unfortunately, comparison with old releases makes no sense now because
of more aggressive optimizations (inlining).
> I just think scalability is a very important issue. If some pass or
> algorithm scales badly on some measure, then users _will_ run into that
> at some point and report bugs about it (if you're lucky enough to have
> a user patient enough to sit out the long compile time :-) ). Also,
> good scalability opens up opportunities. For example, historically GCC
> has been conservative in its inlining heuristics to avoid compile time
> explosions. I think it's better to address the causes of such
> explosions and to avoid introducing new potential bottlenecks.
As I wrote, scalability is sometimes misleading, as in the case of the PR
(a Scheme interpreter) where 30% more compilation time with LRA
translates into a 15% decrease in code size and most probably better
performance.  We could achieve scalability with worse generated code,
but that is not what the user expects, and, even worse, he would not
know about it.  That is an even bigger problem IMO.

Ideally, we should have more scalable algorithms as a fallback, but we
should warn the programmer that algorithms generating worse code are
being used, and that he could achieve better performance by dividing a
huge function into several smaller ones.
>
> People sometimes see that RA takes a lot of compilation time, but that is in
> the nature of RA.  I'd recommend first checking how the old RA behaves
> before calling it a degradation.
> There's no question that RA is one of the hardest problems the
> compiler has to solve, being NP-complete and all that. I like LRA's
> iterative approach, but if you know you're going to solve a hard
> problem with a number of potentially expensive iterations, there's even
> more reason to make scalability a design goal!
When I designed IRA, I kept this goal in mind too, although my first
priority was performance.  The old RA was OK when register pressure
was not high.  I knew aggressive inlining and LTO were coming, and my
goal was to design an RA generating good code for bigger programs with
much higher register pressure, where the old RA's drawbacks would be
obvious.
> As I said earlier in this thread, I was really looking forward to IRA
> at the time you worked on it, because it is supposed to be a regional
> allocator and I had expected that to mean it could, well, allocate
> per-region which is usually very helpful for scalability (partition
> your function and insert compensation code on strategically picked
> region boundaries). But that's not what IRA has turned out to be.
> (Instead, its regional nature is one of the reasons for its
> scalability problems.)  IRA is certainly not worse than old global.c
> in very many ways, and LRA looks like a well thought-through and
> welcome replacement of old reload. But scalability is an issue in the
> design of IRA and LRA looks to be the same in that regard.
A regional allocator means that allocation is done taking mostly a region
into account, to achieve a better allocation.  But a good regional RA
(e.g. Callahan-Koblenz or Intel's fusion-based RA) takes interactions
with other regions into account too, and such allocators might be even
slower than non-regional ones.  Although I should say that, in my
impression, the CK allocator (I tried and implemented it about 6 years
ago) probably scales better than IRA, but it generates worse code.
Vladimir Makarov Oct. 2, 2012, 1:14 a.m. UTC | #31
On 10/01/2012 09:03 AM, Steven Bosscher wrote:
> On Sat, Sep 29, 2012 at 10:26 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
>>   LRA create live ranges  : 175.30 (15%) usr   2.14 (13%) sys 177.44
>> (15%) wall    2761 kB ( 0%) ggc
> I've tried to split this up a bit more:
>
> process_bb_lives ~50%
> create_start_finish_chains ~25%
> remove_some_program_points_and_update_live_ranges ~25%
>
> The latter two have a common structure with loops that look like this:
>
>    for (i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
>      {
>        for (r = lra_reg_info[i].live_ranges; r != NULL; r = r->next)
>
> Perhaps it's possible to do some of the work of compress_live_ranges
> during process_bb_lives, to create shorter live_ranges chains.
   It is already done.  Moreover, program points are compressed to the
minimum that still guarantees correct conflict resolution.  For example,
if only 2 pseudos live in 1..10 and 2..14, they will actually end up
with the same range, e.g. 1..1.

   Analogous live ranges are used in IRA as an intermediate step to build
a conflict graph.  Actually, the first approach was to use the IRA code
to assign hard registers to pseudos (e.g. Jeff Law tried this approach),
but it was rejected as requiring much more compilation time.  In a way,
one can look at the assignment in LRA as a compromise between quality
(which could be achieved through repeatedly building conflict graphs
and using graph coloring) and compilation speed.
> Also, maybe doing something other than a linked list of live_ranges
> would help (unfortunately that's not a trivial change, it seems,
> because the lra_live_range_t type is used everywhere and there are no
> iterators or anything abstracted out like that -- just chain
> walks...).
>
> Still it does seem to me that a sorted VEC of lra_live_range objects
> probably would speed things up. The question is of course how much... :-)
>
   My experience shows that these lists usually have 1-2 elements,
although in this case there are pseudos with a huge number of elements
(hundreds).  I tried -fweb for this test because it can decrease the
number of elements, but GCC (I don't know what pass) scales even worse:
after 20 min of waiting, when virtual memory reached 20GB, I stopped it.

    I guess the same speed as reload, or close to it, on this test can
be achieved by implementing other, simpler algorithms that generate
worse code.  I need some time to design and implement them.
Jeff Law Oct. 2, 2012, 4:22 a.m. UTC | #32
On 10/01/2012 07:14 PM, Vladimir Makarov wrote:
>
>    Analogous live ranges are used in IRA as an intermediate step to build a
> conflict graph.  Actually, the first approach was to use the IRA code to
> assign hard registers to pseudos (e.g. Jeff Law tried this approach),
> but it was rejected as requiring much more compilation time.  In a way,
> one can look at the assignment in LRA as a compromise between
> quality (which could be achieved through repeatedly building conflict
> graphs and using graph coloring) and compilation speed.
Not only was it slow (iterating IRA); guaranteeing termination was a
major problem.  There are some algorithmic games that have to be played
(they're at least discussed in the literature, though not under the
heading of termination), and there are some issues specific to the IRA
implementation which make ensuring termination difficult.

I got nearly as good results by conservatively updating the conflicts
after splitting ranges and (ab)using IRA's reload hooks to give the new
pseudos for the split ranges a chance to be allocated again.

The biggest problem with that approach was getting the costing right for 
the new pseudos.  That requires running a fair amount of IRA a second 
time.  I'd still like to return to some of the ideas from that work as I 
think some of the bits are still relevant in the IRA+LRA world.

>    My experience shows that these lists usually have 1-2 elements.
That's been my experience as well.  The vast majority of the time the 
range lists are very small.

Jeff
Steven Bosscher Oct. 2, 2012, 7:28 a.m. UTC | #33
On Tue, Oct 2, 2012 at 3:14 AM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>   My experience shows that these lists usually have 1-2 elements, although in
> this case there are pseudos with a huge number of elements (hundreds).  I tried
> -fweb for this test because it can decrease the number of elements, but GCC (I
> don't know what pass) scales even worse: after 20 min of waiting, when
> virtual memory reached 20GB, I stopped it.

Ouch :-)

The webizer itself never even runs, the compiler blows up somewhere
during the df_analyze call from web_main. The issue here is probably
in the DF_UD_CHAIN problem or in the DF_RD problem.

Ciao!
Steven
Paolo Bonzini Oct. 2, 2012, 8:29 a.m. UTC | #34
Il 02/10/2012 09:28, Steven Bosscher ha scritto:
>>   My experience shows that these lists usually have 1-2 elements. Although in
>> > this case, there are pseudos with a huge number of elements (hundreds).  I tried
>> > -fweb for this test because it can decrease the number of elements but GCC (I
>> > don't know what pass) scales even worse: after 20 min of waiting and when
>> > virtual memory reached 20GB I stopped it.
> Ouch :-)
> 
> The webizer itself never even runs, the compiler blows up somewhere
> during the df_analyze call from web_main. The issue here is probably
> in the DF_UD_CHAIN problem or in the DF_RD problem.

/me is glad to have fixed fwprop when his GCC contribution time was more
than 1-2 days per year...

Unfortunately, the fwprop solution (actually a rewrite) was very
specific to the problem and cannot be reused in other parts of the compiler.

I guess this is where we could experiment with region-based
optimization.  If a loop (including the parent dummy loop) is too big,
ignore it and only do LRS on smaller loops inside it.  Reaching
definitions is insanely expensive on an entire function, but works well
on smaller loops.
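To make the cost concrete, here is a minimal sketch (not GCC code; names are illustrative) of the iterative reaching-definitions problem:

```cpp
#include <cassert>
#include <vector>

// Minimal sketch (not GCC code) of iterative reaching definitions.
// Every block carries IN/OUT bitsets over *all* definitions in the
// function, so storage alone is O(n_blocks * n_defs) -- cheap on a
// small loop region, explosive on a huge function.
struct rd_block
{
  std::vector<bool> gen, kill, in, out;  // bitsets indexed by definition
  std::vector<int> preds;                // predecessor block indices
};

void solve_rd (std::vector<rd_block> &bbs, int ndefs)
{
  for (rd_block &b : bbs)
    {
      b.in.assign (ndefs, false);
      b.out.assign (ndefs, false);
    }
  bool changed = true;
  while (changed)  // iterate to a fixed point
    {
      changed = false;
      for (rd_block &b : bbs)
        {
          // IN = union of predecessors' OUT.
          for (int p : b.preds)
            for (int d = 0; d < ndefs; d++)
              if (bbs[p].out[d])
                b.in[d] = true;
          // OUT = GEN | (IN & ~KILL).
          for (int d = 0; d < ndefs; d++)
            {
              bool v = b.gen[d] || (b.in[d] && !b.kill[d]);
              if (v != b.out[d])
                {
                  b.out[d] = v;
                  changed = true;
                }
            }
        }
    }
}
```

Restricting the block set to a small loop region bounds both factors of that product, which is the whole point of doing RD regionally.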

Perhaps something similar could be applied also to IRA/LRA.

Paolo
Steven Bosscher Oct. 2, 2012, 8:49 a.m. UTC | #35
On Tue, Oct 2, 2012 at 10:29 AM, Paolo Bonzini <bonzini@gnu.org> wrote:
> Il 02/10/2012 09:28, Steven Bosscher ha scritto:
>>>   My experience shows that these lists are usually 1-2 elements. Although in
>>> > this case, there are pseudos with huge number elements (hundreeds).  I tried
>>> > -fweb for this tests because it can decrease the number elements but GCC (I
>>> > don't know what pass) scales even worse: after 20 min of waiting and when
>>> > virt memory achieved 20GB I stoped it.
>> Ouch :-)
>>
>> The webizer itself never even runs, the compiler blows up somewhere
>> during the df_analyze call from web_main. The issue here is probably
>> in the DF_UD_CHAIN problem or in the DF_RD problem.
>
> /me is glad to have fixed fwprop when his GCC contribution time was more
> than 1-2 days per year...

I thought you spent more time on GCC nowadays, working for Red Hat?
Who's your manager? Perhaps we can coerce him/her into letting you
spend more time on GCC :-P


> Unfortunately, the fwprop solution (actually a rewrite) was very
> specific to the problem and cannot be reused in other parts of the compiler.

That'd be too bad... But is this really true? I thought you had
something done that builds chains only for USEs reached by multiple
DEFs? That's the only interesting kind for web, too.


> I guess here it is where we could experiment with region-based
> optimization.  If a loop (including the parent dummy loop) is too big,
> ignore it and only do LRS on smaller loops inside it.  Reaching
> definitions is insanely expensive on an entire function, but works well
> on smaller loops.

Heh, yes. In fact I have been working on a region-based version of web
because it is (or at least used to be) a useful pass that isn't
enabled by default only because the underlying RD problem scales so badly.
My current collection of hacks doesn't bootstrap, doesn't even build
libgcc yet, but I plan to finish it for GCC 4.9. It's based on
identifying SEME regions using structural analysis, and DF's partial
CFG analysis (the latter is currently the problem).

FWIW: part of the problem for this particular test case is that there
are many registers with partial defs (vector registers) and the RD
problem doesn't (and probably cannot) keep track of one partial
def/use killing another partial def/use. This handling of vector regs
appears to be a general problem with much of the RTL infrastructure.

Ciao!
Steven
Paolo Bonzini Oct. 2, 2012, 9:33 a.m. UTC | #36
Il 02/10/2012 10:49, Steven Bosscher ha scritto:
> On Tue, Oct 2, 2012 at 10:29 AM, Paolo Bonzini <bonzini@gnu.org> wrote:
>> Il 02/10/2012 09:28, Steven Bosscher ha scritto:
>>>>   My experience shows that these lists are usually 1-2 elements. Although in
>>>>> this case, there are pseudos with huge number elements (hundreeds).  I tried
>>>>> -fweb for this tests because it can decrease the number elements but GCC (I
>>>>> don't know what pass) scales even worse: after 20 min of waiting and when
>>>>> virt memory achieved 20GB I stoped it.
>>> Ouch :-)
>>>
>>> The webizer itself never even runs, the compiler blows up somewhere
>>> during the df_analyze call from web_main. The issue here is probably
>>> in the DF_UD_CHAIN problem or in the DF_RD problem.
>>
>> /me is glad to have fixed fwprop when his GCC contribution time was more
>> than 1-2 days per year...
> 
> I thought you spent more time on GCC nowadays, working for Red Hat?

No, I work on QEMU most of the time. :)  Knowing myself, if I had
GCC-related assignments you'd see me _a lot_ on upstream mailing lists!

>> Unfortunately, the fwprop solution (actually a rewrite) was very
>> specific to the problem and cannot be reused in other parts of the compiler.
> 
> That'd be too bad... But is this really true? I thought you had
> something done that builds chains only for USEs reached by multiple
> DEFs? That's the only interesting kind for web, too.

No, it's the other way round.  I have a dataflow problem that recognizes
USEs reached by multiple DEFs, so that I can use a dominator walk to
build singleton def-use chains.  It's very similar to how you build SSA,
but punting instead of inserting phis.
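A toy sketch of that idea (illustrative only; a straight-line insn stream stands in for the real dominator walk, and any reg defined more than once is conservatively punted, which is exactly where SSA construction would have inserted a phi):

```cpp
#include <cassert>
#include <map>
#include <vector>

// Toy model: each insn optionally defines one reg and uses one reg
// (-1 means absent).  Walk in order, remember the unique definition of
// each reg, and attach a singleton def-use chain to each use -- unless
// more than one def has been seen for that reg, in which case give up.
struct insn { int def_reg; int use_reg; };

std::map<int, int> build_singleton_chains (const std::vector<insn> &insns)
{
  std::map<int, int> def_of;      // reg -> defining insn index; -2 = punted
  std::map<int, int> use_to_def;  // use insn index -> its unique def insn
  for (int i = 0; i < (int) insns.size (); i++)
    {
      if (insns[i].use_reg >= 0)
        {
          std::map<int, int>::iterator it = def_of.find (insns[i].use_reg);
          if (it != def_of.end () && it->second >= 0)
            use_to_def[i] = it->second;  // singleton def-use chain
        }
      if (insns[i].def_reg >= 0)
        {
          if (def_of.count (insns[i].def_reg))
            def_of[insns[i].def_reg] = -2;  // second def seen: punt
          else
            def_of[insns[i].def_reg] = i;
        }
    }
  return use_to_def;
}
```

The real thing walks the dominator tree and consults the multiple-reaching-defs analysis instead of this conservative second-def test, but the shape of the bookkeeping is the same.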

Another solution is to build factored use-def chains for web, and use
them instead of RD.  In the end it's not very different from regional
live range splitting, since the phi functions factor out the state of
the pass at loop (that is region) boundaries.  I thought you had looked
at FUD chains years ago?

> FWIW: part of the problem for this particular test case is that there
> are many registers with partial defs (vector registers) and the RD
> problem doesn't (and probably cannot) keep track of one partial
> def/use killing another partial def/use.

So they are subregs of regs?  Perhaps they could be represented with
VEC_MERGE to break the live range:

 (set (reg:V4SI 94) (vec_merge:V4SI (reg:V4SI 94)
                                    (const_vector:V4SI [(const_int 0)
                                                        (const_int 0)
                                                        (const_int 0)
                                                        (reg:SI 95)])
                                    (const_int 7)))

And then reload, or something after reload, would know how to split
these when spilling V4SI to memory.

Paolo
Vladimir Makarov Oct. 2, 2012, 2:57 p.m. UTC | #37
On 10/02/2012 12:22 AM, Jeff Law wrote:
> On 10/01/2012 07:14 PM, Vladimir Makarov wrote:
>>
>>    Analogous live ranges are used in IRA as intermidiate step to build a
>> conflict graph.  Actually, the first approach was to use IRA code to
>> assign hard registers to pseudos (e.g.  Jeff Law tried this approach)
>> but it was rejected as requiring much more compilation time.  In some
>> way, one can look at the assignment in LRA is a compromise between
>> quality (which could achieved through repeated buidling conflict graphs
>> and using graph coloring) and compilation speed.
> Not only was it slow (iterating IRA), guaranteeing termination was a 
> major problem.  There's some algorithmic games that have to be played 
> (they're at least discussed in literature, but not under the heading 
> of termination) and there's some issues specific to the IRA 
> implementation which make ensuring termination difficult.
>
The Chaitin-Briggs literature does not discuss termination; it just 
says that live-range shortening will result in assigning hard regs to 
all necessary pseudos, which is not actually guaranteed.  There is the 
same problem in LRA.  So LRA checks whether too many passes have been 
done, or too many reloads have been made for one insn, and aborts LRA 
if so.  Porting LRA is mostly fixing such aborts.

Another thing omitted by the literature is inheritance, which is very 
important for performance, although it could be considered a special 
case of live-range splitting.  There are also a lot of small but 
important details (e.g. what to do in the case of displacement 
constraints, or when non-load/store insns permit memory as well as 
registers, etc.) that are not discussed well, or at all, in the 
literature I read.
> I got nearly as good of results by conservative updates of the 
> conflicts after splitting ranges and (ab)using ira's reload hooks to 
> give the new pseudos for the split range a chance to be allocated again.
>
> The biggest problem with that approach was getting the costing right 
> for the new pseudos.  That requires running a fair amount of IRA a 
> second time.  I'd still like to return to some of the ideas from that 
> work as I think some of the bits are still relevant in the IRA+LRA world.
>
>>    My experience shows that these lists are usually 1-2 elements.
> That's been my experience as well.  The vast majority of the time the 
> range lists are very small.
>
Steven Bosscher Oct. 4, 2012, 6:43 p.m. UTC | #38
On Sat, Sep 29, 2012 at 10:26 PM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:
> To put it in another perspective, here are my timings of trunk vs lra
> (both checkouts done today):
>
> trunk:
>  integrated RA           : 181.68 (24%) usr   1.68 (11%) sys 183.43
> (24%) wall  643564 kB (20%) ggc
>  reload                  :  11.00 ( 1%) usr   0.18 ( 1%) sys  11.17 (
> 1%) wall   32394 kB ( 1%) ggc
>  TOTAL                 : 741.64            14.76           756.41
>       3216164 kB
>
> lra branch:
>  integrated RA           : 174.65 (16%) usr   1.33 ( 8%) sys 176.33
> (16%) wall  643560 kB (20%) ggc
>  reload                  : 399.69 (36%) usr   2.48 (15%) sys 402.69
> (36%) wall   41852 kB ( 1%) ggc
>  TOTAL                 :1102.06            16.05          1120.83
>       3231738 kB
>
> That's a 49% slowdown. The difference is completely accounted for by
> the timing difference between reload and LRA.

With Vlad's patch to switch off expensive LRA parts for extreme
functions ([lra revision 192093]), the numbers are:

 integrated RA           : 154.27 (17%) usr   1.27 ( 8%) sys 155.64
(17%) wall  131534 kB ( 5%) ggc
 LRA non-specific        :  69.67 ( 8%) usr   0.79 ( 5%) sys  70.40 (
8%) wall   18805 kB ( 1%) ggc
 LRA virtuals elimination:  55.53 ( 6%) usr   0.00 ( 0%) sys  55.49 (
6%) wall   20465 kB ( 1%) ggc
 LRA reload inheritance  :   0.06 ( 0%) usr   0.00 ( 0%) sys   0.02 (
0%) wall      57 kB ( 0%) ggc
 LRA create live ranges  :  80.46 ( 4%) usr   1.05 ( 6%) sys  81.49 (
4%) wall    2459 kB ( 0%) ggc
 LRA hard reg assignment :   1.78 ( 0%) usr   0.05 ( 0%) sys   1.85 (
0%) wall       0 kB ( 0%) ggc
 reload                  :   6.38 ( 1%) usr   0.13 ( 1%) sys   6.51 (
1%) wall       0 kB ( 0%) ggc
 TOTAL                 : 917.42            16.35           933.78
      2720151 kB

Recalling trunk total time (r191835):

>  TOTAL                 : 741.64            14.76           756.41

the slowdown due to LRA is down from 49% to 23%, with still room for
improvement (even without crippling LRA further). Size with the
expensive LRA parts switched off is still better than trunk:
$ size slow.o*
   text    data     bss     dec     hex filename
3499938       8     583 3500529  3569f1 slow.o.00_trunk_r191835
3386117       8     583 3386708  33ad54 slow.o.01_lra_r191626
3439755       8     583 3440346  347eda slow.o.02_lra_r192093

The lra-branch outperforms trunk on everything else I've thrown at it,
in terms of compile time and code size at least, and also e.g. on
Fortran polyhedron runtime.

Ciao!
Steven
Steven Bosscher Oct. 4, 2012, 8:56 p.m. UTC | #39
On Tue, Oct 2, 2012 at 3:14 AM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>   Analogous live ranges are used in IRA as intermidiate step to build a
> conflict graph.

Right, ira-lives.c and lra-lives.c look very much alike; the only
major difference is that the object of interest in an IRA live range
is an ira_object_t, while in an LRA live range it's just a regno. But
the code of create_start_finish_chains,
ira_rebuild_start_finish_chains,
remove_some_program_points_and_update_live_ranges, and most of the
live range printing functions, is almost identical between lra-lives.c
and ira-lives.c. That looks like unnecessary code duplication. Do you
think some of that code could be shared (esp. in a C++ world, where
maybe a live range can be a container for another type)?

Ciao!
Steven
Vladimir Makarov Oct. 5, 2012, 3:37 a.m. UTC | #40
On 12-10-04 4:56 PM, Steven Bosscher wrote:
> On Tue, Oct 2, 2012 at 3:14 AM, Vladimir Makarov <vmakarov@redhat.com> wrote:
>>    Analogous live ranges are used in IRA as intermidiate step to build a
>> conflict graph.
> Right, ira-lives.c and lra-lives.c look very much alike, the only
> major difference is that the object of interest in an IRA live range
> is an ira_object_t, and in an LRA live range it's just a regno. But
> the code of create_start_finish_chains,
> ira_rebuild_start_finish_chains,
> remove_some_program_points_and_update_live_ranges, and most the live
> range printing functions, are almost identical between lra-lives.c and
> ira-lives.c. That looks like unnecessary code duplication. Do you
> think some of that code be shared (esp. in a C++ world where maybe a
> live range can be a container for another type)?
>
Yes, C++ could help here.  The simplest solution is inheritance from a 
common class, but it is not a high-priority task now.
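For illustration, a hypothetical sketch of what such sharing might look like (names are illustrative, not the actual GCC structures): a range node templated on the owner type, with the common chain operations written once.

```cpp
#include <cassert>

// Hypothetical sketch, not the actual GCC API: the only per-client
// difference between the two live-range implementations is the owner
// type, so parameterize on it.
template <typename T>
struct live_range
{
  T object;            // ira_object_t for IRA, int regno for LRA
  int start, finish;   // program points this range covers, inclusive
  live_range *next;    // next range in the (usually 1-2 element) list
};

// Example of a shared helper: total number of program points covered
// by a range list, analogous to what both files compute when
// compressing program points.
template <typename T>
int live_range_length (const live_range<T> *r)
{
  int len = 0;
  for (; r != nullptr; r = r->next)
    len += r->finish - r->start + 1;
  return len;
}
```

The chain-building, rebuilding, and printing functions named above would become further templated helpers over the same node type.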
Peter Bergner Oct. 11, 2012, 9:53 p.m. UTC | #41
On Tue, 2012-10-02 at 10:57 -0400, Vladimir Makarov wrote:
> Chaitin-Briggs literature does not discuss the termination, just saying 
> that live-ranges shortening will result to assigning hard regs to all 
> necessary pseudos which is not clearly guaranteed. There is the same 
> problem in LRA.  So LRA checks that too many passes are done or to many 
> reloads for one insn are made and abort LRA.  Porting LRA is mostly 
> fixing such aborts.

IIRC, talking with the guys from Rice, they had a limit on the number of
color/spill iterations (20) before aborting, since anything more than
that would be due to a bug.  I believe the largest number of iterations
I ever saw in my allocator was about 6 iterations of color/spill.  I hit
a few cases that iterated forever, but those were always due to bugs in
my code or special hairy details I hadn't handled.  You're correct that
the hairy details are never discussed in papers. :)



> Another thing omitted by literature is inheritance which is very 
> important for performance.  Although it could be considered as a special 
> case of live-range splitting.  There are also a lot of small important 
> details (e.g. what to do in case of displacement constraints,

To handle displacement constraints, instead of spilling to stack slots,
we spilled to spill pseudos, which look like normal register pseudos.
We would then color them just like normal pseudos, but the colors
represent stack slots rather than registers.  If "k" becomes too big, it
means you surpassed the maximum displacement, and you'll have to spill
the spill pseudo.  For small-displacement CPUs, coloring the spill pseudos
does a good job of reusing stack slots, which reduces the largest
displacement you'll see.  For CPUs with no displacement issues, you could
just give each spill pseudo a different color, which would mean you
wouldn't have to compute an interference graph of the spill pseudos,
with all the work and space that goes into building it.
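A minimal sketch of that scheme (illustrative, not the actual allocator): greedy coloring where each color is a stack slot, reusing the lowest non-conflicting slot.

```cpp
#include <cassert>
#include <vector>

// Color "spill pseudos" where each color is a stack slot: give every
// pseudo the lowest-numbered slot not used by a conflicting neighbor.
// 'conflict' is a symmetric live-range conflict matrix; the number of
// distinct colors used is the number of stack slots needed.
std::vector<int>
color_spill_pseudos (const std::vector<std::vector<bool> > &conflict)
{
  int n = conflict.size ();
  std::vector<int> slot (n, -1);
  for (int p = 0; p < n; p++)
    {
      std::vector<bool> used (n, false);
      for (int q = 0; q < p; q++)
        if (conflict[p][q])
          used[slot[q]] = true;   // neighbor q already owns slot[q]
      int c = 0;
      while (used[c])
        c++;
      slot[p] = c;                // slot c becomes pseudo p's stack slot
    }
  return slot;
}
```

With three spill pseudos where only the first two conflict, this uses two slots and reuses slot 0 for the third pseudo, which is exactly the slot reuse that keeps displacements small.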

Note, with spill pseudos, you can perform dead code elimination, coalescing
and other optimizations on them just like normal pseudos to reduce the
amount of spill code generated.

Peter
Vladimir Makarov Oct. 12, 2012, 3:34 a.m. UTC | #42
On 12-10-11 5:53 PM, Peter Bergner wrote:
> On Tue, 2012-10-02 at 10:57 -0400, Vladimir Makarov wrote:
>> Chaitin-Briggs literature does not discuss the termination, just saying
>> that live-ranges shortening will result to assigning hard regs to all
>> necessary pseudos which is not clearly guaranteed. There is the same
>> problem in LRA.  So LRA checks that too many passes are done or to many
>> reloads for one insn are made and abort LRA.  Porting LRA is mostly
>> fixing such aborts.
> IIRC, talking with the guys from Rice, they had a limit on the number of
> color/spill iterations (20) before aborting, since anything more than
> that would be due to a bug.  I believe the largest number of iterations
> I ever saw in my allocator was about 6 iterations of color/spill.  I hit
> a few cases that iterated forever, but those were always due to bugs in
> my code or special hairy details I hadn't handled.  You're correct that
> the hairy details are never discussed in papers. :)
>
>
Interesting.  The max number of passes is very dependent on the target.  
The biggest I saw was about 20 on an m68k test.
>> Another thing omitted by literature is inheritance which is very
>> important for performance.  Although it could be considered as a special
>> case of live-range splitting.  There are also a lot of small important
>> details (e.g. what to do in case of displacement constraints,
> To handle displacement constraints, instead of spilling to stack slots,
> we spilled to spill pseudos which look like normal register pseudos.
> We would then color them just like normal pseudos, but the colors
> represent stack slots and not registers.  If "k" becomes too big, it
> means you surpassed the maximum displacement, and you'll have to spill
> the spill pseudo.  For small displacement cpus, coloring the spill pseudos
> does a good job reusing stack slots which reduces the largest displacement
> you'll see.  For cpus with no displacement issues, you could just give
> each spill pseudo a different color which would mean you wouldn't have
> to compute a interference graph of the spill pseudos and all the work
> and space that goes into building that.
Interesting approach to spill a spilled pseudo.

It is not wise, though, to give each spilled pseudo a different color on 
targets without displacement issues.  A small stack space for pseudos 
(by reusing slots) gives better data locality and smaller displacements, 
which are important for reducing code size on targets whose insn 
displacement field has a different size for different displacements 
(e.g. x86).  I know only one drawback of reusing stack slots: less 
freedom for insn scheduling after RA.  But reusing the stack is still 
more important than better second insn scheduling, at least for the most 
widely used target, x86/x86-64.

For targets with different displacement sizes, in my experience coloring 
is also not the best algorithm for this.  It usually results in smaller 
stack space, but it has a tendency to spread pseudos evenly over 
different slots instead of putting important pseudos into slots with 
smaller displacements to generate smaller insns.
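For comparison, a sketch of such a heuristic (hypothetical, simplified): order spilled pseudos by execution frequency and assign slots greedily, so the hottest pseudos land in the lowest (smallest-displacement) slots.

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Hypothetical frequency-ordered slot assignment: sort spilled pseudos
// by execution frequency and give each the lowest free non-conflicting
// slot, so frequently accessed pseudos get the smallest displacements
// (and hence the shortest insn encodings on targets like x86).
std::vector<int>
assign_slots_by_freq (const std::vector<int> &freq,
                      const std::vector<std::vector<bool> > &conflict)
{
  int n = freq.size ();
  std::vector<int> order (n);
  std::iota (order.begin (), order.end (), 0);
  std::sort (order.begin (), order.end (),
             [&] (int a, int b) { return freq[a] > freq[b]; });
  std::vector<int> slot (n, -1);
  for (int p : order)
    {
      std::vector<bool> used (n, false);
      for (int q = 0; q < n; q++)
        if (slot[q] >= 0 && conflict[p][q])
          used[slot[q]] = true;
      int c = 0;
      while (used[c])
        c++;
      slot[p] = c;  // hot pseudos grab the small-displacement slots first
    }
  return slot;
}
```

Unlike pure coloring, the assignment order here is driven by the profile, so a hot pseudo is never pushed to a far slot just to minimize the total slot count.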

Just from my observations: coloring is pretty good for register 
allocation.  It works particularly well for unknown (or varying) 
execution profiles.  But if you have exact execution profiles, there are 
heuristic approaches which could work better than coloring.
>
> Note, with spill pseudos, you can perform dead code elimination, coalescing
> and other optimizations on them just like normal pseudos to reduce the
> amount of spill code generated.
True.
diff mbox

Patch

Index: lra-assigns.c
===================================================================
--- lra-assigns.c       (revision 191858)
+++ lra-assigns.c       (working copy)
@@ -1261,6 +1261,8 @@  lra_assign (void)
   bitmap_head insns_to_process;
   bool no_spills_p;

+  timevar_push (TV_LRA_ASSIGN);
+
   init_lives ();
   sorted_pseudos = (int *) xmalloc (sizeof (int) * max_reg_num ());
   sorted_reload_pseudos = (int *) xmalloc (sizeof (int) * max_reg_num ());
@@ -1312,5 +1314,6 @@  lra_assign (void)
   free (sorted_pseudos);
   free (sorted_reload_pseudos);
   finish_lives ();
+  timevar_pop (TV_LRA_ASSIGN);
   return no_spills_p;
 }
Index: lra.c
===================================================================
--- lra.c       (revision 191858)
+++ lra.c       (working copy)
@@ -2193,6 +2193,7 @@  lra (FILE *f)

   lra_dump_file = f;

+  timevar_push (TV_LRA);

   init_insn_recog_data ();

@@ -2271,6 +2272,7 @@  lra (FILE *f)
             to use a constant pool.  */
          lra_eliminate (false);
          lra_inheritance ();
+
          /* We need live ranges for lra_assign -- so build them.  */
          lra_create_live_ranges (true);
          live_p = true;
@@ -2343,6 +2345,8 @@  lra (FILE *f)
 #ifdef ENABLE_CHECKING
   check_rtl (true);
 #endif
+
+  timevar_pop (TV_LRA);
 }

 /* Called once per compiler to initialize LRA data once.  */
Index: lra-eliminations.c
===================================================================
--- lra-eliminations.c  (revision 191858)
+++ lra-eliminations.c  (working copy)
@@ -1297,6 +1297,8 @@  lra_eliminate (bool final_p)
   struct elim_table *ep;
   int regs_num = max_reg_num ();

+  timevar_push (TV_LRA_ELIMINATE);
+
   bitmap_initialize (&insns_with_changed_offsets, &reg_obstack);
   if (final_p)
     {
@@ -1317,7 +1319,7 @@  lra_eliminate (bool final_p)
     {
       update_reg_eliminate (&insns_with_changed_offsets);
       if (bitmap_empty_p (&insns_with_changed_offsets))
-       return;
+       goto lra_eliminate_done;
     }
   if (lra_dump_file != NULL)
     {
@@ -1349,4 +1351,7 @@  lra_eliminate (bool final_p)
          process_insn_for_elimination (insn, final_p);
       }
   bitmap_clear (&insns_with_changed_offsets);
+
+lra_eliminate_done:
+  timevar_pop (TV_LRA_ELIMINATE);
 }
Index: lra-lives.c
===================================================================
--- lra-lives.c (revision 191858)
+++ lra-lives.c (working copy)
@@ -962,6 +962,8 @@  lra_create_live_ranges (bool all_p)
   basic_block bb;
   int i, hard_regno, max_regno = max_reg_num ();

+  timevar_push (TV_LRA_CREATE_LIVE_RANGES);
+
   complete_info_p = all_p;
   if (lra_dump_file != NULL)
     fprintf (lra_dump_file,
@@ -1016,6 +1018,7 @@  lra_create_live_ranges (bool all_p)
   sparseset_free (pseudos_live_through_setjumps);
   sparseset_free (pseudos_live);
   compress_live_ranges ();
+  timevar_pop (TV_LRA_CREATE_LIVE_RANGES);
 }

 /* Finish all live ranges.  */
Index: timevar.def
===================================================================
--- timevar.def (revision 191858)
+++ timevar.def (working copy)
@@ -223,10 +223,16 @@  DEFTIMEVAR (TV_REGMOVE               , "
 DEFTIMEVAR (TV_MODE_SWITCH           , "mode switching")
 DEFTIMEVAR (TV_SMS                  , "sms modulo scheduling")
 DEFTIMEVAR (TV_SCHED                 , "scheduling")
-DEFTIMEVAR (TV_IRA                  , "integrated RA")
-DEFTIMEVAR (TV_RELOAD               , "reload")
+DEFTIMEVAR (TV_IRA                  , "integrated RA")
+DEFTIMEVAR (TV_LRA                  , "LRA non-specific")
+DEFTIMEVAR (TV_LRA_ELIMINATE        , "LRA virtuals elimination")
+DEFTIMEVAR (TV_LRA_INHERITANCE      , "LRA reload inheritance")
+DEFTIMEVAR (TV_LRA_CREATE_LIVE_RANGES, "LRA create live ranges")
+DEFTIMEVAR (TV_LRA_ASSIGN           , "LRA hard reg assignment")
+DEFTIMEVAR (TV_LRA_COALESCE         , "LRA coalesce pseudo regs")
+DEFTIMEVAR (TV_RELOAD               , "reload")
 DEFTIMEVAR (TV_RELOAD_CSE_REGS       , "reload CSE regs")
-DEFTIMEVAR (TV_GCSE_AFTER_RELOAD      , "load CSE after reload")
+DEFTIMEVAR (TV_GCSE_AFTER_RELOAD     , "load CSE after reload")
 DEFTIMEVAR (TV_REE                  , "ree")
 DEFTIMEVAR (TV_THREAD_PROLOGUE_AND_EPILOGUE, "thread pro- & epilogue")
 DEFTIMEVAR (TV_IFCVT2               , "if-conversion 2")
Index: lra-coalesce.c
===================================================================
--- lra-coalesce.c      (revision 191858)
+++ lra-coalesce.c      (working copy)
@@ -221,6 +221,8 @@  lra_coalesce (void)
   bitmap_head involved_insns_bitmap, split_origin_bitmap;
   bitmap_iterator bi;

+  timevar_push (TV_LRA_COALESCE);
+
   if (lra_dump_file != NULL)
     fprintf (lra_dump_file,
             "\n********** Pseudos coalescing #%d: **********\n\n",
@@ -371,5 +373,6 @@  lra_coalesce (void)
   free (sorted_moves);
   free (next_coalesced_pseudo);
   free (first_coalesced_pseudo);
+  timevar_pop (TV_LRA_COALESCE);
   return coalesced_moves != 0;
 }
Index: lra-constraints.c
===================================================================
--- lra-constraints.c   (revision 191858)
+++ lra-constraints.c   (working copy)
@@ -4859,6 +4859,8 @@  lra_inheritance (void)
   basic_block bb, start_bb;
   edge e;

+  timevar_push (TV_LRA_INHERITANCE);
+
   lra_inheritance_iter++;
   if (lra_dump_file != NULL)
     fprintf (lra_dump_file, "\n********** Inheritance #%d: **********\n\n",
@@ -4907,6 +4909,8 @@  lra_inheritance (void)
   bitmap_clear (&live_regs);
   bitmap_clear (&check_only_regs);
   free (usage_insns);
+
+  timevar_pop (TV_LRA_INHERITANCE);
 }

 ^L