Message ID: orvcna24h4.fsf@livre.localdomain
State: New
On Mon, Feb 13, 2012 at 12:27:35PM -0200, Alexandre Oliva wrote:
> Jakub asked to have a closer look at the problem, and I found we could
> do somewhat better.  The first thing I noticed was that the problem was
> that, in each block that computed a (base+const), we created a new VALUE
> for the expression (with the same const and global base), and a new
> reverse operation.

I'm not convinced you want the

> +  /* Keep VALUE equivalences around.  */
> +  for (l = v->locs; l; l = l->next)
> +    if (GET_CODE (l->loc) == VALUE)
> +      return true;

hunk in invariant_p; I'd say it should stay in preserve_only_constants,
since a value equivalence isn't necessarily invariant.

Otherwise the cselib.c changes look ok to me, but I don't understand
why you are removing the var-tracking.c loop.  While cselib will, with
your changes, handle the situation better, for values that are already
invariant (I guess canonical_cselib_val should be called before that
loop, and perhaps instead of testing CONSTANT_P it could test
invariant_p, if you rename it to cselib_invariant_p and export it),
adding any reverse ops is really just a waste of resources, because we
already have a better location in the list.  Adding the extra loc
doesn't improve it in any way.

	Jakub
Alexandre Oliva <aoliva@redhat.com> writes:
> Jakub asked to have a closer look at the problem, and I found we could
> do somewhat better.  The first thing I noticed was that the problem was
> that, in each block that computed a (base+const), we created a new VALUE
> for the expression (with the same const and global base), and a new
> reverse operation.
>
> This was wrong.  Clearly we should reuse the same expression.  I had to
> arrange for the expression to be retained across basic blocks, for it
> was function invariant.  I split out the code to detect invariants from
> the function that removes entries from the cselib hash table across
> blocks, and made it recursive so that a VALUE equivalent to (plus
> (value) (const_int)) will be retained, if the base value fits (maybe
> recursively) the definition of invariant.
>
> An earlier attempt to address this issue remained in cselib: using the
> canonical value to build the reverse expression.  I believe it has a
> potential of avoiding the creation of redundant reverse expressions,
> for expressions involving equivalent but different VALUEs will evaluate
> to different hashes.  I haven't observed effects WRT the given
> testcase, before or after the change that actually fixed the problem,
> because we now find the same base expression and thus reuse the
> reverse_op as well, but I figured I'd keep it in, for it is very cheap
> and possibly useful.

Thanks for looking at this.  Just to be sure: does this avoid the kind
of memrefs_conflict_p cycle I was seeing in:

    http://gcc.gnu.org/ml/gcc-patches/2012-01/msg01051.html

(in theory, I mean).

Richard
On Feb 13, 2012, Richard Sandiford <rdsandiford@googlemail.com> wrote:
> does this avoid the kind of memrefs_conflict_p cycle I was seeing in:
I don't know that it does, I'd missed that bit.
If you still have a preprocessed testcase, I'd be glad to give it a
quick try. Failing that, I can try a build on my yeeloong, but... that
takes forever minus a few days ;-)
On Feb 13, 2012, Jakub Jelinek <jakub@redhat.com> wrote:

> I'm not convinced you want the
>> +  /* Keep VALUE equivalences around.  */
>> +  for (l = v->locs; l; l = l->next)
>> +    if (GET_CODE (l->loc) == VALUE)
>> +      return true;
> hunk in invariant_p,

Yeah, maybe “invariant_p” is a misnomer.  The thinking is that, if we
preserve a value, we preserve other values based on it, and we do
preserve values with equivalences to avoid having to carry the
equivalences in the var-tracking dataflow sets.

> Otherwise the cselib.c changes look ok to me, but I don't understand
> why you are removing the var-tracking.c loop.

I thought completeness called for retaining those equivalences, but now
I see that, since they're always going to be computed values, rather
than locations, the constant value provides sufficient and better
information for completeness, rendering them irrelevant indeed.  I'll
put that hunk back in and retest.

Thanks,
Alexandre Oliva <aoliva@redhat.com> writes:
> On Feb 13, 2012, Richard Sandiford <rdsandiford@googlemail.com> wrote:
>> does this avoid the kind of memrefs_conflict_p cycle I was seeing in:
>
> I don't know that it does, I'd missed that bit.
>
> If you still have a preprocessed testcase, I'd be glad to give it a
> quick try.  Failing that, I can try a build on my yeeloong, but... that
> takes forever minus a few days ;-)

Unfortunately, I've not kept the preprocessed source, and I'd need to
wind back to an old compiler to get it.

If it's "in practice" rather than "in theory" that we're talking about,
then I'm fine with putting it in and seeing what breaks.  But I'd
really prefer it if we knew in theory. :-)  Like I say, my understanding
before this patch series went in was that cselib values weren't supposed
to be cyclic.  Now that they are, what should consumers like
memrefs_conflict_p do to avoid getting stuck?

Thanks,
Richard
for gcc/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	PR debug/52001
	* cselib.c (invariant_p): Split out of...
	(preserve_only_constants): ... this.  Preserve plus expressions
	of invariant values and constants.
	* var-tracking.c (reverse_op): Don't drop equivs of constants.
	Use canonical value to build reverse operation.

Index: gcc/cselib.c
===================================================================
--- gcc/cselib.c.orig	2012-02-12 06:13:40.676385499 -0200
+++ gcc/cselib.c	2012-02-12 09:07:00.653579375 -0200
@@ -383,22 +383,29 @@ cselib_clear_table (void)
   cselib_reset_table (1);
 }
 
-/* Remove from hash table all VALUEs except constants
-   and function invariants.  */
+/* Return TRUE if V is a constant or a function invariant, FALSE
+   otherwise.  */
 
-static int
-preserve_only_constants (void **x, void *info ATTRIBUTE_UNUSED)
+static bool
+invariant_p (cselib_val *v)
 {
-  cselib_val *v = (cselib_val *)*x;
   struct elt_loc_list *l;
 
+  if (v == cfa_base_preserved_val)
+    return true;
+
+  /* Keep VALUE equivalences around.  */
+  for (l = v->locs; l; l = l->next)
+    if (GET_CODE (l->loc) == VALUE)
+      return true;
+
   if (v->locs != NULL
       && v->locs->next == NULL)
     {
       if (CONSTANT_P (v->locs->loc)
	  && (GET_CODE (v->locs->loc) != CONST
	      || !references_value_p (v->locs->loc, 0)))
-	return 1;
+	return true;
       /* Although a debug expr may be bound to different expressions,
	 we can preserve it as if it was constant, to get unification
	 and proper merging within var-tracking.  */
@@ -406,24 +413,29 @@ preserve_only_constants (void **x, void
	  || GET_CODE (v->locs->loc) == DEBUG_IMPLICIT_PTR
	  || GET_CODE (v->locs->loc) == ENTRY_VALUE
	  || GET_CODE (v->locs->loc) == DEBUG_PARAMETER_REF)
-	return 1;
-      if (cfa_base_preserved_val)
-	{
-	  if (v == cfa_base_preserved_val)
-	    return 1;
-	  if (GET_CODE (v->locs->loc) == PLUS
-	      && CONST_INT_P (XEXP (v->locs->loc, 1))
-	      && XEXP (v->locs->loc, 0) == cfa_base_preserved_val->val_rtx)
-	    return 1;
-	}
+	return true;
+
+      /* (plus (value V) (const_int C)) is invariant iff V is invariant.  */
+      if (GET_CODE (v->locs->loc) == PLUS
	  && CONST_INT_P (XEXP (v->locs->loc, 1))
+	  && GET_CODE (XEXP (v->locs->loc, 0)) == VALUE
+	  && invariant_p (CSELIB_VAL_PTR (XEXP (v->locs->loc, 0))))
+	return true;
     }
 
-  /* Keep VALUE equivalences around.  */
-  for (l = v->locs; l; l = l->next)
-    if (GET_CODE (l->loc) == VALUE)
-      return 1;
+  return false;
+}
+
+/* Remove from hash table all VALUEs except constants
+   and function invariants.  */
+
+static int
+preserve_only_constants (void **x, void *info ATTRIBUTE_UNUSED)
+{
+  cselib_val *v = (cselib_val *)*x;
 
-  htab_clear_slot (cselib_hash_table, x);
+  if (!invariant_p (v))
+    htab_clear_slot (cselib_hash_table, x);
 
   return 1;
 }
Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c.orig	2012-02-12 06:13:38.633412886 -0200
+++ gcc/var-tracking.c	2012-02-12 10:09:49.000000000 -0200
@@ -5298,7 +5298,6 @@ reverse_op (rtx val, const_rtx expr, rtx
 {
   rtx src, arg, ret;
   cselib_val *v;
-  struct elt_loc_list *l;
   enum rtx_code code;
 
   if (GET_CODE (expr) != SET)
@@ -5334,13 +5333,9 @@ reverse_op (rtx val, const_rtx expr, rtx
   if (!v || !cselib_preserved_value_p (v))
     return;
 
-  /* Adding a reverse op isn't useful if V already has an always valid
-     location.  Ignore ENTRY_VALUE, while it is always constant, we should
-     prefer non-ENTRY_VALUE locations whenever possible.  */
-  for (l = v->locs; l; l = l->next)
-    if (CONSTANT_P (l->loc)
-	&& (GET_CODE (l->loc) != CONST || !references_value_p (l->loc, 0)))
-      return;
+  /* Use canonical V to avoid creating multiple redundant expressions
+     for different VALUES equivalent to V.  */
+  v = canonical_cselib_val (v);
 
   switch (GET_CODE (src))
     {