Message ID: 569D0A85.50501@redhat.com
State: New
On Mon, Jan 18, 2016 at 10:53:41AM -0500, Jason Merrill wrote:
> In this testcase, we weren't getting the benefits of fold's cleverness in
> handling COND_EXPR because we were only calling fold_for_warn on the
> condition itself.  This patch changes check_function_arguments_recurse to
> fold the entire COND_EXPR, and also fixes cp_fold to actually fold
> COND_EXPR.
>
> Along with this, I've cleaned up some other bits I noticed in cp_fold: there
> was various unnecessary special-casing for unary and binary ops as well, and
> we were clobbering an input CONSTRUCTOR rather than returning a new folded
> one.
>
> Tested x86_64-pc-linux-gnu, applying to trunk.

Guess too late for GCC 6, but I'm slightly worried about fold_for_warn
compile-time complexity: if we have very large expression trees and try to
fold_for_warn the whole thing, then fold_for_warn the 2-3 operands of the
original, and so on, with something foldable somewhere very deep in the
trees, the same deep subtrees get refolded repeatedly.  So perhaps it would
be best to cache cp_fold results on expressions somewhere (a custom tree
that holds the original and the corresponding folded expression, or a hash
table, whatever), and if we try to cp_fold an expression again, just reuse
what we folded it to last time.

	Jakub
On 01/18/2016 11:06 AM, Jakub Jelinek wrote:
> On Mon, Jan 18, 2016 at 10:53:41AM -0500, Jason Merrill wrote:
>> In this testcase, we weren't getting the benefits of fold's cleverness in
>> handling COND_EXPR because we were only calling fold_for_warn on the
>> condition itself.  This patch changes check_function_arguments_recurse to
>> fold the entire COND_EXPR, and also fixes cp_fold to actually fold
>> COND_EXPR.
>>
>> Along with this, I've cleaned up some other bits I noticed in cp_fold: there
>> was various unnecessary special-casing for unary and binary ops as well, and
>> we were clobbering an input CONSTRUCTOR rather than returning a new folded
>> one.
>>
>> Tested x86_64-pc-linux-gnu, applying to trunk.
>
> Guess too late for GCC 6, but I'm slightly worried about fold_for_warn
> compile time complexity, if we have very large trees of expressions and try
> to fold_for_warn it all, then on the original fold_for_warn 2-3 operands
> thereof and so on, with something foldable somewhere very deep in the trees.
> For that perhaps best would be if we cache cp_fold results on
> expressions somewhere (custom tree that holds original and corresponding
> folded expression, or hash table, whatever) and if we try to cp_fold it
> again, we just reuse what we've folded it to last time.

cp_fold already caches its results.

Jason
commit 4e480f0e4eede58f60caf8e05c94709e5a5f3fc0
Author: Jason Merrill <jason@redhat.com>
Date:   Fri Jan 15 13:00:10 2016 -0500

    	* cp-gimplify.c (cp_fold) [CONSTRUCTOR]: Don't clobber the input.

diff --git a/gcc/cp/cp-gimplify.c b/gcc/cp/cp-gimplify.c
index 2dc53ae..5c4d3c1 100644
--- a/gcc/cp/cp-gimplify.c
+++ b/gcc/cp/cp-gimplify.c
@@ -2125,9 +2125,22 @@ cp_fold (tree x)
       {
 	unsigned i;
 	constructor_elt *p;
+	bool changed = false;
 	vec<constructor_elt, va_gc> *elts = CONSTRUCTOR_ELTS (x);
+	vec<constructor_elt, va_gc> *nelts = NULL;
+	vec_safe_reserve (nelts, vec_safe_length (elts));
 	FOR_EACH_VEC_SAFE_ELT (elts, i, p)
-	  p->value = cp_fold (p->value);
+	  {
+	    tree op = cp_fold (p->value);
+	    constructor_elt e = { p->index, op };
+	    nelts->quick_push (e);
+	    if (op != p->value)
+	      changed = true;
+	  }
+	if (changed)
+	  x = build_constructor (TREE_TYPE (x), nelts);
+	else
+	  vec_free (nelts);
 	break;
       }
     case TREE_VEC: