Message ID | 20210831012654.3293811-1-ppalka@redhat.com
State | New
Series | c++: shortcut bad convs during overload resolution [PR101904]
On Mon, 30 Aug 2021, Patrick Palka wrote:

> In the context of overload resolution we have the notion of a "bad"
> argument conversion, which is a conversion that "would be a permitted
> with a bending of the language standards", and we handle such bad
> conversions specially.  In particular, we rank a bad conversion as
> better than no conversion but worse than a good conversion, and a bad
> conversion doesn't necessarily make a candidate unviable.  With the
> flag -fpermissive, we permit the situation where overload resolution
> selects a candidate that contains a bad conversion (which we call a
> non-strictly viable candidate).  And without the flag we issue a
> distinct permerror in this situation instead.
>
> One consequence of this defacto behavior is that in order to distinguish
> a non-strictly viable candidate from an unviable candidate, if we
> encounter a bad argument conversion during overload resolution we must
> keep converting subsequent arguments because a subsequent conversion
> could render the candidate unviable instead of just non-strictly viable.
> But checking subsequent arguments can force template instantiations and
> result in otherwise avoidable hard errors.  And in particular, all
> 'this' conversions are at worst bad, so this means the const/ref-qualifiers
> of a member function can't be used to prune a candidate quickly, which
> is the subject of the mentioned PR.
>
> This patch tries to improve the situation without changing the defacto
> output of add_candidates.  Specifically, when considering a candidate
> during overload resolution this patch makes us shortcut argument
> conversion checking upon encountering the first bad conversion
> (tentatively marking the candidate as non-strictly viable, though it
> could ultimately be unviable) under the assumption that we'll eventually
> find a strictly viable candidate anyway (rendering the distinction
> between non-strictly viable and unviable moot, since both are worse
> than a strictly viable candidate).  If this assumption turns out to be
> false, we'll fully reconsider the candidate under the defacto behavior
> (without the shortcutting).
>
> So in the best case (there's a strictly viable candidate), we avoid
> some argument conversions and/or template argument deduction that may
> cause a hard error.  In the worst case (there's no such candidate), we
> have to redundantly consider some candidates twice.  (In a previous
> version of the patch, to avoid this redundant checking I created a new
> "deferred" conversion type that represents a conversion that is yet to
> be performed, and instead of reconsidering a candidate I just realized
> its deferred conversions.  But it doesn't seem this redundancy is a
> significant performance issue to justify the added complexity of this
> other approach.)
>
> Lots of care was taken to preserve the defacto behavior w.r.t.
> non-strictly viable candidates, but I wonder how important this behavior
> is nowadays?  Can the notion of a non-strictly viable candidate be done
> away with, or is it here to stay?

To expand on this, as a concrete alternative to this optimistic
shortcutting trick we could maybe recognize non-strictly viable
candidates only when -fpermissive (and just mark them as unviable when
not -fpermissive).  IIUC this would be a backwards compatible change
overall -- only diagnostics would be affected, probably for the better,
since we'd explain the rejection reason for more candidates in the
event of overload resolution failure.

Here's a testcase for which such a change would result in better
diagnostics:

struct A {
  void f(int, int) const; // #1
  void f(int); // #2
};

int main() {
  const A a;
  a.f(0);
}

We currently consider #2 to be a better candidate than #1 because the
bad conversion of the 'this' argument makes it only non-strictly
viable, whereas #1 is considered unviable due to the arity mismatch.
So overload resolution selects #2 and we end up making no mention of #1
in the subsequent diagnostic:

<stdin>: In function ‘int main()’:
<stdin>:8:8: error: passing ‘const A’ as ‘this’ argument discards qualifiers [-fpermissive]
<stdin>:3:8: note: in call to ‘void A::f(int)’

Better would be to explain why neither candidate is a match:

<stdin>:8:6: error: no matching function for call to ‘A::f(int) const’
<stdin>:2:8: note: candidate: ‘void A::f(int, int) const’
<stdin>:2:8: note: candidate expects 2 arguments, 1 provided
<stdin>:3:8: note: candidate: ‘void A::f(int)’
<stdin>:3:8: note: passing ‘const A*’ as ‘this’ argument discards qualifiers

Same for

void f(int, int);
void f(int*);

int main() {
  f(42);
}

for which we currently emit

<stdin>: In function ‘int main()’:
<stdin>:5:5: error: invalid conversion from ‘int’ to ‘int*’ [-fpermissive]
<stdin>:2:8: note: initializing argument 1 of ‘void f(int*)’

instead of

<stdin>: In function ‘int main()’:
<stdin>:5:4: error: no matching function for call to ‘f(int)’
<stdin>:1:6: note: candidate: ‘void f(int, int)’
<stdin>:1:6: note: candidate expects 2 arguments, 1 provided
<stdin>:2:6: note: candidate: ‘void f(int*)’
<stdin>:2:6: note: conversion of argument 1 would be ill-formed:
<stdin>:5:5: error: invalid conversion from ‘int’ to ‘int*’ [-fpermissive]
On 8/31/21 3:15 PM, Patrick Palka wrote:
> On Mon, 30 Aug 2021, Patrick Palka wrote:
>
>> In the context of overload resolution we have the notion of a "bad"
>> argument conversion, which is a conversion that "would be a permitted
>> with a bending of the language standards", and we handle such bad
>> conversions specially.  In particular, we rank a bad conversion as
>> better than no conversion but worse than a good conversion, and a bad
>> conversion doesn't necessarily make a candidate unviable.  With the
>> flag -fpermissive, we permit the situation where overload resolution
>> selects a candidate that contains a bad conversion (which we call a
>> non-strictly viable candidate).  And without the flag we issue a
>> distinct permerror in this situation instead.
>>
>> One consequence of this defacto behavior is that in order to distinguish
>> a non-strictly viable candidate from an unviable candidate, if we
>> encounter a bad argument conversion during overload resolution we must
>> keep converting subsequent arguments because a subsequent conversion
>> could render the candidate unviable instead of just non-strictly viable.
>> But checking subsequent arguments can force template instantiations and
>> result in otherwise avoidable hard errors.  And in particular, all
>> 'this' conversions are at worst bad, so this means the const/ref-qualifiers
>> of a member function can't be used to prune a candidate quickly, which
>> is the subject of the mentioned PR.
>>
>> This patch tries to improve the situation without changing the defacto
>> output of add_candidates.  Specifically, when considering a candidate
>> during overload resolution this patch makes us shortcut argument
>> conversion checking upon encountering the first bad conversion
>> (tentatively marking the candidate as non-strictly viable, though it
>> could ultimately be unviable) under the assumption that we'll eventually
>> find a strictly viable candidate anyway (rendering the distinction
>> between non-strictly viable and unviable moot, since both are worse
>> than a strictly viable candidate).  If this assumption turns out to be
>> false, we'll fully reconsider the candidate under the defacto behavior
>> (without the shortcutting).
>>
>> So in the best case (there's a strictly viable candidate), we avoid
>> some argument conversions and/or template argument deduction that may
>> cause a hard error.  In the worst case (there's no such candidate), we
>> have to redundantly consider some candidates twice.  (In a previous
>> version of the patch, to avoid this redundant checking I created a new
>> "deferred" conversion type that represents a conversion that is yet to
>> be performed, and instead of reconsidering a candidate I just realized
>> its deferred conversions.  But it doesn't seem this redundancy is a
>> significant performance issue to justify the added complexity of this
>> other approach.)

OK, thanks.

>> Lots of care was taken to preserve the defacto behavior w.r.t.
>> non-strictly viable candidates, but I wonder how important this behavior
>> is nowadays?  Can the notion of a non-strictly viable candidate be done
>> away with, or is it here to stay?
>
> To expand on this, as a concrete alternative to this optimistic shortcutting
> trick we could maybe recognize non-strictly viable candidates only when
> -fpermissive (and just mark them as unviable when not -fpermissive).  IIUC
> this would be a backwards compatible change overall -- only diagnostics would
> be affected, probably for the better, since we'd explain the rejection reason
> for more candidates in the event of overload resolution failure.
>
> Here's a testcase for which such a change would result in better diagnostics:
>
> struct A {
>   void f(int, int) const; // #1
>   void f(int); // #2
> };
>
> int main() {
>   const A a;
>   a.f(0);
> }
>
> We currently consider #2 to be a better candidate than #1 because the
> bad conversion of the 'this' argument makes it only non-strictly
> viable, whereas #1 is considered unviable due to the arity mismatch.
> So overload resolution selects #2 and we end up making no mention of #1
> in the subsequent diagnostic:
>
> <stdin>: In function ‘int main()’:
> <stdin>:8:8: error: passing ‘const A’ as ‘this’ argument discards qualifiers [-fpermissive]
> <stdin>:3:8: note: in call to ‘void A::f(int)’
>
> Better would be to explain why neither candidate is a match:
>
> <stdin>:8:6: error: no matching function for call to ‘A::f(int) const’
> <stdin>:2:8: note: candidate: ‘void A::f(int, int) const’
> <stdin>:2:8: note: candidate expects 2 arguments, 1 provided
> <stdin>:3:8: note: candidate: ‘void A::f(int)’
> <stdin>:3:8: note: passing ‘const A*’ as ‘this’ argument discards qualifiers

Is that better? Focusing diagnostics on the candidate you probably meant
seems helpful in most cases.

> Same for
>
> void f(int, int);
> void f(int*);
>
> int main() {
>   f(42);
> }
>
> for which we currently emit
>
> <stdin>: In function ‘int main()’:
> <stdin>:5:5: error: invalid conversion from ‘int’ to ‘int*’ [-fpermissive]
> <stdin>:2:8: note: initializing argument 1 of ‘void f(int*)’
>
> instead of
>
> <stdin>: In function ‘int main()’:
> <stdin>:5:4: error: no matching function for call to ‘f(int)’
> <stdin>:1:6: note: candidate: ‘void f(int, int)’
> <stdin>:1:6: note: candidate expects 2 arguments, 1 provided
> <stdin>:2:6: note: candidate: ‘void f(int*)’
> <stdin>:2:6: note: conversion of argument 1 would be ill-formed:
> <stdin>:5:5: error: invalid conversion from ‘int’ to ‘int*’ [-fpermissive]

Indeed, this one is less helpful; maybe it's time to remove the
int<->pointer bad conversion.  In 2001 apparently it was still needed
(https://gcc.gnu.org/pipermail/gcc-patches/2001-October/059124.html),
but I would hope that the relevant code is dead by now.

Jason
On Thu, 2 Sep 2021, Jason Merrill wrote:

> On 8/31/21 3:15 PM, Patrick Palka wrote:
> > On Mon, 30 Aug 2021, Patrick Palka wrote:
> >
> > > In the context of overload resolution we have the notion of a "bad"
> > > argument conversion, which is a conversion that "would be a permitted
> > > with a bending of the language standards", and we handle such bad
> > > conversions specially.  In particular, we rank a bad conversion as
> > > better than no conversion but worse than a good conversion, and a bad
> > > conversion doesn't necessarily make a candidate unviable.  With the
> > > flag -fpermissive, we permit the situation where overload resolution
> > > selects a candidate that contains a bad conversion (which we call a
> > > non-strictly viable candidate).  And without the flag we issue a
> > > distinct permerror in this situation instead.
> > >
> > > One consequence of this defacto behavior is that in order to distinguish
> > > a non-strictly viable candidate from an unviable candidate, if we
> > > encounter a bad argument conversion during overload resolution we must
> > > keep converting subsequent arguments because a subsequent conversion
> > > could render the candidate unviable instead of just non-strictly viable.
> > > But checking subsequent arguments can force template instantiations and
> > > result in otherwise avoidable hard errors.  And in particular, all
> > > 'this' conversions are at worst bad, so this means the
> > > const/ref-qualifiers
> > > of a member function can't be used to prune a candidate quickly, which
> > > is the subject of the mentioned PR.
> > >
> > > This patch tries to improve the situation without changing the defacto
> > > output of add_candidates.  Specifically, when considering a candidate
> > > during overload resolution this patch makes us shortcut argument
> > > conversion checking upon encountering the first bad conversion
> > > (tentatively marking the candidate as non-strictly viable, though it
> > > could ultimately be unviable) under the assumption that we'll eventually
> > > find a strictly viable candidate anyway (rendering the distinction
> > > between non-strictly viable and unviable moot, since both are worse
> > > than a strictly viable candidate).  If this assumption turns out to be
> > > false, we'll fully reconsider the candidate under the defacto behavior
> > > (without the shortcutting).
> > >
> > > So in the best case (there's a strictly viable candidate), we avoid
> > > some argument conversions and/or template argument deduction that may
> > > cause a hard error.  In the worst case (there's no such candidate), we
> > > have to redundantly consider some candidates twice.  (In a previous
> > > version of the patch, to avoid this redundant checking I created a new
> > > "deferred" conversion type that represents a conversion that is yet to
> > > be performed, and instead of reconsidering a candidate I just realized
> > > its deferred conversions.  But it doesn't seem this redundancy is a
> > > significant performance issue to justify the added complexity of this
> > > other approach.)
>
> OK, thanks.

Thanks a lot.  After retesting the patch, I ran into a libstdc++
testsuite failure involving reversed operator candidates that I somehow
missed earlier.  The failure is due to a bug in the last check below

  if (cand->viable == -1
      && shortcut_bad_convs
      && !cand->convs[cand->num_convs - 1]) // BUG
    {
      /* This candidate has been tentatively marked non-strictly viable,
         and we didn't compute all argument conversions for it (having
         stopped at the first bad conversion).  Add the function to BAD_FNS
         to fully reconsider later if we don't find any strictly viable
         candidates.  */
      bad_fns = lookup_add (fn, bad_fns);
      *candidates = (*candidates)->next;
    }

which assumes a shortcutted candidate must have a missing conversion at
the end of the conversion array (since argument conversions are
processed in left-to-right order).  But for a reversed candidate the
missing conversion would instead be at the front of the array, since we
reversed the order of its conversions in add_candidate.  So I changed
the problematic check to

  !cand->convs[cand->reversed () ? 0 : cand->num_convs - 1])

and committed the patch with this (presumably uncontroversial) change.

> > > Lots of care was taken to preserve the defacto behavior w.r.t.
> > > non-strictly viable candidates, but I wonder how important this behavior
> > > is nowadays?  Can the notion of a non-strictly viable candidate be done
> > > away with, or is it here to stay?
> >
> > To expand on this, as a concrete alternative to this optimistic shortcutting
> > trick we could maybe recognize non-strictly viable candidates only when
> > -fpermissive (and just mark them as unviable when not -fpermissive).  IIUC
> > this would be a backwards compatible change overall -- only diagnostics
> > would
> > be affected, probably for the better, since we'd explain the rejection
> > reason
> > for more candidates in the event of overload resolution failure.
> >
> > Here's a testcase for which such a change would result in better
> > diagnostics:
> >
> > struct A {
> >   void f(int, int) const; // #1
> >   void f(int); // #2
> > };
> > int main() {
> >   const A a;
> >   a.f(0);
> > }
> >
> > We currently consider #2 to be a better candidate than #1 because the
> > bad conversion of the 'this' argument makes it only non-strictly
> > viable, whereas #1 is considered unviable due to the arity mismatch.
> > So overload resolution selects #2 and we end up making no mention of #1
> > in the subsequent diagnostic:
> >
> > <stdin>: In function ‘int main()’:
> > <stdin>:8:8: error: passing ‘const A’ as ‘this’ argument discards
> > qualifiers [-fpermissive]
> > <stdin>:3:8: note: in call to ‘void A::f(int)’
> >
> > Better would be to explain why neither candidate is a match:
> >
> > <stdin>:8:6: error: no matching function for call to ‘A::f(int) const’
> > <stdin>:2:8: note: candidate: ‘void A::f(int, int) const’
> > <stdin>:2:8: note: candidate expects 2 arguments, 1 provided
> > <stdin>:3:8: note: candidate: ‘void A::f(int)’
> > <stdin>:3:8: note: passing ‘const A*’ as ‘this’ argument discards
> > qualifiers
>
> Is that better? Focusing diagnostics on the candidate you probably meant
> seems helpful in most cases.

Ah yeah, I see what you mean.

> >
> > Same for
> >
> > void f(int, int);
> > void f(int*);
> > int main() {
> >   f(42);
> > }
> >
> > for which we currently emit
> >
> > <stdin>: In function ‘int main()’:
> > <stdin>:5:5: error: invalid conversion from ‘int’ to ‘int*’
> > [-fpermissive]
> > <stdin>:2:8: note: initializing argument 1 of ‘void f(int*)’
> >
> > instead of
> >
> > <stdin>: In function ‘int main()’:
> > <stdin>:5:4: error: no matching function for call to ‘f(int)’
> > <stdin>:1:6: note: candidate: ‘void f(int, int)’
> > <stdin>:1:6: note: candidate expects 2 arguments, 1 provided
> > <stdin>:2:6: note: candidate: ‘void f(int*)’
> > <stdin>:2:6: note: conversion of argument 1 would be ill-formed:
> > <stdin>:5:5: error: invalid conversion from ‘int’ to ‘int*’
> > [-fpermissive]
>
> Indeed, this one is less helpful; maybe it's time to remove the int<->pointer
> bad conversion.  In 2001 apparently it was still needed
> (https://gcc.gnu.org/pipermail/gcc-patches/2001-October/059124.html), but I
> would hope that the relevant code is dead by now.

Interesting, I'll look into removing it and see what might break.
diff --git a/gcc/cp/call.c b/gcc/cp/call.c
index e4df72ec1a3..2a0bf933a89 100644
--- a/gcc/cp/call.c
+++ b/gcc/cp/call.c
@@ -175,17 +175,17 @@ static struct z_candidate *splice_viable (struct z_candidate *, bool, bool *);
 static bool any_strictly_viable (struct z_candidate *);
 static struct z_candidate *add_template_candidate
 	(struct z_candidate **, tree, tree, tree, tree, const vec<tree, va_gc> *,
-	 tree, tree, tree, int, unification_kind_t, tsubst_flags_t);
+	 tree, tree, tree, int, unification_kind_t, bool, tsubst_flags_t);
 static struct z_candidate *add_template_candidate_real
 	(struct z_candidate **, tree, tree, tree, tree, const vec<tree, va_gc> *,
-	 tree, tree, tree, int, tree, unification_kind_t, tsubst_flags_t);
+	 tree, tree, tree, int, tree, unification_kind_t, bool, tsubst_flags_t);
 static bool is_complete (tree);
 static struct z_candidate *add_conv_candidate
 	(struct z_candidate **, tree, tree, const vec<tree, va_gc> *, tree,
 	 tree, tsubst_flags_t);
 static struct z_candidate *add_function_candidate
 	(struct z_candidate **, tree, tree, tree, const vec<tree, va_gc> *, tree,
-	 tree, int, conversion**, tsubst_flags_t);
+	 tree, int, conversion**, bool, tsubst_flags_t);
 static conversion *implicit_conversion (tree, tree, tree, bool, int,
 					tsubst_flags_t);
 static conversion *reference_binding (tree, tree, tree, bool, int,
@@ -2241,6 +2241,56 @@ conv_flags (int i, int nargs, tree fn, tree arg, int flags)
   return lflags;
 }
 
+/* Build an appropriate 'this' conversion for the method FN and class
+   type CTYPE from the value ARG (having type ARGTYPE) to the type PARMTYPE.
+   This function modifies PARMTYPE, ARGTYPE and ARG.  */
+
+static conversion *
+build_this_conversion (tree fn, tree ctype,
+		       tree& parmtype, tree& argtype, tree& arg,
+		       int flags, tsubst_flags_t complain)
+{
+  gcc_assert (DECL_NONSTATIC_MEMBER_FUNCTION_P (fn)
+	      && !DECL_CONSTRUCTOR_P (fn));
+
+  /* The type of the implicit object parameter ('this') for
+     overload resolution is not always the same as for the
+     function itself; conversion functions are considered to
+     be members of the class being converted, and functions
+     introduced by a using-declaration are considered to be
+     members of the class that uses them.
+
+     Since build_over_call ignores the ICS for the `this'
+     parameter, we can just change the parm type.  */
+  parmtype = cp_build_qualified_type (ctype,
+				      cp_type_quals (TREE_TYPE (parmtype)));
+  bool this_p = true;
+  if (FUNCTION_REF_QUALIFIED (TREE_TYPE (fn)))
+    {
+      /* If the function has a ref-qualifier, the implicit
+	 object parameter has reference type.  */
+      bool rv = FUNCTION_RVALUE_QUALIFIED (TREE_TYPE (fn));
+      parmtype = cp_build_reference_type (parmtype, rv);
+      /* The special handling of 'this' conversions in compare_ics
+	 does not apply if there is a ref-qualifier.  */
+      this_p = false;
+    }
+  else
+    {
+      parmtype = build_pointer_type (parmtype);
+      /* We don't use build_this here because we don't want to
+	 capture the object argument until we've chosen a
+	 non-static member function.  */
+      arg = build_address (arg);
+      argtype = lvalue_type (arg);
+    }
+  flags |= LOOKUP_ONLYCONVERTING;
+  conversion *t = implicit_conversion (parmtype, argtype, arg,
+				       /*c_cast_p=*/false, flags, complain);
+  t->this_p = this_p;
+  return t;
+}
+
 /* Create an overload candidate for the function or method FN called
    with the argument list FIRST_ARG/ARGS and add it to CANDIDATES.
    FLAGS is passed on to implicit_conversion.
@@ -2248,7 +2298,14 @@ conv_flags (int i, int nargs, tree fn, tree arg, int flags)
    This does not change ARGS.
 
    CTYPE, if non-NULL, is the type we want to pretend this function
-   comes from for purposes of overload resolution.  */
+   comes from for purposes of overload resolution.
+
+   SHORTCUT_BAD_CONVS controls how we handle "bad" argument conversions.
+   If true, we stop computing conversions upon seeing the first bad
+   conversion.  This is used by add_candidates to avoid computing
+   more conversions than necessary in the presence of a strictly viable
+   candidate, while preserving the defacto behavior of overload resolution
+   when it turns out there are only non-strictly viable candidates.  */
 
 static struct z_candidate *
 add_function_candidate (struct z_candidate **candidates,
@@ -2256,6 +2313,7 @@ add_function_candidate (struct z_candidate **candidates,
 			const vec<tree, va_gc> *args, tree access_path,
 			tree conversion_path, int flags,
 			conversion **convs,
+			bool shortcut_bad_convs,
 			tsubst_flags_t complain)
 {
   tree parmlist = TYPE_ARG_TYPES (TREE_TYPE (fn));
@@ -2377,8 +2435,6 @@ add_function_candidate (struct z_candidate **candidates,
     {
       tree argtype, to_type;
       tree arg;
-      conversion *t;
-      int is_this;
 
       if (parmnode == void_list_node)
 	break;
@@ -2397,54 +2453,23 @@ add_function_candidate (struct z_candidate **candidates,
 	       (*args)[i + skip - (first_arg != NULL_TREE ? 1 : 0)]);
       argtype = lvalue_type (arg);
 
-      is_this = (i == 0 && DECL_NONSTATIC_MEMBER_FUNCTION_P (fn)
-		 && ! DECL_CONSTRUCTOR_P (fn));
-
+      conversion *t;
       if (parmnode)
 	{
 	  tree parmtype = TREE_VALUE (parmnode);
-
-	  parmnode = TREE_CHAIN (parmnode);
-
-	  /* The type of the implicit object parameter ('this') for
-	     overload resolution is not always the same as for the
-	     function itself; conversion functions are considered to
-	     be members of the class being converted, and functions
-	     introduced by a using-declaration are considered to be
-	     members of the class that uses them.
-
-	     Since build_over_call ignores the ICS for the `this'
-	     parameter, we can just change the parm type.  */
-	  if (ctype && is_this)
+	  if (i == 0
+	      && DECL_NONSTATIC_MEMBER_FUNCTION_P (fn)
+	      && !DECL_CONSTRUCTOR_P (fn))
+	    t = build_this_conversion (fn, ctype, parmtype, argtype, arg,
+				       flags, complain);
+	  else
 	    {
-	      parmtype = cp_build_qualified_type
-		(ctype, cp_type_quals (TREE_TYPE (parmtype)));
-	      if (FUNCTION_REF_QUALIFIED (TREE_TYPE (fn)))
-		{
-		  /* If the function has a ref-qualifier, the implicit
-		     object parameter has reference type.  */
-		  bool rv = FUNCTION_RVALUE_QUALIFIED (TREE_TYPE (fn));
-		  parmtype = cp_build_reference_type (parmtype, rv);
-		  /* The special handling of 'this' conversions in compare_ics
-		     does not apply if there is a ref-qualifier.  */
-		  is_this = false;
-		}
-	      else
-		{
-		  parmtype = build_pointer_type (parmtype);
-		  /* We don't use build_this here because we don't want to
-		     capture the object argument until we've chosen a
-		     non-static member function.  */
-		  arg = build_address (arg);
-		  argtype = lvalue_type (arg);
-		}
+	      int lflags = conv_flags (i, len-skip, fn, arg, flags);
+	      t = implicit_conversion (parmtype, argtype, arg,
+				       /*c_cast_p=*/false, lflags, complain);
 	    }
-
-	  int lflags = conv_flags (i, len-skip, fn, arg, flags);
-
-	  t = implicit_conversion (parmtype, argtype, arg,
-				   /*c_cast_p=*/false, lflags, complain);
 	  to_type = parmtype;
+	  parmnode = TREE_CHAIN (parmnode);
 	}
       else
 	{
@@ -2453,9 +2478,6 @@ add_function_candidate (struct z_candidate **candidates,
 	  to_type = argtype;
 	}
 
-      if (t && is_this)
-	t->this_p = true;
-
       convs[i] = t;
       if (! t)
 	{
@@ -2470,7 +2492,8 @@ add_function_candidate (struct z_candidate **candidates,
 	  viable = -1;
 	  reason = bad_arg_conversion_rejection (first_arg, i, arg, to_type,
 						 EXPR_LOCATION (arg));
-
+	  if (shortcut_bad_convs)
+	    break;
 	}
     }
 
@@ -3354,7 +3377,9 @@ add_builtin_candidates (struct z_candidate **candidates, enum tree_code code,
    This does not change ARGLIST.  The RETURN_TYPE is the desired type
    for conversion operators.  If OBJ is NULL_TREE, FLAGS and CTYPE are
    as for add_function_candidate.
   If an OBJ is supplied, FLAGS and
-   CTYPE are ignored, and OBJ is as for add_conv_candidate.  */
+   CTYPE are ignored, and OBJ is as for add_conv_candidate.
+
+   SHORTCUT_BAD_CONVS is as in add_function_candidate.  */
 
 static struct z_candidate*
 add_template_candidate_real (struct z_candidate **candidates, tree tmpl,
@@ -3362,7 +3387,7 @@ add_template_candidate_real (struct z_candidate **candidates, tree tmpl,
 			     const vec<tree, va_gc> *arglist, tree return_type,
 			     tree access_path, tree conversion_path,
 			     int flags, tree obj, unification_kind_t strict,
-			     tsubst_flags_t complain)
+			     bool shortcut_bad_convs, tsubst_flags_t complain)
 {
   int ntparms = DECL_NTPARMS (tmpl);
   tree targs = make_tree_vec (ntparms);
@@ -3454,7 +3479,33 @@ add_template_candidate_real (struct z_candidate **candidates, tree tmpl,
   errs = errorcount+sorrycount;
 
   if (!obj)
-    convs = alloc_conversions (nargs);
+    {
+      convs = alloc_conversions (nargs);
+
+      if (shortcut_bad_convs
+	  && DECL_NONSTATIC_MEMBER_FUNCTION_P (tmpl)
+	  && !DECL_CONSTRUCTOR_P (tmpl))
+	{
+	  /* Check the 'this' conversion before proceeding with deduction.
+	     This is effectively an extension of the DR 1391 resolution
+	     that we perform in check_non_deducible_conversions, though it's
+	     convenient to do this extra check here instead of there.  */
+	  tree parmtype = TREE_VALUE (TYPE_ARG_TYPES (TREE_TYPE (tmpl)));
+	  tree argtype = lvalue_type (first_arg);
+	  tree arg = first_arg;
+	  conversion *t = build_this_conversion (tmpl, ctype,
+						 parmtype, argtype, arg,
+						 flags, complain);
+	  convs[0] = t;
+	  if (t->bad_p)
+	    {
+	      reason = bad_arg_conversion_rejection (first_arg, 0,
+						     arg, parmtype,
+						     EXPR_LOCATION (arg));
+	      goto fail;
+	    }
+	}
+    }
 
   fn = fn_type_unification (tmpl, explicit_targs, targs,
 			    args_without_in_chrg,
 			    nargs_without_in_chrg,
@@ -3501,7 +3552,8 @@ add_template_candidate_real (struct z_candidate **candidates, tree tmpl,
   else
     cand = add_function_candidate (candidates, fn, ctype,
 				   first_arg, arglist, access_path,
-				   conversion_path, flags, convs, complain);
+				   conversion_path, flags, convs,
+				   shortcut_bad_convs, complain);
   if (DECL_TI_TEMPLATE (fn) != tmpl)
     /* This situation can occur if a member template of a template
       class is specialized.  Then, instantiate_template might return
@@ -3527,8 +3579,9 @@ add_template_candidate_real (struct z_candidate **candidates, tree tmpl,
   return cand;
 
  fail:
-  return add_candidate (candidates, tmpl, first_arg, arglist, nargs, NULL,
-			access_path, conversion_path, 0, reason, flags);
+  int viable = (reason->code == rr_bad_arg_conversion ? -1 : 0);
+  return add_candidate (candidates, tmpl, first_arg, arglist, nargs, convs,
+			access_path, conversion_path, viable, reason, flags);
 }
 
 
@@ -3537,13 +3590,15 @@ add_template_candidate (struct z_candidate **candidates, tree tmpl, tree ctype,
 			tree explicit_targs, tree first_arg,
 			const vec<tree, va_gc> *arglist, tree return_type,
 			tree access_path, tree conversion_path, int flags,
-			unification_kind_t strict, tsubst_flags_t complain)
+			unification_kind_t strict, bool shortcut_bad_convs,
+			tsubst_flags_t complain)
 {
   return add_template_candidate_real (candidates, tmpl, ctype,
 				      explicit_targs, first_arg, arglist,
 				      return_type, access_path,
 				      conversion_path, flags, NULL_TREE,
-				      strict, complain);
+				      strict, shortcut_bad_convs,
+				      complain);
 }
 
 /* Create an overload candidate for the conversion function template TMPL,
@@ -3569,7 +3624,7 @@ add_template_conv_candidate (struct z_candidate **candidates, tree tmpl,
   add_template_candidate_real (candidates, tmpl, NULL_TREE, NULL_TREE,
 			       NULL_TREE, arglist, return_type, access_path,
 			       conversion_path, 0, obj, DEDUCE_CALL,
-			       complain);
+			       /*shortcut_bad_convs=*/false, complain);
 }
 
 /* The CANDS are the set of candidates that were considered for
@@ -6013,6 +6068,7 @@ add_candidates (tree fns, tree first_arg, const vec<tree, va_gc> *args,
   /* Delay creating the implicit this parameter until it is needed.  */
   non_static_args = NULL;
 
+  bool seen_strictly_viable = any_strictly_viable (*candidates);
   /* If there's a non-template perfect match, we don't need to consider
      templates.  So check non-templates first.  This optimization is
      only really needed for the defaulted copy constructor of tuple and the like
@@ -6024,6 +6080,19 @@ add_candidates (tree fns, tree first_arg, const vec<tree, va_gc> *args,
   else /*if (flags & LOOKUP_DEFAULTED)*/
     which = non_templates;
 
+  /* During overload resolution, we first consider each function under the
+     assumption that we'll eventually find a strictly viable candidate.
+     This allows us to circumvent our defacto behavior when checking
+     argument conversions and shortcut consideration of the candidate
+     upon encountering the first bad conversion.  If this assumption
+     turns out to be false, and all candidates end up being non-strictly
+     viable, then we reconsider such candidates under the defacto behavior.
+     This trick is important in order to be able to prune member function
+     overloads according to their const/ref-qualifiers (since all 'this'
+     conversions are at worst bad) without breaking -fpermissive.  */
+  tree bad_fns = NULL_TREE;
+  bool shortcut_bad_convs = true;
+
  again:
   for (tree fn : lkp_range (fns))
     {
@@ -6070,18 +6139,22 @@ add_candidates (tree fns, tree first_arg, const vec<tree, va_gc> *args,
 	}
 
       if (TREE_CODE (fn) == TEMPLATE_DECL)
-	add_template_candidate (candidates,
-				fn,
-				ctype,
-				explicit_targs,
-				fn_first_arg,
-				fn_args,
-				return_type,
-				access_path,
-				conversion_path,
-				flags,
-				strict,
-				complain);
+	{
+	  if (!add_template_candidate (candidates,
+				       fn,
+				       ctype,
+				       explicit_targs,
+				       fn_first_arg,
+				       fn_args,
+				       return_type,
+				       access_path,
+				       conversion_path,
+				       flags,
+				       strict,
+				       shortcut_bad_convs,
+				       complain))
+	    continue;
+	}
       else
 	{
 	  add_function_candidate (candidates,
@@ -6093,16 +6166,47 @@ add_candidates (tree fns, tree first_arg, const vec<tree, va_gc> *args,
 				  conversion_path,
 				  flags,
 				  NULL,
+				  shortcut_bad_convs,
 				  complain);
 	  if (perfect_candidate_p (*candidates))
 	    seen_perfect = true;
 	}
+
+      z_candidate *cand = *candidates;
+      if (cand->viable == 1)
+	seen_strictly_viable = true;
+
+      if (cand->viable == -1
+	  && shortcut_bad_convs
+	  && !cand->convs[cand->num_convs - 1])
+	{
+	  /* This candidate has been tentatively marked non-strictly viable,
+	     and we didn't compute all argument conversions for it (having
+	     stopped at the first bad conversion).  Add the function to BAD_FNS
+	     to fully reconsider later if we fail to obtain any strictly viable
+	     candidates.  */
+	  bad_fns = lookup_add (fn, bad_fns);
+	  *candidates = (*candidates)->next;
+	}
     }
 
   if (which == non_templates && !seen_perfect)
     {
      which = templates;
      goto again;
    }
+  else if (which == templates
+	   && !seen_strictly_viable
+	   && shortcut_bad_convs
+	   && bad_fns)
+    {
+      /* None of the candidates are strictly viable, so check again those
+	 functions in BAD_FNS, this time without shortcutting bad conversions
+	 so that all their argument conversions are computed.  */
+      which = either;
+      fns = bad_fns;
+      shortcut_bad_convs = false;
+      goto again;
+    }
 }
 
 /* Returns 1 if P0145R2 says that the LHS of operator CODE is evaluated
    first,
diff --git a/gcc/testsuite/g++.dg/template/conv17.C b/gcc/testsuite/g++.dg/template/conv17.C
new file mode 100644
index 00000000000..794dcf00657
--- /dev/null
+++ b/gcc/testsuite/g++.dg/template/conv17.C
@@ -0,0 +1,56 @@
+// PR c++/101904
+// Verify we stop at the first bad conversion when considering a candidate
+// during overload resolution.
+
+template<class T>
+struct A { typedef typename T::type type; };
+
+struct B {
+  // A conversion function that always induces a hard error.
+  template<class T> B(T, typename A<T>::type = 0);
+};
+
+struct C {
+  template<class T> void f(T, typename A<T>::type); // #1
+  template<class T> void f(T, T) const; // #2
+
+  static void g(int*, B); // #3
+  static void g(int, int); // #4
+
+#if __cpp_ref_qualifiers
+  void h(B) &; // #5
+  void h(int) &&; // #6
+#endif
+};
+
+int main() {
+  const C c;
+
+  // The bad conversion for the 'this' argument should preclude us from further
+  // considering the non-const #1 (which would have caused a hard error during
+  // instantiation).  This behavior is essentially DR 1391 extended to the
+  // 'this' argument.
+  c.f(0, 0); // resolves to #2
+  c.f<int>(0, 0);
+
+  // Likewise for the bad conversion for the 1st argument in #3.
+  C::g(42, 42); // resolves to #4
+
+#if __cpp_ref_qualifiers
+  // Likewise for the bad 'this' conversion in #5.
+  C().h(0); // resolves to #6
+#endif
+}
+
+#if __cpp_concepts
+// Test the same calls in a SFINAE context.
+template<class T>
+concept D = requires (const T t) {
+  t.f(0, 0);
+  t.f<int>(0, 0);
+  T::g(42, 42);
+  T().h(0);
+};
+
+static_assert(D<C>);
+#endif