From patchwork Tue Sep 12 13:05:40 2017
X-Patchwork-Submitter: Nathan Sidwell
X-Patchwork-Id: 812837
To: GCC Patches
From: Nathan Sidwell
Subject: [C++ PATCH] Kill SORTED_FIELDS
Message-ID: <16ca2332-0c9f-4417-5963-8d5adaf836f6@acm.org>
Date: Tue, 12 Sep 2017 09:05:40 -0400

This patch kills the SORTED_FIELDS vector from the C++ FE.  After a type
is completed, we now hold everything on METHOD_VEC, and thus only need 1
binary search, rather than 2 (or worse, under certain conditions).  As I
said at the Cauldron, my earlier attempt to approach this from a
different direction proved too difficult, so I reverted the patch
changing SORTED_FIELDS into a hash-map.

There's no change in the lookup during class definition (so, a linear
METHOD_VEC walk followed by a linear TYPE_FIELDS walk).  At class
completion, we extend the METHOD_VEC, push the (non-function)
TYPE_FIELDS members onto it and then sort it.  This can result in
adjacent members with the same name -- for example a record name and a
non-record member, or a function set and a dependent using declaration.
I modify the method-cmp function to provide a complete order for these,
and then deploy a de-duping pass.  The de-duping has to handle the
following 2 cases:

1) An elaborated-type name and something else.  Deploy the STAT_HACK
machinery.

2) A dependent using declaration and an overload set.  Wrap the using
decl as the first member of an overload set.

Once that is done, we have a single slot per name (unlike the
SORTED_FIELDS handling, which dealt with the STAT_HACK by ordering the
entries in a specific way and then tweaking the binary search
behaviour).  We need to handle the dependent using decl vs overload case
in the way we do, because some subsequent lookups explicitly want the
overload set, but the general lookup needs the using-decl to make sure
the lookup is redone at instantiation time.

Cursory performance testing showed a 3%-4% parsing and instantiation
improvement.  I think memory usage is essentially unchanged, as we're
replacing 2 vectors with 1 vector containing their concatenation.

There are 2 follow-on patches:
1) method_vec is now the wrong name.  It should be member_vec.
2) The remaining sorted_fields handling can be moved from c-common to
the C FE itself.

Applied to trunk.

nathan

2017-09-12  Nathan Sidwell

	Kill CLASSTYPE_SORTED_FIELDS.
	* cp-tree.h (struct lang_type): Lose sorted_fields member.
	(CLASSTYPE_SORTED_FIELDS): Delete.
	* name-lookup.h (set_class_bindings): Add EXTRA arg.
	* name-lookup.c (fields_linear_search): New, broken out of ...
	(lookup_field_1): ... here.  Delete remainder of function.
	(get_class_binding_direct): Reimplement without sorted_fields.
	(get_class_binding): Rename TYPE arg to KLASS, for consistency.
	(get_method_slot): Call set_class_bindings when creating
	method_vec on a complete type.
	(method_name_cmp): Order identically named slots.
	(sorted_fields_type_new): Delete.
	(field_vec_append_class_fields): Rename to ...
	(method_vec_append_class_fields): ... here.  Adjust.
	(field_vec_append_enum_values): Rename to ...
	(method_vec_append_enum_values): ... here.  Adjust.
	(method_vec_dedup): New.
	(set_class_bindings): Reimplement.
	(insert_late_enum_def_bindings): Reimplement.
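For readers less familiar with the class-member machinery, here is a
minimal stand-alone sketch of the general scheme described above -- one
vector of members, sorted once at completion, folded so there is a
single slot per name, and then answered with a single binary search per
lookup.  It is illustrative only and does not use GCC's tree types;
member_entry, dedup and lookup are invented names, and this toy dedup
simply keeps the first entry of a same-name run rather than merging via
STAT_HACK/OVERLOAD as the patch does.

// Illustrative only: one sorted vector, deduped, then binary-searched.
#include <algorithm>
#include <string>
#include <vector>

struct member_entry            // hypothetical stand-in for a METHOD_VEC slot
{
  std::string name;            // DECL_NAME analogue
  int payload;                 // a real front end would hold a tree here
};

static bool
member_name_cmp (const member_entry &a, const member_entry &b)
{
  return a.name < b.name;
}

// Fold adjacent same-name entries into one slot.  (GCC's method_vec_dedup
// merges them -- STAT_HACK for types, OVERLOAD for dependent using decls --
// instead of discarding them as this toy version does.)
static void
dedup (std::vector<member_entry> &vec)
{
  auto last = std::unique (vec.begin (), vec.end (),
			   [] (const member_entry &a, const member_entry &b)
			   { return a.name == b.name; });
  vec.erase (last, vec.end ());
}

// After class completion, every name lookup is one binary search.
static const member_entry *
lookup (const std::vector<member_entry> &vec, const std::string &name)
{
  auto it = std::lower_bound (vec.begin (), vec.end (),
			      member_entry {name, 0}, member_name_cmp);
  return (it != vec.end () && it->name == name) ? &*it : nullptr;
}

Completion then amounts to: sort with member_name_cmp, run dedup, and
thereafter use lookup -- the same shape as set_class_bindings,
method_vec_dedup and method_vec_binary_search in the patch below.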
Index: cp-tree.h =================================================================== --- cp-tree.h (revision 252002) +++ cp-tree.h (working copy) @@ -2008,10 +2008,6 @@ struct GTY(()) lang_type { as a list of adopted protocols or a pointer to a corresponding @interface. See objc/objc-act.h for details. */ tree objc_info; - /* sorted_fields is sorted based on a pointer, so we need to be able - to resort it if pointers get rearranged. */ - struct sorted_fields_type * GTY ((reorder ("resort_sorted_fields"))) - sorted_fields; /* FIXME reuse another field? */ tree lambda_expr; }; @@ -3229,11 +3225,6 @@ extern void decl_shadowed_for_var_insert && TREE_CODE (TYPE_NAME (NODE)) == TYPE_DECL \ && TYPE_DECL_ALIAS_P (TYPE_NAME (NODE))) -/* For a class type: if this structure has many fields, we'll sort them - and put them into a TREE_VEC. */ -#define CLASSTYPE_SORTED_FIELDS(NODE) \ - (LANG_TYPE_CLASS_CHECK (NODE)->sorted_fields) - /* If non-NULL for a VAR_DECL, FUNCTION_DECL, TYPE_DECL or TEMPLATE_DECL, the entity is either a template specialization (if DECL_USE_TEMPLATE is nonzero) or the abstract instance of the Index: name-lookup.c =================================================================== --- name-lookup.c (revision 252004) +++ name-lookup.c (working copy) @@ -1156,103 +1156,54 @@ method_vec_linear_search (vecelts[0]; - int lo = 0, hi = CLASSTYPE_SORTED_FIELDS (type)->len; - int i; + tree decl = fields; - while (lo < hi) + if (!want_type + && TREE_CODE (decl) == FIELD_DECL + && ANON_AGGR_TYPE_P (TREE_TYPE (decl))) { - i = (lo + hi) / 2; - - if (DECL_NAME (fields[i]) > name) - hi = i; - else if (DECL_NAME (fields[i]) < name) - lo = i + 1; + tree anon = TREE_TYPE (decl); + gcc_assert (COMPLETE_TYPE_P (anon)); + tree temp; + + if (vec *method_vec = CLASSTYPE_METHOD_VEC (anon)) + temp = method_vec_linear_search (method_vec, name); else - { - field = NULL_TREE; - - /* We might have a nested class and a field with the - same name; we sorted them appropriately via - field_decl_cmp, so just look for the first or last - field with this name. */ - if (want_type) - { - do - field = fields[i--]; - while (i >= lo && DECL_NAME (fields[i]) == name); - if (!DECL_DECLARES_TYPE_P (field)) - field = NULL_TREE; - } - else - { - do - field = fields[i++]; - while (i < hi && DECL_NAME (fields[i]) == name); - } - - if (field) - { - field = strip_using_decl (field); - if (is_overloaded_fn (field)) - field = NULL_TREE; - } + temp = fields_linear_search (anon, name, want_type); - return field; + if (temp) + { + /* Anon members can only contain fields. */ + gcc_assert (!STAT_HACK_P (temp) && !DECL_DECLARES_TYPE_P (temp)); + return temp; } } - return NULL_TREE; - } - field = TYPE_FIELDS (type); - - for (field = TYPE_FIELDS (type); field; field = DECL_CHAIN (field)) - { - tree decl = field; - - if (DECL_DECLARES_FUNCTION_P (decl)) - /* Functions are kep separately, at the moment. */ + if (DECL_NAME (decl) != name) continue; - - gcc_assert (DECL_P (field)); - if (DECL_NAME (field) == NULL_TREE - && ANON_AGGR_TYPE_P (TREE_TYPE (field))) - { - tree temp = lookup_field_1 (TREE_TYPE (field), name, want_type); - if (temp) - return temp; - } - - if (TREE_CODE (decl) == USING_DECL - && DECL_NAME (decl) == name) + + if (TREE_CODE (decl) == USING_DECL) { decl = strip_using_decl (decl); if (is_overloaded_fn (decl)) continue; } - if (DECL_NAME (decl) == name - && (!want_type || DECL_DECLARES_TYPE_P (decl))) + if (DECL_DECLARES_FUNCTION_P (decl)) + /* Functions are found separately. 
*/ + continue; + + if (!want_type || DECL_DECLARES_TYPE_P (decl)) return decl; } - /* We used to special-case vptr_identifier. Make sure it's not - special any more. */ - gcc_assert (name != vptr_identifier || !TYPE_VFIELD (type)); - return NULL_TREE; } @@ -1273,25 +1224,49 @@ get_class_binding_direct (tree klass, tr bool conv_op = IDENTIFIER_CONV_OP_P (name); tree lookup = conv_op ? conv_op_identifier : name; tree val = NULL_TREE; + vec *method_vec = CLASSTYPE_METHOD_VEC (klass); - if (type_or_fns > 0) - /* User wants type. Don't look in method_vec. */; - else if (vec *method_vec = CLASSTYPE_METHOD_VEC (klass)) + if (COMPLETE_TYPE_P (klass) && method_vec) { - if (COMPLETE_TYPE_P (klass)) - val = method_vec_binary_search (method_vec, lookup); - else - val = method_vec_linear_search (method_vec, lookup); + val = method_vec_binary_search (method_vec, lookup); + if (!val) + ; + else if (type_or_fns > 0) + { + if (STAT_HACK_P (val)) + val = STAT_TYPE (val); + else if (!DECL_DECLARES_TYPE_P (val)) + val = NULL_TREE; + } + else if (STAT_HACK_P (val)) + val = STAT_DECL (val); + + if (val && TREE_CODE (val) == OVERLOAD + && TREE_CODE (OVL_FUNCTION (val)) == USING_DECL) + { + /* An overload with a dependent USING_DECL. Does the caller + want the USING_DECL or the functions? */ + if (type_or_fns < 0) + val = OVL_CHAIN (val); + else + val = OVL_FUNCTION (val); + } } + else + { + if (method_vec && type_or_fns <= 0) + val = method_vec_linear_search (method_vec, lookup); - if (type_or_fns < 0) - /* User wants functions. Don't look for a field. */; - else if (!val || (TREE_CODE (val) == OVERLOAD && OVL_USING_P (val))) - /* Dependent using declarations are a 'field', make sure we - return that even if we saw an overload already. */ - if (tree field_val = lookup_field_1 (klass, lookup, type_or_fns > 0)) - if (!val || TREE_CODE (field_val) == USING_DECL) - val = field_val; + if (type_or_fns < 0) + /* Don't bother looking for field. We don't want it. */; + else if (!val || (TREE_CODE (val) == OVERLOAD && OVL_USING_P (val))) + /* Dependent using declarations are a 'field', make sure we + return that even if we saw an overload already. */ + if (tree field_val = fields_linear_search (klass, lookup, + type_or_fns > 0)) + if (!val || TREE_CODE (field_val) == USING_DECL) + val = field_val; + } /* Extract the conversion operators asked for, unless the general conversion operator was requested. */ @@ -1359,6 +1334,15 @@ get_method_slot (tree klass, tree name) { vec_alloc (method_vec, 8); CLASSTYPE_METHOD_VEC (klass) = method_vec; + if (complete_p) + { + /* If the class is complete but had no method_vec, we need + to add the TYPE_FIELDS into it. We're also most likely + to be adding ctors & dtors, so ask for 6 spare slots (the + abstract cdtors and their clones). */ + set_class_bindings (klass, 6); + method_vec = CLASSTYPE_METHOD_VEC (klass); + } } if (IDENTIFIER_CONV_OP_P (name)) @@ -1372,6 +1356,12 @@ get_method_slot (tree klass, tree name) if (fn_name == name) { + /* If we found an existing slot, it must be a function set. + Even with insertion after completion, because those only + happen with artificial fns that have unspellable names. + This means we do not have to deal with the stat hack + either. */ + gcc_checking_assert (OVL_P (*slot)); if (name == conv_op_identifier) { gcc_checking_assert (OVL_FUNCTION (*slot) == conv_op_marker); @@ -1413,7 +1403,9 @@ get_method_slot (tree klass, tree name) } /* Comparison function to compare two TYPE_METHOD_VEC entries by - name. */ + name. 
Because we can have duplicates during insertion of + TYPE_FIELDS, we do extra checking so deduping doesn't have to deal + with so many cases. */ static int method_name_cmp (const void *a_p, const void *b_p) @@ -1423,8 +1415,48 @@ method_name_cmp (const void *a_p, const tree name_a = DECL_NAME (TREE_CODE (a) == OVERLOAD ? OVL_FUNCTION (a) : a); tree name_b = DECL_NAME (TREE_CODE (b) == OVERLOAD ? OVL_FUNCTION (b) : b); - gcc_checking_assert (name_a && name_b && name_a != name_b); - return name_a < name_b ? -1 : +1; + gcc_checking_assert (name_a && name_b); + if (name_a != name_b) + return name_a < name_b ? -1 : +1; + + if (name_a == conv_op_identifier) + { + /* Strip the conv-op markers. */ + gcc_checking_assert (OVL_FUNCTION (a) == conv_op_marker + && OVL_FUNCTION (b) == conv_op_marker); + a = OVL_CHAIN (a); + b = OVL_CHAIN (b); + } + + if (TREE_CODE (a) == OVERLOAD) + a = OVL_FUNCTION (a); + if (TREE_CODE (b) == OVERLOAD) + b = OVL_FUNCTION (b); + + /* We're in STAT_HACK or USING_DECL territory (or possibly error-land). */ + if (TREE_CODE (a) == TREE_CODE (b)) + /* We can get two TYPE_DECLs or two USING_DECLs. Place in source + order. */ + return DECL_SOURCE_LOCATION (a) < DECL_SOURCE_LOCATION (b) ? -1 : +1; + + /* If one of them is a TYPE_DECL, it loses. */ + if (TREE_CODE (a) == TYPE_DECL) + return +1; + else if (TREE_CODE (b) == TYPE_DECL) + return -1; + + /* If one of them is a USING_DECL, it loses. */ + if (TREE_CODE (a) == USING_DECL) + return +1; + else if (TREE_CODE (b) == USING_DECL) + return -1; + + /* There are no other cases, as duplicate detection should have + kicked in earlier. However, some erroneous cases get though. + Order by source location. We should really prevent this + happening. */ + gcc_assert (errorcount); + return DECL_SOURCE_LOCATION (a) < DECL_SOURCE_LOCATION (b) ? -1 : +1; } static struct { @@ -1467,20 +1499,6 @@ resort_type_method_vec (void *obj, void } } -/* Allocate and return an instance of struct sorted_fields_type with - N fields. */ - -static struct sorted_fields_type * -sorted_fields_type_new (int n) -{ - struct sorted_fields_type *sft; - sft = (sorted_fields_type *) ggc_internal_alloc (sizeof (sorted_fields_type) - + n * sizeof (tree)); - sft->len = n; - - return sft; -} - /* Recursively count the number of fields in KLASS, including anonymous union members. */ @@ -1501,58 +1519,176 @@ count_class_fields (tree klass) return n_fields; } -/* Append all the nonfunction members fields of KLASS to FIELD_VEC - starting at IDX. Recurse for anonymous members. The array must - have space. Returns the next available index. */ +/* Append all the nonfunction members fields of KLASS to METHOD_VEC. + Recurse for anonymous members. METHOD_VEC must have space. */ -static unsigned -field_vec_append_class_fields (struct sorted_fields_type *field_vec, - tree klass, unsigned idx) +static void +method_vec_append_class_fields (vec *method_vec, tree klass) { for (tree fields = TYPE_FIELDS (klass); fields; fields = DECL_CHAIN (fields)) if (DECL_DECLARES_FUNCTION_P (fields)) /* Functions are handled separately. */; else if (TREE_CODE (fields) == FIELD_DECL && ANON_AGGR_TYPE_P (TREE_TYPE (fields))) - idx = field_vec_append_class_fields (field_vec, TREE_TYPE (fields), idx); + method_vec_append_class_fields (method_vec, TREE_TYPE (fields)); else if (DECL_NAME (fields)) - field_vec->elts[idx++] = fields; - - return idx; + { + tree field = fields; + /* Mark a conv-op USING_DECL with the conv-op-marker. 
*/ + if (TREE_CODE (field) == USING_DECL + && IDENTIFIER_CONV_OP_P (DECL_NAME (field))) + field = ovl_make (conv_op_marker, field); + method_vec->quick_push (field); + } } -/* Append all of the enum values of ENUMTYPE to FIELD_VEC starting at IDX. - FIELD_VEC must have space. */ +/* Append all of the enum values of ENUMTYPE to METHOD_VEC. + METHOD_VEC must have space. */ -static unsigned -field_vec_append_enum_values (struct sorted_fields_type *field_vec, - tree enumtype, unsigned idx) +static void +method_vec_append_enum_values (vec *method_vec, tree enumtype) { for (tree values = TYPE_VALUES (enumtype); values; values = TREE_CHAIN (values)) - field_vec->elts[idx++] = TREE_VALUE (values); + method_vec->quick_push (TREE_VALUE (values)); +} - return idx; +/* METHOD_VEC has just had new DECLs added to it, but is sorted. + DeDup adjacent DECLS of the same name. We already dealt with + conflict resolution when adding the fields or methods themselves. + There are three cases (which could all be combined): + 1) a TYPE_DECL and non TYPE_DECL. Deploy STAT_HACK as appropriate. + 2) a USING_DECL and an overload. If the USING_DECL is dependent, + it wins. Otherwise the OVERLOAD does. + 3) two USING_DECLS. ... + + method_name_cmp will have ordered duplicates as + */ + +static void +method_vec_dedup (vec *method_vec) +{ + unsigned len = method_vec->length (); + unsigned store = 0; + + tree current = (*method_vec)[0], name = OVL_NAME (current); + tree next = NULL_TREE, next_name = NULL_TREE; + for (unsigned jx, ix = 0; ix < len; + ix = jx, current = next, name = next_name) + { + tree to_type = NULL_TREE; + tree to_using = NULL_TREE; + tree marker = NULL_TREE; + if (IDENTIFIER_CONV_OP_P (name)) + { + marker = current; + current = OVL_CHAIN (current); + name = DECL_NAME (OVL_FUNCTION (marker)); + gcc_checking_assert (name == conv_op_identifier); + } + + if (TREE_CODE (current) == USING_DECL) + { + current = strip_using_decl (current); + if (is_overloaded_fn (current)) + current = NULL_TREE; + else if (TREE_CODE (current) == USING_DECL) + { + to_using = current; + current = NULL_TREE; + } + } + + if (current && DECL_DECLARES_TYPE_P (current)) + { + to_type = current; + current = NULL_TREE; + } + + for (jx = ix + 1; jx < len; jx++) + { + next = (*method_vec)[jx]; + next_name = OVL_NAME (next); + if (next_name != name) + break; + + if (marker) + { + gcc_checking_assert (OVL_FUNCTION (marker) + == OVL_FUNCTION (next)); + next = OVL_CHAIN (next); + } + + if (TREE_CODE (next) == USING_DECL) + { + next = strip_using_decl (next); + if (is_overloaded_fn (next)) + next = NULL_TREE; + else if (TREE_CODE (next) == USING_DECL) + { + to_using = next; + next = NULL_TREE; + } + } + + if (next && DECL_DECLARES_TYPE_P (next)) + to_type = next; + } + + if (to_using) + { + if (!current) + current = to_using; + else + current = ovl_make (to_using, current); + } + + if (to_type) + { + if (!current) + current = to_type; + else + current = stat_hack (current, to_type); + } + + gcc_assert (current); + if (marker) + { + OVL_CHAIN (marker) = current; + current = marker; + } + (*method_vec)[store++] = current; + } + + while (store++ < len) + method_vec->pop (); } -/* Insert FIELDS into KLASS for the sorted case if the FIELDS count is - big enough. Sort the METHOD_VEC too. */ +/* Add the non-function members to CLASSTYPE_METHOD_VEC. If there is + no existing METHOD_VEC and fewer than 8 fields, do nothing. 
We + know there must be at least 1 field -- the self-reference + TYPE_DECL, except for anon aggregates, which will have at least + one field. */ void -set_class_bindings (tree klass) +set_class_bindings (tree klass, unsigned extra) { - if (vec *method_vec = CLASSTYPE_METHOD_VEC (klass)) - qsort (method_vec->address (), method_vec->length (), - sizeof (tree), method_name_cmp); - - int n_fields = count_class_fields (klass); - if (n_fields >= 8) - { - struct sorted_fields_type *field_vec = sorted_fields_type_new (n_fields); - unsigned idx = field_vec_append_class_fields (field_vec, klass, 0); - gcc_assert (idx == unsigned (field_vec->len)); - qsort (field_vec->elts, n_fields, sizeof (tree), field_decl_cmp); - CLASSTYPE_SORTED_FIELDS (klass) = field_vec; + unsigned n_fields = count_class_fields (klass); + vec *method_vec = CLASSTYPE_METHOD_VEC (klass); + + if (method_vec || n_fields >= 8) + { + /* Append the new fields. */ + vec_safe_reserve_exact (method_vec, extra + n_fields); + method_vec_append_class_fields (method_vec, klass); + } + + if (method_vec) + { + CLASSTYPE_METHOD_VEC (klass) = method_vec; + qsort (method_vec->address (), method_vec->length (), + sizeof (tree), method_name_cmp); + method_vec_dedup (method_vec); } } @@ -1561,33 +1697,27 @@ set_class_bindings (tree klass) void insert_late_enum_def_bindings (tree klass, tree enumtype) { - unsigned n_fields; - struct sorted_fields_type *sorted_fields = CLASSTYPE_SORTED_FIELDS (klass); + int n_fields; + vec *method_vec = CLASSTYPE_METHOD_VEC (klass); - /* The enum values will already be on the TYPE_FIELDS, so don't + /* The enum bindings will already be on the TYPE_FIELDS, so don't count them twice. */ - if (sorted_fields) - n_fields = list_length (TYPE_VALUES (enumtype)) + sorted_fields->len; - else + if (!method_vec) n_fields = count_class_fields (klass); + else + n_fields = list_length (TYPE_VALUES (enumtype)); - if (n_fields >= 8) + if (method_vec || n_fields >= 8) { - struct sorted_fields_type *field_vec = sorted_fields_type_new (n_fields); - unsigned idx; - - if (sorted_fields) - { - for (idx = 0; idx < unsigned (sorted_fields->len); ++idx) - field_vec->elts[idx] = sorted_fields->elts[idx]; - - idx = field_vec_append_enum_values (field_vec, enumtype, idx); - } + vec_safe_reserve_exact (method_vec, n_fields); + if (CLASSTYPE_METHOD_VEC (klass)) + method_vec_append_enum_values (method_vec, enumtype); else - idx = field_vec_append_class_fields (field_vec, klass, 0); - gcc_assert (idx == unsigned (field_vec->len)); - qsort (field_vec->elts, n_fields, sizeof (tree), field_decl_cmp); - CLASSTYPE_SORTED_FIELDS (klass) = field_vec; + method_vec_append_class_fields (method_vec, klass); + CLASSTYPE_METHOD_VEC (klass) = method_vec; + qsort (method_vec->address (), method_vec->length (), + sizeof (tree), method_name_cmp); + method_vec_dedup (method_vec); } } Index: name-lookup.h =================================================================== --- name-lookup.h (revision 252002) +++ name-lookup.h (working copy) @@ -324,7 +324,7 @@ extern tree get_class_binding (tree, tre extern tree *get_method_slot (tree klass, tree name); extern void resort_type_method_vec (void *, void *, gt_pointer_operator, void *); -extern void set_class_bindings (tree); +extern void set_class_bindings (tree, unsigned extra = 0); extern void insert_late_enum_def_bindings (tree, tree); extern tree innermost_non_namespace_value (tree); extern cxx_binding *outer_binding (tree, cxx_binding *, bool);