From patchwork Tue Jan 15 20:32:50 2019
From: Ian Lance Taylor
Date: Tue, 15 Jan 2019 12:32:50 -0800
Subject: Go patch committed: panic on uncomparable map key even if map is empty
To: gcc-patches, gofrontend-dev

This patch to the Go frontend and libgo changes map accesses to panic
on an uncomparable map key, even if the map is empty.  This ports
https://golang.org/cl/155918 from the master repo.
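The observable change can be sketched in ordinary Go code: a lookup in a nil or empty map now hashes the key first, so an uncomparable key (for example, a slice stored in an interface) panics instead of silently returning the zero value. The `lookupRecovering` helper below is hypothetical, purely for illustration, and the panic requires a toolchain with this fix (Go 1.12 semantics):

```go
package main

import "fmt"

// lookupRecovering performs m[key] and reports whether the lookup
// panicked. Hypothetical helper for illustration; not part of the patch.
func lookupRecovering(m map[interface{}]int, key interface{}) (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	_ = m[key] // hashes key even when m is empty (issue 23734)
	return false
}

func main() {
	empty := map[interface{}]int{}
	// Comparable key: fine on an empty map.
	fmt.Println(lookupRecovering(empty, "ok")) // false
	// Uncomparable key ([]int): hashing panics even though the map is empty.
	fmt.Println(lookupRecovering(empty, []int{1, 2, 3})) // true
}
```

Before this change the empty-map fast path returned `&zeroVal[0]` without ever calling the hash function, so the second lookup would quietly succeed.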
As part of this change it reorganizes the map type to use a single
flag field rather than different bool fields.

This is a step toward updating libgo to the Go 1.12beta2 release.

Bootstrapped and ran Go testsuite on x86_64-pc-linux-gnu.  Committed
to mainline.

Ian

Index: gcc/go/gofrontend/MERGE
===================================================================
--- gcc/go/gofrontend/MERGE	(revision 267941)
+++ gcc/go/gofrontend/MERGE	(working copy)
@@ -1,4 +1,4 @@
-0d64279c01a37b2579c0c62ca4f2c3e3f81de07c
+87005025fcd0d7e7908b3aae7062b52cb80eb0f3
 
 The first line of this file holds the git revision number of the
 last merge done from the gofrontend repository.
Index: gcc/go/gofrontend/types.cc
===================================================================
--- gcc/go/gofrontend/types.cc	(revision 267941)
+++ gcc/go/gofrontend/types.cc	(working copy)
@@ -5839,6 +5839,25 @@ Struct_type::do_needs_key_update()
   return false;
 }
 
+// Return whether computing the hash value of an instance of this
+// struct type might panic.
+
+bool
+Struct_type::do_hash_might_panic()
+{
+  const Struct_field_list* fields = this->fields_;
+  if (fields == NULL)
+    return false;
+  for (Struct_field_list::const_iterator pf = fields->begin();
+       pf != fields->end();
+       ++pf)
+    {
+      if (pf->type()->hash_might_panic())
+        return true;
+    }
+  return false;
+}
+
 // Return whether this struct type is permitted to be in the heap.
 bool
@@ -7979,21 +7998,18 @@ Map_type::make_map_type_descriptor_type(
       Type* ptdt = Type::make_type_descriptor_ptr_type();
       Type* uint8_type = Type::lookup_integer_type("uint8");
       Type* uint16_type = Type::lookup_integer_type("uint16");
-      Type* bool_type = Type::lookup_bool_type();
+      Type* uint32_type = Type::lookup_integer_type("uint32");
 
       Struct_type* sf =
-	Type::make_builtin_struct_type(11,
+	Type::make_builtin_struct_type(8,
				       "", tdt,
				       "key", ptdt,
				       "elem", ptdt,
				       "bucket", ptdt,
				       "keysize", uint8_type,
-				       "indirectkey", bool_type,
				       "valuesize", uint8_type,
-				       "indirectvalue", bool_type,
				       "bucketsize", uint16_type,
-				       "reflexivekey", bool_type,
-				       "needkeyupdate", bool_type);
+				       "flags", uint32_type);
 
       ret = Type::make_builtin_named_type("MapType", sf);
     }
@@ -8011,6 +8027,7 @@ Map_type::do_type_descriptor(Gogo* gogo,
   Type* mtdt = Map_type::make_map_type_descriptor_type();
   Type* uint8_type = Type::lookup_integer_type("uint8");
   Type* uint16_type = Type::lookup_integer_type("uint16");
+  Type* uint32_type = Type::lookup_integer_type("uint32");
 
   int64_t keysize;
   if (!this->key_type_->backend_type_size(gogo, &keysize))
@@ -8078,11 +8095,6 @@ Map_type::do_type_descriptor(Gogo* gogo,
   vals->push_back(Expression::make_integer_int64(keysize, uint8_type, bloc));
 
   ++p;
-  go_assert(p->is_field_name("indirectkey"));
-  vals->push_back(Expression::make_boolean(keysize > Map_type::max_key_size,
-					   bloc));
-
-  ++p;
   go_assert(p->is_field_name("valuesize"));
   if (valsize > Map_type::max_val_size)
     vals->push_back(Expression::make_integer_int64(ptrsize, uint8_type, bloc));
@@ -8090,24 +8102,26 @@
     vals->push_back(Expression::make_integer_int64(valsize, uint8_type, bloc));
 
   ++p;
-  go_assert(p->is_field_name("indirectvalue"));
-  vals->push_back(Expression::make_boolean(valsize > Map_type::max_val_size,
-					   bloc));
-
-  ++p;
   go_assert(p->is_field_name("bucketsize"));
   vals->push_back(Expression::make_integer_int64(bucketsize, uint16_type,
						 bloc));
 
   ++p;
-  go_assert(p->is_field_name("reflexivekey"));
-  vals->push_back(Expression::make_boolean(this->key_type_->is_reflexive(),
-					   bloc));
-
-  ++p;
-  go_assert(p->is_field_name("needkeyupdate"));
-  vals->push_back(Expression::make_boolean(this->key_type_->needs_key_update(),
-					   bloc));
+  go_assert(p->is_field_name("flags"));
+  // As with the other fields, the flag bits must match the reflect
+  // and runtime packages.
+  unsigned long flags = 0;
+  if (keysize > Map_type::max_key_size)
+    flags |= 1;
+  if (valsize > Map_type::max_val_size)
+    flags |= 2;
+  if (this->key_type_->is_reflexive())
+    flags |= 4;
+  if (this->key_type_->needs_key_update())
+    flags |= 8;
+  if (this->key_type_->hash_might_panic())
+    flags |= 16;
+  vals->push_back(Expression::make_integer_ul(flags, uint32_type, bloc));
 
   ++p;
   go_assert(p == fields->end());
Index: gcc/go/gofrontend/types.h
===================================================================
--- gcc/go/gofrontend/types.h	(revision 267941)
+++ gcc/go/gofrontend/types.h	(working copy)
@@ -635,6 +635,12 @@ class Type
   needs_key_update()
   { return this->do_needs_key_update(); }
 
+  // Return whether the hash function of this type might panic.  This
+  // is only called for types used as a key in a map type.
+  bool
+  hash_might_panic()
+  { return this->do_hash_might_panic(); }
+
   // Whether the type is permitted in the heap.
   bool
   in_heap()
@@ -1074,6 +1080,10 @@ class Type
   { return false; }
 
   virtual bool
+  do_hash_might_panic()
+  { return false; }
+
+  virtual bool
   do_in_heap()
   { return true; }
 
@@ -2601,6 +2611,9 @@ class Struct_type : public Type
   do_needs_key_update();
 
   bool
+  do_hash_might_panic();
+
+  bool
   do_in_heap();
 
   unsigned int
@@ -2779,6 +2792,10 @@ class Array_type : public Type
   { return this->element_type_->needs_key_update(); }
 
   bool
+  do_hash_might_panic()
+  { return this->length_ != NULL && this->element_type_->hash_might_panic(); }
+
+  bool
   do_in_heap()
   { return this->length_ == NULL || this->element_type_->in_heap(); }
 
@@ -3170,6 +3187,11 @@ class Interface_type : public Type
   do_needs_key_update()
   { return true; }
 
+  // Hashing an unhashable type stored in an interface might panic.
+  bool
+  do_hash_might_panic()
+  { return true; }
+
   unsigned int
   do_hash_for_method(Gogo*, int) const;
Index: libgo/go/reflect/type.go
===================================================================
--- libgo/go/reflect/type.go	(revision 267941)
+++ libgo/go/reflect/type.go	(working copy)
@@ -347,16 +347,13 @@ type interfaceType struct {
 // mapType represents a map type.
 type mapType struct {
 	rtype
-	key           *rtype // map key type
-	elem          *rtype // map element (value) type
-	bucket        *rtype // internal bucket structure
-	keysize       uint8  // size of key slot
-	indirectkey   uint8  // store ptr to key instead of key itself
-	valuesize     uint8  // size of value slot
-	indirectvalue uint8  // store ptr to value instead of value itself
-	bucketsize    uint16 // size of bucket
-	reflexivekey  bool   // true if k==k for all keys
-	needkeyupdate bool   // true if we need to update key on an overwrite
+	key        *rtype // map key type
+	elem       *rtype // map element (value) type
+	bucket     *rtype // internal bucket structure
+	keysize    uint8  // size of key slot
+	valuesize  uint8  // size of value slot
+	bucketsize uint16 // size of bucket
+	flags      uint32
 }
 
 // ptrType represents a pointer type.
@@ -1506,6 +1503,8 @@ func MapOf(key, elem Type) Type {
 	s := "map[" + *ktyp.string + "]" + *etyp.string
 
 	// Make a map type.
+	// Note: flag values must match those used in the TMAP case
+	// in ../cmd/compile/internal/gc/reflect.go:dtypesym.
 	var imap interface{} = (map[unsafe.Pointer]unsafe.Pointer)(nil)
 	mt := **(**mapType)(unsafe.Pointer(&imap))
 	mt.string = &s
@@ -1520,23 +1519,29 @@ func MapOf(key, elem Type) Type {
 	mt.ptrToThis = nil
 
 	mt.bucket = bucketOf(ktyp, etyp)
+	mt.flags = 0
 	if ktyp.size > maxKeySize {
 		mt.keysize = uint8(ptrSize)
-		mt.indirectkey = 1
+		mt.flags |= 1 // indirect key
 	} else {
 		mt.keysize = uint8(ktyp.size)
-		mt.indirectkey = 0
 	}
 	if etyp.size > maxValSize {
 		mt.valuesize = uint8(ptrSize)
-		mt.indirectvalue = 1
+		mt.flags |= 2 // indirect value
 	} else {
 		mt.valuesize = uint8(etyp.size)
-		mt.indirectvalue = 0
 	}
 	mt.bucketsize = uint16(mt.bucket.size)
-	mt.reflexivekey = isReflexive(ktyp)
-	mt.needkeyupdate = needKeyUpdate(ktyp)
+	if isReflexive(ktyp) {
+		mt.flags |= 4
+	}
+	if needKeyUpdate(ktyp) {
+		mt.flags |= 8
+	}
+	if hashMightPanic(ktyp) {
+		mt.flags |= 16
+	}
 
 	// Canonicalize before storing in lookupCache
 	ti := toType(&mt.rtype)
@@ -1715,6 +1720,27 @@ func needKeyUpdate(t *rtype) bool {
 	}
 }
 
+// hashMightPanic reports whether the hash of a map key of type t might panic.
+func hashMightPanic(t *rtype) bool {
+	switch t.Kind() {
+	case Interface:
+		return true
+	case Array:
+		tt := (*arrayType)(unsafe.Pointer(t))
+		return hashMightPanic(tt.elem)
+	case Struct:
+		tt := (*structType)(unsafe.Pointer(t))
+		for _, f := range tt.fields {
+			if hashMightPanic(f.typ) {
+				return true
+			}
+		}
+		return false
+	default:
+		return false
+	}
+}
+
 // Make sure these routines stay in sync with ../../runtime/map.go!
 // These types exist only for GC, so we only fill out GC relevant info.
 // Currently, that's just size and the GC program. We also fill in string
Index: libgo/go/runtime/map.go
===================================================================
--- libgo/go/runtime/map.go	(revision 267941)
+++ libgo/go/runtime/map.go	(working copy)
@@ -414,14 +414,17 @@ func mapaccess1(t *maptype, h *hmap, key
 	if msanenabled && h != nil {
 		msanread(key, t.key.size)
 	}
+	hashfn := t.key.hashfn
+	equalfn := t.key.equalfn
 	if h == nil || h.count == 0 {
+		if t.hashMightPanic() {
+			hashfn(key, 0) // see issue 23734
+		}
 		return unsafe.Pointer(&zeroVal[0])
 	}
 	if h.flags&hashWriting != 0 {
 		throw("concurrent map read and map write")
 	}
-	hashfn := t.key.hashfn
-	equalfn := t.key.equalfn
 	hash := hashfn(key, uintptr(h.hash0))
 	m := bucketMask(h.B)
 	b := (*bmap)(add(h.buckets, (hash&m)*uintptr(t.bucketsize)))
@@ -442,12 +445,12 @@ func mapaccess1(t *maptype, h *hmap, key
 			continue
 		}
 		k := add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
-		if t.indirectkey {
+		if t.indirectkey() {
 			k = *((*unsafe.Pointer)(k))
 		}
 		if equalfn(key, k) {
 			v := add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.valuesize))
-			if t.indirectvalue {
+			if t.indirectvalue() {
 				v = *((*unsafe.Pointer)(v))
 			}
 			return v
@@ -472,14 +475,17 @@ func mapaccess2(t *maptype, h *hmap, key
 	if msanenabled && h != nil {
 		msanread(key, t.key.size)
 	}
+	hashfn := t.key.hashfn
+	equalfn := t.key.equalfn
 	if h == nil || h.count == 0 {
+		if t.hashMightPanic() {
+			hashfn(key, 0) // see issue 23734
+		}
 		return unsafe.Pointer(&zeroVal[0]), false
 	}
 	if h.flags&hashWriting != 0 {
 		throw("concurrent map read and map write")
 	}
-	hashfn := t.key.hashfn
-	equalfn := t.key.equalfn
 	hash := hashfn(key, uintptr(h.hash0))
 	m := bucketMask(h.B)
 	b := (*bmap)(unsafe.Pointer(uintptr(h.buckets) + (hash&m)*uintptr(t.bucketsize)))
@@ -500,12 +506,12 @@ func mapaccess2(t *maptype, h *hmap, key
 			continue
 		}
 		k := add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
-		if t.indirectkey {
+		if t.indirectkey() {
 			k = *((*unsafe.Pointer)(k))
 		}
 		if equalfn(key, k) {
 			v := add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.valuesize))
-			if t.indirectvalue {
+			if t.indirectvalue() {
 				v = *((*unsafe.Pointer)(v))
 			}
 			return v, true
@@ -547,12 +553,12 @@ func mapaccessK(t *maptype, h *hmap, key
 			continue
 		}
 		k := add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
-		if t.indirectkey {
+		if t.indirectkey() {
 			k = *((*unsafe.Pointer)(k))
 		}
 		if equalfn(key, k) {
 			v := add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.valuesize))
-			if t.indirectvalue {
+			if t.indirectvalue() {
 				v = *((*unsafe.Pointer)(v))
 			}
 			return k, v
@@ -634,14 +640,14 @@ again:
 			continue
 		}
 		k := add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
-		if t.indirectkey {
+		if t.indirectkey() {
 			k = *((*unsafe.Pointer)(k))
 		}
 		if !equalfn(key, k) {
 			continue
 		}
 		// already have a mapping for key. Update it.
-		if t.needkeyupdate {
+		if t.needkeyupdate() {
 			typedmemmove(t.key, k, key)
 		}
 		val = add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.valuesize))
@@ -672,12 +678,12 @@ again:
 	}
 
 	// store new key/value at insert position
-	if t.indirectkey {
+	if t.indirectkey() {
 		kmem := newobject(t.key)
 		*(*unsafe.Pointer)(insertk) = kmem
 		insertk = kmem
 	}
-	if t.indirectvalue {
+	if t.indirectvalue() {
 		vmem := newobject(t.elem)
 		*(*unsafe.Pointer)(val) = vmem
 	}
@@ -690,7 +696,7 @@ done:
 		throw("concurrent map writes")
 	}
 	h.flags &^= hashWriting
-	if t.indirectvalue {
+	if t.indirectvalue() {
 		val = *((*unsafe.Pointer)(val))
 	}
 	return val
@@ -706,15 +712,18 @@ func mapdelete(t *maptype, h *hmap, key
 	if msanenabled && h != nil {
 		msanread(key, t.key.size)
 	}
+	hashfn := t.key.hashfn
+	equalfn := t.key.equalfn
 	if h == nil || h.count == 0 {
+		if t.hashMightPanic() {
+			hashfn(key, 0) // see issue 23734
+		}
 		return
 	}
 	if h.flags&hashWriting != 0 {
 		throw("concurrent map writes")
 	}
-	hashfn := t.key.hashfn
-	equalfn := t.key.equalfn
 	hash := hashfn(key, uintptr(h.hash0))
 
 	// Set hashWriting after calling alg.hash, since alg.hash may panic,
@@ -735,20 +744,20 @@ search:
 		}
 		k := add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
 		k2 := k
-		if t.indirectkey {
+		if t.indirectkey() {
 			k2 = *((*unsafe.Pointer)(k2))
 		}
 		if !equalfn(key, k2) {
 			continue
 		}
 		// Only clear key if there are pointers in it.
-		if t.indirectkey {
+		if t.indirectkey() {
 			*(*unsafe.Pointer)(k) = nil
 		} else if t.key.kind&kindNoPointers == 0 {
 			memclrHasPointers(k, t.key.size)
 		}
 		v := add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.valuesize))
-		if t.indirectvalue {
+		if t.indirectvalue() {
 			*(*unsafe.Pointer)(v) = nil
 		} else if t.elem.kind&kindNoPointers == 0 {
 			memclrHasPointers(v, t.elem.size)
@@ -894,7 +903,7 @@ next:
 			continue
 		}
 		k := add(unsafe.Pointer(b), dataOffset+uintptr(offi)*uintptr(t.keysize))
-		if t.indirectkey {
+		if t.indirectkey() {
 			k = *((*unsafe.Pointer)(k))
 		}
 		v := add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+uintptr(offi)*uintptr(t.valuesize))
@@ -906,7 +915,7 @@ next:
 			// through the oldbucket, skipping any keys that will go
 			// to the other new bucket (each oldbucket expands to two
 			// buckets during a grow).
-			if t.reflexivekey || equalfn(k, k) {
+			if t.reflexivekey() || equalfn(k, k) {
 				// If the item in the oldbucket is not destined for
 				// the current new bucket in the iteration, skip it.
 				hash := hashfn(k, uintptr(h.hash0))
@@ -927,13 +936,13 @@ next:
 			}
 		}
 		if (b.tophash[offi] != evacuatedX && b.tophash[offi] != evacuatedY) ||
-			!(t.reflexivekey || equalfn(k, k)) {
+			!(t.reflexivekey() || equalfn(k, k)) {
 			// This is the golden data, we can return it.
 			// OR
 			// key!=key, so the entry can't be deleted or updated, so we can just return it.
 			// That's lucky for us because when key!=key we can't look it up successfully.
 			it.key = k
-			if t.indirectvalue {
+			if t.indirectvalue() {
 				v = *((*unsafe.Pointer)(v))
 			}
 			it.value = v
@@ -1157,7 +1166,7 @@ func evacuate(t *maptype, h *hmap, oldbu
 				throw("bad map state")
 			}
 			k2 := k
-			if t.indirectkey {
+			if t.indirectkey() {
 				k2 = *((*unsafe.Pointer)(k2))
 			}
 			var useY uint8
@@ -1165,7 +1174,7 @@ func evacuate(t *maptype, h *hmap, oldbu
 				// Compute hash to make our evacuation decision (whether we need
 				// to send this key/value to bucket x or bucket y).
 				hash := t.key.hashfn(k2, uintptr(h.hash0))
-				if h.flags&iterator != 0 && !t.reflexivekey && !t.key.equalfn(k2, k2) {
+				if h.flags&iterator != 0 && !t.reflexivekey() && !t.key.equalfn(k2, k2) {
 					// If key != key (NaNs), then the hash could be (and probably
 					// will be) entirely different from the old hash. Moreover,
 					// it isn't reproducible. Reproducibility is required in the
@@ -1200,12 +1209,12 @@ func evacuate(t *maptype, h *hmap, oldbu
 				dst.v = add(dst.k, bucketCnt*uintptr(t.keysize))
 			}
 			dst.b.tophash[dst.i&(bucketCnt-1)] = top // mask dst.i as an optimization, to avoid a bounds check
-			if t.indirectkey {
+			if t.indirectkey() {
 				*(*unsafe.Pointer)(dst.k) = k2 // copy pointer
 			} else {
 				typedmemmove(t.key, dst.k, k) // copy value
 			}
-			if t.indirectvalue {
+			if t.indirectvalue() {
 				*(*unsafe.Pointer)(dst.v) = *(*unsafe.Pointer)(v)
 			} else {
 				typedmemmove(t.elem, dst.v, v)
@@ -1271,12 +1280,12 @@ func reflect_makemap(t *maptype, cap int
 	if !ismapkey(t.key) {
 		throw("runtime.reflect_makemap: unsupported map key type")
 	}
-	if t.key.size > maxKeySize && (!t.indirectkey || t.keysize != uint8(sys.PtrSize)) ||
-		t.key.size <= maxKeySize && (t.indirectkey || t.keysize != uint8(t.key.size)) {
+	if t.key.size > maxKeySize && (!t.indirectkey() || t.keysize != uint8(sys.PtrSize)) ||
+		t.key.size <= maxKeySize && (t.indirectkey() || t.keysize != uint8(t.key.size)) {
 		throw("key size wrong")
 	}
-	if t.elem.size > maxValueSize && (!t.indirectvalue || t.valuesize != uint8(sys.PtrSize)) ||
-		t.elem.size <= maxValueSize && (t.indirectvalue || t.valuesize != uint8(t.elem.size)) {
+	if t.elem.size > maxValueSize && (!t.indirectvalue() || t.valuesize != uint8(sys.PtrSize)) ||
+		t.elem.size <= maxValueSize && (t.indirectvalue() || t.valuesize != uint8(t.elem.size)) {
 		throw("value size wrong")
 	}
 	if t.key.align > bucketCnt {
Index: libgo/go/runtime/type.go
===================================================================
--- libgo/go/runtime/type.go	(revision 267941)
+++ libgo/go/runtime/type.go	(working copy)
@@ -86,17 +86,32 @@ type interfacetype struct {
 }
 
 type maptype struct {
-	typ           _type
-	key           *_type
-	elem          *_type
-	bucket        *_type // internal type representing a hash bucket
-	keysize       uint8  // size of key slot
-	indirectkey   bool   // store ptr to key instead of key itself
-	valuesize     uint8  // size of value slot
-	indirectvalue bool   // store ptr to value instead of value itself
-	bucketsize    uint16 // size of bucket
-	reflexivekey  bool   // true if k==k for all keys
-	needkeyupdate bool   // true if we need to update key on an overwrite
+	typ        _type
+	key        *_type
+	elem       *_type
+	bucket     *_type // internal type representing a hash bucket
+	keysize    uint8  // size of key slot
+	valuesize  uint8  // size of value slot
+	bucketsize uint16 // size of bucket
+	flags      uint32
+}
+
+// Note: flag values must match those used in the TMAP case
+// in ../cmd/compile/internal/gc/reflect.go:dtypesym.
+func (mt *maptype) indirectkey() bool { // store ptr to key instead of key itself
+	return mt.flags&1 != 0
+}
+func (mt *maptype) indirectvalue() bool { // store ptr to value instead of value itself
+	return mt.flags&2 != 0
+}
+func (mt *maptype) reflexivekey() bool { // true if k==k for all keys
+	return mt.flags&4 != 0
+}
+func (mt *maptype) needkeyupdate() bool { // true if we need to update key on an overwrite
+	return mt.flags&8 != 0
+}
+func (mt *maptype) hashMightPanic() bool { // true if hash function might panic
+	return mt.flags&16 != 0
 }
 
 type arraytype struct {
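For reference, the five flag bits that replace the old bool fields can be exercised in isolation. The constant names below are hypothetical (the patch itself uses the bare literals 1, 2, 4, 8, and 16), but the bit assignments match the compiler, reflect, and runtime sides of the diff:

```go
package main

import "fmt"

// Hypothetical named mirrors of the maptype flag bits introduced by this
// patch; the frontend, reflect, and runtime must all agree on them.
const (
	indirectKey    uint32 = 1 << 0 // store ptr to key instead of key itself
	indirectValue  uint32 = 1 << 1 // store ptr to value instead of value itself
	reflexiveKey   uint32 = 1 << 2 // true if k==k for all keys
	needKeyUpdate  uint32 = 1 << 3 // update key on an overwrite
	hashMightPanic uint32 = 1 << 4 // hash function might panic
)

func main() {
	// Pack the same information the old bool fields carried into one word.
	var flags uint32
	flags |= reflexiveKey
	flags |= hashMightPanic

	// Unpack with a mask test, as the new maptype accessor methods do.
	fmt.Println(flags&indirectKey != 0)    // false
	fmt.Println(flags&reflexiveKey != 0)   // true
	fmt.Println(flags&hashMightPanic != 0) // true
}
```

Collapsing the bools into one uint32 also shrinks the map type descriptor, since the four separate one-byte fields (plus padding) become a single word.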