
libgo patch committed: Merge from revision 18783 of master

Message ID mcrioogdu1a.fsf@iant-glaptop.roam.corp.google.com
State New

Commit Message

Ian Lance Taylor June 5, 2014, 1:28 a.m. UTC
I have committed a patch to libgo to merge from revision
18783:00cce3a34d7e of the master library.  This revision was committed
January 7.  I picked this revision to merge to because the next revision
deleted a file that is explicitly merged in by the libgo/merge.sh
script.

Among other things, this patch changes type descriptors to add a new
pointer to a zero value.  In gccgo this is implemented as a common
variable, and that requires some changes to the compiler and a small
change to go-gcc.cc.

As usual the patch is too large to include in this e-mail message.  I've
appended the changes to parts of libgo that are more gccgo-specific.

Bootstrapped and ran Go testsuite on x86_64-unknown-linux-gnu.
Committed to mainline.

Ian


2014-06-04  Ian Lance Taylor  <iant@google.com>

	* go-gcc.cc (Gcc_backend::implicit_variable): Add is_common and
	alignment parameters.  Permit init parameter to be NULL.

Comments

Matthias Klose June 5, 2014, 10:24 a.m. UTC | #1
On 05.06.2014 03:28, Ian Lance Taylor wrote:
> I have committed a patch to libgo to merge from revision
> 18783:00cce3a34d7e of the master library.  This revision was committed
> January 7.  I picked this revision to merge to because the next revision
> deleted a file that is explicitly merged in by the libgo/merge.sh
> script.
> 
> Among other things, this patch changes type descriptors to add a new
> pointer to a zero value.  In gccgo this is implemented as a common
> variable, and that requires some changes to the compiler and a small
> change to go-gcc.cc.
> 
> As usual the patch is too large to include in this e-mail message.  I've
> appended the changes to parts of libgo that are more gccgo-specific.
> 
> Bootstrapped and ran Go testsuite on x86_64-unknown-linux-gnu.
> Committed to mainline.

Is it time to bump the soname on trunk?
Ian Lance Taylor June 5, 2014, 1:55 p.m. UTC | #2
On Thu, Jun 5, 2014 at 3:24 AM, Matthias Klose <doko@ubuntu.com> wrote:
> On 05.06.2014 03:28, Ian Lance Taylor wrote:
>> I have committed a patch to libgo to merge from revision
>> 18783:00cce3a34d7e of the master library.  This revision was committed
>> January 7.  I picked this revision to merge to because the next revision
>> deleted a file that is explicitly merged in by the libgo/merge.sh
>> script.
>>
>> Among other things, this patch changes type descriptors to add a new
>> pointer to a zero value.  In gccgo this is implemented as a common
>> variable, and that requires some changes to the compiler and a small
>> change to go-gcc.cc.
>>
>> As usual the patch is too large to include in this e-mail message.  I've
>> appended the changes to parts of libgo that are more gccgo-specific.
>>
>> Bootstrapped and ran Go testsuite on x86_64-unknown-linux-gnu.
>> Committed to mainline.
>
> Is it time to bump the soname on trunk?

Yes, I'll do that when I've merged gccgo all the way up to the Go 1.3
release.

Ian
Rainer Orth June 6, 2014, 9:12 a.m. UTC | #3
Ian Lance Taylor <iant@google.com> writes:

> I have committed a patch to libgo to merge from revision
> 18783:00cce3a34d7e of the master library.  This revision was committed
> January 7.  I picked this revision to merge to because the next revision
> deleted a file that is explicitly merged in by the libgo/merge.sh
> script.
>
> Among other things, this patch changes type descriptors to add a new
> pointer to a zero value.  In gccgo this is implemented as a common
> variable, and that requires some changes to the compiler and a small
> change to go-gcc.cc.

This change introduced many failures on Solaris with /bin/ld, e.g.

FAIL: go.test/test/bom.go -O (test for excess errors)

ld: warning: symbol 'go$zerovalue' has differing sizes:
        (file bom.o value=0x8; file /var/gcc/regression/trunk/11-gcc/build/i386-pc-solaris2.11/./libgo/.libs/libgo.so value=0x800);
        bom.o definition taken and updated with larger size

	Rainer
Gary Funck June 9, 2014, 8:12 p.m. UTC | #4
On 06/04/14 18:28:17, Ian Lance Taylor wrote:
> I have committed a patch to libgo to merge from revision
> 18783:00cce3a34d7e of the master library.

Based on trunk rev. 211365, we're seeing this warning:

libgo/runtime/chan.c:484:7: error: ‘received’ may be used uninitialized
in this function [-Werror=maybe-uninitialized]
  bool received;
       ^

Here:

 481 _Bool
 482 runtime_chanrecv2(ChanType *t, Hchan* c, byte* v)
 483 {
 484         bool received;
 485 
 486         chanrecv(t, c, v, true, &received);
 487         return received;
 488 }
Ian Lance Taylor June 10, 2014, 12:36 a.m. UTC | #5
On Mon, Jun 9, 2014 at 1:12 PM, Gary Funck <gary@intrepid.com> wrote:
> On 06/04/14 18:28:17, Ian Lance Taylor wrote:
>> I have committed a patch to libgo to merge from revision
>> 18783:00cce3a34d7e of the master library.
>
> Based on trunk rev. 211365, we're seeing this warning:
>
> libgo/runtime/chan.c:484:7: error: ‘received’ may be used uninitialized
> in this function [-Werror=maybe-uninitialized]
>   bool received;
>        ^

Thanks for the report.  There is no bug here, the control flow is just
too complicated for the compiler to sort out.  I don't know why I'm
not seeing the warning, but in any case the fix is simple.  This patch
bootstrapped and tested on x86_64-unknown-linux-gnu.  Committed to
mainline.

Ian
Ian Lance Taylor June 10, 2014, 12:37 a.m. UTC | #6
Forgot to CC gofrontend-dev.

On Mon, Jun 9, 2014 at 5:36 PM, Ian Lance Taylor <iant@google.com> wrote:
> On Mon, Jun 9, 2014 at 1:12 PM, Gary Funck <gary@intrepid.com> wrote:
>> On 06/04/14 18:28:17, Ian Lance Taylor wrote:
>>> I have committed a patch to libgo to merge from revision
>>> 18783:00cce3a34d7e of the master library.
>>
>> Based on trunk rev. 211365, we're seeing this warning:
>>
>> libgo/runtime/chan.c:484:7: error: ‘received’ may be used uninitialized
>> in this function [-Werror=maybe-uninitialized]
>>   bool received;
>>        ^
>
> Thanks for the report.  There is no bug here, the control flow is just
> too complicated for the compiler to sort out.  I don't know why I'm
> not seeing the warning, but in any case the fix is simple.  This patch
> bootstrapped and tested on x86_64-unknown-linux-gnu.  Committed to
> mainline.
>
> Ian

Patch

Index: gcc/go/gofrontend/types.cc
===================================================================
--- gcc/go/gofrontend/types.cc	(revision 211248)
+++ gcc/go/gofrontend/types.cc	(working copy)
@@ -1519,7 +1519,7 @@  Type::make_type_descriptor_type()
       // The type descriptor type.
 
       Struct_type* type_descriptor_type =
-	Type::make_builtin_struct_type(10,
+	Type::make_builtin_struct_type(11,
 				       "Kind", uint8_type,
 				       "align", uint8_type,
 				       "fieldAlign", uint8_type,
@@ -1530,7 +1530,8 @@  Type::make_type_descriptor_type()
 				       "string", pointer_string_type,
 				       "", pointer_uncommon_type,
 				       "ptrToThis",
-				       pointer_type_descriptor_type);
+				       pointer_type_descriptor_type,
+				       "zero", unsafe_pointer_type);
 
       Named_type* named = Type::make_builtin_named_type("commonType",
 							type_descriptor_type);
@@ -2050,6 +2051,15 @@  Type::type_descriptor_constructor(Gogo*
     }
 
   ++p;
+  go_assert(p->is_field_name("zero"));
+  Expression* z = Expression::make_var_reference(gogo->zero_value(this), bloc);
+  z = Expression::make_unary(OPERATOR_AND, z, bloc);
+  Type* void_type = Type::make_void_type();
+  Type* unsafe_pointer_type = Type::make_pointer_type(void_type);
+  z = Expression::make_cast(unsafe_pointer_type, z, bloc);
+  vals->push_back(z);
+
+  ++p;
   go_assert(p == fields->end());
 
   mpz_clear(iv);
@@ -2382,13 +2392,13 @@  Type::is_backend_type_size_known(Gogo* g
 // the backend.
 
 bool
-Type::backend_type_size(Gogo* gogo, unsigned int *psize)
+Type::backend_type_size(Gogo* gogo, unsigned long *psize)
 {
   if (!this->is_backend_type_size_known(gogo))
     return false;
   Btype* bt = this->get_backend_placeholder(gogo);
   size_t size = gogo->backend()->type_size(bt);
-  *psize = static_cast<unsigned int>(size);
+  *psize = static_cast<unsigned long>(size);
   if (*psize != size)
     return false;
   return true;
@@ -2398,13 +2408,13 @@  Type::backend_type_size(Gogo* gogo, unsi
 // the alignment in bytes and return true.  Otherwise, return false.
 
 bool
-Type::backend_type_align(Gogo* gogo, unsigned int *palign)
+Type::backend_type_align(Gogo* gogo, unsigned long *palign)
 {
   if (!this->is_backend_type_size_known(gogo))
     return false;
   Btype* bt = this->get_backend_placeholder(gogo);
   size_t align = gogo->backend()->type_alignment(bt);
-  *palign = static_cast<unsigned int>(align);
+  *palign = static_cast<unsigned long>(align);
   if (*palign != align)
     return false;
   return true;
@@ -2414,13 +2424,13 @@  Type::backend_type_align(Gogo* gogo, uns
 // field.
 
 bool
-Type::backend_type_field_align(Gogo* gogo, unsigned int *palign)
+Type::backend_type_field_align(Gogo* gogo, unsigned long *palign)
 {
   if (!this->is_backend_type_size_known(gogo))
     return false;
   Btype* bt = this->get_backend_placeholder(gogo);
   size_t a = gogo->backend()->type_field_alignment(bt);
-  *palign = static_cast<unsigned int>(a);
+  *palign = static_cast<unsigned long>(a);
   if (*palign != a)
     return false;
   return true;
@@ -4595,7 +4605,7 @@  Struct_type::do_compare_is_identity(Gogo
   const Struct_field_list* fields = this->fields_;
   if (fields == NULL)
     return true;
-  unsigned int offset = 0;
+  unsigned long offset = 0;
   for (Struct_field_list::const_iterator pf = fields->begin();
        pf != fields->end();
        ++pf)
@@ -4606,7 +4616,7 @@  Struct_type::do_compare_is_identity(Gogo
       if (!pf->type()->compare_is_identity(gogo))
 	return false;
 
-      unsigned int field_align;
+      unsigned long field_align;
       if (!pf->type()->backend_type_align(gogo, &field_align))
 	return false;
       if ((offset & (field_align - 1)) != 0)
@@ -4617,13 +4627,13 @@  Struct_type::do_compare_is_identity(Gogo
 	  return false;
 	}
 
-      unsigned int field_size;
+      unsigned long field_size;
       if (!pf->type()->backend_type_size(gogo, &field_size))
 	return false;
       offset += field_size;
     }
 
-  unsigned int struct_size;
+  unsigned long struct_size;
   if (!this->backend_type_size(gogo, &struct_size))
     return false;
   if (offset != struct_size)
@@ -5620,8 +5630,8 @@  Array_type::do_compare_is_identity(Gogo*
     return false;
 
   // If there is any padding, then we can't use memcmp.
-  unsigned int size;
-  unsigned int align;
+  unsigned long size;
+  unsigned long align;
   if (!this->element_type_->backend_type_size(gogo, &size)
       || !this->element_type_->backend_type_align(gogo, &align))
     return false;
Index: gcc/go/gofrontend/expressions.cc
===================================================================
--- gcc/go/gofrontend/expressions.cc	(revision 211248)
+++ gcc/go/gofrontend/expressions.cc	(working copy)
@@ -4105,7 +4105,8 @@  Unary_expression::do_get_backend(Transla
 			      && !context->is_const());
 	    }
 	  Bvariable* implicit =
-	    gogo->backend()->implicit_variable(buf, btype, bexpr, copy_to_heap);
+	    gogo->backend()->implicit_variable(buf, btype, bexpr, copy_to_heap,
+					       false, 0);
 	  bexpr = gogo->backend()->var_expression(implicit, loc);
 	}
       else if ((this->expr_->is_composite_literal()
@@ -7598,7 +7599,7 @@  Builtin_call_expression::do_numeric_cons
       if (this->seen_)
         return false;
 
-      unsigned int ret;
+      unsigned long ret;
       if (this->code_ == BUILTIN_SIZEOF)
 	{
           this->seen_ = true;
@@ -7626,8 +7627,7 @@  Builtin_call_expression::do_numeric_cons
       else
 	go_unreachable();
 
-      nc->set_unsigned_long(Type::lookup_integer_type("uintptr"),
-			    static_cast<unsigned long>(ret));
+      nc->set_unsigned_long(Type::lookup_integer_type("uintptr"), ret);
       return true;
     }
   else if (this->code_ == BUILTIN_OFFSETOF)
Index: gcc/go/gofrontend/gogo.cc
===================================================================
--- gcc/go/gofrontend/gogo.cc	(revision 211248)
+++ gcc/go/gofrontend/gogo.cc	(working copy)
@@ -41,6 +41,9 @@  Gogo::Gogo(Backend* backend, Linemap* li
     pkgpath_(),
     pkgpath_symbol_(),
     prefix_(),
+    zero_value_(NULL),
+    zero_value_size_(0),
+    zero_value_align_(0),
     pkgpath_set_(false),
     pkgpath_from_option_(false),
     prefix_from_option_(false),
@@ -575,6 +578,88 @@  Gogo::current_bindings() const
     return this->globals_;
 }
 
+// Return the special variable used as the zero value of types.
+
+Named_object*
+Gogo::zero_value(Type *type)
+{
+  if (this->zero_value_ == NULL)
+    {
+      Location bloc = Linemap::predeclared_location();
+
+      // We will change the type later, when we know the size.
+      Type* byte_type = this->lookup_global("byte")->type_value();
+
+      mpz_t val;
+      mpz_init_set_ui(val, 0);
+      Expression* zero = Expression::make_integer(&val, NULL, bloc);
+      mpz_clear(val);
+
+      Type* array_type = Type::make_array_type(byte_type, zero);
+
+      Variable* var = new Variable(array_type, NULL, true, false, false, bloc);
+      this->zero_value_ = Named_object::make_variable("go$zerovalue", NULL,
+						      var);
+    }
+
+  // The zero value will be the maximum required size.
+  unsigned long size;
+  bool ok = type->backend_type_size(this, &size);
+  if (!ok) {
+    go_assert(saw_errors());
+    size = 4;
+  }
+  if (size > this->zero_value_size_)
+    this->zero_value_size_ = size;
+
+  unsigned long align;
+  ok = type->backend_type_align(this, &align);
+  if (!ok) {
+    go_assert(saw_errors());
+    align = 4;
+  }
+  if (align > this->zero_value_align_)
+    this->zero_value_align_ = align;
+
+  return this->zero_value_;
+}
+
+// Return whether V is the zero value variable.
+
+bool
+Gogo::is_zero_value(Variable* v) const
+{
+  return this->zero_value_ != NULL && this->zero_value_->var_value() == v;
+}
+
+// Return the backend variable for the special zero value, or NULL if
+// it is not needed.
+
+Bvariable*
+Gogo::backend_zero_value()
+{
+  if (this->zero_value_ == NULL)
+    return NULL;
+
+  Type* byte_type = this->lookup_global("byte")->type_value();
+  Btype* bbtype_type = byte_type->get_backend(this);
+
+  Type* int_type = this->lookup_global("int")->type_value();
+  Btype* bint_type = int_type->get_backend(this);
+
+  mpz_t val;
+  mpz_init_set_ui(val, this->zero_value_size_);
+  Bexpression* blength =
+    this->backend()->integer_constant_expression(bint_type, val);
+  mpz_clear(val);
+
+  Btype* barray_type = this->backend()->array_type(bbtype_type, blength);
+
+  return this->backend()->implicit_variable(this->zero_value_->name(),
+					    barray_type, NULL, true, true,
+					    this->zero_value_align_);
+}
+
 // Add statements to INIT_STMTS which run the initialization
 // functions for imported packages.  This is only used for the "main"
 // package.
@@ -6078,7 +6163,9 @@  Variable::get_backend_variable(Gogo* gog
 	  Btype* btype = type->get_backend(gogo);
 
 	  Bvariable* bvar;
-	  if (this->is_global_)
+	  if (gogo->is_zero_value(this))
+	    bvar = gogo->backend_zero_value();
+	  else if (this->is_global_)
 	    bvar = backend->global_variable((package == NULL
 					     ? gogo->package_name()
 					     : package->package_name()),
Index: gcc/go/gofrontend/gogo.h
===================================================================
--- gcc/go/gofrontend/gogo.h	(revision 211248)
+++ gcc/go/gofrontend/gogo.h	(working copy)
@@ -591,6 +591,20 @@  class Gogo
   named_types_are_converted() const
   { return this->named_types_are_converted_; }
 
+  // Return the variable to use for the zero value of TYPE.  All types
+  // share the same zero value, and we make sure that it is large
+  // enough.
+  Named_object*
+  zero_value(Type *type);
+
+  // Return whether a variable is the zero value variable.
+  bool
+  is_zero_value(Variable* v) const;
+
+  // Create the zero value variable.
+  Bvariable*
+  backend_zero_value();
+
   // Write out the global values.
   void
   write_globals();
@@ -727,6 +741,12 @@  class Gogo
   std::string pkgpath_symbol_;
   // The prefix to use for symbols, from the -fgo-prefix option.
   std::string prefix_;
+  // The special zero value variable.
+  Named_object* zero_value_;
+  // The size of the zero value variable.
+  unsigned long zero_value_size_;
+  // The alignment of the zero value variable, in bytes.
+  unsigned long zero_value_align_;
   // Whether pkgpath_ has been set.
   bool pkgpath_set_;
   // Whether an explicit package path was set by -fgo-pkgpath.
Index: gcc/go/gofrontend/types.h
===================================================================
--- gcc/go/gofrontend/types.h	(revision 211248)
+++ gcc/go/gofrontend/types.h	(working copy)
@@ -925,18 +925,18 @@  class Type
   // in bytes and return true.  Otherwise, return false.  This queries
   // the backend.
   bool
-  backend_type_size(Gogo*, unsigned int* psize);
+  backend_type_size(Gogo*, unsigned long* psize);
 
   // If the alignment of the type can be determined, set *PALIGN to
   // the alignment in bytes and return true.  Otherwise, return false.
   bool
-  backend_type_align(Gogo*, unsigned int* palign);
+  backend_type_align(Gogo*, unsigned long* palign);
 
   // If the alignment of a struct field of this type can be
   // determined, set *PALIGN to the alignment in bytes and return
   // true.  Otherwise, return false.
   bool
-  backend_type_field_align(Gogo*, unsigned int* palign);
+  backend_type_field_align(Gogo*, unsigned long* palign);
 
   // Whether the backend size is known.
   bool
Index: gcc/go/gofrontend/backend.h
===================================================================
--- gcc/go/gofrontend/backend.h	(revision 211248)
+++ gcc/go/gofrontend/backend.h	(working copy)
@@ -544,16 +544,24 @@  class Backend
 		     bool address_is_taken, Location location,
 		     Bstatement** pstatement) = 0;
 
-  // Create an implicit variable that is compiler-defined.  This is used when
-  // generating GC root variables and storing the values of a slice constructor.
-  // NAME is the name of the variable, either gc# for GC roots or C# for slice
-  // initializers.  TYPE is the type of the implicit variable with an initial
-  // value INIT.  IS_CONSTANT is true if the implicit variable should be treated
-  // like it is immutable.  For slice initializers, if the values must be copied
-  // to the heap, the variable IS_CONSTANT.
+  // Create an implicit variable that is compiler-defined.  This is
+  // used when generating GC root variables, when storing the values
+  // of a slice constructor, and for the zero value of types.  NAME is
+  // the name of the variable, either gc# for GC roots or C# for slice
+  // initializers.  TYPE is the type of the implicit variable with an
+  // initial value INIT.  IS_CONSTANT is true if the implicit variable
+  // should be treated like it is immutable.  For slice initializers,
+  // if the values must be copied to the heap, the variable
+  // IS_CONSTANT.  IS_COMMON is true if the implicit variable should
+  // be treated as a common variable (multiple definitions with
+  // different sizes permitted in different object files, all merged
+  // into the largest definition at link time); this will be true for
+  // the zero value.  If IS_COMMON is true, INIT will be NULL, and the
+  // variable should be initialized to all zeros.  If ALIGNMENT is not
+  // zero, it is the desired alignment of the variable.
   virtual Bvariable*
   implicit_variable(const std::string& name, Btype* type, Bexpression* init,
-		    bool is_constant) = 0;
+		    bool is_constant, bool is_common, size_t alignment) = 0;
 
   // Create a named immutable initialized data structure.  This is
   // used for type descriptors, map descriptors, and function
Index: gcc/go/go-gcc.cc
===================================================================
--- gcc/go/go-gcc.cc	(revision 211248)
+++ gcc/go/go-gcc.cc	(working copy)
@@ -389,7 +389,8 @@  class Gcc_backend : public Backend
 		     Location, Bstatement**);
 
   Bvariable*
-  implicit_variable(const std::string&, Btype*, Bexpression*, bool);
+  implicit_variable(const std::string&, Btype*, Bexpression*, bool, bool,
+		    size_t);
 
   Bvariable*
   immutable_struct(const std::string&, bool, bool, Btype*, Location);
@@ -2497,10 +2498,15 @@  Gcc_backend::temporary_variable(Bfunctio
 
 Bvariable*
 Gcc_backend::implicit_variable(const std::string& name, Btype* type,
-			       Bexpression* init, bool is_constant)
+			       Bexpression* init, bool is_constant,
+			       bool is_common, size_t alignment)
 {
   tree type_tree = type->get_tree();
-  tree init_tree = init->get_tree();
+  tree init_tree;
+  if (init == NULL)
+    init_tree = NULL_TREE;
+  else
+    init_tree = init->get_tree();
   if (type_tree == error_mark_node || init_tree == error_mark_node)
     return this->error_variable();
 
@@ -2510,12 +2516,25 @@  Gcc_backend::implicit_variable(const std
   TREE_PUBLIC(decl) = 0;
   TREE_STATIC(decl) = 1;
   DECL_ARTIFICIAL(decl) = 1;
-  if (is_constant)
+  if (is_common)
+    {
+      DECL_COMMON(decl) = 1;
+      TREE_PUBLIC(decl) = 1;
+      gcc_assert(init_tree == NULL_TREE);
+    }
+  else if (is_constant)
     {
       TREE_READONLY(decl) = 1;
       TREE_CONSTANT(decl) = 1;
     }
   DECL_INITIAL(decl) = init_tree;
+
+  if (alignment != 0)
+    {
+      DECL_ALIGN(decl) = alignment * BITS_PER_UNIT;
+      DECL_USER_ALIGN(decl) = 1;
+    }
+
   rest_of_decl_compilation(decl, 1, 0);
 
   return new Bvariable(decl);
Index: libgo/MERGE
===================================================================
--- libgo/MERGE	(revision 211248)
+++ libgo/MERGE	(working copy)
@@ -1,4 +1,4 @@ 
-0ddbdc3c7ce2
+00cce3a34d7e
 
 The first line of this file holds the Mercurial revision number of the
 last merge done from the master library sources.
Index: libgo/Makefile.am
===================================================================
--- libgo/Makefile.am	(revision 211248)
+++ libgo/Makefile.am	(working copy)
@@ -196,6 +196,7 @@  toolexeclibgodebugdir = $(toolexeclibgod
 toolexeclibgodebug_DATA = \
 	debug/dwarf.gox \
 	debug/elf.gox \
+	debug/goobj.gox \
 	debug/gosym.gox \
 	debug/macho.gox \
 	debug/pe.gox
@@ -998,6 +999,7 @@  go_sync_files = \
 	go/sync/cond.go \
 	go/sync/mutex.go \
 	go/sync/once.go \
+	go/sync/pool.go \
 	go/sync/race0.go \
 	go/sync/runtime.go \
 	go/sync/rwmutex.go \
@@ -1124,7 +1126,8 @@  go_crypto_cipher_files = \
 	go/crypto/cipher/ctr.go \
 	go/crypto/cipher/gcm.go \
 	go/crypto/cipher/io.go \
-	go/crypto/cipher/ofb.go
+	go/crypto/cipher/ofb.go \
+	go/crypto/cipher/xor.go
 go_crypto_des_files = \
 	go/crypto/des/block.go \
 	go/crypto/des/cipher.go \
@@ -1209,6 +1212,8 @@  go_debug_dwarf_files = \
 go_debug_elf_files = \
 	go/debug/elf/elf.go \
 	go/debug/elf/file.go
+go_debug_goobj_files = \
+	go/debug/goobj/read.go
 go_debug_gosym_files = \
 	go/debug/gosym/pclntab.go \
 	go/debug/gosym/symtab.go
@@ -1248,6 +1253,7 @@  go_encoding_hex_files = \
 go_encoding_json_files = \
 	go/encoding/json/decode.go \
 	go/encoding/json/encode.go \
+	go/encoding/json/fold.go \
 	go/encoding/json/indent.go \
 	go/encoding/json/scanner.go \
 	go/encoding/json/stream.go \
@@ -1363,7 +1369,6 @@  go_index_suffixarray_files = \
 	go/index/suffixarray/suffixarray.go
 
 go_io_ioutil_files = \
-	go/io/ioutil/blackhole.go \
 	go/io/ioutil/ioutil.go \
 	go/io/ioutil/tempfile.go
 
@@ -1867,6 +1872,7 @@  libgo_go_objs = \
 	database/sql/driver.lo \
 	debug/dwarf.lo \
 	debug/elf.lo \
+	debug/goobj.lo \
 	debug/gosym.lo \
 	debug/macho.lo \
 	debug/pe.lo \
@@ -2594,6 +2600,15 @@  debug/elf/check: $(CHECK_DEPS)
 	@$(CHECK)
 .PHONY: debug/elf/check
 
+@go_include@ debug/goobj.lo.dep
+debug/goobj.lo.dep: $(go_debug_goobj_files)
+	$(BUILDDEPS)
+debug/goobj.lo: $(go_debug_goobj_files)
+	$(BUILDPACKAGE)
+debug/goobj/check: $(CHECK_DEPS)
+	@$(CHECK)
+.PHONY: debug/goobj/check
+
 @go_include@ debug/gosym.lo.dep
 debug/gosym.lo.dep: $(go_debug_gosym_files)
 	$(BUILDDEPS)
@@ -3412,6 +3427,8 @@  debug/dwarf.gox: debug/dwarf.lo
 	$(BUILDGOX)
 debug/elf.gox: debug/elf.lo
 	$(BUILDGOX)
+debug/goobj.gox: debug/goobj.lo
+	$(BUILDGOX)
 debug/gosym.gox: debug/gosym.lo
 	$(BUILDGOX)
 debug/macho.gox: debug/macho.lo
Index: libgo/runtime/print.c
===================================================================
--- libgo/runtime/print.c	(revision 211248)
+++ libgo/runtime/print.c	(working copy)
@@ -208,7 +208,10 @@  runtime_printfloat(double v)
 	n = 7;	// digits printed
 	e = 0;	// exp
 	s = 0;	// sign
-	if(v != 0) {
+	if(v == 0) {
+		if(isinf(1/v) && 1/v < 0)
+			s = 1;
+	} else {
 		// sign
 		if(v < 0) {
 			v = -v;
Index: libgo/runtime/race.h
===================================================================
--- libgo/runtime/race.h	(revision 211248)
+++ libgo/runtime/race.h	(working copy)
@@ -24,6 +24,8 @@  void	runtime_racewritepc(void *addr, voi
 void	runtime_racereadpc(void *addr, void *callpc, void *pc);
 void	runtime_racewriterangepc(void *addr, uintptr sz, void *callpc, void *pc);
 void	runtime_racereadrangepc(void *addr, uintptr sz, void *callpc, void *pc);
+void	runtime_racereadobjectpc(void *addr, Type *t, void *callpc, void *pc);
+void	runtime_racewriteobjectpc(void *addr, Type *t, void *callpc, void *pc);
 void	runtime_racefingo(void);
 void	runtime_raceacquire(void *addr);
 void	runtime_raceacquireg(G *gp, void *addr);
Index: libgo/runtime/signal_unix.c
===================================================================
--- libgo/runtime/signal_unix.c	(revision 211248)
+++ libgo/runtime/signal_unix.c	(working copy)
@@ -122,6 +122,14 @@  os_sigpipe(void)
 }
 
 void
+runtime_unblocksignals(void)
+{
+	sigset_t sigset_none;
+	sigemptyset(&sigset_none);
+	pthread_sigmask(SIG_SETMASK, &sigset_none, nil);
+}
+
+void
 runtime_crash(void)
 {
 	int32 i;
@@ -137,6 +145,7 @@  runtime_crash(void)
 		return;
 #endif
 
+	runtime_unblocksignals();
 	for(i = 0; runtime_sigtab[i].sig != -1; i++)
 		if(runtime_sigtab[i].sig == SIGABRT)
 			break;
Index: libgo/runtime/mgc0.c
===================================================================
--- libgo/runtime/mgc0.c	(revision 211248)
+++ libgo/runtime/mgc0.c	(working copy)
@@ -45,7 +45,7 @@  enum {
 	Debug = 0,
 	DebugMark = 0,  // run second pass to check mark
 	CollectStats = 0,
-	ScanStackByFrames = 0,
+	ScanStackByFrames = 1,
 	IgnorePreciseGC = 0,
 
 	// Four bits per word (see #defines below).
@@ -68,6 +68,39 @@  enum {
 	BitsEface = 3,
 };
 
+static struct
+{
+	Lock;  
+	void* head;
+} pools;
+
+void sync_runtime_registerPool(void **)
+  __asm__ (GOSYM_PREFIX "sync.runtime_registerPool");
+
+void
+sync_runtime_registerPool(void **p)
+{
+	runtime_lock(&pools);
+	p[0] = pools.head;
+	pools.head = p;
+	runtime_unlock(&pools);
+}
+
+static void
+clearpools(void)
+{
+	void **p, **next;
+
+	for(p = pools.head; p != nil; p = next) {
+		next = p[0];
+		p[0] = nil; // next
+		p[1] = nil; // slice
+		p[2] = nil;
+		p[3] = nil;
+	}
+	pools.head = nil;
+}
+
 // Bits in per-word bitmap.
 // #defines because enum might not be able to hold the values.
 //
@@ -77,7 +110,7 @@  enum {
 // The bits in the word are packed together by type first, then by
 // heap location, so each 64-bit bitmap word consists of, from top to bottom,
 // the 16 bitSpecial bits for the corresponding heap words, then the 16 bitMarked bits,
-// then the 16 bitNoScan/bitBlockBoundary bits, then the 16 bitAllocated bits.
+// then the 16 bitScan/bitBlockBoundary bits, then the 16 bitAllocated bits.
 // This layout makes it easier to iterate over the bits of a given type.
 //
 // The bitmap starts at mheap.arena_start and extends *backward* from
@@ -93,13 +126,13 @@  enum {
 //	bits = *b >> shift;
 //	/* then test bits & bitAllocated, bits & bitMarked, etc. */
 //
-#define bitAllocated		((uintptr)1<<(bitShift*0))
-#define bitNoScan		((uintptr)1<<(bitShift*1))	/* when bitAllocated is set */
+#define bitAllocated		((uintptr)1<<(bitShift*0))	/* block start; eligible for garbage collection */
+#define bitScan			((uintptr)1<<(bitShift*1))	/* when bitAllocated is set */
 #define bitMarked		((uintptr)1<<(bitShift*2))	/* when bitAllocated is set */
 #define bitSpecial		((uintptr)1<<(bitShift*3))	/* when bitAllocated is set - has finalizer or being profiled */
-#define bitBlockBoundary	((uintptr)1<<(bitShift*1))	/* when bitAllocated is NOT set */
+#define bitBlockBoundary	((uintptr)1<<(bitShift*1))	/* when bitAllocated is NOT set - mark for FlagNoGC objects */
 
-#define bitMask (bitBlockBoundary | bitAllocated | bitMarked | bitSpecial)
+#define bitMask (bitAllocated | bitScan | bitMarked | bitSpecial)
 
 // Holding worldsema grants an M the right to try to stop the world.
 // The procedure is:
@@ -185,6 +218,7 @@  static struct {
 enum {
 	GC_DEFAULT_PTR = GC_NUM_INSTR,
 	GC_CHAN,
+	GC_G_PTR,
 
 	GC_NUM_INSTR2
 };
@@ -325,6 +359,24 @@  struct PtrTarget
 	uintptr ti;
 };
 
+typedef	struct Scanbuf Scanbuf;
+struct	Scanbuf
+{
+	struct {
+		PtrTarget *begin;
+		PtrTarget *end;
+		PtrTarget *pos;
+	} ptr;
+	struct {
+		Obj *begin;
+		Obj *end;
+		Obj *pos;
+	} obj;
+	Workbuf *wbuf;
+	Obj *wp;
+	uintptr nobj;
+};
+
 typedef struct BufferList BufferList;
 struct BufferList
 {
@@ -357,7 +409,7 @@  static void enqueue(Obj obj, Workbuf **_
 //     flushptrbuf
 //  (find block start, mark and enqueue)
 static void
-flushptrbuf(PtrTarget *ptrbuf, PtrTarget **ptrbufpos, Obj **_wp, Workbuf **_wbuf, uintptr *_nobj)
+flushptrbuf(Scanbuf *sbuf)
 {
 	byte *p, *arena_start, *obj;
 	uintptr size, *bitp, bits, shift, j, x, xbits, off, nobj, ti, n;
@@ -365,17 +417,19 @@  flushptrbuf(PtrTarget *ptrbuf, PtrTarget
 	PageID k;
 	Obj *wp;
 	Workbuf *wbuf;
+	PtrTarget *ptrbuf;
 	PtrTarget *ptrbuf_end;
 
 	arena_start = runtime_mheap.arena_start;
 
-	wp = *_wp;
-	wbuf = *_wbuf;
-	nobj = *_nobj;
-
-	ptrbuf_end = *ptrbufpos;
-	n = ptrbuf_end - ptrbuf;
-	*ptrbufpos = ptrbuf;
+	wp = sbuf->wp;
+	wbuf = sbuf->wbuf;
+	nobj = sbuf->nobj;
+
+	ptrbuf = sbuf->ptr.begin;
+	ptrbuf_end = sbuf->ptr.pos;
+	n = ptrbuf_end - sbuf->ptr.begin;
+	sbuf->ptr.pos = sbuf->ptr.begin;
 
 	if(CollectStats) {
 		runtime_xadd64(&gcstats.ptr.sum, n);
@@ -394,150 +448,146 @@  flushptrbuf(PtrTarget *ptrbuf, PtrTarget
 			runtime_throw("ptrbuf has to be smaller than WorkBuf");
 	}
 
-	// TODO(atom): This block is a branch of an if-then-else statement.
-	//             The single-threaded branch may be added in a next CL.
-	{
-		// Multi-threaded version.
-
-		while(ptrbuf < ptrbuf_end) {
-			obj = ptrbuf->p;
-			ti = ptrbuf->ti;
-			ptrbuf++;
-
-			// obj belongs to interval [mheap.arena_start, mheap.arena_used).
-			if(Debug > 1) {
-				if(obj < runtime_mheap.arena_start || obj >= runtime_mheap.arena_used)
-					runtime_throw("object is outside of mheap");
-			}
-
-			// obj may be a pointer to a live object.
-			// Try to find the beginning of the object.
-
-			// Round down to word boundary.
-			if(((uintptr)obj & ((uintptr)PtrSize-1)) != 0) {
-				obj = (void*)((uintptr)obj & ~((uintptr)PtrSize-1));
-				ti = 0;
-			}
-
-			// Find bits for this word.
-			off = (uintptr*)obj - (uintptr*)arena_start;
-			bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
-			shift = off % wordsPerBitmapWord;
-			xbits = *bitp;
-			bits = xbits >> shift;
+	while(ptrbuf < ptrbuf_end) {
+		obj = ptrbuf->p;
+		ti = ptrbuf->ti;
+		ptrbuf++;
+
+		// obj belongs to interval [mheap.arena_start, mheap.arena_used).
+		if(Debug > 1) {
+			if(obj < runtime_mheap.arena_start || obj >= runtime_mheap.arena_used)
+				runtime_throw("object is outside of mheap");
+		}
+
+		// obj may be a pointer to a live object.
+		// Try to find the beginning of the object.
+
+		// Round down to word boundary.
+		if(((uintptr)obj & ((uintptr)PtrSize-1)) != 0) {
+			obj = (void*)((uintptr)obj & ~((uintptr)PtrSize-1));
+			ti = 0;
+		}
+
+		// Find bits for this word.
+		off = (uintptr*)obj - (uintptr*)arena_start;
+		bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
+		shift = off % wordsPerBitmapWord;
+		xbits = *bitp;
+		bits = xbits >> shift;
+
+		// Pointing at the beginning of a block?
+		if((bits & (bitAllocated|bitBlockBoundary)) != 0) {
+			if(CollectStats)
+				runtime_xadd64(&gcstats.flushptrbuf.foundbit, 1);
+			goto found;
+		}
 
-			// Pointing at the beginning of a block?
-			if((bits & (bitAllocated|bitBlockBoundary)) != 0) {
+		ti = 0;
+
+		// Pointing just past the beginning?
+		// Scan backward a little to find a block boundary.
+		for(j=shift; j-->0; ) {
+			if(((xbits>>j) & (bitAllocated|bitBlockBoundary)) != 0) {
+				obj = (byte*)obj - (shift-j)*PtrSize;
+				shift = j;
+				bits = xbits>>shift;
 				if(CollectStats)
-					runtime_xadd64(&gcstats.flushptrbuf.foundbit, 1);
+					runtime_xadd64(&gcstats.flushptrbuf.foundword, 1);
 				goto found;
 			}
+		}
 
-			ti = 0;
+		// Otherwise consult span table to find beginning.
+		// (Manually inlined copy of MHeap_LookupMaybe.)
+		k = (uintptr)obj>>PageShift;
+		x = k;
+		x -= (uintptr)arena_start>>PageShift;
+		s = runtime_mheap.spans[x];
+		if(s == nil || k < s->start || obj >= s->limit || s->state != MSpanInUse)
+			continue;
+		p = (byte*)((uintptr)s->start<<PageShift);
+		if(s->sizeclass == 0) {
+			obj = p;
+		} else {
+			size = s->elemsize;
+			int32 i = ((byte*)obj - p)/size;
+			obj = p+i*size;
+		}
 
-			// Pointing just past the beginning?
-			// Scan backward a little to find a block boundary.
-			for(j=shift; j-->0; ) {
-				if(((xbits>>j) & (bitAllocated|bitBlockBoundary)) != 0) {
-					obj = (byte*)obj - (shift-j)*PtrSize;
-					shift = j;
-					bits = xbits>>shift;
-					if(CollectStats)
-						runtime_xadd64(&gcstats.flushptrbuf.foundword, 1);
-					goto found;
-				}
-			}
+		// Now that we know the object header, reload bits.
+		off = (uintptr*)obj - (uintptr*)arena_start;
+		bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
+		shift = off % wordsPerBitmapWord;
+		xbits = *bitp;
+		bits = xbits >> shift;
+		if(CollectStats)
+			runtime_xadd64(&gcstats.flushptrbuf.foundspan, 1);
 
-			// Otherwise consult span table to find beginning.
-			// (Manually inlined copy of MHeap_LookupMaybe.)
-			k = (uintptr)obj>>PageShift;
-			x = k;
-			x -= (uintptr)arena_start>>PageShift;
-			s = runtime_mheap.spans[x];
-			if(s == nil || k < s->start || obj >= s->limit || s->state != MSpanInUse)
-				continue;
-			p = (byte*)((uintptr)s->start<<PageShift);
-			if(s->sizeclass == 0) {
-				obj = p;
-			} else {
-				size = s->elemsize;
-				int32 i = ((byte*)obj - p)/size;
-				obj = p+i*size;
+	found:
+		// Now we have bits, bitp, and shift correct for
+		// obj pointing at the base of the object.
+		// Only care about allocated and not marked.
+		if((bits & (bitAllocated|bitMarked)) != bitAllocated)
+			continue;
+		if(work.nproc == 1)
+			*bitp |= bitMarked<<shift;
+		else {
+			for(;;) {
+				x = *bitp;
+				if(x & (bitMarked<<shift))
+					goto continue_obj;
+				if(runtime_casp((void**)bitp, (void*)x, (void*)(x|(bitMarked<<shift))))
+					break;
 			}
+		}
 
-			// Now that we know the object header, reload bits.
-			off = (uintptr*)obj - (uintptr*)arena_start;
-			bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
-			shift = off % wordsPerBitmapWord;
-			xbits = *bitp;
-			bits = xbits >> shift;
-			if(CollectStats)
-				runtime_xadd64(&gcstats.flushptrbuf.foundspan, 1);
+		// If object has no pointers, don't need to scan further.
+		if((bits & bitScan) == 0)
+			continue;
 
-		found:
-			// Now we have bits, bitp, and shift correct for
-			// obj pointing at the base of the object.
-			// Only care about allocated and not marked.
-			if((bits & (bitAllocated|bitMarked)) != bitAllocated)
-				continue;
-			if(work.nproc == 1)
-				*bitp |= bitMarked<<shift;
-			else {
-				for(;;) {
-					x = *bitp;
-					if(x & (bitMarked<<shift))
-						goto continue_obj;
-					if(runtime_casp((void**)bitp, (void*)x, (void*)(x|(bitMarked<<shift))))
-						break;
-				}
-			}
+		// Ask span about size class.
+		// (Manually inlined copy of MHeap_Lookup.)
+		x = (uintptr)obj >> PageShift;
+		x -= (uintptr)arena_start>>PageShift;
+		s = runtime_mheap.spans[x];
 
-			// If object has no pointers, don't need to scan further.
-			if((bits & bitNoScan) != 0)
-				continue;
+		PREFETCH(obj);
 
-			// Ask span about size class.
-			// (Manually inlined copy of MHeap_Lookup.)
-			x = (uintptr)obj >> PageShift;
-			x -= (uintptr)arena_start>>PageShift;
-			s = runtime_mheap.spans[x];
-
-			PREFETCH(obj);
-
-			*wp = (Obj){obj, s->elemsize, ti};
-			wp++;
-			nobj++;
-		continue_obj:;
-		}
+		*wp = (Obj){obj, s->elemsize, ti};
+		wp++;
+		nobj++;
+	continue_obj:;
+	}
 
-		// If another proc wants a pointer, give it some.
-		if(work.nwait > 0 && nobj > handoffThreshold && work.full == 0) {
-			wbuf->nobj = nobj;
-			wbuf = handoff(wbuf);
-			nobj = wbuf->nobj;
-			wp = wbuf->obj + nobj;
-		}
+	// If another proc wants a pointer, give it some.
+	if(work.nwait > 0 && nobj > handoffThreshold && work.full == 0) {
+		wbuf->nobj = nobj;
+		wbuf = handoff(wbuf);
+		nobj = wbuf->nobj;
+		wp = wbuf->obj + nobj;
 	}
 
-	*_wp = wp;
-	*_wbuf = wbuf;
-	*_nobj = nobj;
+	sbuf->wp = wp;
+	sbuf->wbuf = wbuf;
+	sbuf->nobj = nobj;
 }
 
 static void
-flushobjbuf(Obj *objbuf, Obj **objbufpos, Obj **_wp, Workbuf **_wbuf, uintptr *_nobj)
+flushobjbuf(Scanbuf *sbuf)
 {
 	uintptr nobj, off;
 	Obj *wp, obj;
 	Workbuf *wbuf;
+	Obj *objbuf;
 	Obj *objbuf_end;
 
-	wp = *_wp;
-	wbuf = *_wbuf;
-	nobj = *_nobj;
-
-	objbuf_end = *objbufpos;
-	*objbufpos = objbuf;
+	wp = sbuf->wp;
+	wbuf = sbuf->wbuf;
+	nobj = sbuf->nobj;
+
+	objbuf = sbuf->obj.begin;
+	objbuf_end = sbuf->obj.pos;
+	sbuf->obj.pos = sbuf->obj.begin;
 
 	while(objbuf < objbuf_end) {
 		obj = *objbuf++;
@@ -575,9 +625,9 @@  flushobjbuf(Obj *objbuf, Obj **objbufpos
 		wp = wbuf->obj + nobj;
 	}
 
-	*_wp = wp;
-	*_wbuf = wbuf;
-	*_nobj = nobj;
+	sbuf->wp = wp;
+	sbuf->wbuf = wbuf;
+	sbuf->nobj = nobj;
 }
 
 // Program that scans the whole block and treats every block element as a potential pointer
@@ -588,6 +638,11 @@  static uintptr defaultProg[2] = {PtrSize
 static uintptr chanProg[2] = {0, GC_CHAN};
 #endif
 
+#if 0
+// G* program
+static uintptr gptrProg[2] = {0, GC_G_PTR};
+#endif
+
 // Local variables of a program fragment or loop
 typedef struct Frame Frame;
 struct Frame {
@@ -676,8 +731,7 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 	Slice *sliceptr;
 	Frame *stack_ptr, stack_top, stack[GC_STACK_CAPACITY+4];
 	BufferList *scanbuffers;
-	PtrTarget *ptrbuf, *ptrbuf_end, *ptrbufpos;
-	Obj *objbuf, *objbuf_end, *objbufpos;
+	Scanbuf sbuf;
 	Eface *eface;
 	Iface *iface;
 #if 0
@@ -693,21 +747,22 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 	arena_used = runtime_mheap.arena_used;
 
 	stack_ptr = stack+nelem(stack)-1;
-	
+
 	precise_type = false;
 	nominal_size = 0;
 
-	// Allocate ptrbuf
-	{
-		scanbuffers = &bufferList[runtime_m()->helpgc];
-		ptrbuf = &scanbuffers->ptrtarget[0];
-		ptrbuf_end = &scanbuffers->ptrtarget[0] + nelem(scanbuffers->ptrtarget);
-		objbuf = &scanbuffers->obj[0];
-		objbuf_end = &scanbuffers->obj[0] + nelem(scanbuffers->obj);
-	}
+	// Initialize sbuf
+	scanbuffers = &bufferList[runtime_m()->helpgc];
+
+	sbuf.ptr.begin = sbuf.ptr.pos = &scanbuffers->ptrtarget[0];
+	sbuf.ptr.end = sbuf.ptr.begin + nelem(scanbuffers->ptrtarget);
 
-	ptrbufpos = ptrbuf;
-	objbufpos = objbuf;
+	sbuf.obj.begin = sbuf.obj.pos = &scanbuffers->obj[0];
+	sbuf.obj.end = sbuf.obj.begin + nelem(scanbuffers->obj);
+
+	sbuf.wbuf = wbuf;
+	sbuf.wp = wp;
+	sbuf.nobj = nobj;
 
 	// (Silence the compiler)
 #if 0
@@ -727,7 +782,7 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 
 		if(CollectStats) {
 			runtime_xadd64(&gcstats.nbytes, n);
-			runtime_xadd64(&gcstats.obj.sum, nobj);
+			runtime_xadd64(&gcstats.obj.sum, sbuf.nobj);
 			runtime_xadd64(&gcstats.obj.cnt, 1);
 		}
 
@@ -857,9 +912,9 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 			if((const byte*)t >= arena_start && (const byte*)t < arena_used) {
 				union { const Type *tc; Type *tr; } u;
 				u.tc = t;
-				*ptrbufpos++ = (struct PtrTarget){(void*)u.tr, 0};
-				if(ptrbufpos == ptrbuf_end)
-					flushptrbuf(ptrbuf, &ptrbufpos, &wp, &wbuf, &nobj);
+				*sbuf.ptr.pos++ = (PtrTarget){u.tr, 0};
+				if(sbuf.ptr.pos == sbuf.ptr.end)
+					flushptrbuf(&sbuf);
 			}
 
 			// eface->__object
@@ -888,10 +943,9 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 			
 			// iface->tab
 			if((byte*)iface->tab >= arena_start && (byte*)iface->tab < arena_used) {
-				// *ptrbufpos++ = (struct PtrTarget){iface->tab, (uintptr)itabtype->gc};
-				*ptrbufpos++ = (struct PtrTarget){iface->tab, 0};
-				if(ptrbufpos == ptrbuf_end)
-					flushptrbuf(ptrbuf, &ptrbufpos, &wp, &wbuf, &nobj);
+				*sbuf.ptr.pos++ = (PtrTarget){iface->tab, /* (uintptr)itabtype->gc */ 0};
+				if(sbuf.ptr.pos == sbuf.ptr.end)
+					flushptrbuf(&sbuf);
 			}
 
 			// iface->data
@@ -919,9 +973,9 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 				obj = *(byte**)stack_top.b;
 				stack_top.b += PtrSize;
 				if((byte*)obj >= arena_start && (byte*)obj < arena_used) {
-					*ptrbufpos++ = (struct PtrTarget){obj, 0};
-					if(ptrbufpos == ptrbuf_end)
-						flushptrbuf(ptrbuf, &ptrbufpos, &wp, &wbuf, &nobj);
+					*sbuf.ptr.pos++ = (PtrTarget){obj, 0};
+					if(sbuf.ptr.pos == sbuf.ptr.end)
+						flushptrbuf(&sbuf);
 				}
 			}
 			goto next_block;
@@ -950,7 +1004,7 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 					if(*(byte**)i != nil) {
 						// Found a value that may be a pointer.
 						// Do a rescan of the entire block.
-						enqueue((Obj){b, n, 0}, &wbuf, &wp, &nobj);
+						enqueue((Obj){b, n, 0}, &sbuf.wbuf, &sbuf.wp, &sbuf.nobj);
 						if(CollectStats) {
 							runtime_xadd64(&gcstats.rescan, 1);
 							runtime_xadd64(&gcstats.rescanbytes, n);
@@ -996,9 +1050,9 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 			objti = pc[3];
 			pc += 4;
 
-			*objbufpos++ = (Obj){obj, size, objti};
-			if(objbufpos == objbuf_end)
-				flushobjbuf(objbuf, &objbufpos, &wp, &wbuf, &nobj);
+			*sbuf.obj.pos++ = (Obj){obj, size, objti};
+			if(sbuf.obj.pos == sbuf.obj.end)
+				flushobjbuf(&sbuf);
 			continue;
 
 #if 0
@@ -1032,10 +1086,10 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 					// in-use part of the circular buffer is scanned.
 					// (Channel routines zero the unused part, so the current
 					// code does not lead to leaks, it's just a little inefficient.)
-					*objbufpos++ = (Obj){(byte*)chan+runtime_Hchansize, chancap*chantype->elem->size,
+					*sbuf.obj.pos++ = (Obj){(byte*)chan+runtime_Hchansize, chancap*chantype->elem->size,
 						(uintptr)chantype->elem->gc | PRECISE | LOOP};
-					if(objbufpos == objbuf_end)
-						flushobjbuf(objbuf, &objbufpos, &wp, &wbuf, &nobj);
+					if(sbuf.obj.pos == sbuf.obj.end)
+						flushobjbuf(&sbuf);
 				}
 			}
 			if(chan_ret == nil)
@@ -1044,15 +1098,22 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 			continue;
 #endif
 
+#if 0
+		case GC_G_PTR:
+			obj = (void*)stack_top.b;
+			scanstack(obj, &sbuf);
+			goto next_block;
+#endif
+
 		default:
 			runtime_throw("scanblock: invalid GC instruction");
 			return;
 		}
 
 		if((byte*)obj >= arena_start && (byte*)obj < arena_used) {
-			*ptrbufpos++ = (struct PtrTarget){obj, objti};
-			if(ptrbufpos == ptrbuf_end)
-				flushptrbuf(ptrbuf, &ptrbufpos, &wp, &wbuf, &nobj);
+			*sbuf.ptr.pos++ = (PtrTarget){obj, objti};
+			if(sbuf.ptr.pos == sbuf.ptr.end)
+				flushptrbuf(&sbuf);
 		}
 	}
 
@@ -1060,34 +1121,32 @@  scanblock(Workbuf *wbuf, Obj *wp, uintpt
 		// Done scanning [b, b+n).  Prepare for the next iteration of
 		// the loop by setting b, n, ti to the parameters for the next block.
 
-		if(nobj == 0) {
-			flushptrbuf(ptrbuf, &ptrbufpos, &wp, &wbuf, &nobj);
-			flushobjbuf(objbuf, &objbufpos, &wp, &wbuf, &nobj);
+		if(sbuf.nobj == 0) {
+			flushptrbuf(&sbuf);
+			flushobjbuf(&sbuf);
 
-			if(nobj == 0) {
+			if(sbuf.nobj == 0) {
 				if(!keepworking) {
-					if(wbuf)
-						putempty(wbuf);
-					goto endscan;
+					if(sbuf.wbuf)
+						putempty(sbuf.wbuf);
+					return;
 				}
 				// Emptied our buffer: refill.
-				wbuf = getfull(wbuf);
-				if(wbuf == nil)
-					goto endscan;
-				nobj = wbuf->nobj;
-				wp = wbuf->obj + wbuf->nobj;
+				sbuf.wbuf = getfull(sbuf.wbuf);
+				if(sbuf.wbuf == nil)
+					return;
+				sbuf.nobj = sbuf.wbuf->nobj;
+				sbuf.wp = sbuf.wbuf->obj + sbuf.wbuf->nobj;
 			}
 		}
 
 		// Fetch b from the work buffer.
-		--wp;
-		b = wp->p;
-		n = wp->n;
-		ti = wp->ti;
-		nobj--;
+		--sbuf.wp;
+		b = sbuf.wp->p;
+		n = sbuf.wp->n;
+		ti = sbuf.wp->ti;
+		sbuf.nobj--;
 	}
-
-endscan:;
 }
 
 // debug_scanblock is the debug copy of scanblock.
@@ -1159,7 +1218,7 @@  debug_scanblock(byte *b, uintptr n)
 			runtime_printf("found unmarked block %p in %p\n", obj, vp+i);
 
 		// If object has no pointers, don't need to scan further.
-		if((bits & bitNoScan) != 0)
+		if((bits & bitScan) == 0)
 			continue;
 
 		debug_scanblock(obj, size);
@@ -1536,6 +1595,28 @@  addroots(void)
 	addroot((Obj){(byte*)&work, sizeof work, 0});
 }
 
+static void
+addfreelists(void)
+{
+	int32 i;
+	P *p, **pp;
+	MCache *c;
+	MLink *m;
+
+	// Mark objects in the MCache of each P so we don't collect them.
+	for(pp=runtime_allp; (p=*pp); pp++) {
+		c = p->mcache;
+		if(c==nil)
+			continue;
+		for(i = 0; i < NumSizeClasses; i++) {
+			for(m = c->list[i].list; m != nil; m = m->next) {
+				markonly(m);
+			}
+		}
+	}
+	// Note: the sweeper will mark objects in each span's freelist.
+}
+
 static bool
 handlespecial(byte *p, uintptr size)
 {
@@ -1581,7 +1662,7 @@  sweepspan(ParFor *desc, uint32 idx)
 {
 	M *m;
 	int32 cl, n, npages;
-	uintptr size;
+	uintptr size, off, *bitp, shift;
 	byte *p;
 	MCache *c;
 	byte *arena_start;
@@ -1591,6 +1672,7 @@  sweepspan(ParFor *desc, uint32 idx)
 	byte compression;
 	uintptr type_data_inc;
 	MSpan *s;
+	MLink *x;
 
 	m = runtime_m();
 
@@ -1612,6 +1694,17 @@  sweepspan(ParFor *desc, uint32 idx)
 	nfree = 0;
 	end = &head;
 	c = m->mcache;
+
+	// mark any free objects in this span so we don't collect them
+	for(x = s->freelist; x != nil; x = x->next) {
+		// This is markonly(x) but faster because we don't need
+		// atomic access and we're guaranteed to be pointing at
+		// the head of a valid object.
+		off = (uintptr*)x - (uintptr*)runtime_mheap.arena_start;
+		bitp = (uintptr*)runtime_mheap.arena_start - off/wordsPerBitmapWord - 1;
+		shift = off % wordsPerBitmapWord;
+		*bitp |= bitMarked<<shift;
+	}
 	
 	type_data = (byte*)s->types.data;
 	type_data_inc = sizeof(uintptr);
@@ -1655,14 +1748,17 @@  sweepspan(ParFor *desc, uint32 idx)
 				continue;
 		}
 
-		// Mark freed; restore block boundary bit.
-		*bitp = (*bitp & ~(bitMask<<shift)) | (bitBlockBoundary<<shift);
+		// Clear mark, scan, and special bits.
+		*bitp &= ~((bitScan|bitMarked|bitSpecial)<<shift);
 
 		if(cl == 0) {
 			// Free large span.
 			runtime_unmarkspan(p, 1<<PageShift);
 			*(uintptr*)p = (uintptr)0xdeaddeaddeaddeadll;	// needs zeroing
-			runtime_MHeap_Free(&runtime_mheap, s, 1);
+			if(runtime_debug.efence)
+				runtime_SysFree(p, size, &mstats.gc_sys);
+			else
+				runtime_MHeap_Free(&runtime_mheap, s, 1);
 			c->local_nlargefree++;
 			c->local_largefree += size;
 		} else {
@@ -1985,7 +2081,9 @@  runtime_gc(int32 force)
 	a.start_time = runtime_nanotime();
 	m->gcing = 1;
 	runtime_stoptheworld();
-	
+
+	clearpools();
+
 	// Run gc on the g0 stack.  We do this so that the g stack
 	// we're currently running on will no longer change.  Cuts
 	// the root set down a bit (g0 stacks are not scanned, and
@@ -2081,6 +2179,7 @@  gc(struct gc_args *args)
 	work.debugmarkdone = 0;
 	work.nproc = runtime_gcprocs();
 	addroots();
+	addfreelists();
 	runtime_parforsetup(work.markfor, work.nproc, work.nroot, nil, false, markroot);
 	runtime_parforsetup(work.sweepfor, work.nproc, runtime_mheap.nspan, nil, true, sweepspan);
 	if(work.nproc > 1) {
@@ -2317,18 +2416,35 @@  runfinq(void* dummy __attribute__ ((unus
 	}
 }
 
-// mark the block at v of size n as allocated.
-// If noscan is true, mark it as not needing scanning.
 void
-runtime_markallocated(void *v, uintptr n, bool noscan)
+runtime_marknogc(void *v)
 {
 	uintptr *b, obits, bits, off, shift;
 
-	if(0)
-		runtime_printf("markallocated %p+%p\n", v, n);
+	off = (uintptr*)v - (uintptr*)runtime_mheap.arena_start;  // word offset
+	b = (uintptr*)runtime_mheap.arena_start - off/wordsPerBitmapWord - 1;
+	shift = off % wordsPerBitmapWord;
 
-	if((byte*)v+n > (byte*)runtime_mheap.arena_used || (byte*)v < runtime_mheap.arena_start)
-		runtime_throw("markallocated: bad pointer");
+	for(;;) {
+		obits = *b;
+		if((obits>>shift & bitMask) != bitAllocated)
+			runtime_throw("bad initial state for marknogc");
+		bits = (obits & ~(bitAllocated<<shift)) | bitBlockBoundary<<shift;
+		if(runtime_gomaxprocs == 1) {
+			*b = bits;
+			break;
+		} else {
+			// more than one goroutine is potentially running: use atomic op
+			if(runtime_casp((void**)b, (void*)obits, (void*)bits))
+				break;
+		}
+	}
+}
+
+void
+runtime_markscan(void *v)
+{
+	uintptr *b, obits, bits, off, shift;
 
 	off = (uintptr*)v - (uintptr*)runtime_mheap.arena_start;  // word offset
 	b = (uintptr*)runtime_mheap.arena_start - off/wordsPerBitmapWord - 1;
@@ -2336,9 +2452,9 @@  runtime_markallocated(void *v, uintptr n
 
 	for(;;) {
 		obits = *b;
-		bits = (obits & ~(bitMask<<shift)) | (bitAllocated<<shift);
-		if(noscan)
-			bits |= bitNoScan<<shift;
+		if((obits>>shift & bitMask) != bitAllocated)
+			runtime_throw("bad initial state for markscan");
+		bits = obits | bitScan<<shift;
 		if(runtime_gomaxprocs == 1) {
 			*b = bits;
 			break;
@@ -2368,7 +2484,10 @@  runtime_markfreed(void *v, uintptr n)
 
 	for(;;) {
 		obits = *b;
-		bits = (obits & ~(bitMask<<shift)) | (bitBlockBoundary<<shift);
+		// This could be a free of a gc-eligible object (bitAllocated + others) or
+		// a FlagNoGC object (bitBlockBoundary set).  In either case, we revert to
+		// a simple no-scan allocated object because it is going on a free list.
+		bits = (obits & ~(bitMask<<shift)) | (bitAllocated<<shift);
 		if(runtime_gomaxprocs == 1) {
 			*b = bits;
 			break;
@@ -2409,12 +2528,22 @@  runtime_checkfreed(void *v, uintptr n)
 void
 runtime_markspan(void *v, uintptr size, uintptr n, bool leftover)
 {
-	uintptr *b, off, shift;
+	uintptr *b, off, shift, i;
 	byte *p;
 
 	if((byte*)v+size*n > (byte*)runtime_mheap.arena_used || (byte*)v < runtime_mheap.arena_start)
 		runtime_throw("markspan: bad pointer");
 
+	if(runtime_checking) {
+		// bits should be all zero at the start
+		off = (byte*)v + size - runtime_mheap.arena_start;
+		b = (uintptr*)(runtime_mheap.arena_start - off/wordsPerBitmapWord);
+		for(i = 0; i < size/PtrSize/wordsPerBitmapWord; i++) {
+			if(b[i] != 0)
+				runtime_throw("markspan: span bits not zero");
+		}
+	}
+
 	p = v;
 	if(leftover)	// mark a boundary just past end of last block too
 		n++;
@@ -2426,7 +2555,7 @@  runtime_markspan(void *v, uintptr size,
 		off = (uintptr*)p - (uintptr*)runtime_mheap.arena_start;  // word offset
 		b = (uintptr*)runtime_mheap.arena_start - off/wordsPerBitmapWord - 1;
 		shift = off % wordsPerBitmapWord;
-		*b = (*b & ~(bitMask<<shift)) | (bitBlockBoundary<<shift);
+		*b = (*b & ~(bitMask<<shift)) | (bitAllocated<<shift);
 	}
 }
 
Index: libgo/runtime/go-unsafe-pointer.c
===================================================================
--- libgo/runtime/go-unsafe-pointer.c	(revision 211248)
+++ libgo/runtime/go-unsafe-pointer.c	(working copy)
@@ -9,6 +9,9 @@ 
 #include "runtime.h"
 #include "go-type.h"
 
+/* A pointer with a zero value.  */
+static void *zero_pointer;
+
 /* This file provides the type descriptor for the unsafe.Pointer type.
    The unsafe package is defined by the compiler itself, which means
    that there is no package to compile to define the type
@@ -53,7 +56,9 @@  const struct __go_type_descriptor unsafe
   /* __uncommon */
   NULL,
   /* __pointer_to_this */
-  NULL
+  NULL,
+  /* __zero */
+  &zero_pointer
 };
 
 /* We also need the type descriptor for the pointer to unsafe.Pointer,
@@ -94,7 +99,9 @@  const struct __go_ptr_type pointer_unsaf
     /* __uncommon */
     NULL,
     /* __pointer_to_this */
-    NULL
+    NULL,
+    /* __zero */
+    &zero_pointer
   },
   /* __element_type */
   &unsafe_Pointer
Index: libgo/runtime/go-reflect-map.c
===================================================================
--- libgo/runtime/go-reflect-map.c	(revision 211248)
+++ libgo/runtime/go-reflect-map.c	(working copy)
@@ -16,112 +16,55 @@ 
 /* This file implements support for reflection on maps.  These
    functions are called from reflect/value.go.  */
 
-struct mapaccess_ret
-{
-  uintptr_t val;
-  _Bool pres;
-};
-
-extern struct mapaccess_ret mapaccess (struct __go_map_type *, uintptr_t,
-				       uintptr_t)
+extern void *mapaccess (struct __go_map_type *, void *, void *)
   __asm__ (GOSYM_PREFIX "reflect.mapaccess");
 
-struct mapaccess_ret
-mapaccess (struct __go_map_type *mt, uintptr_t m, uintptr_t key_i)
+void *
+mapaccess (struct __go_map_type *mt, void *m, void *key)
 {
   struct __go_map *map = (struct __go_map *) m;
-  void *key;
-  const struct __go_type_descriptor *key_descriptor;
-  void *p;
-  const struct __go_type_descriptor *val_descriptor;
-  struct mapaccess_ret ret;
-  void *val;
-  void *pv;
 
   __go_assert (mt->__common.__code == GO_MAP);
-
-  key_descriptor = mt->__key_type;
-  if (__go_is_pointer_type (key_descriptor))
-    key = &key_i;
-  else
-    key = (void *) key_i;
-
   if (map == NULL)
-    p = NULL;
+    return NULL;
   else
-    p = __go_map_index (map, key, 0);
-
-  val_descriptor = mt->__val_type;
-  if (__go_is_pointer_type (val_descriptor))
-    {
-      val = NULL;
-      pv = &val;
-    }
-  else
-    {
-      val = __go_alloc (val_descriptor->__size);
-      pv = val;
-    }
-
-  if (p == NULL)
-    ret.pres = 0;
-  else
-    {
-      __builtin_memcpy (pv, p, val_descriptor->__size);
-      ret.pres = 1;
-    }
-
-  ret.val = (uintptr_t) val;
-  return ret;
+    return __go_map_index (map, key, 0);
 }
 
-extern void mapassign (struct __go_map_type *, uintptr_t, uintptr_t,
-		       uintptr_t, _Bool)
+extern void mapassign (struct __go_map_type *, void *, void *, void *)
   __asm__ (GOSYM_PREFIX "reflect.mapassign");
 
 void
-mapassign (struct __go_map_type *mt, uintptr_t m, uintptr_t key_i,
-	   uintptr_t val_i, _Bool pres)
+mapassign (struct __go_map_type *mt, void *m, void *key, void *val)
 {
   struct __go_map *map = (struct __go_map *) m;
-  const struct __go_type_descriptor *key_descriptor;
-  void *key;
+  void *p;
 
   __go_assert (mt->__common.__code == GO_MAP);
-
   if (map == NULL)
     runtime_panicstring ("assignment to entry in nil map");
+  p = __go_map_index (map, key, 1);
+  __builtin_memcpy (p, val, mt->__val_type->__size);
+}
 
-  key_descriptor = mt->__key_type;
-  if (__go_is_pointer_type (key_descriptor))
-    key = &key_i;
-  else
-    key = (void *) key_i;
+extern void mapdelete (struct __go_map_type *, void *, void *)
+  __asm__ (GOSYM_PREFIX "reflect.mapdelete");
 
-  if (!pres)
-    __go_map_delete (map, key);
-  else
-    {
-      void *p;
-      const struct __go_type_descriptor *val_descriptor;
-      void *pv;
-
-      p = __go_map_index (map, key, 1);
-
-      val_descriptor = mt->__val_type;
-      if (__go_is_pointer_type (val_descriptor))
-	pv = &val_i;
-      else
-	pv = (void *) val_i;
-      __builtin_memcpy (p, pv, val_descriptor->__size);
-    }
+void
+mapdelete (struct __go_map_type *mt, void *m, void *key)
+{
+  struct __go_map *map = (struct __go_map *) m;
+
+  __go_assert (mt->__common.__code == GO_MAP);
+  if (map == NULL)
+    return;
+  __go_map_delete (map, key);
 }
 
-extern int32_t maplen (uintptr_t)
-  __asm__ (GOSYM_PREFIX "reflect.maplen");
+extern int32_t maplen (void *) __asm__ (GOSYM_PREFIX "reflect.maplen");
 
 int32_t
-maplen (uintptr_t m)
+maplen (void *m)
 {
   struct __go_map *map = (struct __go_map *) m;
 
@@ -130,11 +73,11 @@  maplen (uintptr_t m)
   return (int32_t) map->__element_count;
 }
 
-extern unsigned char *mapiterinit (struct __go_map_type *, uintptr_t)
+extern unsigned char *mapiterinit (struct __go_map_type *, void *)
   __asm__ (GOSYM_PREFIX "reflect.mapiterinit");
 
 unsigned char *
-mapiterinit (struct __go_map_type *mt, uintptr_t m)
+mapiterinit (struct __go_map_type *mt, void *m)
 {
   struct __go_hash_iter *it;
 
@@ -144,78 +87,45 @@  mapiterinit (struct __go_map_type *mt, u
   return (unsigned char *) it;
 }
 
-extern void mapiternext (unsigned char *)
-  __asm__ (GOSYM_PREFIX "reflect.mapiternext");
+extern void mapiternext (void *) __asm__ (GOSYM_PREFIX "reflect.mapiternext");
 
 void
-mapiternext (unsigned char *it)
+mapiternext (void *it)
 {
   __go_mapiternext ((struct __go_hash_iter *) it);
 }
 
-struct mapiterkey_ret
-{
-  uintptr_t key;
-  _Bool ok;
-};
-
-extern struct mapiterkey_ret mapiterkey (unsigned char *)
-  __asm__ (GOSYM_PREFIX "reflect.mapiterkey");
+extern void *mapiterkey (void *) __asm__ (GOSYM_PREFIX "reflect.mapiterkey");
 
-struct mapiterkey_ret
-mapiterkey (unsigned char *ita)
+void *
+mapiterkey (void *ita)
 {
   struct __go_hash_iter *it = (struct __go_hash_iter *) ita;
-  struct mapiterkey_ret ret;
+  const struct __go_type_descriptor *key_descriptor;
+  void *key;
 
   if (it->entry == NULL)
-    {
-      ret.key = 0;
-      ret.ok = 0;
-    }
-  else
-    {
-      const struct __go_type_descriptor *key_descriptor;
-      void *key;
-      void *pk;
-
-      key_descriptor = it->map->__descriptor->__map_descriptor->__key_type;
-      if (__go_is_pointer_type (key_descriptor))
-	{
-	  key = NULL;
-	  pk = &key;
-	}
-      else
-	{
-	  key = __go_alloc (key_descriptor->__size);
-	  pk = key;
-	}
-
-      __go_mapiter1 (it, pk);
-
-      ret.key = (uintptr_t) key;
-      ret.ok = 1;
-    }
+    return NULL;
 
-  return ret;
+  key_descriptor = it->map->__descriptor->__map_descriptor->__key_type;
+  key = __go_alloc (key_descriptor->__size);
+  __go_mapiter1 (it, key);
+  return key;
 }
 
 /* Make a new map.  We have to build our own map descriptor.  */
 
-extern uintptr_t makemap (const struct __go_map_type *)
+extern struct __go_map *makemap (const struct __go_map_type *)
   __asm__ (GOSYM_PREFIX "reflect.makemap");
 
-uintptr_t
+struct __go_map *
 makemap (const struct __go_map_type *t)
 {
   struct __go_map_descriptor *md;
   unsigned int o;
   const struct __go_type_descriptor *kt;
   const struct __go_type_descriptor *vt;
-  struct __go_map* map;
-  void *ret;
 
-  /* FIXME: Reference count.  */
   md = (struct __go_map_descriptor *) __go_alloc (sizeof (*md));
   md->__map_descriptor = t;
   o = sizeof (void *);
@@ -232,11 +142,7 @@  makemap (const struct __go_map_type *t)
   o = (o + vt->__field_align - 1) & ~ (vt->__field_align - 1);
   md->__entry_size = o;
 
-  map = __go_new_map (md, 0);
-
-  ret = __go_alloc (sizeof (void *));
-  __builtin_memcpy (ret, &map, sizeof (void *));
-  return (uintptr_t) ret;
+  return __go_new_map (md, 0);
 }
 
 extern _Bool ismapkey (const struct __go_type_descriptor *)
Index: libgo/runtime/chan.c
===================================================================
--- libgo/runtime/chan.c	(revision 211248)
+++ libgo/runtime/chan.c	(working copy)
@@ -123,19 +123,16 @@  runtime_makechan_c(ChanType *t, int64 hi
 
 // For reflect
 //	func makechan(typ *ChanType, size uint64) (chan)
-uintptr reflect_makechan(ChanType *, uint64)
+Hchan *reflect_makechan(ChanType *, uint64)
   __asm__ (GOSYM_PREFIX "reflect.makechan");
 
-uintptr
+Hchan *
 reflect_makechan(ChanType *t, uint64 size)
 {
-	void *ret;
 	Hchan *c;
 
 	c = runtime_makechan_c(t, size);
-	ret = runtime_mal(sizeof(void*));
-	__builtin_memcpy(ret, &c, sizeof(void*));
-	return (uintptr)ret;
+	return c;
 }
 
 // makechan(t *ChanType, hint int64) (hchan *chan any);
@@ -1308,12 +1305,12 @@  runtime_closechan(Hchan *c)
 // For reflect
 //	func chanclose(c chan)
 
-void reflect_chanclose(uintptr) __asm__ (GOSYM_PREFIX "reflect.chanclose");
+void reflect_chanclose(Hchan *) __asm__ (GOSYM_PREFIX "reflect.chanclose");
 
 void
-reflect_chanclose(uintptr c)
+reflect_chanclose(Hchan *c)
 {
-	closechan((Hchan*)c, runtime_getcallerpc(&c));
+	closechan(c, runtime_getcallerpc(&c));
 }
 
 static void
@@ -1377,15 +1374,13 @@  __go_builtin_close(Hchan *c)
 // For reflect
 //	func chanlen(c chan) (len int)
 
-intgo reflect_chanlen(uintptr) __asm__ (GOSYM_PREFIX "reflect.chanlen");
+intgo reflect_chanlen(Hchan *) __asm__ (GOSYM_PREFIX "reflect.chanlen");
 
 intgo
-reflect_chanlen(uintptr ca)
+reflect_chanlen(Hchan *c)
 {
-	Hchan *c;
 	intgo len;
 
-	c = (Hchan*)ca;
 	if(c == nil)
 		len = 0;
 	else
@@ -1396,21 +1391,19 @@  reflect_chanlen(uintptr ca)
 intgo
 __go_chan_len(Hchan *c)
 {
-	return reflect_chanlen((uintptr)c);
+	return reflect_chanlen(c);
 }
 
 // For reflect
-//	func chancap(c chan) (cap intgo)
+//	func chancap(c chan) int
 
-intgo reflect_chancap(uintptr) __asm__ (GOSYM_PREFIX "reflect.chancap");
+intgo reflect_chancap(Hchan *) __asm__ (GOSYM_PREFIX "reflect.chancap");
 
 intgo
-reflect_chancap(uintptr ca)
+reflect_chancap(Hchan *c)
 {
-	Hchan *c;
 	intgo cap;
 
-	c = (Hchan*)ca;
 	if(c == nil)
 		cap = 0;
 	else
@@ -1421,7 +1414,7 @@  reflect_chancap(uintptr ca)
 intgo
 __go_chan_cap(Hchan *c)
 {
-	return reflect_chancap((uintptr)c);
+	return reflect_chancap(c);
 }
 
 static SudoG*
Index: libgo/runtime/cpuprof.c
===================================================================
--- libgo/runtime/cpuprof.c	(revision 211248)
+++ libgo/runtime/cpuprof.c	(working copy)
@@ -177,7 +177,7 @@  runtime_SetCPUProfileRate(intgo hz)
 		runtime_noteclear(&prof->wait);
 
 		runtime_setcpuprofilerate(tick, hz);
-	} else if(prof->on) {
+	} else if(prof != nil && prof->on) {
 		runtime_setcpuprofilerate(nil, 0);
 		prof->on = false;
 
Index: libgo/runtime/go-type.h
===================================================================
--- libgo/runtime/go-type.h	(revision 211248)
+++ libgo/runtime/go-type.h	(working copy)
@@ -103,6 +103,11 @@  struct __go_type_descriptor
   /* The descriptor for the type which is a pointer to this type.
      This may be NULL.  */
   const struct __go_type_descriptor *__pointer_to_this;
+
+  /* A pointer to a zero value for this type.  All types will point to
+     the same zero value, go$zerovalue, which is a common variable so
+     that it will be large enough.  */
+  void *__zero;
 };
 
 /* The information we store for each method of a type.  */
Index: libgo/runtime/malloc.goc
===================================================================
--- libgo/runtime/malloc.goc	(revision 211248)
+++ libgo/runtime/malloc.goc	(working copy)
@@ -118,7 +118,7 @@  runtime_mallocgc(uintptr size, uintptr t
 		size += sizeof(uintptr);
 
 	c = m->mcache;
-	if(size <= MaxSmallSize) {
+	if(!runtime_debug.efence && size <= MaxSmallSize) {
 		// Allocate from mcache free lists.
 		// Inlined version of SizeToClass().
 		if(size <= 1024-8)
@@ -157,8 +157,10 @@  runtime_mallocgc(uintptr size, uintptr t
 		runtime_markspan(v, 0, 0, true);
 	}
 
-	if(!(flag & FlagNoGC))
-		runtime_markallocated(v, size, (flag&FlagNoScan) != 0);
+	if(flag & FlagNoGC)
+		runtime_marknogc(v);
+	else if(!(flag & FlagNoScan))
+		runtime_markscan(v);
 
 	if(DebugTypeAtBlockEnd)
 		*(uintptr*)((uintptr)v+size-sizeof(uintptr)) = typ;
@@ -180,6 +182,9 @@  runtime_mallocgc(uintptr size, uintptr t
 		runtime_settype_flush(m);
 	m->locks--;
 
+	if(runtime_debug.allocfreetrace)
+		goto profile;
+
 	if(!(flag & FlagNoProfiling) && (rate = runtime_MemProfileRate) > 0) {
 		if(size >= (uint32) rate)
 			goto profile;
@@ -193,7 +198,7 @@  runtime_mallocgc(uintptr size, uintptr t
 			m->mcache->next_sample = runtime_fastrand1() % (2*rate);
 		profile:
 			runtime_setblockspecial(v, true);
-			runtime_MProf_Malloc(v, size);
+			runtime_MProf_Malloc(v, size, typ);
 		}
 	}
 
@@ -257,7 +262,10 @@  __go_free(void *v)
 		// they might coalesce v into other spans and change the bitmap further.
 		runtime_markfreed(v, size);
 		runtime_unmarkspan(v, 1<<PageShift);
-		runtime_MHeap_Free(&runtime_mheap, s, 1);
+		if(runtime_debug.efence)
+			runtime_SysFree((void*)(s->start<<PageShift), size, &mstats.heap_sys);
+		else
+			runtime_MHeap_Free(&runtime_mheap, s, 1);
 		c->local_nlargefree++;
 		c->local_largefree += size;
 	} else {
@@ -819,6 +827,10 @@  func SetFinalizer(obj Eface, finalizer E
 		runtime_printf("runtime.SetFinalizer: first argument is %S, not pointer\n", *obj.__type_descriptor->__reflection);
 		goto throw;
 	}
+	ot = (const PtrType*)obj.type;
+	if(ot->__element_type != nil && ot->__element_type->__size == 0) {
+		return;
+	}
 	if(!runtime_mlookup(obj.__object, &base, &size, nil) || obj.__object != base) {
 		runtime_printf("runtime.SetFinalizer: pointer not at beginning of allocated block\n");
 		goto throw;
Index: libgo/runtime/mprof.goc
===================================================================
--- libgo/runtime/mprof.goc	(revision 211248)
+++ libgo/runtime/mprof.goc	(working copy)
@@ -256,16 +256,56 @@  found:
 	return nil;
 }
 
+static const char*
+typeinfoname(int32 typeinfo)
+{
+	if(typeinfo == TypeInfo_SingleObject)
+		return "single object";
+	else if(typeinfo == TypeInfo_Array)
+		return "array";
+	else if(typeinfo == TypeInfo_Chan)
+		return "channel";
+	// runtime_throw("typeinfoname: unknown type info");
+	return "unknown";
+}
+
+static void
+printstackframes(Location *stk, int32 nstk)
+{
+	Location *loc;
+	int32 frame;
+
+	for(frame = 0; frame < nstk; frame++) {
+		loc = &stk[frame];
+		if (loc->function.len > 0) {
+			runtime_printf("\t#%d %p %S %S:%d\n", frame, loc->pc, loc->function, loc->filename, (int32)loc->lineno);
+		} else {
+			runtime_printf("\t#%d %p\n", frame, loc->pc);
+		}
+	}
+}
+
 // Called by malloc to record a profiled block.
 void
-runtime_MProf_Malloc(void *p, uintptr size)
+runtime_MProf_Malloc(void *p, uintptr size, uintptr typ)
 {
-	int32 nstk;
 	Location stk[32];
 	Bucket *b;
+	Type *type;
+	const char *name;
+	int32 nstk;
 
 	nstk = runtime_callers(1, stk, 32);
 	runtime_lock(&proflock);
+	if(runtime_debug.allocfreetrace) {
+		type = (Type*)(typ & ~3);
+		name = typeinfoname(typ & 3);
+		runtime_printf("MProf_Malloc(p=%p, size=%p, type=%p <%s", p, size, type, name);
+		if(type != nil)
+                	runtime_printf(" of %S", *type->__reflection);
+		runtime_printf(">)\n");
+		printstackframes(stk, nstk);
+	}
 	b = stkbucket(MProf, stk, nstk, true);
 	b->recent_allocs++;
 	b->recent_alloc_bytes += size;
@@ -284,6 +324,10 @@  runtime_MProf_Free(void *p, uintptr size
 	if(b != nil) {
 		b->recent_frees++;
 		b->recent_free_bytes += size;
+		if(runtime_debug.allocfreetrace) {
+			runtime_printf("MProf_Free(p=%p, size=%p)\n", p, size);
+			printstackframes(b->stk, b->nstk);
+		}
 	}
 	runtime_unlock(&proflock);
 }
Index: libgo/runtime/malloc.h
===================================================================
--- libgo/runtime/malloc.h	(revision 211248)
+++ libgo/runtime/malloc.h	(working copy)
@@ -449,7 +449,8 @@  void*	runtime_mallocgc(uintptr size, uin
 void*	runtime_persistentalloc(uintptr size, uintptr align, uint64 *stat);
 int32	runtime_mlookup(void *v, byte **base, uintptr *size, MSpan **s);
 void	runtime_gc(int32 force);
-void	runtime_markallocated(void *v, uintptr n, bool noptr);
+void	runtime_markscan(void *v);
+void	runtime_marknogc(void *v);
 void	runtime_checkallocated(void *v, uintptr n);
 void	runtime_markfreed(void *v, uintptr n);
 void	runtime_checkfreed(void *v, uintptr n);
@@ -484,7 +485,7 @@  struct Obj
 	uintptr	ti;	// type info
 };
 
-void	runtime_MProf_Malloc(void*, uintptr);
+void	runtime_MProf_Malloc(void*, uintptr, uintptr);
 void	runtime_MProf_Free(void*, uintptr);
 void	runtime_MProf_GC(void);
 void	runtime_MProf_Mark(void (*addroot)(Obj));
Index: libgo/runtime/runtime.c
===================================================================
--- libgo/runtime/runtime.c	(revision 211248)
+++ libgo/runtime/runtime.c	(working copy)
@@ -282,9 +282,11 @@  static struct {
 	const char* name;
 	int32*	value;
 } dbgvar[] = {
+	{"allocfreetrace", &runtime_debug.allocfreetrace},
+	{"efence", &runtime_debug.efence},
 	{"gctrace", &runtime_debug.gctrace},
-	{"schedtrace", &runtime_debug.schedtrace},
 	{"scheddetail", &runtime_debug.scheddetail},
+	{"schedtrace", &runtime_debug.schedtrace},
 };
 
 void
Index: libgo/runtime/runtime.h
===================================================================
--- libgo/runtime/runtime.h	(revision 211248)
+++ libgo/runtime/runtime.h	(working copy)
@@ -427,9 +427,11 @@  struct CgoMal
 // Holds variables parsed from GODEBUG env var.
 struct DebugVars
 {
+	int32	allocfreetrace;
+	int32	efence;
 	int32	gctrace;
-	int32	schedtrace;
 	int32	scheddetail;
+	int32	schedtrace;
 };
 
 extern bool runtime_precisestack;
@@ -741,6 +743,9 @@  void	runtime_lockOSThread(void);
 void	runtime_unlockOSThread(void);
 
 bool	runtime_showframe(String, bool);
+Hchan*	runtime_makechan_c(ChanType*, int64);
+void	runtime_chansend(ChanType*, Hchan*, byte*, bool*, void*);
+void	runtime_chanrecv(ChanType*, Hchan*, byte*, bool*, bool*);
 void	runtime_printcreatedby(G*);
 
 uintptr	runtime_memlimit(void);
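
For reference, the two new entries in dbgvar[] above (allocfreetrace and
efence) are parsed from the GODEBUG environment variable, which holds a
comma-separated list of name=value pairs.  A quick sketch of how a user
would enable them (the binary being run is up to the user):

```shell
# allocfreetrace=1: print a stack trace for every profiled allocation
# and free (see the runtime_MProf_Malloc/Free changes above).
# efence=1: bypass the mcache free lists and, on free of a large object,
# return its pages to the system so stray accesses fault immediately.
export GODEBUG=allocfreetrace=1,efence=1
# Any Go binary started from this shell inherits the setting:
env | grep '^GODEBUG='
```

Multiple knobs compose freely, e.g. GODEBUG=efence=1,gctrace=1.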