
Fix PR64535 - increase emergency EH buffers via a new allocator

Message ID alpine.LSU.2.11.1501121358300.12482@zhemvz.fhfr.qr
State New

Commit Message

Richard Biener Jan. 12, 2015, 1:06 p.m. UTC
This "fixes" PR64535 by changing the fixed object size emergency pool
to a variable EH object size (but fixed arena size) allocator.  Via
combining the dependent and non-dependent EH arenas this should allow
around 600 bad_alloc throws in OOM situations on x86_64-linux
compared to the current 64 which should provide some headroom to
the poor souls using EH to communicate OOM in a heavily threaded
environment.

Bootstrapped and tested on x86_64-unknown-linux-gnu (with the #if 1
as in the patch below, forcing the use of the allocator).

Comments?  Ok with only the #else path retained?

What about the buffer size - we're now free to choose something that
doesn't depend on the size of INT_MAX (previously required for
the old allocator bitmap)?

With the cost of some more members I can make the allocator more
generic (use a constructor with an arena and an arena size parameter)
and we may move it somewhere public under __gnu_cxx?  But eventually
boost has something like this anyway.

Thanks,
Richard.

2015-01-12  Richard Biener  <rguenther@suse.de>

	PR libstdc++/64535
	* libsupc++/eh_alloc.cc: Include new.
	(bitmask_type): Remove.
	(one_buffer): Likewise.
	(emergency_buffer): Likewise.
	(emergency_used): Likewise.
	(dependents_buffer): Likewise.
	(dependents_used): Likewise.
	(class pool): New custom fixed-size arena, variable size object
	allocator.
	(emergency_pool): New global.
	(__cxxabiv1::__cxa_allocate_exception): Use new emergency_pool.
	(__cxxabiv1::__cxa_free_exception): Likewise.
	(__cxxabiv1::__cxa_allocate_dependent_exception): Likewise.
	(__cxxabiv1::__cxa_free_dependent_exception): Likewise.

Comments

Richard Biener Jan. 19, 2015, 10:33 a.m. UTC | #1
On Mon, 12 Jan 2015, Richard Biener wrote:

> 
> This "fixes" PR64535 by changing the fixed object size emergency pool
> to a variable EH object size (but fixed arena size) allocator.  Via
> combining the dependent and non-dependent EH arenas this should allow
> around 600 bad_alloc throws in OOM situations on x86_64-linux
> compared to the current 64 which should provide some headroom to
> the poor souls using EH to communicate OOM in a heavily threaded
> environment.
> 
> Bootstrapped and tested on x86_64-unknown-linux-gnu (with the #if 1
> as in the patch below, forcing the use of the allocator).
> 
> Comments?  Ok with only the #else path retained?

I've changed C-style comments to C++-style ones and made the arena
allocation dynamic instead of using a static buffer.  Other than this
the patch is pretty much unchanged.

> What about the buffer size - we're now free to choose something that
> doesn't depend on the size of INT_MAX (previously required for
> the old allocator bitmap)?

I've left it to reproduce exactly the old combined buffer size
of emergency_buffer and dependents_buffer.  It comes out around
72kB on x86_64-linux.  With dynamic allocation it should only
consume virtual memory (the pages are only touched when first used),
so eventually we can even increase it more on 64bit targets.
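
As a rough sanity check of the 72kB figure and of the ~600 bad_alloc
estimate from the original mail - the constants below are assumptions for
a 64-bit threaded target, not values taken from this thread:

#include <cstddef>

// Assumed values; the real macros live in eh_alloc.cc and may differ.
constexpr std::size_t obj_size  = 1024; // assumed EMERGENCY_OBJ_SIZE
constexpr std::size_t obj_count = 64;   // assumed EMERGENCY_OBJ_COUNT
constexpr std::size_t dep_size  = 128;  // assumed sizeof (__cxa_dependent_exception)

// Old layout: two separate arenas, at most obj_count non-dependent throws.
constexpr std::size_t arena_size = obj_size * obj_count + obj_count * dep_size;
static_assert (arena_size == 72 * 1024, "the ~72kB quoted above");

// New layout: one variable-size pool.  A bad_alloc needs roughly the EH
// header plus the exception object plus the pool's size_t bookkeeping,
// assumed here to round up to about 120 bytes.
constexpr std::size_t per_bad_alloc = 120;
static_assert (arena_size / per_bad_alloc >= 600, "roughly 600 bad_alloc throws");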

Still the defines could get some cleanup - I suppose 64k on
a 64bit ptr_mode target, 32k on a 32bit ptr_mode target and
1k on a non-threaded target is reasonable.

Unfortunately I couldn't get an answer of whether throwing
bad_alloc from a throw where we can't allocate an exception
is a) valid or b) even required by the standard ('throw' isn't
listed as 'allocation function' - also our EH allocators are
marked as throw(), so such change would change the ABI...).

> With the cost of some more members I can make the allocator more
> generic (use a constructor with an arena and an arena size parameter)
> and we may move it somewhere public under __gnu_cxx?  But eventually
> boost has something like this anyway.

Didn't explore this - it doesn't really match the STL allocator interface
and imposes overhead even over an implementation class (STL allocators
know the size of the objects they free).
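
To illustrate that mismatch, a minimal sketch - the pool side mirrors the
patch, the allocator side is just the standard deallocate signature:

#include <cstddef>

// An STL-style allocator is handed the size again on deallocation ...
template<typename T>
struct stl_style_allocator
{
  T *allocate (std::size_t n);
  void deallocate (T *p, std::size_t n);  // caller passes n back
};

// ... while the emergency pool's free() only gets the pointer, so every
// allocation must carry its own size in an allocated_entry header.
struct pool_interface
{
  void *allocate (std::size_t size);
  void free (void *ptr);                  // size recovered from a header
};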

I'm re-bootstrapping / testing with the cosmetic changes I did and
with EH allocation not forced to go through the emergency pool
(I've done that in previous bootstraps / tests to get the pool code
exercised).

Any comments?  We have a customer that runs into the issue that 64
bad_alloc exceptions are not enough for them (yes, they require bad_alloc
to work when thrown in a massive quantity from threads).  Other
solutions for this would include simply waiting and re-trying (possibly
deadlocking if no progress is made) or artificially throttling
bad_alloc allocations from the EH emergency pool (identify them by
size, sleep some time inside the lock).
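
Purely as an illustration of that throttling idea - nothing below is in
the patch; the size heuristic, the 10ms figure and the helper names are
all made up - a self-contained sketch:

#include <chrono>
#include <cstddef>
#include <mutex>
#include <new>
#include <thread>

namespace sketch
{
  std::mutex emergency_mutex;

  // Hypothetical front end to a pool::allocate-like function: slow down
  // requests that look like std::bad_alloc being thrown on OOM so that a
  // burst of threads does not exhaust the emergency arena at once.
  void *throttled_allocate (std::size_t size)
  {
    std::lock_guard<std::mutex> sentry (emergency_mutex);
    if (size <= sizeof (std::bad_alloc) + sizeof (std::size_t))
      std::this_thread::sleep_for (std::chrono::milliseconds (10));
    // ... the normal free-list search would follow here ...
    return nullptr;  // placeholder; a real pool returns arena memory
  }
}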

CCing rth who implemented the existing code.

Thanks,
Richard.

2015-01-12  Richard Biener  <rguenther@suse.de>

	PR libstdc++/64535
	* libsupc++/eh_alloc.cc: Include new.
	(bitmask_type): Remove.
	(one_buffer): Likewise.
	(emergency_buffer): Likewise.
	(emergency_used): Likewise.
	(dependents_buffer): Likewise.
	(dependents_used): Likewise.
	(class pool): New custom fixed-size arena, variable size object
	allocator.
	(emergency_pool): New global.
	(__cxxabiv1::__cxa_allocate_exception): Use new emergency_pool.
	(__cxxabiv1::__cxa_free_exception): Likewise.
	(__cxxabiv1::__cxa_allocate_dependent_exception): Likewise.
	(__cxxabiv1::__cxa_free_dependent_exception): Likewise.

Index: libstdc++-v3/libsupc++/eh_alloc.cc
===================================================================
*** libstdc++-v3/libsupc++/eh_alloc.cc.orig	2015-01-19 10:54:57.201633535 +0100
--- libstdc++-v3/libsupc++/eh_alloc.cc	2015-01-19 11:32:29.481555554 +0100
***************
*** 34,39 ****
--- 34,40 ----
  #include <exception>
  #include "unwind-cxx.h"
  #include <ext/concurrence.h>
+ #include <new>
  
  #if _GLIBCXX_HOSTED
  using std::free;
*************** using namespace __cxxabiv1;
*** 72,99 ****
  # define EMERGENCY_OBJ_COUNT	4
  #endif
  
- #if INT_MAX == 32767 || EMERGENCY_OBJ_COUNT <= 32
- typedef unsigned int bitmask_type;
- #else
- #if defined (_GLIBCXX_LLP64)
- typedef unsigned long long bitmask_type;
- #else
- typedef unsigned long bitmask_type;
- #endif
- #endif
  
  
! typedef char one_buffer[EMERGENCY_OBJ_SIZE] __attribute__((aligned));
! static one_buffer emergency_buffer[EMERGENCY_OBJ_COUNT];
! static bitmask_type emergency_used;
  
! static __cxa_dependent_exception dependents_buffer[EMERGENCY_OBJ_COUNT];
! static bitmask_type dependents_used;
  
! namespace
! {
!   // A single mutex controlling emergency allocations.
!   __gnu_cxx::__mutex emergency_mutex;
  }
  
  extern "C" void *
--- 73,239 ----
  # define EMERGENCY_OBJ_COUNT	4
  #endif
  
  
+ namespace
+ {
+   // A fixed-size heap, variable size object allocator
+   class pool
+     {
+     public:
+       pool();
  
!       void *allocate (size_t);
!       void free (void *);
  
!       bool in_pool (void *);
  
!     private:
!       struct free_entry {
! 	size_t size;
! 	free_entry *next;
!       };
!       struct allocated_entry {
! 	size_t size;
! 	char data[];
!       };
! 
!       // A single mutex controlling emergency allocations.
!       __gnu_cxx::__mutex emergency_mutex;
! 
!       // The free-list
!       free_entry *first_free_entry;
!       // The arena itself - we need to keep track of these only
!       // to implement in_pool.
!       char *arena;
!       size_t arena_size;
!     };
! 
!   pool::pool()
!     {
!       // Allocate the arena - we could add a GLIBCXX_EH_ARENA_SIZE environment
!       // to make this tunable
!       arena_size = (EMERGENCY_OBJ_SIZE * EMERGENCY_OBJ_COUNT
! 		    + EMERGENCY_OBJ_COUNT * sizeof (__cxa_dependent_exception));
!       arena = (char *)malloc (arena_size);
! 
!       // Populate the free-list with a single entry covering the whole arena
!       first_free_entry = reinterpret_cast <free_entry *> (arena);
!       new (first_free_entry) free_entry;
!       first_free_entry->size = arena_size;
!       first_free_entry->next = NULL;
!     }
! 
!   void *pool::allocate (size_t size)
!     {
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
!       // We need an additional size_t member.
!       size += sizeof (size_t);
!       // And we need to at least hand out objects of the size of
!       // a freelist entry.
!       if (size < sizeof (free_entry))
! 	size = sizeof (free_entry);
!       // And we need to align objects we hand out to the required
!       // alignment of a freelist entry (this really aligns the
!       // tail which will become a new freelist entry).
!       size = ((size + __alignof__(free_entry) - 1)
! 	      & ~(__alignof__(free_entry) - 1));
!       // Search for an entry of proper size on the freelist.
!       free_entry **e;
!       for (e = &first_free_entry;
! 	   *e && (*e)->size < size;
! 	   e = &(*e)->next)
! 	;
!       if (!*e)
! 	return NULL;
!       allocated_entry *x;
!       if ((*e)->size - size >= sizeof (free_entry))
! 	{
! 	  // Split block if it is too large.
! 	  free_entry *f = reinterpret_cast <free_entry *>
! 	      (reinterpret_cast <char *> (*e) + size);
! 	  size_t sz = (*e)->size;
! 	  free_entry *next = (*e)->next;
! 	  new (f) free_entry;
! 	  f->next = next;
! 	  f->size = sz - size;
! 	  x = reinterpret_cast <allocated_entry *> (*e);
! 	  new (x) allocated_entry;
! 	  x->size = size;
! 	  *e = f;
! 	}
!       else
! 	{
! 	  // Exact size match or too small overhead for a free entry.
! 	  size_t sz = (*e)->size;
! 	  free_entry *next = (*e)->next;
! 	  x = reinterpret_cast <allocated_entry *> (*e);
! 	  new (x) allocated_entry;
! 	  x->size = sz;
! 	  *e = next;
! 	}
!       return &x->data;
!     }
! 
!   void pool::free (void *data)
!     {
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
!       allocated_entry *e = reinterpret_cast <allocated_entry *>
! 	(reinterpret_cast <char *> (data) - sizeof (size_t));
!       size_t sz = e->size;
!       if (!first_free_entry)
! 	{
! 	  // If the free list is empty just put the entry there.
! 	  free_entry *f = reinterpret_cast <free_entry *> (e);
! 	  new (f) free_entry;
! 	  f->size = sz;
! 	  f->next = NULL;
! 	  first_free_entry = f;
! 	}
!       else if (reinterpret_cast <char *> (e) + sz
! 	       == reinterpret_cast <char *> (first_free_entry))
! 	{
! 	  // Check if we can merge with the first free entry being right
! 	  // after us.
! 	  free_entry *f = reinterpret_cast <free_entry *> (e);
! 	  new (f) free_entry;
! 	  f->size = sz + first_free_entry->size;
! 	  f->next = first_free_entry->next;
! 	  first_free_entry = f;
! 	}
!       else
! 	{
! 	  // Else search for a free item we can merge with at its end.
! 	  free_entry **fe;
! 	  for (fe = &first_free_entry;
! 	       (*fe)->next
! 	       && (reinterpret_cast <char *> ((*fe)->next)
! 		   > reinterpret_cast <char *> (e) + sz);
! 	       fe = &(*fe)->next)
! 	    ;
! 	  if (reinterpret_cast <char *> (*fe) + (*fe)->size
! 	      == reinterpret_cast <char *> (e))
! 	    /* Merge with the freelist entry.  */
! 	    (*fe)->size += sz;
! 	  else
! 	    {
! 	      // Else put it after it which keeps the freelist sorted.
! 	      free_entry *f = reinterpret_cast <free_entry *> (e);
! 	      new (f) free_entry;
! 	      f->size = sz;
! 	      f->next = (*fe)->next;
! 	      (*fe)->next = f;
! 	    }
! 	}
!     }
! 
!   bool pool::in_pool (void *ptr)
!     {
!       char *p = reinterpret_cast <char *> (ptr);
!       return (p > arena
! 	      && p < arena + arena_size);
!     }
! 
!   pool emergency_pool;
  }
  
  extern "C" void *
*************** __cxxabiv1::__cxa_allocate_exception(std
*** 104,133 ****
    thrown_size += sizeof (__cxa_refcounted_exception);
    ret = malloc (thrown_size);
  
!   if (! ret)
!     {
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
! 
!       bitmask_type used = emergency_used;
!       unsigned int which = 0;
! 
!       if (thrown_size > EMERGENCY_OBJ_SIZE)
! 	goto failed;
!       while (used & 1)
! 	{
! 	  used >>= 1;
! 	  if (++which >= EMERGENCY_OBJ_COUNT)
! 	    goto failed;
! 	}
! 
!       emergency_used |= (bitmask_type)1 << which;
!       ret = &emergency_buffer[which][0];
! 
!     failed:;
  
!       if (!ret)
! 	std::terminate ();
!     }
  
    memset (ret, 0, sizeof (__cxa_refcounted_exception));
  
--- 244,254 ----
    thrown_size += sizeof (__cxa_refcounted_exception);
    ret = malloc (thrown_size);
  
!   if (!ret)
!     ret = emergency_pool.allocate (thrown_size);
  
!   if (!ret)
!     std::terminate ();
  
    memset (ret, 0, sizeof (__cxa_refcounted_exception));
  
*************** __cxxabiv1::__cxa_allocate_exception(std
*** 138,156 ****
  extern "C" void
  __cxxabiv1::__cxa_free_exception(void *vptr) _GLIBCXX_NOTHROW
  {
!   char *base = (char *) emergency_buffer;
!   char *ptr = (char *) vptr;
!   if (ptr >= base
!       && ptr < base + sizeof (emergency_buffer))
!     {
!       const unsigned int which
! 	= (unsigned) (ptr - base) / EMERGENCY_OBJ_SIZE;
! 
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
!       emergency_used &= ~((bitmask_type)1 << which);
!     }
    else
!     free (ptr - sizeof (__cxa_refcounted_exception));
  }
  
  
--- 259,269 ----
  extern "C" void
  __cxxabiv1::__cxa_free_exception(void *vptr) _GLIBCXX_NOTHROW
  {
!   char *ptr = (char *) vptr - sizeof (__cxa_refcounted_exception);
!   if (emergency_pool.in_pool (ptr))
!     emergency_pool.free (ptr);
    else
!     free (ptr);
  }
  
  
*************** __cxxabiv1::__cxa_allocate_dependent_exc
*** 163,189 ****
      (malloc (sizeof (__cxa_dependent_exception)));
  
    if (!ret)
!     {
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
! 
!       bitmask_type used = dependents_used;
!       unsigned int which = 0;
! 
!       while (used & 1)
! 	{
! 	  used >>= 1;
! 	  if (++which >= EMERGENCY_OBJ_COUNT)
! 	    goto failed;
! 	}
! 
!       dependents_used |= (bitmask_type)1 << which;
!       ret = &dependents_buffer[which];
  
!     failed:;
! 
!       if (!ret)
! 	std::terminate ();
!     }
  
    memset (ret, 0, sizeof (__cxa_dependent_exception));
  
--- 276,286 ----
      (malloc (sizeof (__cxa_dependent_exception)));
  
    if (!ret)
!     ret = static_cast <__cxa_dependent_exception*>
!       (emergency_pool.allocate (sizeof (__cxa_dependent_exception)));
  
!   if (!ret)
!     std::terminate ();
  
    memset (ret, 0, sizeof (__cxa_dependent_exception));
  
*************** extern "C" void
*** 195,211 ****
  __cxxabiv1::__cxa_free_dependent_exception
    (__cxa_dependent_exception *vptr) _GLIBCXX_NOTHROW
  {
!   char *base = (char *) dependents_buffer;
!   char *ptr = (char *) vptr;
!   if (ptr >= base
!       && ptr < base + sizeof (dependents_buffer))
!     {
!       const unsigned int which
! 	= (unsigned) (ptr - base) / sizeof (__cxa_dependent_exception);
! 
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
!       dependents_used &= ~((bitmask_type)1 << which);
!     }
    else
      free (vptr);
  }
--- 292,299 ----
  __cxxabiv1::__cxa_free_dependent_exception
    (__cxa_dependent_exception *vptr) _GLIBCXX_NOTHROW
  {
!   if (emergency_pool.in_pool (vptr))
!     emergency_pool.free (vptr);
    else
      free (vptr);
  }
Jonathan Wakely Jan. 19, 2015, 3:34 p.m. UTC | #2
On 19/01/15 11:33 +0100, Richard Biener wrote:
>On Mon, 12 Jan 2015, Richard Biener wrote:
>
>>
>> This "fixes" PR64535 by changing the fixed object size emergency pool
>> to a variable EH object size (but fixed arena size) allocator.  Via
>> combining the dependent and non-dependent EH arenas this should allow
>> around 600 bad_alloc throws in OOM situations on x86_64-linux
>> compared to the current 64 which should provide some headroom to
>> the poor souls using EH to communicate OOM in a heavily threaded
>> environment.
>>
>> Bootstrapped and tested on x86_64-unknown-linux-gnu (with the #if 1
>> as in the patch below, forcing the use of the allocator).

I see only the #else part is kept now - that was what I was going to
suggest.

>Unfortunately I couldn't get an answer of whether throwing
>bad_alloc from a throw where we can't allocate an exception
>is a) valid or b) even required by the standard ('throw' isn't
>listed as 'allocation function' - also our EH allocators are
>marked as throw(), so such change would change the ABI...).

I'll ask the C++ committee.

>> With the cost of some more members I can make the allocator more
> > generic (use a constructor with an arena and an arena size parameter)
>> and we may move it somewhere public under __gnu_cxx?  But eventually
>> boost has something like this anyway.
>
>Didn't explore this - it doesn't really match the STL allocator interface
>and imposes overhead even over an implementation class (STL allocators
>know the size of the objects they free).

Yeah, I don't think there's any advantage complicating this type to
make it usable as an STL allocator - it does what it's designed to do
and that's fine.

>I'm re-bootstrapping / testing with the cosmetic changes I did and
>with EH allocation not forced to go through the emergency pool
>(I've done that in previous bootstraps / tests to get the pool code
>exercised).

>Any comments?  We have a customer that runs into the issue that 64
>bad_alloc exceptions are not enough for them (yes, they require bad_alloc
>to work when thrown in a massive quantity from threads).  Other
>solutions for this would include simply waiting and re-trying (possibly
>deadlocking if no progress is made) or artificially throttling
>bad_alloc allocations from the EH emergency pool (identify them by
>size, sleep some time inside the lock).
>
>CCing rth who implemented the existing code.

I don't have any better ideas for fixing the issue, so it's approved
by me. Unless rth comes back with something else please go ahead and
check it in.
Richard Biener Jan. 20, 2015, 9:06 a.m. UTC | #3
On Mon, 19 Jan 2015, Jonathan Wakely wrote:

> On 19/01/15 11:33 +0100, Richard Biener wrote:
> > On Mon, 12 Jan 2015, Richard Biener wrote:
> > 
> > > 
> > > This "fixes" PR64535 by changing the fixed object size emergency pool
> > > to a variable EH object size (but fixed arena size) allocator.  Via
> > > combining the dependent and non-dependent EH arenas this should allow
> > > around 600 bad_alloc throws in OOM situations on x86_64-linux
> > > compared to the current 64 which should provide some headroom to
> > > the poor souls using EH to communicate OOM in a heavily threaded
> > > environment.
> > > 
> > > Bootstrapped and tested on x86_64-unknown-linux-gnu (with the #if 1
> > > as in the patch below, forcing the use of the allocator).
> 
> I see only the #else part is kept now - that was what I was going to
> suggest.
> 
> > Unfortunately I couldn't get an answer of whether throwing
> > bad_alloc from a throw where we can't allocate an exception
> > is a) valid or b) even required by the standard ('throw' isn't
> > listed as 'allocation function' - also our EH allocators are
> > marked as throw(), so such change would change the ABI...).
> 
> I'll ask the C++ committee.

Thanks.  Only needing to deal with std::bad_alloc allocations from
the pool would greatly simplify it (I'd do a fixed-object-size one then).
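
A fixed-object-size variant along those lines was not written; as a rough
sketch of the simplification meant here, it would be little more than the
old bitmap scheme with a single slot size - slot_size, slot_count and the
omitted locking are all assumptions:

#include <cstddef>

// Hypothetical fixed-object-size emergency pool, only ever handing out
// slots big enough for the EH header plus a std::bad_alloc.  Locking as
// in the real pool is omitted here.
class fixed_pool
{
  enum : std::size_t { slot_size = 256, slot_count = 64 };  // assumed values

  char arena[slot_size * slot_count] __attribute__((aligned));
  unsigned long long used;                                  // one bit per slot

public:
  fixed_pool () : used (0) { }

  void *allocate (std::size_t size)
  {
    if (size > slot_size)
      return 0;
    for (std::size_t i = 0; i < slot_count; ++i)
      if (!(used & (1ULL << i)))
        {
          used |= 1ULL << i;
          return arena + i * slot_size;
        }
    return 0;
  }

  void free (void *ptr)
  {
    std::size_t i = (static_cast<char *> (ptr) - arena) / slot_size;
    used &= ~(1ULL << i);
  }
};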

> > > With the cost of some more members I can make the allocator more
> > > generic (use a constructor with an arena and an arena size parameter)
> > > and we may move it somewhere public under __gnu_cxx?  But eventually
> > > boost has something like this anyway.
> > 
> > Didn't explore this - it doesn't really match the STL allocator interface
> > and imposes overhead even over an implementation class (STL allocators
> > know the size of the objects they free).
> 
> Yeah, I don't think there's any advantage complicating this type to
> make it usable as an STL allocator - it does what it's designed to do
> and that's fine.
> 
> > I'm re-bootstrapping / testing with the cosmetic changes I did and
> > with EH allocation not forced to go through the emergency pool
> > (I've done that in previous bootstraps / tests to get the pool code
> > exercised).
> 
> > Any comments?  We have a customer that runs into the issue that 64
> > bad_alloc exceptions are not enough for them (yes, they require bad_alloc
> > to work when thrown in a massive quantity from threads).  Other
> > solutions for this would include simply waiting and re-trying (possibly
> > deadlocking if no progress is made) or artificially throttling
> > bad_alloc allocations from the EH emergency pool (identify them by
> > size, sleep some time inside the lock).
> > 
> > CCing rth who implemented the existing code.
> 
> I don't have any better ideas for fixing the issue, so it's approved
> by me. Unless rth comes back with something else please go ahead and
> check it in.

Testing revealed g++.old-deja/g++.eh/badalloc1.C, which has an interesting
way of limiting allocation (providing its own malloc).  I had to bump
its arena size to make the upfront allocation in libstdc++ pass.  I
also added code to deal with malloc failing there (not sure whether it's
worth failing in a way that still sets up a minimal-size emergency arena).
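
For readers who don't know the test: it interposes its own allocator over
a fixed-size static arena, roughly along the lines of the sketch below -
simplified, with no alignment or realloc handling, and not the actual
test code:

typedef __SIZE_TYPE__ size_t;

// Simplified sketch of how badalloc1.C constrains allocation: a bump
// allocator over a fixed arena replaces the C library malloc, so once
// arena_size bytes have been handed out every further malloc returns 0
// and operator new must throw bad_alloc.
const int arena_size = 32768;        // the real test computes this per target
static char arena[arena_size];
static size_t pos;

extern "C" void *malloc (size_t size)
{
  if (pos + size > (size_t) arena_size)
    return 0;                        // out of "memory"
  void *p = arena + pos;
  pos += size;
  return p;
}

extern "C" void free (void *) { }    // the sketch never reuses memory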

I also replaced all size_t with std::size_t for consistency.

Thanks,
Richard.

2015-01-12  Richard Biener  <rguenther@suse.de>

	PR libstdc++/64535
	* libsupc++/eh_alloc.cc: Include new.
	(bitmask_type): Remove.
	(one_buffer): Likewise.
	(emergency_buffer): Likewise.
	(emergency_used): Likewise.
	(dependents_buffer): Likewise.
	(dependents_used): Likewise.
	(class pool): New custom fixed-size arena, variable size object
	allocator.
	(emergency_pool): New global.
	(__cxxabiv1::__cxa_allocate_exception): Use new emergency_pool.
	(__cxxabiv1::__cxa_free_exception): Likewise.
	(__cxxabiv1::__cxa_allocate_dependent_exception): Likewise.
	(__cxxabiv1::__cxa_free_dependent_exception): Likewise.

	* g++.old-deja/g++.eh/badalloc1.C: Adjust.

Index: libstdc++-v3/libsupc++/eh_alloc.cc
===================================================================
*** libstdc++-v3/libsupc++/eh_alloc.cc.orig	2015-01-19 13:32:47.559305641 +0100
--- libstdc++-v3/libsupc++/eh_alloc.cc	2015-01-20 10:03:17.255749415 +0100
***************
*** 34,39 ****
--- 34,40 ----
  #include <exception>
  #include "unwind-cxx.h"
  #include <ext/concurrence.h>
+ #include <new>
  
  #if _GLIBCXX_HOSTED
  using std::free;
*************** using namespace __cxxabiv1;
*** 72,133 ****
  # define EMERGENCY_OBJ_COUNT	4
  #endif
  
- #if INT_MAX == 32767 || EMERGENCY_OBJ_COUNT <= 32
- typedef unsigned int bitmask_type;
- #else
- #if defined (_GLIBCXX_LLP64)
- typedef unsigned long long bitmask_type;
- #else
- typedef unsigned long bitmask_type;
- #endif
- #endif
  
  
! typedef char one_buffer[EMERGENCY_OBJ_SIZE] __attribute__((aligned));
! static one_buffer emergency_buffer[EMERGENCY_OBJ_COUNT];
! static bitmask_type emergency_used;
  
! static __cxa_dependent_exception dependents_buffer[EMERGENCY_OBJ_COUNT];
! static bitmask_type dependents_used;
  
! namespace
! {
!   // A single mutex controlling emergency allocations.
!   __gnu_cxx::__mutex emergency_mutex;
! }
  
! extern "C" void *
! __cxxabiv1::__cxa_allocate_exception(std::size_t thrown_size) _GLIBCXX_NOTHROW
! {
!   void *ret;
  
!   thrown_size += sizeof (__cxa_refcounted_exception);
!   ret = malloc (thrown_size);
  
!   if (! ret)
      {
        __gnu_cxx::__scoped_lock sentry(emergency_mutex);
  
!       bitmask_type used = emergency_used;
!       unsigned int which = 0;
! 
!       if (thrown_size > EMERGENCY_OBJ_SIZE)
! 	goto failed;
!       while (used & 1)
  	{
! 	  used >>= 1;
! 	  if (++which >= EMERGENCY_OBJ_COUNT)
! 	    goto failed;
  	}
  
!       emergency_used |= (bitmask_type)1 << which;
!       ret = &emergency_buffer[which][0];
  
!     failed:;
  
!       if (!ret)
! 	std::terminate ();
!     }
  
    memset (ret, 0, sizeof (__cxa_refcounted_exception));
  
--- 73,261 ----
  # define EMERGENCY_OBJ_COUNT	4
  #endif
  
  
+ namespace
+ {
+   // A fixed-size heap, variable size object allocator
+   class pool
+     {
+     public:
+       pool();
  
!       void *allocate (std::size_t);
!       void free (void *);
  
!       bool in_pool (void *);
  
!     private:
!       struct free_entry {
! 	std::size_t size;
! 	free_entry *next;
!       };
!       struct allocated_entry {
! 	std::size_t size;
! 	char data[];
!       };
! 
!       // A single mutex controlling emergency allocations.
!       __gnu_cxx::__mutex emergency_mutex;
! 
!       // The free-list
!       free_entry *first_free_entry;
!       // The arena itself - we need to keep track of these only
!       // to implement in_pool.
!       char *arena;
!       std::size_t arena_size;
!     };
  
!   pool::pool()
!     {
!       // Allocate the arena - we could add a GLIBCXX_EH_ARENA_SIZE environment
!       // to make this tunable.
!       arena_size = (EMERGENCY_OBJ_SIZE * EMERGENCY_OBJ_COUNT
! 		    + EMERGENCY_OBJ_COUNT * sizeof (__cxa_dependent_exception));
!       arena = (char *)malloc (arena_size);
!       if (!arena)
! 	{
! 	  // If the allocation failed go without an emergency pool.
! 	  arena_size = 0;
! 	  first_free_entry = NULL;
! 	  return;
! 	}
  
!       // Populate the free-list with a single entry covering the whole arena
!       first_free_entry = reinterpret_cast <free_entry *> (arena);
!       new (first_free_entry) free_entry;
!       first_free_entry->size = arena_size;
!       first_free_entry->next = NULL;
!     }
  
!   void *pool::allocate (std::size_t size)
      {
        __gnu_cxx::__scoped_lock sentry(emergency_mutex);
+       // We need an additional size_t member.
+       size += sizeof (std::size_t);
+       // And we need to at least hand out objects of the size of
+       // a freelist entry.
+       if (size < sizeof (free_entry))
+ 	size = sizeof (free_entry);
+       // And we need to align objects we hand out to the required
+       // alignment of a freelist entry (this really aligns the
+       // tail which will become a new freelist entry).
+       size = ((size + __alignof__(free_entry) - 1)
+ 	      & ~(__alignof__(free_entry) - 1));
+       // Search for an entry of proper size on the freelist.
+       free_entry **e;
+       for (e = &first_free_entry;
+ 	   *e && (*e)->size < size;
+ 	   e = &(*e)->next)
+ 	;
+       if (!*e)
+ 	return NULL;
+       allocated_entry *x;
+       if ((*e)->size - size >= sizeof (free_entry))
+ 	{
+ 	  // Split block if it is too large.
+ 	  free_entry *f = reinterpret_cast <free_entry *>
+ 	      (reinterpret_cast <char *> (*e) + size);
+ 	  std::size_t sz = (*e)->size;
+ 	  free_entry *next = (*e)->next;
+ 	  new (f) free_entry;
+ 	  f->next = next;
+ 	  f->size = sz - size;
+ 	  x = reinterpret_cast <allocated_entry *> (*e);
+ 	  new (x) allocated_entry;
+ 	  x->size = size;
+ 	  *e = f;
+ 	}
+       else
+ 	{
+ 	  // Exact size match or too small overhead for a free entry.
+ 	  std::size_t sz = (*e)->size;
+ 	  free_entry *next = (*e)->next;
+ 	  x = reinterpret_cast <allocated_entry *> (*e);
+ 	  new (x) allocated_entry;
+ 	  x->size = sz;
+ 	  *e = next;
+ 	}
+       return &x->data;
+     }
  
!   void pool::free (void *data)
!     {
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
!       allocated_entry *e = reinterpret_cast <allocated_entry *>
! 	(reinterpret_cast <char *> (data) - sizeof (std::size_t));
!       std::size_t sz = e->size;
!       if (!first_free_entry)
! 	{
! 	  // If the free list is empty just put the entry there.
! 	  free_entry *f = reinterpret_cast <free_entry *> (e);
! 	  new (f) free_entry;
! 	  f->size = sz;
! 	  f->next = NULL;
! 	  first_free_entry = f;
! 	}
!       else if (reinterpret_cast <char *> (e) + sz
! 	       == reinterpret_cast <char *> (first_free_entry))
! 	{
! 	  // Check if we can merge with the first free entry being right
! 	  // after us.
! 	  free_entry *f = reinterpret_cast <free_entry *> (e);
! 	  new (f) free_entry;
! 	  f->size = sz + first_free_entry->size;
! 	  f->next = first_free_entry->next;
! 	  first_free_entry = f;
! 	}
!       else
  	{
! 	  // Else search for a free item we can merge with at its end.
! 	  free_entry **fe;
! 	  for (fe = &first_free_entry;
! 	       (*fe)->next
! 	       && (reinterpret_cast <char *> ((*fe)->next)
! 		   > reinterpret_cast <char *> (e) + sz);
! 	       fe = &(*fe)->next)
! 	    ;
! 	  if (reinterpret_cast <char *> (*fe) + (*fe)->size
! 	      == reinterpret_cast <char *> (e))
! 	    /* Merge with the freelist entry.  */
! 	    (*fe)->size += sz;
! 	  else
! 	    {
! 	      // Else put it after it which keeps the freelist sorted.
! 	      free_entry *f = reinterpret_cast <free_entry *> (e);
! 	      new (f) free_entry;
! 	      f->size = sz;
! 	      f->next = (*fe)->next;
! 	      (*fe)->next = f;
! 	    }
  	}
+     }
  
!   bool pool::in_pool (void *ptr)
!     {
!       char *p = reinterpret_cast <char *> (ptr);
!       return (p > arena
! 	      && p < arena + arena_size);
!     }
  
!   pool emergency_pool;
! }
  
! extern "C" void *
! __cxxabiv1::__cxa_allocate_exception(std::size_t thrown_size) _GLIBCXX_NOTHROW
! {
!   void *ret;
! 
!   thrown_size += sizeof (__cxa_refcounted_exception);
!   ret = malloc (thrown_size);
! 
!   if (!ret)
!     ret = emergency_pool.allocate (thrown_size);
! 
!   if (!ret)
!     std::terminate ();
  
    memset (ret, 0, sizeof (__cxa_refcounted_exception));
  
*************** __cxxabiv1::__cxa_allocate_exception(std
*** 138,156 ****
  extern "C" void
  __cxxabiv1::__cxa_free_exception(void *vptr) _GLIBCXX_NOTHROW
  {
!   char *base = (char *) emergency_buffer;
!   char *ptr = (char *) vptr;
!   if (ptr >= base
!       && ptr < base + sizeof (emergency_buffer))
!     {
!       const unsigned int which
! 	= (unsigned) (ptr - base) / EMERGENCY_OBJ_SIZE;
! 
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
!       emergency_used &= ~((bitmask_type)1 << which);
!     }
    else
!     free (ptr - sizeof (__cxa_refcounted_exception));
  }
  
  
--- 266,276 ----
  extern "C" void
  __cxxabiv1::__cxa_free_exception(void *vptr) _GLIBCXX_NOTHROW
  {
!   char *ptr = (char *) vptr - sizeof (__cxa_refcounted_exception);
!   if (emergency_pool.in_pool (ptr))
!     emergency_pool.free (ptr);
    else
!     free (ptr);
  }
  
  
*************** __cxxabiv1::__cxa_allocate_dependent_exc
*** 163,189 ****
      (malloc (sizeof (__cxa_dependent_exception)));
  
    if (!ret)
!     {
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
! 
!       bitmask_type used = dependents_used;
!       unsigned int which = 0;
! 
!       while (used & 1)
! 	{
! 	  used >>= 1;
! 	  if (++which >= EMERGENCY_OBJ_COUNT)
! 	    goto failed;
! 	}
  
!       dependents_used |= (bitmask_type)1 << which;
!       ret = &dependents_buffer[which];
! 
!     failed:;
! 
!       if (!ret)
! 	std::terminate ();
!     }
  
    memset (ret, 0, sizeof (__cxa_dependent_exception));
  
--- 283,293 ----
      (malloc (sizeof (__cxa_dependent_exception)));
  
    if (!ret)
!     ret = static_cast <__cxa_dependent_exception*>
!       (emergency_pool.allocate (sizeof (__cxa_dependent_exception)));
  
!   if (!ret)
!     std::terminate ();
  
    memset (ret, 0, sizeof (__cxa_dependent_exception));
  
*************** extern "C" void
*** 195,211 ****
  __cxxabiv1::__cxa_free_dependent_exception
    (__cxa_dependent_exception *vptr) _GLIBCXX_NOTHROW
  {
!   char *base = (char *) dependents_buffer;
!   char *ptr = (char *) vptr;
!   if (ptr >= base
!       && ptr < base + sizeof (dependents_buffer))
!     {
!       const unsigned int which
! 	= (unsigned) (ptr - base) / sizeof (__cxa_dependent_exception);
! 
!       __gnu_cxx::__scoped_lock sentry(emergency_mutex);
!       dependents_used &= ~((bitmask_type)1 << which);
!     }
    else
      free (vptr);
  }
--- 299,306 ----
  __cxxabiv1::__cxa_free_dependent_exception
    (__cxa_dependent_exception *vptr) _GLIBCXX_NOTHROW
  {
!   if (emergency_pool.in_pool (vptr))
!     emergency_pool.free (vptr);
    else
      free (vptr);
  }
Index: gcc/testsuite/g++.old-deja/g++.eh/badalloc1.C
===================================================================
*** gcc/testsuite/g++.old-deja/g++.eh/badalloc1.C.orig	2014-09-04 12:52:05.878030488 +0200
--- gcc/testsuite/g++.old-deja/g++.eh/badalloc1.C	2015-01-20 09:45:30.556786348 +0100
*************** typedef __SIZE_TYPE__ size_t;
*** 12,32 ****
  extern "C" void abort();
  extern "C" void *memcpy(void *, const void *, size_t);
  
  // Assume that STACK_SIZE defined implies a system that does not have a
  // large data space either, and additionally that we're not linking against
  // a shared libstdc++ (which requires quite a bit more initialization space).
  #ifdef STACK_SIZE
! const int arena_size = 256;
  #else
  #if defined(__FreeBSD__) || defined(__sun__) || defined(__hpux__)
  // FreeBSD, Solaris and HP-UX require even more space at initialization time.
  // FreeBSD 5 now requires over 131072 bytes.
! const int arena_size = 262144;
  #else
  // Because pointers make up the bulk of our exception-initialization
  // allocations, we scale by the pointer size from the original
  // 32-bit-systems-based estimate.
! const int arena_size = 32768 * ((sizeof (void *) + 3)/4);
  #endif
  #endif
  
--- 12,35 ----
  extern "C" void abort();
  extern "C" void *memcpy(void *, const void *, size_t);
  
+ // libstdc++ requires a large initialization time allocation for the
+ // emergency EH allocation pool.  Add that to the arena size.
+ 
  // Assume that STACK_SIZE defined implies a system that does not have a
  // large data space either, and additionally that we're not linking against
  // a shared libstdc++ (which requires quite a bit more initialization space).
  #ifdef STACK_SIZE
! const int arena_size = 256 + 8 * 128;
  #else
  #if defined(__FreeBSD__) || defined(__sun__) || defined(__hpux__)
  // FreeBSD, Solaris and HP-UX require even more space at initialization time.
  // FreeBSD 5 now requires over 131072 bytes.
! const int arena_size = 262144 + 72 * 1024;
  #else
  // Because pointers make up the bulk of our exception-initialization
  // allocations, we scale by the pointer size from the original
  // 32-bit-systems-based estimate.
! const int arena_size = 32768 * ((sizeof (void *) + 3)/4) + 72 * 1024;
  #endif
  #endif
Jonathan Wakely Jan. 20, 2015, 1:18 p.m. UTC | #4
On 20/01/15 10:06 +0100, Richard Biener wrote:
>On Mon, 19 Jan 2015, Jonathan Wakely wrote:
>
>> On 19/01/15 11:33 +0100, Richard Biener wrote:
>> > On Mon, 12 Jan 2015, Richard Biener wrote:
>> >
>> > >
>> > > This "fixes" PR64535 by changing the fixed object size emergency pool
>> > > to a variable EH object size (but fixed arena size) allocator.  Via
>> > > combining the dependent and non-dependent EH arenas this should allow
>> > > around 600 bad_alloc throws in OOM situations on x86_64-linux
>> > > compared to the current 64 which should provide some headroom to
>> > > the poor souls using EH to communicate OOM in a heavily threaded
>> > > environment.
>> > >
>> > > Bootstrapped and tested on x86_64-unknown-linux-gnu (with the #if 1
>> > > as in the patch below, forcing the use of the allocator).
>>
>> I see only the #else part is kept now - that was what I was going to
>> suggest.
>>
>> > Unfortunately I couldn't get an answer of whether throwing
>> > bad_alloc from a throw where we can't allocate an exception
>> > is a) valid or b) even required by the standard ('throw' isn't
>> > listed as 'allocation function' - also our EH allocators are
>> > marked as throw(), so such change would change the ABI...).
>>
>> I'll ask the C++ committee.
>
>Thanks.  Only needing to deal with std::bad_alloc allocations from
>the pool would greatly simplify it (I'd do a fixed-object-size one then).

Basically the unspecified way memory is allocated for exceptions
cannot be 'operator new()', but malloc or a static buffer (or both) is
OK. If that runs out of space (which is the situation we care about
here) it's undefined, because the program has exceeded the system's
resource limits. So we can do whatever is best for our users.

On that basis, I think using a fixed-object-size pool and only using
it for bad_alloc exceptions is reasonable.
Richard Biener Jan. 20, 2015, 1:25 p.m. UTC | #5
On Tue, 20 Jan 2015, Jonathan Wakely wrote:

> On 20/01/15 10:06 +0100, Richard Biener wrote:
> > On Mon, 19 Jan 2015, Jonathan Wakely wrote:
> > 
> > > On 19/01/15 11:33 +0100, Richard Biener wrote:
> > > > On Mon, 12 Jan 2015, Richard Biener wrote:
> > > >
> > > > >
> > > > > This "fixes" PR64535 by changing the fixed object size emergency pool
> > > > > to a variable EH object size (but fixed arena size) allocator.  Via
> > > > > combining the dependent and non-dependent EH arenas this should allow
> > > > > around 600 bad_alloc throws in OOM situations on x86_64-linux
> > > > > compared to the current 64 which should provide some headroom to
> > > > > the poor souls using EH to communicate OOM in a heavily threaded
> > > > > environment.
> > > > >
> > > > > Bootstrapped and tested on x86_64-unknown-linux-gnu (with the #if 1
> > > > > as in the patch below, forcing the use of the allocator).
> > > 
> > > I see only the #else part is kept now - that was what I was going to
> > > suggest.
> > > 
> > > > Unfortunately I couldn't get an answer of whether throwing
> > > > bad_alloc from a throw where we can't allocate an exception
> > > > is a) valid or b) even required by the standard ('throw' isn't
> > > > listed as 'allocation function' - also our EH allocators are
> > > > marked as throw(), so such change would change the ABI...).
> > > 
> > > I'll ask the C++ committee.
> > 
> > Thanks.  Only needing to deal with std::bad_alloc allocations from
> > the pool would greatly simplify it (I'd do a fixed-object-size one then).
> 
> Basically the unspecified way memory is allocated for exceptions
> cannot be 'operator new()', but malloc or a static buffer (or both) is
> OK. If that runs out of space (which is the situation we care about
> here) it's undefined, because the program has exceeded the system's
> resource limits. So we can do whatever is best for our users.
> 
> On that basis, I think using a fixed-object-size pool and only using
> it for bad_alloc exceptions is reasonable.

Though my question was whether 'throw X()' might instead behave as
'throw std::bad_alloc()' if allocating X exceeds the system's resource
limits.  I guess the answer is yes as it's undefined?

Of course we can't really implement that because our EH allocator
is marked throw().

Richard.
Jonathan Wakely Jan. 20, 2015, 2:36 p.m. UTC | #6
On 20/01/15 14:25 +0100, Richard Biener wrote:
>Though my question was whether 'throw X()' might instead behave as
>'throw std::bad_alloc()' if allocating X exceeds the system's resource
>limits.  I guess the answer is yes as it's undefined?

Right, if we can't allocate memory for an X (plus the EH header) then
it's UB, so we can do anything, including using your pool to get room
for a bad_alloc and throwing that.

Patch

Index: libstdc++-v3/libsupc++/eh_alloc.cc
===================================================================
--- libstdc++-v3/libsupc++/eh_alloc.cc	(revision 216303)
+++ libstdc++-v3/libsupc++/eh_alloc.cc	(working copy)
@@ -34,6 +34,7 @@ 
 #include <exception>
 #include "unwind-cxx.h"
 #include <ext/concurrence.h>
+#include <new>
 
 #if _GLIBCXX_HOSTED
 using std::free;
@@ -72,62 +73,176 @@  using namespace __cxxabiv1;
 # define EMERGENCY_OBJ_COUNT	4
 #endif
 
-#if INT_MAX == 32767 || EMERGENCY_OBJ_COUNT <= 32
-typedef unsigned int bitmask_type;
-#else
-#if defined (_GLIBCXX_LLP64)
-typedef unsigned long long bitmask_type;
-#else
-typedef unsigned long bitmask_type;
-#endif
-#endif
-
-
-typedef char one_buffer[EMERGENCY_OBJ_SIZE] __attribute__((aligned));
-static one_buffer emergency_buffer[EMERGENCY_OBJ_COUNT];
-static bitmask_type emergency_used;
-
-static __cxa_dependent_exception dependents_buffer[EMERGENCY_OBJ_COUNT];
-static bitmask_type dependents_used;
 
 namespace
 {
   // A single mutex controlling emergency allocations.
   __gnu_cxx::__mutex emergency_mutex;
-}
 
-extern "C" void *
-__cxxabiv1::__cxa_allocate_exception(std::size_t thrown_size) _GLIBCXX_NOTHROW
-{
-  void *ret;
+  // A fixed-size heap, variable size object allocator
+  class pool
+    {
+    public:
+      pool();
 
-  thrown_size += sizeof (__cxa_refcounted_exception);
-  ret = malloc (thrown_size);
+      void *allocate (size_t);
+      void free (void *);
+
+      bool in_pool (void *);
+
+    private:
+      struct free_entry {
+	size_t size;
+	free_entry *next;
+      };
+      struct allocated_entry {
+	size_t size;
+	char data[];
+      };
+      free_entry *first_free_entry;
+      char arena[EMERGENCY_OBJ_SIZE * EMERGENCY_OBJ_COUNT
+	         + EMERGENCY_OBJ_COUNT * sizeof (__cxa_dependent_exception)]
+		 __attribute__((aligned(__alignof__(free_entry))));
+    };
 
-  if (! ret)
+  pool::pool()
     {
-      __gnu_cxx::__scoped_lock sentry(emergency_mutex);
+      first_free_entry = reinterpret_cast <free_entry *> (arena);
+      new (first_free_entry) free_entry;
+      first_free_entry->size = EMERGENCY_OBJ_SIZE * EMERGENCY_OBJ_COUNT;
+      first_free_entry->next = NULL;
+    }
 
-      bitmask_type used = emergency_used;
-      unsigned int which = 0;
+  void *pool::allocate (size_t size)
+    {
+      __gnu_cxx::__scoped_lock sentry(emergency_mutex);
+      /* We need an additional size_t member.  */
+      size += sizeof (size_t);
+      /* And we need to at least hand out objects of the size of
+         a freelist entry.  */
+      if (size < sizeof (free_entry))
+	size = sizeof (free_entry);
+      /* And we need to align objects we hand out to the required
+         alignment of a freelist entry (this really aligns the
+	 tail which will become a new freelist entry).  */
+      size = ((size + __alignof__(free_entry) - 1)
+	      & ~(__alignof__(free_entry) - 1));
+      /* Search for an entry of proper size on the freelist.  */
+      free_entry **e;
+      for (e = &first_free_entry;
+	   *e && (*e)->size < size;
+	   e = &(*e)->next)
+	;
+      if (!*e)
+	return NULL;
+      allocated_entry *x;
+      if ((*e)->size - size >= sizeof (free_entry))
+	{
+	  /* Split block if it is too large.  */
+	  free_entry *f = reinterpret_cast <free_entry *>
+	      (reinterpret_cast <char *> (*e) + size);
+	  size_t sz = (*e)->size;
+	  free_entry *next = (*e)->next;
+	  new (f) free_entry;
+	  f->next = next;
+	  f->size = sz - size;
+	  x = reinterpret_cast <allocated_entry *> (*e);
+	  new (x) allocated_entry;
+	  x->size = size;
+	  *e = f;
+	}
+      else
+	{
+	  /* Exact size match or too small overhead for a free entry.  */
+	  size_t sz = (*e)->size;
+	  free_entry *next = (*e)->next;
+	  x = reinterpret_cast <allocated_entry *> (*e);
+	  new (x) allocated_entry;
+	  x->size = sz;
+	  *e = next;
+	}
+      return &x->data;
+    }
 
-      if (thrown_size > EMERGENCY_OBJ_SIZE)
-	goto failed;
-      while (used & 1)
+  void pool::free (void *data)
+    {
+      __gnu_cxx::__scoped_lock sentry(emergency_mutex);
+      allocated_entry *e = reinterpret_cast <allocated_entry *>
+	(reinterpret_cast <char *> (data) - sizeof (size_t));
+      size_t sz = e->size;
+      if (!first_free_entry)
+	{
+	  /* If the free list is empty just put the entry there.  */
+	  free_entry *f = reinterpret_cast <free_entry *> (e);
+	  new (f) free_entry;
+	  f->size = sz;
+	  f->next = NULL;
+	  first_free_entry = f;
+	}
+      else if (reinterpret_cast <char *> (e) + sz
+	       == reinterpret_cast <char *> (first_free_entry))
 	{
-	  used >>= 1;
-	  if (++which >= EMERGENCY_OBJ_COUNT)
-	    goto failed;
+	  /* Check if we can merge with the first free entry being right
+	     after us.  */
+	  free_entry *f = reinterpret_cast <free_entry *> (e);
+	  new (f) free_entry;
+	  f->size = sz + first_free_entry->size;
+	  f->next = first_free_entry->next;
+	  first_free_entry = f;
 	}
+      else
+	{
+	  /* Else search for a free item we can merge with at its end.  */
+	  free_entry **fe;
+	  for (fe = &first_free_entry;
+	       (*fe)->next
+	       && (reinterpret_cast <char *> ((*fe)->next)
+		   > reinterpret_cast <char *> (e) + sz);
+	       fe = &(*fe)->next)
+	    ;
+	  if (reinterpret_cast <char *> (*fe) + (*fe)->size
+	      == reinterpret_cast <char *> (e))
+	    /* Merge with the freelist entry.  */
+	    (*fe)->size += sz;
+	  else
+	    {
+	      /* Else put it after it which keeps the freelist sorted.  */
+	      free_entry *f = reinterpret_cast <free_entry *> (e);
+	      new (f) free_entry;
+	      f->size = sz;
+	      f->next = (*fe)->next;
+	      (*fe)->next = f;
+	    }
+	}
+    }
+
+  bool pool::in_pool (void *ptr)
+    {
+      char *p = reinterpret_cast <char *> (ptr);
+      return (p > arena
+	      && p < arena + sizeof (arena));
+    }
 
-      emergency_used |= (bitmask_type)1 << which;
-      ret = &emergency_buffer[which][0];
+  pool emergency_pool;
+}
 
-    failed:;
+extern "C" void *
+__cxxabiv1::__cxa_allocate_exception(std::size_t thrown_size) _GLIBCXX_NOTHROW
+{
+  void *ret;
 
-      if (!ret)
-	std::terminate ();
-    }
+  thrown_size += sizeof (__cxa_refcounted_exception);
+#if 1
+  ret = emergency_pool.allocate (thrown_size);
+#else
+  ret = malloc (thrown_size);
+
+  if (!ret)
+    ret = emergency_pool.allocate (thrown_size);
+#endif
+
+  if (!ret)
+    std::terminate ();
 
   memset (ret, 0, sizeof (__cxa_refcounted_exception));
 
@@ -138,19 +253,11 @@  __cxxabiv1::__cxa_allocate_exception(std
 extern "C" void
 __cxxabiv1::__cxa_free_exception(void *vptr) _GLIBCXX_NOTHROW
 {
-  char *base = (char *) emergency_buffer;
-  char *ptr = (char *) vptr;
-  if (ptr >= base
-      && ptr < base + sizeof (emergency_buffer))
-    {
-      const unsigned int which
-	= (unsigned) (ptr - base) / EMERGENCY_OBJ_SIZE;
-
-      __gnu_cxx::__scoped_lock sentry(emergency_mutex);
-      emergency_used &= ~((bitmask_type)1 << which);
-    }
+  char *ptr = (char *) vptr - sizeof (__cxa_refcounted_exception);
+  if (emergency_pool.in_pool (ptr))
+    emergency_pool.free (ptr);
   else
-    free (ptr - sizeof (__cxa_refcounted_exception));
+    free (ptr);
 }
 
 
@@ -163,27 +270,11 @@  __cxxabiv1::__cxa_allocate_dependent_exc
     (malloc (sizeof (__cxa_dependent_exception)));
 
   if (!ret)
-    {
-      __gnu_cxx::__scoped_lock sentry(emergency_mutex);
-
-      bitmask_type used = dependents_used;
-      unsigned int which = 0;
-
-      while (used & 1)
-	{
-	  used >>= 1;
-	  if (++which >= EMERGENCY_OBJ_COUNT)
-	    goto failed;
-	}
+    ret = static_cast <__cxa_dependent_exception*>
+      (emergency_pool.allocate (sizeof (__cxa_dependent_exception)));
 
-      dependents_used |= (bitmask_type)1 << which;
-      ret = &dependents_buffer[which];
-
-    failed:;
-
-      if (!ret)
-	std::terminate ();
-    }
+  if (!ret)
+    std::terminate ();
 
   memset (ret, 0, sizeof (__cxa_dependent_exception));
 
@@ -195,17 +286,8 @@  extern "C" void
 __cxxabiv1::__cxa_free_dependent_exception
   (__cxa_dependent_exception *vptr) _GLIBCXX_NOTHROW
 {
-  char *base = (char *) dependents_buffer;
-  char *ptr = (char *) vptr;
-  if (ptr >= base
-      && ptr < base + sizeof (dependents_buffer))
-    {
-      const unsigned int which
-	= (unsigned) (ptr - base) / sizeof (__cxa_dependent_exception);
-
-      __gnu_cxx::__scoped_lock sentry(emergency_mutex);
-      dependents_used &= ~((bitmask_type)1 << which);
-    }
+  if (emergency_pool.in_pool (vptr))
+    emergency_pool.free (vptr);
   else
     free (vptr);
 }