
[RFC] Split pool_allocator and create a new object_allocator

Message ID 559EEAE9.6060500@suse.cz
State New

Commit Message

Martin Liška July 9, 2015, 9:43 p.m. UTC
On 07/03/2015 06:18 PM, Richard Sandiford wrote:
> Hi Martin,
>
> Martin Liška <mliska@suse.cz> writes:
>> On 07/03/2015 03:07 PM, Richard Sandiford wrote:
>>> Martin Jambor <mjambor@suse.cz> writes:
>>>> On Fri, Jul 03, 2015 at 09:55:58AM +0100, Richard Sandiford wrote:
>>>>> Trevor Saunders <tbsaunde@tbsaunde.org> writes:
>>>>>> On Thu, Jul 02, 2015 at 09:09:31PM +0100, Richard Sandiford wrote:
>>>>>>> Martin Liška <mliska@suse.cz> writes:
>>>>>>>> diff --git a/gcc/asan.c b/gcc/asan.c
>>>>>>>> index e89817e..dabd6f1 100644
>>>>>>>> --- a/gcc/asan.c
>>>>>>>> +++ b/gcc/asan.c
>>>>>>>> @@ -362,20 +362,20 @@ struct asan_mem_ref
>>>>>>>>     /* Pool allocation new operator.  */
>>>>>>>>     inline void *operator new (size_t)
>>>>>>>>     {
>>>>>>>> -    return pool.allocate ();
>>>>>>>> +    return ::new (pool.allocate ()) asan_mem_ref ();
>>>>>>>>     }
>>>>>>>>
>>>>>>>>     /* Delete operator utilizing pool allocation.  */
>>>>>>>>     inline void operator delete (void *ptr)
>>>>>>>>     {
>>>>>>>> -    pool.remove ((asan_mem_ref *) ptr);
>>>>>>>> +    pool.remove (ptr);
>>>>>>>>     }
>>>>>>>>
>>>>>>>>     /* Memory allocation pool.  */
>>>>>>>> -  static pool_allocator<asan_mem_ref> pool;
>>>>>>>> +  static pool_allocator pool;
>>>>>>>>   };
>>>>>>>
>>>>>>> I'm probably going over old ground/wounds, sorry, but what's the benefit
>>>>>>> of having this sort of pattern?  Why not simply have object_allocators
>>>>>>> and make callers use pool.allocate () and pool.remove (x) (with
>>>>>>> pool.remove
>>>>>>> calling the destructor) instead of new and delete?  It feels wrong to me
>>>>>>> to tie the data type to a particular allocation object like this.
>>>>>>
>>>>>> Well, the big question is what allocate () does about construction.  It
>>>>>> seems weird for it not to call the ctor, but I'm not sure we can do a
>>>>>> good job of forwarding args to allocate () with C++98.
>>>>>
>>>>> If you need non-default constructors then:
>>>>>
>>>>>    new (pool) type (aaa, bbb)...;
>>>>>
>>>>> doesn't seem too bad.  I agree object_allocator's allocate () should call
>>>>> the constructor.
>>>>>
>>>>
>>>> but then the pool allocator must not call placement new on the
>>>> allocated memory itself because that would result in double
>>>> construction.
>>>
>>> But we're talking about two different methods.  The "normal" allocator
>>> object_allocator <T>::allocate () would use placement new and return a
>>> pointer to the new object while operator new (size_t, object_allocator <T> &)
>>> wouldn't call placement new and would just return a pointer to the memory.
>>>
>>>>>>> And using the pool allocator functions directly has the nice property
>>>>>>> that you can tell when a delete/remove isn't necessary because the pool
>>>>>>> itself is being cleared.
>>>>>>
>>>>>> Well, all these cases involve a pool with static storage lifetime, right?
>>>>>> So if you don't delete things in these pools, they are
>>>>>> effectively leaked.
>>>>>
>>>>> They might have a static storage lifetime now, but it doesn't seem like
>>>>> a good idea to hard-bake that into the interface
>>>>
>>>> Does that mean that operators new and delete are considered evil?
>>>
>>> Not IMO.  Just that static load-time-initialized caches are not
>>> necessarily a good thing.  That's effectively what the pool
>>> allocator is.
>>>
>>>>> (by saying that for
>>>>> these types you should use new and delete, but for other pool-allocated
>>>>> types you should use object_allocators).
>>>>
>>>> Depending on what kind of pool allocator you use, you will be forced
>>>> to either call placement new or not, so the inconsistency will be
>>>> there anyway.
>>>
>>> But how we handle argument-taking constructors is a problem that needs
>>> to be solved for the pool-allocated objects that don't use a single
>>> static type-specific pool.  And once we solve that, we get consistency
>>> across all pools:
>>>
>>> - if you want a new object and argumentless construction is OK,
>>>    use "pool.allocate ()"
>>>
>>> - if you want a new object and need to pass arguments to the constructor,
>>>    use "new (pool) some_type (arg1, arg2, ...)"
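The two idioms above can be sketched in compilable form. This is a hedged stand-in, not GCC's implementation: the `allocate_raw` name and the malloc-backed storage are assumptions, and the real pool carves elements out of larger blocks.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

/* Minimal stand-in for the proposed object_allocator<T>.  */
template <typename T>
class object_allocator
{
public:
  /* Raw, uninitialized storage for one T -- no constructor runs here.
     (The method name is illustrative.)  */
  void *allocate_raw () { return std::malloc (sizeof (T)); }

  /* "Normal" path: default-construct in place and return the object.  */
  T *allocate () { return ::new (allocate_raw ()) T (); }

  /* remove () runs the destructor before reclaiming the storage.  */
  void remove (T *obj) { obj->~T (); std::free (obj); }
};

/* The single generic overload proposed for pool-allocator.h: it returns
   raw memory, so "new (pool) T (args)" constructs exactly once.  */
template <typename T>
inline void *
operator new (std::size_t, object_allocator<T> &pool)
{
  return pool.allocate_raw ();
}
```

With this shape, `pool.allocate ()` constructs exactly once on the argumentless path and `new (pool) some_type (arg1, arg2)` constructs exactly once on the explicit path, so double construction cannot occur.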
>>>
>>>>> Maybe I just have bad memories
>>>>> from doing the SWITCHABLE_TARGET stuff, but there I was changing a lot
>>>>> of state that was "obviously" static in the old days, but that needed
>>>>> to become non-static to support vaguely-efficient switching between
>>>>> different subtargets.  The same kind of thing is likely to happen again.
>>>>> I assume things like the jit would prefer not to have new global state
>>>>> with load-time construction.
>>>>
>>>> I'm not sure I follow this branch of the discussion; allocators of
>>>> any kind surely can be dynamically allocated themselves?
>>>
>>> Sure, but either (a) you keep the pools as a static part of the class
>>> and some initialisation and finalisation code that has tendrils into
>>> all such classes or (b) you move the static pool outside of the
>>> class to some new (still global) state.  Explicit pool allocation,
>>> like in the C days, gives you the option of putting the pool wherever
>>> it needs to go without relying on the principle that you can get to
>>> it from global state.
>>>
>>> Thanks,
>>> Richard
>>>
>>
>> Ok Richard.
>>
>> I've finally understood your suggestions and would suggest the following:
>>
>> + I will add a new method to object_allocator<T> that will return
>> allocated memory (void *) without calling any constructor
>> + object_allocator<T>::allocate will call placement new with a
>> parameterless ctor
>> + I will remove all overridden operators new/delete on e.g. et_forest, ...
>> + For these classes, I will add void *operator new (size_t,
>> object_allocator<T> &)
>
> I was thinking we'd simply use allocate () for cases where we don't
> need to pass arguments to the constructor.  It looks like et_forest
> comes into that category.  The operator new would be a single function
> defined in pool-allocator.h for cases where explicit construction
> is needed.
>
> In fact, it looks from a quick grep like all current uses of pool operator
> new/delete are in POD types, so there are no special constructors.
> The best example I could come up with was the copy constructor in:
>
>    return new lra_live_range (*r);
>
> which would become:
>
>    return new (*live_range_pool) lra_live_range (*r);
>
> but perhaps we should have an object_allocator copy (T *) routine:
>
>    return live_range_pool->copy (*r);
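Such a `copy` routine could look like the sketch below; malloc stands in for the real pool storage, and the types used in the test are hypothetical stand-ins for things like lra_live_range.

```cpp
#include <cstdlib>
#include <new>

/* Illustrative allocator carrying the suggested copy (const T &) helper.  */
template <typename T>
class object_allocator
{
public:
  /* Default-construct a pool-owned object.  */
  T *allocate () { return ::new (std::malloc (sizeof (T))) T (); }

  /* Copy-construct a pool-owned clone of SRC, so call sites can write
     pool.copy (*r) instead of new (pool) T (*r).  */
  T *copy (const T &src) { return ::new (std::malloc (sizeof (T))) T (src); }

  /* Destroy the object and reclaim its storage.  */
  void remove (T *obj) { obj->~T (); std::free (obj); }
};
```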
>
>> + Pool allocators connected to these classes will be transformed back
>> into static variables, and one would call
>> new et_forest (my_et_forest_allocator)
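The caller-owned-pool shape proposed here can be sketched as follows; all names are illustrative stand-ins, not GCC's real et_forest interface.

```cpp
#include <cstdlib>
#include <new>

/* Simplified malloc-backed allocator; the real one uses pool blocks.  */
template <typename T>
class object_allocator
{
public:
  T *allocate () { return ::new (std::malloc (sizeof (T))) T (); }
  void remove (T *obj) { obj->~T (); std::free (obj); }
};

struct et_occ_like { int depth; et_occ_like () : depth (0) {} };

/* The class has no static pool: the caller decides where the allocator
   lives (stack, heap, per-pass state) and hands it in explicitly.  */
class et_forest_like
{
public:
  explicit et_forest_like (object_allocator<et_occ_like> &pool)
    : m_pool (pool) {}

  et_occ_like *new_occ (int depth)
  {
    et_occ_like *occ = m_pool.allocate ();
    occ->depth = depth;
    return occ;
  }

private:
  object_allocator<et_occ_like> &m_pool;
};
```

Declaring `object_allocator<et_occ_like> my_pool;` in whatever scope a pass wants and then constructing `et_forest_like f (my_pool);` avoids load-time-initialized global state entirely.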
>
> Thanks, this sounds really good to me.  Please make sure I'm not the
> only one who thinks so though :-)
>
> I think the "normal" remove () method should then also call the destructor.
>
> Thanks,
> Richard
>

Hello.

This is the final version, which I agreed on with Richard Sandiford.
I hope it can finally be installed to trunk.

Patch can bootstrap and survive regression tests on x86_64-linux-gnu.

Thanks,
Martin

Comments

Pat Haugen July 10, 2015, 2:19 p.m. UTC | #1
On 07/09/2015 04:43 PM, Martin Liška wrote:
> This is the final version, which I agreed on with Richard Sandiford.
> I hope it can finally be installed to trunk.
>
> Patch can bootstrap and survive regression tests on x86_64-linux-gnu.
FWIW, I confirmed this version of the patch fixes the build issues on 
powerpc64 that I have been seeing.

-Pat
Martin Liška July 16, 2015, 10:47 a.m. UTC | #2
On 07/09/2015 11:43 PM, Martin Liška wrote:
> 
> Hello.
> 
> This is the final version, which I agreed on with Richard Sandiford.
> I hope it can finally be installed to trunk.
> 
> Patch can bootstrap and survive regression tests on x86_64-linux-gnu.
> 
> Thanks,
> Martin

Hello.

I would like to ping the patch as it's a blocker for some ppc64 machines.

Thanks,
Martin
Richard Biener July 16, 2015, 10:49 a.m. UTC | #3
On Thu, Jul 16, 2015 at 12:47 PM, Martin Liška <mliska@suse.cz> wrote:
> Hello.
>
> I would like to ping the patch as it's a blocker for some ppc64 machines.

Ok.

Thanks,
Richard.

> Thanks,
> Martin
>
>

Patch

From db8dc3ce0825503652ceed48101fda6be8f5fe58 Mon Sep 17 00:00:00 2001
From: mliska <mliska@suse.cz>
Date: Wed, 24 Jun 2015 13:42:52 +0200
Subject: [PATCH] Add new object_allocator and clean-up allocator usage.

gcc/c-family/ChangeLog:

2015-07-09  Martin Liska  <mliska@suse.cz>

	* c-format.c (check_format_info_main): Use
	object_allocator instead of pool_allocator.
	(check_format_arg): Likewise.

gcc/ChangeLog:

2015-07-09  Martin Liska  <mliska@suse.cz>

	* alloc-pool.h
	(object_allocator): Add new class.
	(pool_allocator::initialize): Use the underlying class.
	(pool_allocator::allocate): Likewise.
	(pool_allocator::remove): Likewise.
	(operator new): A new generic allocator.
	* asan.c (struct asan_mem_ref): Remove unused members.
	(asan_mem_ref_new): Replace new operator with
	object_allocator::allocate.
	(free_mem_ref_resources): Change deallocation.
	* cfg.c (initialize_original_copy_tables): Replace pool_allocator
	with object_allocator.
	* config/sh/sh.c (add_constant): Replace new operator with
	object_allocator::allocate.
	(sh_reorg): Change call to a release method.
	* cselib.c (struct elt_list): Remove unused members.
	(new_elt_list): Replace new operator with
	object_allocator::allocate.
	(new_elt_loc_list): Likewise.
	(new_cselib_val): Likewise.
	(unchain_one_elt_list): Change delete operator with remove method.
	(unchain_one_elt_loc_list): Likewise.
	(unchain_one_value): Likewise.
	(cselib_finish): Release newly added static allocators.
	* cselib.h (struct cselib_val): Remove unused members.
	(struct elt_loc_list): Likewise.
	* df-problems.c (df_chain_alloc): Replace pool_allocator with
	object_allocator.
	* df-scan.c (struct df_scan_problem_data): Likewise.
	(df_scan_alloc): Likewise.
	* df.h (struct dataflow): Likewise.
	* dse.c (struct read_info_type): Likewise.
	(struct insn_info_type): Likewise.
	(struct dse_bb_info_type): Likewise.
	(struct group_info): Likewise.
	(struct deferred_change): Likewise.
	(get_group_info): Likewise.
	(delete_dead_store_insn): Likewise.
	(free_read_records): Likewise.
	(replace_read): Likewise.
	(check_mem_read_rtx): Likewise.
	(scan_insn): Likewise.
	(dse_step1): Likewise.
	(dse_step7): Likewise.
	* et-forest.c (struct et_occ): Remove unused members.
	(et_new_occ): Use allocate instead of new operator.
	(et_new_tree): Likewise.
	(et_free_tree): Call release method explicitly.
	(et_free_tree_force): Likewise.
	(et_free_pools): Likewise.
	(et_split): Use remove instead of delete operator.
	* et-forest.h (struct et_node): Remove unused members.
	* ipa-cp.c: Change pool_allocator to object_allocator.
	* ipa-inline-analysis.c: Likewise.
	* ipa-profile.c: Likewise.
	* ipa-prop.c: Likewise.
	* ipa-prop.h: Likewise.
	* ira-build.c (initiate_cost_vectors): Cast return value.
	(ira_allocate_cost_vector): Likewise.
	* ira-color.c (struct update_cost_record): Remove unused members.
	* lra-int.h (struct lra_live_range): Likewise.
	(struct lra_copy): Likewise.
	(struct lra_insn_reg): Likewise.
	* lra-lives.c (lra_live_ranges_finish): Release new static allocator.
	* lra.c (new_insn_reg): Replace new operator with allocate method.
	(free_insn_regs): Same for operator delete.
	(finish_insn_regs): Release new static allocator.
	(finish_insn_recog_data): Likewise.
	(lra_free_copies): Replace delete operator with remove method.
	(lra_create_copy): Replace operator new with allocate method.
	(invalidate_insn_data_regno_info): Same for remove method.
	* regcprop.c (struct queued_debug_insn_change): Remove unused members.
	(free_debug_insn_changes): Replace delete operator with remove method.
	(replace_oldest_value_reg): Replace operator new with allocate method.
	(pass_cprop_hardreg::execute): Release new static variable.
	* sched-deps.c (sched_deps_init): Change pool_allocator to
	object_allocator.
	* sel-sched-ir.c: Likewise.
	* sel-sched-ir.h: Likewise.
	* stmt.c (expand_case): Likewise.
	(expand_sjlj_dispatch_table): Likewise.
	* tree-sra.c (struct access): Remove unused members.
	(struct assign_link): Likewise.
	(sra_deinitialize): Release newly added static pools.
	(create_access_1): Replace operator new with allocate method.
	(build_accesses_from_assign): Likewise.
	(create_artificial_child_access): Likewise.
	* tree-ssa-math-opts.c (pass_cse_reciprocals::execute): Change
	pool_allocator to object_allocator.
	* tree-ssa-pre.c: Likewise.
	* tree-ssa-reassoc.c: Likewise.
	* tree-ssa-sccvn.c (allocate_vn_table): Likewise.
	* tree-ssa-strlen.c: Likewise.
	* tree-ssa-structalias.c: Likewise.
	* var-tracking.c (onepart_pool_allocate): New function.
	(unshare_variable): Use the newly added function.
	(variable_merge_over_cur): Likewise.
	(variable_from_dropped): Likewise.
	(variable_was_changed): Likewise.
	(set_slot_part): Likewise.
	(emit_notes_for_differences_1): Likewise.
	(vt_finalize): Release newly added static pools.
---
 gcc/alloc-pool.h           | 187 ++++++++++++++++++++++++++-------------------
 gcc/asan.c                 |  21 +----
 gcc/c-family/c-format.c    |   7 +-
 gcc/cfg.c                  |   5 +-
 gcc/config/sh/sh.c         |  24 +-----
 gcc/cselib.c               |  46 ++++-------
 gcc/cselib.h               |  30 --------
 gcc/df-problems.c          |   2 +-
 gcc/df-scan.c              |  24 +++---
 gcc/df.h                   |   2 +-
 gcc/dse.c                  | 124 +++++++-----------------------
 gcc/et-forest.c            |  42 +++-------
 gcc/et-forest.h            |  15 ----
 gcc/ipa-cp.c               |   8 +-
 gcc/ipa-inline-analysis.c  |   2 +-
 gcc/ipa-profile.c          |   2 +-
 gcc/ipa-prop.c             |   2 +-
 gcc/ipa-prop.h             |   8 +-
 gcc/ira-build.c            |  18 ++---
 gcc/ira-color.c            |  17 +----
 gcc/lra-int.h              |  46 -----------
 gcc/lra-lives.c            |   5 +-
 gcc/lra.c                  |  24 +++---
 gcc/regcprop.c             |  23 +-----
 gcc/sched-deps.c           |   8 +-
 gcc/sel-sched-ir.c         |   2 +-
 gcc/sel-sched-ir.h         |   2 +-
 gcc/stmt.c                 |   7 +-
 gcc/tree-sra.c             |  44 ++---------
 gcc/tree-ssa-math-opts.c   |   4 +-
 gcc/tree-ssa-pre.c         |   4 +-
 gcc/tree-ssa-reassoc.c     |   2 +-
 gcc/tree-ssa-sccvn.c       |  10 +--
 gcc/tree-ssa-strlen.c      |   3 +-
 gcc/tree-ssa-structalias.c |   4 +-
 gcc/var-tracking.c         | 107 +++++++-------------------
 36 files changed, 289 insertions(+), 592 deletions(-)
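The shape of the split the patch makes can be summarized in a short model: pool_allocator becomes untyped and deals only in an element size fixed at construction, while object_allocator<T> layers the type, sizeof, and ctor/dtor calls on top. The malloc-backed storage below is a simplification of the real block-chaining code in alloc-pool.h.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

/* Untyped pool: knows only the element size it was created with.  */
class pool_allocator
{
public:
  pool_allocator (const char *name, std::size_t num, std::size_t size)
    : m_name (name), m_elts_per_block (num), m_size (size) {}
  void *allocate () { return std::malloc (m_size); }
  void remove (void *obj) { std::free (obj); }
private:
  const char *m_name;
  std::size_t m_elts_per_block;
  std::size_t m_size;
};

/* Typed wrapper: supplies sizeof (T) and the ctor/dtor calls.  */
template <typename T>
class object_allocator
{
public:
  object_allocator (const char *name, std::size_t num)
    : m_allocator (name, num, sizeof (T)) {}
  T *allocate () { return ::new (m_allocator.allocate ()) T (); }
  void remove (T *obj) { obj->~T (); m_allocator.remove (obj); }
private:
  pool_allocator m_allocator;
};
```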

diff --git a/gcc/alloc-pool.h b/gcc/alloc-pool.h
index 1785df5..03bde63 100644
--- a/gcc/alloc-pool.h
+++ b/gcc/alloc-pool.h
@@ -25,6 +25,9 @@  extern void dump_alloc_pool_statistics (void);
 
 typedef unsigned long ALLOC_POOL_ID_TYPE;
 
+/* Last used ID.  */
+extern ALLOC_POOL_ID_TYPE last_id;
+
 /* Pool allocator memory usage.  */
 struct pool_usage: public mem_usage
 {
@@ -92,21 +95,18 @@  struct pool_usage: public mem_usage
 
 extern mem_alloc_description<pool_usage> pool_allocator_usage;
 
-/* Type based memory pool allocator.  */
-template <typename T>
+/* Generic pool allocator.  */
 class pool_allocator
 {
 public:
   /* Default constructor for pool allocator called NAME.  Each block
-     has NUM elements.  The allocator support EXTRA_SIZE and can
-     potentially IGNORE_TYPE_SIZE.  */
-  pool_allocator (const char *name, size_t num, size_t extra_size = 0,
-		  bool ignore_type_size = false CXX_MEM_STAT_INFO);
+     has NUM elements.  */
+  pool_allocator (const char *name, size_t num, size_t size CXX_MEM_STAT_INFO);
   ~pool_allocator ();
   void release ();
   void release_if_empty ();
-  T *allocate () ATTRIBUTE_MALLOC;
-  void remove (T *object);
+  void *allocate () ATTRIBUTE_MALLOC;
+  void remove (void *object);
 
 private:
   struct allocation_pool_list
@@ -117,7 +117,6 @@  private:
   /* Initialize a pool allocator.  */
   void initialize ();
 
-  template <typename U>
   struct allocation_object
   {
     /* The ID of alloc pool which the object was allocated from.  */
@@ -136,18 +135,18 @@  private:
 	int64_t align_i;
       } u;
 
-    static inline allocation_object<U> *
+    static inline allocation_object*
     get_instance (void *data_ptr)
     {
-      return (allocation_object<U> *)(((char *)(data_ptr))
-				      - offsetof (allocation_object<U>,
+      return (allocation_object *)(((char *)(data_ptr))
+				      - offsetof (allocation_object,
 						  u.data));
     }
 
-    static inline U *
+    static inline void*
     get_data (void *instance_ptr)
     {
-      return (U*)(((allocation_object<U> *) instance_ptr)->u.data);
+      return (void*)(((allocation_object *) instance_ptr)->u.data);
     }
   };
 
@@ -185,66 +184,33 @@  private:
   size_t m_block_size;
   /* Size of a pool elements in bytes.  */
   size_t m_elt_size;
-  /* Flag if we shoul ignore size of a type.  */
-  bool m_ignore_type_size;
-  /* Extra size in bytes that should be allocated for each element.  */
-  size_t m_extra_size;
+  /* Size in bytes that should be allocated for each element.  */
+  size_t m_size;
   /* Flag if a pool allocator is initialized.  */
   bool m_initialized;
   /* Memory allocation location.  */
   mem_location m_location;
 };
 
-/* Last used ID.  */
-extern ALLOC_POOL_ID_TYPE last_id;
-
-/* Store information about each particular alloc_pool.  Note that this
-   will underestimate the amount the amount of storage used by a small amount:
-   1) The overhead in a pool is not accounted for.
-   2) The unallocated elements in a block are not accounted for.  Note
-   that this can at worst case be one element smaller that the block
-   size for that pool.  */
-struct alloc_pool_descriptor
-{
-  /* Number of pools allocated.  */
-  unsigned long created;
-  /* Gross allocated storage.  */
-  unsigned long allocated;
-  /* Amount of currently active storage.  */
-  unsigned long current;
-  /* Peak amount of storage used.  */
-  unsigned long peak;
-  /* Size of element in the pool.  */
-  int elt_size;
-};
-
-
-/* Hashtable mapping alloc_pool names to descriptors.  */
-extern hash_map<const char *, alloc_pool_descriptor> *alloc_pool_hash;
-
-template <typename T>
 inline
-pool_allocator<T>::pool_allocator (const char *name, size_t num,
-				   size_t extra_size, bool ignore_type_size
-				   MEM_STAT_DECL):
+pool_allocator::pool_allocator (const char *name, size_t num,
+				size_t size MEM_STAT_DECL):
   m_name (name), m_id (0), m_elts_per_block (num), m_returned_free_list (NULL),
   m_virgin_free_list (NULL), m_virgin_elts_remaining (0), m_elts_allocated (0),
   m_elts_free (0), m_blocks_allocated (0), m_block_list (NULL),
-  m_block_size (0), m_ignore_type_size (ignore_type_size),
-  m_extra_size (extra_size), m_initialized (false),
+  m_block_size (0), m_size (size), m_initialized (false),
   m_location (ALLOC_POOL_ORIGIN, false PASS_MEM_STAT) {}
 
 /* Initialize a pool allocator.  */
 
-template <typename T>
-void
-pool_allocator<T>::initialize ()
+inline void
+pool_allocator::initialize ()
 {
   gcc_checking_assert (!m_initialized);
   m_initialized = true;
 
   size_t header_size;
-  size_t size = (m_ignore_type_size ? 0 : sizeof (T)) + m_extra_size;
+  size_t size = m_size;
 
   gcc_checking_assert (m_name);
 
@@ -256,7 +222,7 @@  pool_allocator<T>::initialize ()
   size = align_eight (size);
 
   /* Add the aligned size of ID.  */
-  size += offsetof (allocation_object<T>, u.data);
+  size += offsetof (allocation_object, u.data);
 
   /* Um, we can't really allocate 0 elements per block.  */
   gcc_checking_assert (m_elts_per_block);
@@ -289,9 +255,8 @@  pool_allocator<T>::initialize ()
 }
 
 /* Free all memory allocated for the given memory pool.  */
-template <typename T>
 inline void
-pool_allocator<T>::release ()
+pool_allocator::release ()
 {
   if (!m_initialized)
     return;
@@ -320,24 +285,21 @@  pool_allocator<T>::release ()
   m_block_list = NULL;
 }
 
-template <typename T>
 void
-inline pool_allocator<T>::release_if_empty ()
+inline pool_allocator::release_if_empty ()
 {
   if (m_elts_free == m_elts_allocated)
     release ();
 }
 
-template <typename T>
-inline pool_allocator<T>::~pool_allocator ()
+inline pool_allocator::~pool_allocator ()
 {
   release ();
 }
 
 /* Allocates one element from the pool specified.  */
-template <typename T>
-inline T *
-pool_allocator<T>::allocate ()
+inline void *
+pool_allocator::allocate ()
 {
   if (!m_initialized)
     initialize ();
@@ -353,7 +315,7 @@  pool_allocator<T>::allocate ()
     }
 
 #ifdef ENABLE_VALGRIND_ANNOTATIONS
-  size = m_elt_size - offsetof (allocation_object<T>, u.data);
+  size = m_elt_size - offsetof (allocation_object, u.data);
 #endif
 
   /* If there are no more free elements, make some more!.  */
@@ -387,11 +349,11 @@  pool_allocator<T>::allocate ()
       /* We now know that we can take the first elt off the virgin list and
 	 put it on the returned list.  */
       block = m_virgin_free_list;
-      header = (allocation_pool_list*) allocation_object<T>::get_data (block);
+      header = (allocation_pool_list*) allocation_object::get_data (block);
       header->next = NULL;
 #ifdef ENABLE_CHECKING
       /* Mark the element to be free.  */
-      ((allocation_object<T> *) block)->id = 0;
+      ((allocation_object *) block)->id = 0;
 #endif
       VALGRIND_DISCARD (VALGRIND_MAKE_MEM_NOACCESS (header,size));
       m_returned_free_list = header;
@@ -408,36 +370,34 @@  pool_allocator<T>::allocate ()
 
 #ifdef ENABLE_CHECKING
   /* Set the ID for element.  */
-  allocation_object<T>::get_instance (header)->id = m_id;
+  allocation_object::get_instance (header)->id = m_id;
 #endif
   VALGRIND_DISCARD (VALGRIND_MAKE_MEM_UNDEFINED (header, size));
 
-  /* Call default constructor.  */
-  return (T *)(header);
+  return (void *)(header);
 }
 
 /* Puts PTR back on POOL's free list.  */
-template <typename T>
-void
-pool_allocator<T>::remove (T *object)
+inline void
+pool_allocator::remove (void *object)
 {
   gcc_checking_assert (m_initialized);
 
   allocation_pool_list *header;
   int size ATTRIBUTE_UNUSED;
-  size = m_elt_size - offsetof (allocation_object<T>, u.data);
+  size = m_elt_size - offsetof (allocation_object, u.data);
 
 #ifdef ENABLE_CHECKING
   gcc_assert (object
 	      /* Check if we free more than we allocated, which is Bad (TM).  */
 	      && m_elts_free < m_elts_allocated
 	      /* Check whether the PTR was allocated from POOL.  */
-	      && m_id == allocation_object<T>::get_instance (object)->id);
+	      && m_id == allocation_object::get_instance (object)->id);
 
   memset (object, 0xaf, size);
 
   /* Mark the element to be free.  */
-  allocation_object<T>::get_instance (object)->id = 0;
+  allocation_object::get_instance (object)->id = 0;
 #endif
 
   header = (allocation_pool_list*) object;
@@ -452,4 +412,77 @@  pool_allocator<T>::remove (T *object)
     }
 }
 
+/* Type based memory pool allocator.  */
+template <typename T>
+class object_allocator
+{
+public:
+  /* Constructor for an object allocator called NAME.  Each block
+     has NUM elements.  */
+  object_allocator (const char *name, size_t num CXX_MEM_STAT_INFO):
+    m_allocator (name, num, sizeof (T) PASS_MEM_STAT) {}
+
+  inline void
+  release ()
+  {
+    m_allocator.release ();
+  }
+
+  inline void release_if_empty ()
+  {
+    m_allocator.release_if_empty ();
+  }
+
+  inline T *
+  allocate () ATTRIBUTE_MALLOC
+  {
+    return ::new (m_allocator.allocate ()) T ();
+  }
+
+  inline void
+  remove (T *object)
+  {
+    /* Call destructor.  */
+    object->~T ();
+
+    m_allocator.remove (object);
+  }
+
+private:
+  pool_allocator m_allocator;
+};
+
+/* Store information about each particular alloc_pool.  Note that this
+   will underestimate the amount of storage used by a small amount:
+   1) The overhead in a pool is not accounted for.
+   2) The unallocated elements in a block are not accounted for.  Note
+   that this can at worst be one element smaller than the block
+   size for that pool.  */
+struct alloc_pool_descriptor
+{
+  /* Number of pools allocated.  */
+  unsigned long created;
+  /* Gross allocated storage.  */
+  unsigned long allocated;
+  /* Amount of currently active storage.  */
+  unsigned long current;
+  /* Peak amount of storage used.  */
+  unsigned long peak;
+  /* Size of element in the pool.  */
+  int elt_size;
+};
+
+/* Placement new helper for classes that do not provide a default ctor.  */
+
+template <typename T>
+inline void *
+operator new (size_t, object_allocator<T> &a)
+{
+  return a.allocate ();
+}
+
+/* Hashtable mapping alloc_pool names to descriptors.  */
+extern hash_map<const char *, alloc_pool_descriptor> *alloc_pool_hash;
+
+
 #endif
diff --git a/gcc/asan.c b/gcc/asan.c
index 1caeb32..126681f 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -347,24 +347,9 @@  struct asan_mem_ref
 
   /* The size of the access.  */
   HOST_WIDE_INT access_size;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((asan_mem_ref *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<asan_mem_ref> pool;
 };
 
-pool_allocator<asan_mem_ref> asan_mem_ref::pool ("asan_mem_ref", 10);
+object_allocator <asan_mem_ref> asan_mem_ref_pool ("asan_mem_ref", 10);
 
 /* Initializes an instance of asan_mem_ref.  */
 
@@ -384,7 +369,7 @@  asan_mem_ref_init (asan_mem_ref *ref, tree start, HOST_WIDE_INT access_size)
 static asan_mem_ref*
 asan_mem_ref_new (tree start, HOST_WIDE_INT access_size)
 {
-  asan_mem_ref *ref = new asan_mem_ref;
+  asan_mem_ref *ref = asan_mem_ref_pool.allocate ();
 
   asan_mem_ref_init (ref, start, access_size);
   return ref;
@@ -472,7 +457,7 @@  free_mem_ref_resources ()
   delete asan_mem_ref_ht;
   asan_mem_ref_ht = NULL;
 
-  asan_mem_ref::pool.release ();
+  asan_mem_ref_pool.release ();
 }
 
 /* Return true iff the memory reference REF has been instrumented.  */
diff --git a/gcc/c-family/c-format.c b/gcc/c-family/c-format.c
index 1dac985..4bc3147 100644
--- a/gcc/c-family/c-format.c
+++ b/gcc/c-family/c-format.c
@@ -1025,7 +1025,7 @@  static void check_format_info_main (format_check_results *,
 				    function_format_info *,
 				    const char *, int, tree,
 				    unsigned HOST_WIDE_INT,
-				    pool_allocator<format_wanted_type> &);
+				    object_allocator<format_wanted_type> &);
 
 static void init_dollar_format_checking (int, tree);
 static int maybe_read_dollar_number (const char **, int,
@@ -1687,7 +1687,8 @@  check_format_arg (void *ctx, tree format_tree,
      will decrement it if it finds there are extra arguments, but this way
      need not adjust it for every return.  */
   res->number_other++;
-  pool_allocator <format_wanted_type> fwt_pool ("format_wanted_type pool", 10);
+  object_allocator <format_wanted_type> fwt_pool ("format_wanted_type pool",
+						  10);
   check_format_info_main (res, info, format_chars, format_length,
 			  params, arg_num, fwt_pool);
 }
@@ -1705,7 +1706,7 @@  check_format_info_main (format_check_results *res,
 			function_format_info *info, const char *format_chars,
 			int format_length, tree params,
 			unsigned HOST_WIDE_INT arg_num,
-			pool_allocator<format_wanted_type> &fwt_pool)
+			object_allocator <format_wanted_type> &fwt_pool)
 {
   const char *orig_format_chars = format_chars;
   tree first_fillin_param = params;
diff --git a/gcc/cfg.c b/gcc/cfg.c
index 8c2723e..2b652a8 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -1044,15 +1044,14 @@  static hash_table<bb_copy_hasher> *bb_copy;
 
 /* And between loops and copies.  */
 static hash_table<bb_copy_hasher> *loop_copy;
-static pool_allocator<htab_bb_copy_original_entry> *original_copy_bb_pool;
+static object_allocator<htab_bb_copy_original_entry> *original_copy_bb_pool;
 
 /* Initialize the data structures to maintain mapping between blocks
    and its copies.  */
 void
 initialize_original_copy_tables (void)
 {
-
-  original_copy_bb_pool = new pool_allocator<htab_bb_copy_original_entry>
+  original_copy_bb_pool = new object_allocator<htab_bb_copy_original_entry>
     ("original_copy", 10);
   bb_original = new hash_table<bb_copy_hasher> (10);
   bb_copy = new hash_table<bb_copy_hasher> (10);
diff --git a/gcc/config/sh/sh.c b/gcc/config/sh/sh.c
index 71f3a5d..6f30c50 100644
--- a/gcc/config/sh/sh.c
+++ b/gcc/config/sh/sh.c
@@ -4651,25 +4651,9 @@  typedef struct label_ref_list_d
 {
   rtx_code_label *label;
   struct label_ref_list_d *next;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((label_ref_list_d *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<label_ref_list_d> pool;
-
 } *label_ref_list_t;
 
-pool_allocator<label_ref_list_d> label_ref_list_d::pool
+static object_allocator<label_ref_list_d> label_ref_list_d_pool
   ("label references list", 30);
 
 /* The SH cannot load a large constant into a register, constants have to
@@ -4791,7 +4775,7 @@  add_constant (rtx x, machine_mode mode, rtx last_value)
 		}
 	      if (lab && pool_window_label)
 		{
-		  newref = new label_ref_list_d;
+		  newref = label_ref_list_d_pool.allocate ();
 		  newref->label = pool_window_label;
 		  ref = pool_vector[pool_window_last].wend;
 		  newref->next = ref;
@@ -4820,7 +4804,7 @@  add_constant (rtx x, machine_mode mode, rtx last_value)
   pool_vector[pool_size].part_of_sequence_p = (lab == 0);
   if (lab && pool_window_label)
     {
-      newref = new label_ref_list_d;
+      newref = label_ref_list_d_pool.allocate ();
       newref->label = pool_window_label;
       ref = pool_vector[pool_window_last].wend;
       newref->next = ref;
@@ -6566,7 +6550,7 @@  sh_reorg (void)
 	  insn = barrier;
 	}
     }
-  label_ref_list_d::pool.release ();
+  label_ref_list_d_pool.release ();
   for (insn = first; insn; insn = NEXT_INSN (insn))
     PUT_MODE (insn, VOIDmode);
 
diff --git a/gcc/cselib.c b/gcc/cselib.c
index fc7deab..2149959 100644
--- a/gcc/cselib.c
+++ b/gcc/cselib.c
@@ -45,21 +45,6 @@  struct elt_list
 {
   struct elt_list *next;
   cselib_val *elt;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((elt_list *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<elt_list> pool;
 };
 
 static bool cselib_record_memory;
@@ -261,12 +246,11 @@  static unsigned int cfa_base_preserved_regno = INVALID_REGNUM;
    each time memory is invalidated.  */
 static cselib_val *first_containing_mem = &dummy_val;
 
-pool_allocator<elt_list> elt_list::pool ("elt_list", 10);
-pool_allocator<elt_loc_list> elt_loc_list::pool ("elt_loc_list", 10);
-pool_allocator<cselib_val> cselib_val::pool ("cselib_val_list", 10);
+static object_allocator<elt_list> elt_list_pool ("elt_list", 10);
+static object_allocator<elt_loc_list> elt_loc_list_pool ("elt_loc_list", 10);
+static object_allocator<cselib_val> cselib_val_pool ("cselib_val_list", 10);
 
-static pool_allocator<rtx_def> value_pool ("value", 100, RTX_CODE_SIZE (VALUE),
-					   true);
+static pool_allocator value_pool ("value", 100, RTX_CODE_SIZE (VALUE));
 
 /* If nonnull, cselib will call this function before freeing useless
    VALUEs.  A VALUE is deemed useless if its "locs" field is null.  */
@@ -294,7 +278,7 @@  void (*cselib_record_sets_hook) (rtx_insn *insn, struct cselib_set *sets,
 static inline struct elt_list *
 new_elt_list (struct elt_list *next, cselib_val *elt)
 {
-  elt_list *el = new elt_list ();
+  elt_list *el = elt_list_pool.allocate ();
   el->next = next;
   el->elt = elt;
   return el;
@@ -378,14 +362,14 @@  new_elt_loc_list (cselib_val *val, rtx loc)
 	}
 
       /* Chain LOC back to VAL.  */
-      el = new elt_loc_list;
+      el = elt_loc_list_pool.allocate ();
       el->loc = val->val_rtx;
       el->setting_insn = cselib_current_insn;
       el->next = NULL;
       CSELIB_VAL_PTR (loc)->locs = el;
     }
 
-  el = new elt_loc_list;
+  el = elt_loc_list_pool.allocate ();
   el->loc = loc;
   el->setting_insn = cselib_current_insn;
   el->next = next;
@@ -425,7 +409,7 @@  unchain_one_elt_list (struct elt_list **pl)
   struct elt_list *l = *pl;
 
   *pl = l->next;
-  delete l;
+  elt_list_pool.remove (l);
 }
 
 /* Likewise for elt_loc_lists.  */
@@ -436,7 +420,7 @@  unchain_one_elt_loc_list (struct elt_loc_list **pl)
   struct elt_loc_list *l = *pl;
 
   *pl = l->next;
-  delete l;
+  elt_loc_list_pool.remove (l);
 }
 
 /* Likewise for cselib_vals.  This also frees the addr_list associated with
@@ -448,7 +432,7 @@  unchain_one_value (cselib_val *v)
   while (v->addr_list)
     unchain_one_elt_list (&v->addr_list);
 
-  delete v;
+  cselib_val_pool.remove (v);
 }
 
 /* Remove all entries from the hash table.  Also used during
@@ -1311,7 +1295,7 @@  cselib_hash_rtx (rtx x, int create, machine_mode memmode)
 static inline cselib_val *
 new_cselib_val (unsigned int hash, machine_mode mode, rtx x)
 {
-  cselib_val *e = new cselib_val;
+  cselib_val *e = cselib_val_pool.allocate ();
 
   gcc_assert (hash);
   gcc_assert (next_uid);
@@ -1323,7 +1307,7 @@  new_cselib_val (unsigned int hash, machine_mode mode, rtx x)
      precisely when we can have VALUE RTXen (when cselib is active)
      so we don't need to put them in garbage collected memory.
      ??? Why should a VALUE be an RTX in the first place?  */
-  e->val_rtx = value_pool.allocate ();
+  e->val_rtx = (rtx_def*) value_pool.allocate ();
   memset (e->val_rtx, 0, RTX_HDR_SIZE);
   PUT_CODE (e->val_rtx, VALUE);
   PUT_MODE (e->val_rtx, mode);
@@ -2775,9 +2759,9 @@  cselib_finish (void)
   cselib_any_perm_equivs = false;
   cfa_base_preserved_val = NULL;
   cfa_base_preserved_regno = INVALID_REGNUM;
-  elt_list::pool.release ();
-  elt_loc_list::pool.release ();
-  cselib_val::pool.release ();
+  elt_list_pool.release ();
+  elt_loc_list_pool.release ();
+  cselib_val_pool.release ();
   value_pool.release ();
   cselib_clear_table ();
   delete cselib_hash_table;
diff --git a/gcc/cselib.h b/gcc/cselib.h
index cdd06ad..1a278d4 100644
--- a/gcc/cselib.h
+++ b/gcc/cselib.h
@@ -41,21 +41,6 @@  struct cselib_val
   struct elt_list *addr_list;
 
   struct cselib_val *next_containing_mem;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((cselib_val *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<cselib_val> pool;
 };
 
 /* A list of rtl expressions that hold the same value.  */
@@ -66,21 +51,6 @@  struct elt_loc_list {
   rtx loc;
   /* The insn that made the equivalence.  */
   rtx_insn *setting_insn;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((elt_loc_list *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<elt_loc_list> pool;
 };
 
 /* Describe a single set that is part of an insn.  */
diff --git a/gcc/df-problems.c b/gcc/df-problems.c
index acf93dd..d4b5d76 100644
--- a/gcc/df-problems.c
+++ b/gcc/df-problems.c
@@ -1997,7 +1997,7 @@  static void
 df_chain_alloc (bitmap all_blocks ATTRIBUTE_UNUSED)
 {
   df_chain_remove_problem ();
-  df_chain->block_pool = new pool_allocator<df_link> ("df_chain_block pool",
+  df_chain->block_pool = new object_allocator<df_link> ("df_chain_block pool",
 						      50);
   df_chain->optional_p = true;
 }
diff --git a/gcc/df-scan.c b/gcc/df-scan.c
index 010a4b8..93c2eae 100644
--- a/gcc/df-scan.c
+++ b/gcc/df-scan.c
@@ -138,12 +138,12 @@  static const unsigned int copy_all = copy_defs | copy_uses | copy_eq_uses
 /* Problem data for the scanning dataflow function.  */
 struct df_scan_problem_data
 {
-  pool_allocator<df_base_ref> *ref_base_pool;
-  pool_allocator<df_artificial_ref> *ref_artificial_pool;
-  pool_allocator<df_regular_ref> *ref_regular_pool;
-  pool_allocator<df_insn_info> *insn_pool;
-  pool_allocator<df_reg_info> *reg_pool;
-  pool_allocator<df_mw_hardreg> *mw_reg_pool;
+  object_allocator<df_base_ref> *ref_base_pool;
+  object_allocator<df_artificial_ref> *ref_artificial_pool;
+  object_allocator<df_regular_ref> *ref_regular_pool;
+  object_allocator<df_insn_info> *insn_pool;
+  object_allocator<df_reg_info> *reg_pool;
+  object_allocator<df_mw_hardreg> *mw_reg_pool;
 
   bitmap_obstack reg_bitmaps;
   bitmap_obstack insn_bitmaps;
@@ -252,17 +252,17 @@  df_scan_alloc (bitmap all_blocks ATTRIBUTE_UNUSED)
   df_scan->problem_data = problem_data;
   df_scan->computed = true;
 
-  problem_data->ref_base_pool = new pool_allocator<df_base_ref>
+  problem_data->ref_base_pool = new object_allocator<df_base_ref>
     ("df_scan ref base", SCAN_PROBLEM_DATA_BLOCK_SIZE);
-  problem_data->ref_artificial_pool = new pool_allocator<df_artificial_ref>
+  problem_data->ref_artificial_pool = new object_allocator<df_artificial_ref>
     ("df_scan ref artificial", SCAN_PROBLEM_DATA_BLOCK_SIZE);
-  problem_data->ref_regular_pool = new pool_allocator<df_regular_ref>
+  problem_data->ref_regular_pool = new object_allocator<df_regular_ref>
     ("df_scan ref regular", SCAN_PROBLEM_DATA_BLOCK_SIZE);
-  problem_data->insn_pool = new pool_allocator<df_insn_info>
+  problem_data->insn_pool = new object_allocator<df_insn_info>
     ("df_scan insn", SCAN_PROBLEM_DATA_BLOCK_SIZE);
-  problem_data->reg_pool = new pool_allocator<df_reg_info>
+  problem_data->reg_pool = new object_allocator<df_reg_info>
     ("df_scan reg", SCAN_PROBLEM_DATA_BLOCK_SIZE);
-  problem_data->mw_reg_pool = new pool_allocator<df_mw_hardreg>
+  problem_data->mw_reg_pool = new object_allocator<df_mw_hardreg>
     ("df_scan mw_reg", SCAN_PROBLEM_DATA_BLOCK_SIZE / 16);
 
   bitmap_obstack_initialize (&problem_data->reg_bitmaps);
diff --git a/gcc/df.h b/gcc/df.h
index cea12a3..44e5fdb 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -294,7 +294,7 @@  struct dataflow
   unsigned int block_info_size;
 
   /* The pool to allocate the block_info from. */
-  pool_allocator<df_link> *block_pool;
+  object_allocator<df_link> *block_pool;
 
   /* The lr and live problems have their transfer functions recomputed
      only if necessary.  This is possible for them because, the
diff --git a/gcc/dse.c b/gcc/dse.c
index 89ba0c9..44f70f1 100644
--- a/gcc/dse.c
+++ b/gcc/dse.c
@@ -307,10 +307,10 @@  lowpart_bitmask (int n)
 }
 
 typedef struct store_info *store_info_t;
-static pool_allocator<store_info> cse_store_info_pool ("cse_store_info_pool",
+static object_allocator<store_info> cse_store_info_pool ("cse_store_info_pool",
 						       100);
 
-static pool_allocator<store_info> rtx_store_info_pool ("rtx_store_info_pool",
+static object_allocator<store_info> rtx_store_info_pool ("rtx_store_info_pool",
 						       100);
 
 /* This structure holds information about a load.  These are only
@@ -333,25 +333,11 @@  struct read_info_type
 
   /* The next read_info for this insn.  */
   struct read_info_type *next;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((read_info_type *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<read_info_type> pool;
 };
 typedef struct read_info_type *read_info_t;
 
-pool_allocator<read_info_type> read_info_type::pool ("read_info_pool", 100);
+static object_allocator<read_info_type> read_info_type_pool
+  ("read_info_pool", 100);
 
 /* One of these records is created for each insn.  */
 
@@ -437,25 +423,11 @@  struct insn_info_type
      time it is guaranteed to be correct is when the traversal starts
      at active_local_stores.  */
   struct insn_info_type * next_local_store;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((insn_info_type *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<insn_info_type> pool;
 };
 typedef struct insn_info_type *insn_info_t;
 
-pool_allocator<insn_info_type> insn_info_type::pool ("insn_info_pool", 100);
+static object_allocator<insn_info_type> insn_info_type_pool
+  ("insn_info_pool", 100);
 
 /* The linked list of stores that are under consideration in this
    basic block.  */
@@ -517,25 +489,12 @@  struct dse_bb_info_type
      to assure that shift and/or add sequences that are inserted do not
      accidentally clobber live hard regs.  */
   bitmap regs_live;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((dse_bb_info_type *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<dse_bb_info_type> pool;
 };
 
 typedef struct dse_bb_info_type *bb_info_t;
-pool_allocator<dse_bb_info_type> dse_bb_info_type::pool ("bb_info_pool", 100);
+
+static object_allocator<dse_bb_info_type> dse_bb_info_type_pool
+  ("bb_info_pool", 100);
 
 /* Table to hold all bb_infos.  */
 static bb_info_t *bb_table;
@@ -603,26 +562,12 @@  struct group_info
      care about.  */
   int *offset_map_n, *offset_map_p;
   int offset_map_size_n, offset_map_size_p;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((group_info *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<group_info> pool;
 };
 typedef struct group_info *group_info_t;
 typedef const struct group_info *const_group_info_t;
 
-pool_allocator<group_info> group_info::pool ("rtx_group_info_pool", 100);
+static object_allocator<group_info> group_info_pool
+  ("rtx_group_info_pool", 100);
 
 /* Index into the rtx_group_vec.  */
 static int rtx_group_next_id;
@@ -643,26 +588,11 @@  struct deferred_change
   rtx reg;
 
   struct deferred_change *next;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((deferred_change *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<deferred_change> pool;
 };
 
 typedef struct deferred_change *deferred_change_t;
 
-pool_allocator<deferred_change> deferred_change::pool
+static object_allocator<deferred_change> deferred_change_pool
   ("deferred_change_pool", 10);
 
 static deferred_change_t deferred_change_list = NULL;
@@ -768,7 +698,7 @@  get_group_info (rtx base)
     {
       if (!clear_alias_group)
 	{
-	  clear_alias_group = gi = new group_info;
+	  clear_alias_group = gi = group_info_pool.allocate ();
 	  memset (gi, 0, sizeof (struct group_info));
 	  gi->id = rtx_group_next_id++;
 	  gi->store1_n = BITMAP_ALLOC (&dse_bitmap_obstack);
@@ -790,7 +720,7 @@  get_group_info (rtx base)
 
   if (gi == NULL)
     {
-      *slot = gi = new group_info;
+      *slot = gi = group_info_pool.allocate ();
       gi->rtx_base = base;
       gi->id = rtx_group_next_id++;
       gi->base_mem = gen_rtx_MEM (BLKmode, base);
@@ -1026,7 +956,7 @@  delete_dead_store_insn (insn_info_t insn_info)
   while (read_info)
     {
       read_info_t next = read_info->next;
-      delete read_info;
+      read_info_type_pool.remove (read_info);
       read_info = next;
     }
   insn_info->read_rec = NULL;
@@ -1150,7 +1080,7 @@  free_read_records (bb_info_t bb_info)
       read_info_t next = (*ptr)->next;
       if ((*ptr)->alias_set == 0)
         {
-	  delete *ptr;
+	  read_info_type_pool.remove (*ptr);
           *ptr = next;
         }
       else
@@ -2098,7 +2028,7 @@  replace_read (store_info_t store_info, insn_info_t store_insn,
 
   if (validate_change (read_insn->insn, loc, read_reg, 0))
     {
-      deferred_change_t change = new deferred_change;
+      deferred_change_t change = deferred_change_pool.allocate ();
 
       /* Insert this right before the store insn where it will be safe
 	 from later insns that might change it before the read.  */
@@ -2136,7 +2066,7 @@  replace_read (store_info_t store_info, insn_info_t store_insn,
       /* Get rid of the read_info, from the point of view of the
 	 rest of dse, play like this read never happened.  */
       read_insn->read_rec = read_info->next;
-      delete read_info;
+      read_info_type_pool.remove (read_info);
       if (dump_file && (dump_flags & TDF_DETAILS))
 	{
 	  fprintf (dump_file, " -- replaced the loaded MEM with ");
@@ -2202,7 +2132,7 @@  check_mem_read_rtx (rtx *loc, bb_info_t bb_info)
   else
     width = GET_MODE_SIZE (GET_MODE (mem));
 
-  read_info = new read_info_type;
+  read_info = read_info_type_pool.allocate ();
   read_info->group_id = group_id;
   read_info->mem = mem;
   read_info->alias_set = spill_alias_set;
@@ -2518,7 +2448,7 @@  static void
 scan_insn (bb_info_t bb_info, rtx_insn *insn)
 {
   rtx body;
-  insn_info_type *insn_info = new insn_info_type;
+  insn_info_type *insn_info = insn_info_type_pool.allocate ();
   int mems_found = 0;
   memset (insn_info, 0, sizeof (struct insn_info_type));
 
@@ -2777,7 +2707,7 @@  dse_step1 (void)
   FOR_ALL_BB_FN (bb, cfun)
     {
       insn_info_t ptr;
-      bb_info_t bb_info = new dse_bb_info_type;
+      bb_info_t bb_info = dse_bb_info_type_pool.allocate ();
 
       memset (bb_info, 0, sizeof (dse_bb_info_type));
       bitmap_set_bit (all_blocks, bb->index);
@@ -2854,7 +2784,7 @@  dse_step1 (void)
 	      /* There is no reason to validate this change.  That was
 		 done earlier.  */
 	      *deferred_change_list->loc = deferred_change_list->reg;
-	      delete deferred_change_list;
+	      deferred_change_pool.remove (deferred_change_list);
 	      deferred_change_list = next;
 	    }
 
@@ -3739,11 +3669,11 @@  dse_step7 (void)
   BITMAP_FREE (scratch);
 
   rtx_store_info_pool.release ();
-  read_info_type::pool.release ();
-  insn_info_type::pool.release ();
-  dse_bb_info_type::pool.release ();
-  group_info::pool.release ();
-  deferred_change::pool.release ();
+  read_info_type_pool.release ();
+  insn_info_type_pool.release ();
+  dse_bb_info_type_pool.release ();
+  group_info_pool.release ();
+  deferred_change_pool.release ();
 }
 
 
diff --git a/gcc/et-forest.c b/gcc/et-forest.c
index a1a02f2..1931285 100644
--- a/gcc/et-forest.c
+++ b/gcc/et-forest.c
@@ -52,26 +52,10 @@  struct et_occ
 				   on the path to the root.  */
   struct et_occ *min_occ;	/* The occurrence in the subtree with the minimal
 				   depth.  */
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((et_occ *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<et_occ> pool;
-
 };
 
-pool_allocator<et_node> et_node::pool ("et_nodes pool", 300);
-pool_allocator<et_occ> et_occ::pool ("et_occ pool", 300);
+static object_allocator<et_node> et_nodes ("et_nodes pool", 300);
+static object_allocator<et_occ> et_occurrences ("et_occ pool", 300);
 
 /* Changes depth of OCC to D.  */
 
@@ -458,7 +442,7 @@  et_splay (struct et_occ *occ)
 static struct et_occ *
 et_new_occ (struct et_node *node)
 {
-  et_occ *nw = new et_occ;
+  et_occ *nw = et_occurrences.allocate ();
 
   nw->of = node;
   nw->parent = NULL;
@@ -477,9 +461,7 @@  et_new_occ (struct et_node *node)
 struct et_node *
 et_new_tree (void *data)
 {
-  struct et_node *nw;
-
-  nw = new et_node;
+  et_node *nw = et_nodes.allocate ();
 
   nw->data = data;
   nw->father = NULL;
@@ -504,8 +486,8 @@  et_free_tree (struct et_node *t)
   if (t->father)
     et_split (t);
 
-  delete t->rightmost_occ;
-  delete t;
+  et_occurrences.remove (t->rightmost_occ);
+  et_nodes.remove (t);
 }
 
 /* Releases et tree T without maintaining other nodes.  */
@@ -513,10 +495,10 @@  et_free_tree (struct et_node *t)
 void
 et_free_tree_force (struct et_node *t)
 {
-  delete t->rightmost_occ;
+  et_occurrences.remove (t->rightmost_occ);
   if (t->parent_occ)
-    delete t->parent_occ;
-  delete t;
+    et_occurrences.remove (t->parent_occ);
+  et_nodes.remove (t);
 }
 
 /* Release the alloc pools, if they are empty.  */
@@ -524,8 +506,8 @@  et_free_tree_force (struct et_node *t)
 void
 et_free_pools (void)
 {
-  et_occ::pool.release_if_empty ();
-  et_node::pool.release_if_empty ();
+  et_occurrences.release_if_empty ();
+  et_nodes.release_if_empty ();
 }
 
 /* Sets father of et tree T to FATHER.  */
@@ -617,7 +599,7 @@  et_split (struct et_node *t)
   rmost->depth = 0;
   rmost->min = 0;
 
-  delete p_occ;
+  et_occurrences.remove (p_occ);
 
   /* Update the tree.  */
   if (father->son == t)
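
[Not part of the patch: a minimal compilable sketch of the calling convention the et-forest.c hunks above convert to. The `object_allocator` here is a simplified stand-in for the real template in gcc/alloc-pool.h (which recycles memory in named blocks rather than calling malloc/free per object), and `et_occ_sketch`/`run_et_demo` are hypothetical names for illustration only.]

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Simplified stand-in for GCC's object_allocator<T>: allocate ()
// default-constructs a T in pool memory, remove () destroys it and
// returns the memory.  Only the interface shape matters here.
template <typename T>
class object_allocator
{
public:
  object_allocator (const char *name, size_t block_size)
    : m_name (name), m_block_size (block_size) {}

  T *allocate () { return ::new (std::malloc (sizeof (T))) T (); }

  void remove (T *obj)
  {
    obj->~T ();
    std::free (obj);
  }

private:
  const char *m_name;
  size_t m_block_size;
};

// Hypothetical stand-in for struct et_occ; only a depth field is modeled.
struct et_occ_sketch
{
  int depth = 0;
};

static object_allocator<et_occ_sketch> et_occurrences ("et_occ pool", 300);

// Mirrors the converted et_new_occ / et_free_tree pattern: explicit
// pool.allocate () and pool.remove () calls replace the per-type
// overloaded operator new / operator delete that the patch deletes.
static int
run_et_demo ()
{
  et_occ_sketch *occ = et_occurrences.allocate ();
  occ->depth = 42;
  int d = occ->depth;
  et_occurrences.remove (occ);
  return d;
}
```

The point of the conversion is visible above: the allocation pool is no longer welded to the type via static members and operator overloads; callers name the pool explicitly.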
diff --git a/gcc/et-forest.h b/gcc/et-forest.h
index 15c582d..b507c64 100644
--- a/gcc/et-forest.h
+++ b/gcc/et-forest.h
@@ -66,21 +66,6 @@  struct et_node
 
   struct et_occ *rightmost_occ;	/* The rightmost occurrence.  */
   struct et_occ *parent_occ;	/* The occurrence of the parent node.  */
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((et_node *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<et_node> pool;
 };
 
 struct et_node *et_new_tree (void *data);
diff --git a/gcc/ipa-cp.c b/gcc/ipa-cp.c
index 16b9cde..2bf0eaf 100644
--- a/gcc/ipa-cp.c
+++ b/gcc/ipa-cp.c
@@ -274,16 +274,16 @@  public:
 
 /* Allocation pools for values and their sources in ipa-cp.  */
 
-pool_allocator<ipcp_value<tree> > ipcp_cst_values_pool
+object_allocator<ipcp_value<tree> > ipcp_cst_values_pool
   ("IPA-CP constant values", 32);
 
-pool_allocator<ipcp_value<ipa_polymorphic_call_context> >
+object_allocator<ipcp_value<ipa_polymorphic_call_context> >
   ipcp_poly_ctx_values_pool ("IPA-CP polymorphic contexts", 32);
 
-pool_allocator<ipcp_value_source<tree> > ipcp_sources_pool
+object_allocator<ipcp_value_source<tree> > ipcp_sources_pool
   ("IPA-CP value sources", 64);
 
-pool_allocator<ipcp_agg_lattice> ipcp_agg_lattice_pool
+object_allocator<ipcp_agg_lattice> ipcp_agg_lattice_pool
   ("IPA_CP aggregate lattices", 32);
 
 /* Maximal count found in program.  */
diff --git a/gcc/ipa-inline-analysis.c b/gcc/ipa-inline-analysis.c
index d5dbfbd..ae6bb0f 100644
--- a/gcc/ipa-inline-analysis.c
+++ b/gcc/ipa-inline-analysis.c
@@ -145,7 +145,7 @@  vec<inline_edge_summary_t> inline_edge_summary_vec;
 vec<edge_growth_cache_entry> edge_growth_cache;
 
 /* Edge predicates goes here.  */
-static pool_allocator<predicate> edge_predicate_pool ("edge predicates", 10);
+static object_allocator<predicate> edge_predicate_pool ("edge predicates", 10);
 
 /* Return true predicate (tautology).
    We represent it by empty list of clauses.  */
diff --git a/gcc/ipa-profile.c b/gcc/ipa-profile.c
index 698729b..e2f2b39 100644
--- a/gcc/ipa-profile.c
+++ b/gcc/ipa-profile.c
@@ -87,7 +87,7 @@  struct histogram_entry
    duplicate entries.  */
 
 vec<histogram_entry *> histogram;
-static pool_allocator<histogram_entry> histogram_pool
+static object_allocator<histogram_entry> histogram_pool
   ("IPA histogram", 10);
 
 /* Hashtable support for storing SSA names hashed by their SSA_NAME_VAR.  */
diff --git a/gcc/ipa-prop.c b/gcc/ipa-prop.c
index 6074194..da01ddb 100644
--- a/gcc/ipa-prop.c
+++ b/gcc/ipa-prop.c
@@ -147,7 +147,7 @@  struct ipa_cst_ref_desc
 
 /* Allocation pool for reference descriptions.  */
 
-static pool_allocator<ipa_cst_ref_desc> ipa_refdesc_pool
+static object_allocator<ipa_cst_ref_desc> ipa_refdesc_pool
   ("IPA-PROP ref descriptions", 32);
 
 /* Return true if DECL_FUNCTION_SPECIFIC_OPTIMIZATION of the decl associated
diff --git a/gcc/ipa-prop.h b/gcc/ipa-prop.h
index e6725aa..47790e5 100644
--- a/gcc/ipa-prop.h
+++ b/gcc/ipa-prop.h
@@ -598,18 +598,18 @@  void ipcp_verify_propagated_values (void);
 template <typename value>
 class ipcp_value;
 
-extern pool_allocator<ipcp_value<tree> > ipcp_cst_values_pool;
-extern pool_allocator<ipcp_value<ipa_polymorphic_call_context> >
+extern object_allocator<ipcp_value<tree> > ipcp_cst_values_pool;
+extern object_allocator<ipcp_value<ipa_polymorphic_call_context> >
   ipcp_poly_ctx_values_pool;
 
 template <typename valtype>
 class ipcp_value_source;
 
-extern pool_allocator<ipcp_value_source<tree> > ipcp_sources_pool;
+extern object_allocator<ipcp_value_source<tree> > ipcp_sources_pool;
 
 class ipcp_agg_lattice;
 
-extern pool_allocator<ipcp_agg_lattice> ipcp_agg_lattice_pool;
+extern object_allocator<ipcp_agg_lattice> ipcp_agg_lattice_pool;
 
 /* Operation to be performed for the parameter in ipa_parm_adjustment
    below.  */
diff --git a/gcc/ira-build.c b/gcc/ira-build.c
index 2dc6ec4..43dfb25 100644
--- a/gcc/ira-build.c
+++ b/gcc/ira-build.c
@@ -420,9 +420,9 @@  rebuild_regno_allocno_maps (void)
 
 
 /* Pools for allocnos, allocno live ranges and objects.  */
-static pool_allocator<live_range> live_range_pool ("live ranges", 100);
-static pool_allocator<ira_allocno> allocno_pool ("allocnos", 100);
-static pool_allocator<ira_object> object_pool ("objects", 100);
+static object_allocator<live_range> live_range_pool ("live ranges", 100);
+static object_allocator<ira_allocno> allocno_pool ("allocnos", 100);
+static object_allocator<ira_object> object_pool ("objects", 100);
 
 /* Vec containing references to all created allocnos.  It is a
    container of array allocnos.  */
@@ -1170,7 +1170,7 @@  finish_allocnos (void)
 
 
 /* Pools for allocno preferences.  */
-static pool_allocator <ira_allocno_pref> pref_pool ("prefs", 100);
+static object_allocator <ira_allocno_pref> pref_pool ("prefs", 100);
 
 /* Vec containing references to all created preferences.  It is a
    container of array ira_prefs.  */
@@ -1357,7 +1357,7 @@  finish_prefs (void)
 
 
 /* Pools for copies.  */
-static pool_allocator<ira_allocno_copy> copy_pool ("copies", 100);
+static object_allocator<ira_allocno_copy> copy_pool ("copies", 100);
 
 /* Vec containing references to all created copies.  It is a
    container of array ira_copies.  */
@@ -1616,7 +1616,7 @@  finish_copies (void)
 
 
 /* Pools for cost vectors.  It is defined only for allocno classes.  */
-static pool_allocator<int> * cost_vector_pool[N_REG_CLASSES];
+static pool_allocator *cost_vector_pool[N_REG_CLASSES];
 
 /* The function initiates work with hard register cost vectors.  It
    creates allocation pool for each allocno class.  */
@@ -1629,9 +1629,9 @@  initiate_cost_vectors (void)
   for (i = 0; i < ira_allocno_classes_num; i++)
     {
       aclass = ira_allocno_classes[i];
-      cost_vector_pool[aclass] = new pool_allocator<int>
+      cost_vector_pool[aclass] = new pool_allocator
 	("cost vectors", 100,
-	 sizeof (int) * (ira_class_hard_regs_num[aclass] - 1));
+	 sizeof (int) * ira_class_hard_regs_num[aclass]);
     }
 }
 
@@ -1639,7 +1639,7 @@  initiate_cost_vectors (void)
 int *
 ira_allocate_cost_vector (reg_class_t aclass)
 {
-  return cost_vector_pool[(int) aclass]->allocate ();
+  return (int *) cost_vector_pool[(int) aclass]->allocate ();
 }
 
 /* Free a cost vector VEC for ACLASS.  */
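
[Not part of the patch: the ira-build.c hunks above show the other half of the split, the now-untyped `pool_allocator`, whose object size is supplied at construction so one pool can serve variable-length int vectors. A minimal sketch under assumed interfaces; `run_cost_vector_demo` and the three-register class are hypothetical, and the real pool recycles memory rather than calling malloc/free.]

```cpp
#include <cstddef>
#include <cstdlib>

// Simplified stand-in for the untyped pool_allocator: it hands out raw
// memory of a size fixed at construction, so allocate () returns
// void * and the caller casts -- which is why ira_allocate_cost_vector
// above gains an explicit (int *) cast.
class pool_allocator
{
public:
  pool_allocator (const char *name, size_t block_size, size_t obj_size)
    : m_name (name), m_block_size (block_size), m_obj_size (obj_size) {}

  void *allocate () { return std::malloc (m_obj_size); }
  void remove (void *obj) { std::free (obj); }

private:
  const char *m_name;
  size_t m_block_size;
  size_t m_obj_size;
};

// Hypothetical allocno class with three hard registers: the pool is
// sized for int[3] cost vectors, mirroring initiate_cost_vectors.
static int
run_cost_vector_demo ()
{
  const int n_hard_regs = 3;
  pool_allocator cost_vector_pool ("cost vectors", 100,
				   sizeof (int) * n_hard_regs);
  int *vec = (int *) cost_vector_pool.allocate ();
  for (int i = 0; i < n_hard_regs; i++)
    vec[i] = 10 * i;
  int sum = vec[0] + vec[1] + vec[2];
  cost_vector_pool.remove (vec);
  return sum;
}
```

This split keeps one untyped, size-parameterized pool class for variable-sized objects while `object_allocator<T>` layers type-safe allocate/remove on top for fixed-type objects.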
diff --git a/gcc/ira-color.c b/gcc/ira-color.c
index 98858b3..d0fa160 100644
--- a/gcc/ira-color.c
+++ b/gcc/ira-color.c
@@ -105,21 +105,6 @@  struct update_cost_record
   int divisor;
   /* Next record for given allocno.  */
   struct update_cost_record *next;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((update_cost_record *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<update_cost_record> pool;
 };
 
 /* To decrease footprint of ira_allocno structure we store all data
@@ -1161,7 +1146,7 @@  setup_profitable_hard_regs (void)
    allocnos.  */
 
 /* Pool for update cost records.  */
-static pool_allocator<update_cost_record> update_cost_record_pool
+static object_allocator<update_cost_record> update_cost_record_pool
   ("update cost records", 100);
 
 /* Return new update cost record with given params.  */
diff --git a/gcc/lra-int.h b/gcc/lra-int.h
index a7763e8..57e1e0a 100644
--- a/gcc/lra-int.h
+++ b/gcc/lra-int.h
@@ -46,21 +46,6 @@  struct lra_live_range
   lra_live_range_t next;
   /* Pointer to structures with the same start.	 */
   lra_live_range_t start_next;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((lra_live_range *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<lra_live_range> pool;
 };
 
 typedef struct lra_copy *lra_copy_t;
@@ -76,22 +61,6 @@  struct lra_copy
   int regno1, regno2;
   /* Next copy with correspondingly REGNO1 and REGNO2.	*/
   lra_copy_t regno1_next, regno2_next;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((lra_copy *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<lra_copy> pool;
-
 };
 
 /* Common info about a register (pseudo or hard register).  */
@@ -199,21 +168,6 @@  struct lra_insn_reg
   int regno;
   /* Next reg info of the same insn.  */
   struct lra_insn_reg *next;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((lra_insn_reg *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<lra_insn_reg> pool;
 };
 
 /* Static part (common info for insns with the same ICODE) of LRA
diff --git a/gcc/lra-lives.c b/gcc/lra-lives.c
index b270b0e..60dcfde 100644
--- a/gcc/lra-lives.c
+++ b/gcc/lra-lives.c
@@ -106,7 +106,8 @@  static sparseset unused_set, dead_set;
 static bitmap_head temp_bitmap;
 
 /* Pool for pseudo live ranges.	 */
-pool_allocator <lra_live_range> lra_live_range::pool ("live ranges", 100);
+static object_allocator<lra_live_range> lra_live_range_pool
+  ("live ranges", 100);
 
 /* Free live range list LR.  */
 static void
@@ -1364,5 +1365,5 @@  lra_live_ranges_finish (void)
 {
   finish_live_solver ();
   bitmap_clear (&temp_bitmap);
-  lra_live_range::pool.release ();
+  lra_live_range_pool.release ();
 }
diff --git a/gcc/lra.c b/gcc/lra.c
index 793098b..b65a9bb 100644
--- a/gcc/lra.c
+++ b/gcc/lra.c
@@ -532,7 +532,7 @@  lra_update_dups (lra_insn_recog_data_t id, signed char *nops)
    insns.  */
 
 /* Pools for insn reg info.  */
-pool_allocator<lra_insn_reg> lra_insn_reg::pool ("insn regs", 100);
+object_allocator<lra_insn_reg> lra_insn_reg_pool ("insn regs", 100);
 
 /* Create LRA insn related info about a reference to REGNO in INSN with
    TYPE (in/out/inout), biggest reference mode MODE, flag that it is
@@ -544,7 +544,7 @@  new_insn_reg (rtx_insn *insn, int regno, enum op_type type,
 	      machine_mode mode,
 	      bool subreg_p, bool early_clobber, struct lra_insn_reg *next)
 {
-  lra_insn_reg *ir = new lra_insn_reg ();
+  lra_insn_reg *ir = lra_insn_reg_pool.allocate ();
   ir->type = type;
   ir->biggest_mode = mode;
   if (GET_MODE_SIZE (mode) > GET_MODE_SIZE (lra_reg_info[regno].biggest_mode)
@@ -566,7 +566,7 @@  free_insn_regs (struct lra_insn_reg *ir)
   for (; ir != NULL; ir = next_ir)
     {
       next_ir = ir->next;
-      delete ir;
+      lra_insn_reg_pool.remove (ir);
     }
 }
 
@@ -574,7 +574,7 @@  free_insn_regs (struct lra_insn_reg *ir)
 static void
 finish_insn_regs (void)
 {
-  lra_insn_reg::pool.release ();
+  lra_insn_reg_pool.release ();
 }
 
 
@@ -744,6 +744,9 @@  free_insn_recog_data (lra_insn_recog_data_t data)
   free (data);
 }
 
+/* Pools for copies.  */
+static object_allocator<lra_copy> lra_copy_pool ("lra copies", 100);
+
 /* Finish LRA data about all insns.  */
 static void
 finish_insn_recog_data (void)
@@ -755,8 +758,8 @@  finish_insn_recog_data (void)
     if ((data = lra_insn_recog_data[i]) != NULL)
       free_insn_recog_data (data);
   finish_insn_regs ();
-  lra_copy::pool.release ();
-  lra_insn_reg::pool.release ();
+  lra_copy_pool.release ();
+  lra_insn_reg_pool.release ();
   free (lra_insn_recog_data);
 }
 
@@ -1275,9 +1278,6 @@  get_new_reg_value (void)
   return ++last_reg_value;
 }
 
-/* Pools for copies.  */
-pool_allocator<lra_copy> lra_copy::pool ("lra copies", 100);
-
 /* Vec referring to pseudo copies.  */
 static vec<lra_copy_t> copy_vec;
 
@@ -1356,7 +1356,7 @@  lra_free_copies (void)
     {
       cp = copy_vec.pop ();
       lra_reg_info[cp->regno1].copies = lra_reg_info[cp->regno2].copies = NULL;
-      delete cp;
+      lra_copy_pool.remove (cp);
     }
 }
 
@@ -1375,7 +1375,7 @@  lra_create_copy (int regno1, int regno2, int freq)
       std::swap (regno1, regno2);
       regno1_dest_p = false;
     }
-  cp = new lra_copy ();
+  cp = lra_copy_pool.allocate ();
   copy_vec.safe_push (cp);
   cp->regno1_dest_p = regno1_dest_p;
   cp->freq = freq;
@@ -1544,7 +1544,7 @@  invalidate_insn_data_regno_info (lra_insn_recog_data_t data, rtx_insn *insn,
     {
       i = ir->regno;
       next_ir = ir->next;
-      delete ir;
+      lra_insn_reg_pool.remove (ir);
       bitmap_clear_bit (&lra_reg_info[i].insn_bitmap, uid);
       if (i >= FIRST_PSEUDO_REGISTER && ! debug_p)
 	{
diff --git a/gcc/regcprop.c b/gcc/regcprop.c
index 093ebdc..5aebbad 100644
--- a/gcc/regcprop.c
+++ b/gcc/regcprop.c
@@ -52,21 +52,6 @@  struct queued_debug_insn_change
   rtx_insn *insn;
   rtx *loc;
   rtx new_rtx;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((queued_debug_insn_change *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<queued_debug_insn_change> pool;
 };
 
 /* For each register, we have a list of registers that contain the same
@@ -90,7 +75,7 @@  struct value_data
   unsigned int n_debug_insn_changes;
 };
 
-pool_allocator<queued_debug_insn_change> queued_debug_insn_change::pool
+static object_allocator<queued_debug_insn_change> queued_debug_insn_change_pool
   ("debug insn changes pool", 256);
 
 static bool skip_debug_insn_p;
@@ -131,7 +116,7 @@  free_debug_insn_changes (struct value_data *vd, unsigned int regno)
     {
       next = cur->next;
       --vd->n_debug_insn_changes;
-      delete cur;
+      queued_debug_insn_change_pool.remove (cur);
     }
   vd->e[regno].debug_insn_changes = NULL;
 }
@@ -502,7 +487,7 @@  replace_oldest_value_reg (rtx *loc, enum reg_class cl, rtx_insn *insn,
 	    fprintf (dump_file, "debug_insn %u: queued replacing reg %u with %u\n",
 		     INSN_UID (insn), REGNO (*loc), REGNO (new_rtx));
 
-	  change = new queued_debug_insn_change;
+	  change = queued_debug_insn_change_pool.allocate ();
 	  change->next = vd->e[REGNO (new_rtx)].debug_insn_changes;
 	  change->insn = insn;
 	  change->loc = loc;
@@ -1309,7 +1294,7 @@  pass_cprop_hardreg::execute (function *fun)
 		}
 	  }
 
-      queued_debug_insn_change::pool.release ();
+      queued_debug_insn_change_pool.release ();
     }
 
   sbitmap_free (visited);
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index fa4bf5a..33f065a 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -321,7 +321,7 @@  dep_link_is_detached_p (dep_link_t link)
 }
 
 /* Pool to hold all dependency nodes (dep_node_t).  */
-static pool_allocator<_dep_node> *dn_pool;
+static object_allocator<_dep_node> *dn_pool;
 
 /* Number of dep_nodes out there.  */
 static int dn_pool_diff = 0;
@@ -362,7 +362,7 @@  delete_dep_node (dep_node_t n)
 }
 
 /* Pool to hold dependencies lists (deps_list_t).  */
-static pool_allocator<_deps_list> *dl_pool;
+static object_allocator<_deps_list> *dl_pool;
 
 /* Number of deps_lists out there.  */
 static int dl_pool_diff = 0;
@@ -4059,10 +4059,10 @@  sched_deps_init (bool global_p)
 
   if (global_p)
     {
-      dl_pool = new pool_allocator<_deps_list> ("deps_list",
+      dl_pool = new object_allocator<_deps_list> ("deps_list",
                                    /* Allocate lists for one block at a time.  */
                                    insns_in_block);
-      dn_pool = new pool_allocator<_dep_node> ("dep_node",
+      dn_pool = new object_allocator<_dep_node> ("dep_node",
                                    /* Allocate nodes for one block at a time.
                                       We assume that average insn has
                                       5 producers.  */
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index 2926b67..311e317 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -58,7 +58,7 @@  vec<sel_region_bb_info_def>
     sel_region_bb_info = vNULL;
 
 /* A pool for allocating all lists.  */
-pool_allocator<_list_node> sched_lists_pool ("sel-sched-lists", 500);
+object_allocator<_list_node> sched_lists_pool ("sel-sched-lists", 500);
 
 /* This contains information about successors for compute_av_set.  */
 struct succs_info current_succs;
diff --git a/gcc/sel-sched-ir.h b/gcc/sel-sched-ir.h
index 0e7f5aa..09c97ab 100644
--- a/gcc/sel-sched-ir.h
+++ b/gcc/sel-sched-ir.h
@@ -357,7 +357,7 @@  struct _list_node
 /* _list_t functions.
    All of _*list_* functions are used through accessor macros, thus
    we can't move them in sel-sched-ir.c.  */
-extern pool_allocator<_list_node> sched_lists_pool;
+extern object_allocator<_list_node> sched_lists_pool;
 
 static inline _list_t
 _list_alloc (void)
diff --git a/gcc/stmt.c b/gcc/stmt.c
index 6fb7233..97310b2 100644
--- a/gcc/stmt.c
+++ b/gcc/stmt.c
@@ -729,7 +729,8 @@  do_jump_if_equal (machine_mode mode, rtx op0, rtx op1, rtx_code_label *label,
 
 static struct case_node *
 add_case_node (struct case_node *head, tree low, tree high,
-	       tree label, int prob, pool_allocator<case_node> &case_node_pool)
+	       tree label, int prob,
+	       object_allocator<case_node> &case_node_pool)
 {
   struct case_node *r;
 
@@ -1137,7 +1138,7 @@  expand_case (gswitch *stmt)
   struct case_node *case_list = 0;
 
   /* A pool for case nodes.  */
-  pool_allocator<case_node> case_node_pool ("struct case_node pool", 100);
+  object_allocator<case_node> case_node_pool ("struct case_node pool", 100);
 
   /* An ERROR_MARK occurs for various reasons including invalid data type.
      ??? Can this still happen, with GIMPLE and all?  */
@@ -1313,7 +1314,7 @@  expand_sjlj_dispatch_table (rtx dispatch_index,
     {
       /* Similar to expand_case, but much simpler.  */
       struct case_node *case_list = 0;
-      pool_allocator<case_node> case_node_pool ("struct sjlj_case pool",
-						ncases);
+      object_allocator<case_node> case_node_pool ("struct sjlj_case pool",
+						  ncases);
       tree index_expr = make_tree (index_type, dispatch_index);
       tree minval = build_int_cst (index_type, 0);
diff --git a/gcc/tree-sra.c b/gcc/tree-sra.c
index 3d26c1e..9f8859f 100644
--- a/gcc/tree-sra.c
+++ b/gcc/tree-sra.c
@@ -270,28 +270,13 @@  struct access
   /* Set when we discover that this pointer is not safe to dereference in the
      caller.  */
   unsigned grp_not_necessarilly_dereferenced : 1;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((access *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<access> pool;
 };
 
 typedef struct access *access_p;
 
 
 /* Alloc pool for allocating access structures.  */
-pool_allocator<struct access> access::pool ("SRA accesses", 16);
+static object_allocator<struct access> access_pool ("SRA accesses", 16);
 
 /* A structure linking lhs and rhs accesses from an aggregate assignment.  They
    are used to propagate subaccesses from rhs to lhs as long as they don't
@@ -300,25 +285,10 @@  struct assign_link
 {
   struct access *lacc, *racc;
   struct assign_link *next;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((assign_link *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<assign_link> pool;
 };
 
 /* Alloc pool for allocating assign link structures.  */
-pool_allocator<assign_link> assign_link::pool ("SRA links", 16);
+static object_allocator<assign_link> assign_link_pool ("SRA links", 16);
 
 /* Base (tree) -> Vector (vec<access_p> *) map.  */
 static hash_map<tree, auto_vec<access_p> > *base_access_vec;
@@ -705,8 +675,8 @@  sra_deinitialize (void)
   candidates = NULL;
   BITMAP_FREE (should_scalarize_away_bitmap);
   BITMAP_FREE (cannot_scalarize_away_bitmap);
-  access::pool.release ();
-  assign_link::pool.release ();
+  access_pool.release ();
+  assign_link_pool.release ();
   obstack_free (&name_obstack, NULL);
 
   delete base_access_vec;
@@ -858,7 +828,7 @@  mark_parm_dereference (tree base, HOST_WIDE_INT dist, gimple stmt)
 static struct access *
 create_access_1 (tree base, HOST_WIDE_INT offset, HOST_WIDE_INT size)
 {
-  struct access *access = new struct access ();
+  struct access *access = access_pool.allocate ();
 
   memset (access, 0, sizeof (struct access));
   access->base = base;
@@ -1234,7 +1204,7 @@  build_accesses_from_assign (gimple stmt)
     {
       struct assign_link *link;
 
-      link = new assign_link;
+      link = assign_link_pool.allocate ();
       memset (link, 0, sizeof (struct assign_link));
 
       link->lacc = lacc;
@@ -2393,7 +2363,7 @@  create_artificial_child_access (struct access *parent, struct access *model,
 
   gcc_assert (!model->grp_unscalarizable_region);
 
-  struct access *access = new struct access ();
+  struct access *access = access_pool.allocate ();
   memset (access, 0, sizeof (struct access));
   if (!build_user_friendly_ref_for_offset (&expr, TREE_TYPE (expr), new_offset,
 					   model->type))
diff --git a/gcc/tree-ssa-math-opts.c b/gcc/tree-ssa-math-opts.c
index bb3c2ef..24e9ebb 100644
--- a/gcc/tree-ssa-math-opts.c
+++ b/gcc/tree-ssa-math-opts.c
@@ -203,7 +203,7 @@  static struct
 static struct occurrence *occ_head;
 
 /* Allocation pool for getting instances of "struct occurrence".  */
-static pool_allocator<occurrence> *occ_pool;
+static object_allocator<occurrence> *occ_pool;
 
 
 
@@ -546,7 +546,7 @@  pass_cse_reciprocals::execute (function *fun)
   basic_block bb;
   tree arg;
 
-  occ_pool = new pool_allocator<occurrence>
+  occ_pool = new object_allocator<occurrence>
     ("dominators for recip", n_basic_blocks_for_fn (fun) / 3 + 1);
 
   memset (&reciprocal_stats, 0, sizeof (reciprocal_stats));
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index 644fa26..e9fd9f7 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -349,7 +349,7 @@  clear_expression_ids (void)
   expressions.release ();
 }
 
-static pool_allocator<pre_expr_d> pre_expr_pool ("pre_expr nodes", 30);
+static object_allocator<pre_expr_d> pre_expr_pool ("pre_expr nodes", 30);
 
 /* Given an SSA_NAME NAME, get or create a pre_expr to represent it.  */
 
@@ -488,7 +488,7 @@  static unsigned int get_expr_value_id (pre_expr);
 /* We can add and remove elements and entries to and from sets
    and hash tables, so we use alloc pools for them.  */
 
-static pool_allocator<bitmap_set> bitmap_set_pool ("Bitmap sets", 30);
+static object_allocator<bitmap_set> bitmap_set_pool ("Bitmap sets", 30);
 static bitmap_obstack grand_bitmap_obstack;
 
 /* Set of blocks with statements that have had their EH properties changed.  */
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 47a14bd..262d1ec 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -208,7 +208,7 @@  typedef struct operand_entry
   unsigned int count;
 } *operand_entry_t;
 
-static pool_allocator<operand_entry> operand_entry_pool ("operand entry pool",
-							 30);
+static object_allocator<operand_entry> operand_entry_pool ("operand entry pool",
+							   30);
 
 /* This is used to assign a unique ID to each struct operand_entry
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index 637368b..c20f1cc 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -260,8 +260,8 @@  typedef struct vn_tables_s
   vn_phi_table_type *phis;
   vn_reference_table_type *references;
   struct obstack nary_obstack;
-  pool_allocator<vn_phi_s> *phis_pool;
-  pool_allocator<vn_reference_s> *references_pool;
+  object_allocator<vn_phi_s> *phis_pool;
+  object_allocator<vn_reference_s> *references_pool;
 } *vn_tables_t;
 
 
@@ -4125,9 +4125,9 @@  allocate_vn_table (vn_tables_t table)
   table->references = new vn_reference_table_type (23);
 
   gcc_obstack_init (&table->nary_obstack);
-  table->phis_pool = new pool_allocator<vn_phi_s> ("VN phis", 30);
-  table->references_pool = new pool_allocator<vn_reference_s> ("VN references",
-							       30);
+  table->phis_pool = new object_allocator<vn_phi_s> ("VN phis", 30);
+  table->references_pool = new object_allocator<vn_reference_s>
+    ("VN references", 30);
 }
 
 /* Free a value number table.  */
diff --git a/gcc/tree-ssa-strlen.c b/gcc/tree-ssa-strlen.c
index 0f6750a..cfe4dd9 100644
--- a/gcc/tree-ssa-strlen.c
+++ b/gcc/tree-ssa-strlen.c
@@ -113,7 +113,8 @@  typedef struct strinfo_struct
 } *strinfo;
 
 /* Pool for allocating strinfo_struct entries.  */
-static pool_allocator<strinfo_struct> strinfo_pool ("strinfo_struct pool", 64);
+static object_allocator<strinfo_struct> strinfo_pool ("strinfo_struct pool",
+						      64);
 
 /* Vector mapping positive string indexes to strinfo, for the
    current basic block.  The first pointer in the vector is special,
diff --git a/gcc/tree-ssa-structalias.c b/gcc/tree-ssa-structalias.c
index 1cc6d55..1e9d7d5 100644
--- a/gcc/tree-ssa-structalias.c
+++ b/gcc/tree-ssa-structalias.c
@@ -323,7 +323,7 @@  static varinfo_t lookup_vi_for_tree (tree);
 static inline bool type_can_have_subvars (const_tree);
 
 /* Pool of variable info structures.  */
-static pool_allocator<variable_info> variable_info_pool
+static object_allocator<variable_info> variable_info_pool
   ("Variable info pool", 30);
 
 /* Map varinfo to final pt_solution.  */
@@ -524,7 +524,7 @@  struct constraint
 /* List of constraints that we use to build the constraint graph from.  */
 
 static vec<constraint_t> constraints;
-static pool_allocator<constraint> constraint_pool ("Constraint pool", 30);
+static object_allocator<constraint> constraint_pool ("Constraint pool", 30);
 
 /* The constraint graph is represented as an array of bitmaps
    containing successor nodes.  */
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index b7ecea8..6d29482 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -260,21 +260,6 @@  typedef struct attrs_def
 
   /* Offset from start of DECL.  */
   HOST_WIDE_INT offset;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((attrs_def *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<attrs_def> pool;
 } *attrs;
 
 /* Structure for chaining the locations.  */
@@ -291,21 +276,6 @@  typedef struct location_chain_def
 
   /* Initialized? */
   enum var_init_status init;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((location_chain_def *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<location_chain_def> pool;
 } *location_chain;
 
 /* A vector of loc_exp_dep holds the active dependencies of a one-part
@@ -323,21 +293,6 @@  typedef struct loc_exp_dep_s
   /* A pointer to the pointer to this entry (head or prev's next) in
      the doubly-linked list.  */
   struct loc_exp_dep_s **pprev;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((loc_exp_dep_s *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<loc_exp_dep_s> pool;
 } loc_exp_dep;
 
 
@@ -576,21 +531,6 @@  typedef struct shared_hash_def
 
   /* Actual hash table.  */
   variable_table_type *htab;
-
-  /* Pool allocation new operator.  */
-  inline void *operator new (size_t)
-  {
-    return pool.allocate ();
-  }
-
-  /* Delete operator utilizing pool allocation.  */
-  inline void operator delete (void *ptr)
-  {
-    pool.remove ((shared_hash_def *) ptr);
-  }
-
-  /* Memory allocation pool.  */
-  static pool_allocator<shared_hash_def> pool;
 } *shared_hash;
 
 /* Structure holding the IN or OUT set for a basic block.  */
@@ -635,28 +575,28 @@  typedef struct variable_tracking_info_def
 } *variable_tracking_info;
 
 /* Alloc pool for struct attrs_def.  */
-pool_allocator<attrs_def> attrs_def::pool ("attrs_def pool", 1024);
+object_allocator<attrs_def> attrs_def_pool ("attrs_def pool", 1024);
 
 /* Alloc pool for struct variable_def with MAX_VAR_PARTS entries.  */
 
-static pool_allocator<variable_def> var_pool
-  ("variable_def pool", 64,
+static pool_allocator var_pool
+  ("variable_def pool", 64, sizeof (variable_def) +
    (MAX_VAR_PARTS - 1) * sizeof (((variable)NULL)->var_part[0]));
 
 /* Alloc pool for struct variable_def with a single var_part entry.  */
-static pool_allocator<variable_def> valvar_pool
-  ("small variable_def pool", 256);
+static pool_allocator valvar_pool
+  ("small variable_def pool", 256, sizeof (variable_def));
 
 /* Alloc pool for struct location_chain_def.  */
-pool_allocator<location_chain_def> location_chain_def::pool
+static object_allocator<location_chain_def> location_chain_def_pool
   ("location_chain_def pool", 1024);
 
 /* Alloc pool for struct shared_hash_def.  */
-pool_allocator<shared_hash_def> shared_hash_def::pool
+static object_allocator<shared_hash_def> shared_hash_def_pool
   ("shared_hash_def pool", 256);
 
 /* Alloc pool for struct loc_exp_dep_s for NOT_ONEPART variables.  */
-pool_allocator<loc_exp_dep> loc_exp_dep::pool ("loc_exp_dep pool", 64);
+object_allocator<loc_exp_dep> loc_exp_dep_pool ("loc_exp_dep pool", 64);
 
 /* Changed variables, notes will be emitted for them.  */
 static variable_table_type *changed_variables;
@@ -1417,12 +1357,19 @@  dv_onepart_p (decl_or_value dv)
 }
 
 /* Return the variable pool to be used for a dv of type ONEPART.  */
-static inline pool_allocator <variable_def> &
+static inline pool_allocator &
 onepart_pool (onepart_enum_t onepart)
 {
   return onepart ? valvar_pool : var_pool;
 }
 
+/* Allocate a variable_def from the corresponding variable pool.  */
+static inline variable_def *
+onepart_pool_allocate (onepart_enum_t onepart)
+{
+  return (variable_def *) onepart_pool (onepart).allocate ();
+}
+
 /* Build a decl_or_value out of a decl.  */
 static inline decl_or_value
 dv_from_decl (tree decl)
@@ -1777,7 +1724,7 @@  unshare_variable (dataflow_set *set, variable_def **slot, variable var,
   variable new_var;
   int i;
 
-  new_var = onepart_pool (var->onepart).allocate ();
+  new_var = onepart_pool_allocate (var->onepart);
   new_var->dv = var->dv;
   new_var->refcount = 1;
   var->refcount--;
@@ -4055,7 +4002,7 @@  variable_merge_over_cur (variable s1var, struct dfset_merge *dsm)
 	{
 	  if (node)
 	    {
-	      dvar = onepart_pool (onepart).allocate ();
+	      dvar = onepart_pool_allocate (onepart);
 	      dvar->dv = dv;
 	      dvar->refcount = 1;
 	      dvar->n_var_parts = 1;
@@ -4191,7 +4138,7 @@  variable_merge_over_cur (variable s1var, struct dfset_merge *dsm)
 							  INSERT);
 		  if (!*slot)
 		    {
-		      variable var = onepart_pool (ONEPART_VALUE).allocate ();
+		      variable var = onepart_pool_allocate (ONEPART_VALUE);
 		      var->dv = dv;
 		      var->refcount = 1;
 		      var->n_var_parts = 1;
@@ -7340,7 +7287,7 @@  variable_from_dropped (decl_or_value dv, enum insert_option insert)
 
   gcc_checking_assert (onepart == ONEPART_VALUE || onepart == ONEPART_DEXPR);
 
-  empty_var = onepart_pool (onepart).allocate ();
+  empty_var = onepart_pool_allocate (onepart);
   empty_var->dv = dv;
   empty_var->refcount = 1;
   empty_var->n_var_parts = 0;
@@ -7444,7 +7391,7 @@  variable_was_changed (variable var, dataflow_set *set)
 
 	  if (!empty_var)
 	    {
-	      empty_var = onepart_pool (onepart).allocate ();
+	      empty_var = onepart_pool_allocate (onepart);
 	      empty_var->dv = var->dv;
 	      empty_var->refcount = 1;
 	      empty_var->n_var_parts = 0;
@@ -7568,7 +7515,7 @@  set_slot_part (dataflow_set *set, rtx loc, variable_def **slot,
   if (!var)
     {
       /* Create new variable information.  */
-      var = onepart_pool (onepart).allocate ();
+      var = onepart_pool_allocate (onepart);
       var->dv = dv;
       var->refcount = 1;
       var->n_var_parts = 1;
@@ -9048,7 +8995,7 @@  emit_notes_for_differences_1 (variable_def **slot, variable_table_type *new_vars
 
       if (!empty_var)
 	{
-	  empty_var = onepart_pool (old_var->onepart).allocate ();
+	  empty_var = onepart_pool_allocate (old_var->onepart);
 	  empty_var->dv = old_var->dv;
 	  empty_var->refcount = 0;
 	  empty_var->n_var_parts = 0;
@@ -10265,17 +10212,17 @@  vt_finalize (void)
   empty_shared_hash->htab = NULL;
   delete changed_variables;
   changed_variables = NULL;
-  attrs_def::pool.release ();
+  attrs_def_pool.release ();
   var_pool.release ();
-  location_chain_def::pool.release ();
-  shared_hash_def::pool.release ();
+  location_chain_def_pool.release ();
+  shared_hash_def_pool.release ();
 
   if (MAY_HAVE_DEBUG_INSNS)
     {
       if (global_get_addr_cache)
 	delete global_get_addr_cache;
       global_get_addr_cache = NULL;
-      loc_exp_dep::pool.release ();
+      loc_exp_dep_pool.release ();
       valvar_pool.release ();
       preserved_values.release ();
       cselib_finish ();
-- 
2.4.5
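
For readers following the thread, the shape of the split being proposed can be sketched roughly as below. This is a simplified stand-in, not GCC's actual alloc-pool.h: the real `pool_allocator` manages chunked blocks, per-pool statistics, and GC integration, which are all omitted here. The point illustrated is only the division of labor: a non-template `pool_allocator` that hands out raw fixed-size blocks (so one implementation serves every type), and a thin `object_allocator<T>` front end that layers construction and destruction on top, as the typed `onepart_pool_allocate` wrapper in the patch does by hand.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>
#include <vector>
#include <cassert>

/* Untyped back end: hands out raw blocks of a fixed size.
   A freed block goes on a free list and is reused first.  */
class pool_allocator
{
public:
  pool_allocator (const char *name, size_t block_size)
    : m_name (name), m_block_size (block_size) {}
  ~pool_allocator () { release (); }

  void *allocate ()
  {
    if (!m_free.empty ())
      {
	void *p = m_free.back ();
	m_free.pop_back ();
	return p;
      }
    void *p = std::malloc (m_block_size);
    m_blocks.push_back (p);
    return p;
  }

  /* Return a block to the pool without running any destructor.  */
  void remove (void *p) { m_free.push_back (p); }

  /* Free every block the pool ever handed out.  */
  void release ()
  {
    for (void *p : m_blocks)
      std::free (p);
    m_blocks.clear ();
    m_free.clear ();
  }

private:
  const char *m_name;
  size_t m_block_size;
  std::vector<void *> m_blocks;  /* every block obtained from malloc */
  std::vector<void *> m_free;    /* blocks returned via remove ()    */
};

/* Typed front end: same interface, but allocate () runs T's
   constructor via placement new and remove () runs its destructor,
   which is the question debated upthread.  */
template <typename T>
class object_allocator
{
public:
  object_allocator (const char *name, size_t chunk_hint)
    : m_pool (name, sizeof (T)) { (void) chunk_hint; }

  T *allocate () { return ::new (m_pool.allocate ()) T (); }

  void remove (T *obj)
  {
    obj->~T ();
    m_pool.remove (obj);
  }

  void release () { m_pool.release (); }

private:
  pool_allocator m_pool;
};
```

Under this split, the untyped `onepart_pool ()` in the patch keeps returning a plain `pool_allocator &`, while call sites that need a constructed `variable_def` go through the typed wrapper, mirroring the `object_allocator<T>` pattern.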