
[01/35] Introduce new type-based pool allocator.

Message ID 55686A1A.4090105@suse.cz
State New

Commit Message

Martin Liška May 29, 2015, 1:31 p.m. UTC
On 05/28/2015 07:15 PM, Jeff Law wrote:
> On 05/28/2015 06:49 AM, Martin Liška wrote:
> .
>>
>> This mechanism has just been adapted.  I find it quite useful, as we have
>> examples in the source code where we allocate the same struct/class types
>> from various pools.  For debugging purposes, it helps to identify whether
>> a release operation is called on the correct pool.
> I saw that you were following existing practice for the pools in the removal patch.  I still don't like it, as it makes mixing and matching objects harder when debugging gcc, and if the structure is exposed to plugins, then we've got an unnecessary plugin ABI breakage.
>
> I certainly understand how it's useful -- I'm not questioning that.  I'm questioning changing the size of structures on ENABLE_CHECKING.
>
> My first inclination would be to include all that stuff unconditionally.  If that's too much overhead, then perhaps include the structure member, but not bother with any of the bookkeeping except for ENABLE_CHECKING.
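To make the debugging value being discussed concrete: with the per-object ID, remove () can assert that an object is returned to the pool it was allocated from.  The sketch below is a contrived illustration using the API introduced by the patch further down; edge_info_example is a made-up payload type, and the mismatched release is shown only as a comment.

#include "alloc-pool.h"

/* edge_info_example is a made-up payload type used only for illustration.  */
struct edge_info_example { int src, dst; };

/* Two pools holding the same element type.  */
static pool_allocator<edge_info_example> pool_a ("edge pool A", 64);
static pool_allocator<edge_info_example> pool_b ("edge pool B", 64);

static void
wrong_pool_example ()
{
  edge_info_example *e = pool_a.allocate ();

  /* Each object carries the ID of the pool it came from, so a mismatched
     release such as the one below trips the assert in remove () when
     checking is enabled:
       pool_b.remove (e);   // wrong pool, caught by the ID check  */

  pool_a.remove (e);  /* correct pool */
}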

Hi.

The updated version of the patch removes ENABLE_CHECKING from the struct
definition.

News in the patch set I'm going to re-send:
+ ChangeLog entries have their spacing fixed
+ each patch passes the ./contrib/check_GNU_style.sh script
+ pool_allocator::pool_allocator is a trivial constructor and the first
  allocation triggers initialization (sketched below)
+ some patches are squashed, as was mentioned multiple times
  (ira-color.c, ira-build.c)

The patch set survives an x86_64-linux-pc bootstrap; I'm going to re-run
the regression tests.

Thanks,
Martin
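
For reference, the lazy-initialization change mentioned above boils down to the pattern below.  This is a minimal standalone sketch, not the patch code itself (the full implementation is in the patch further down); it does no real pooling and only shows where the deferred setup happens.

#include <cstddef>
#include <new>

/* Minimal illustration of the "trivial constructor, initialize on first
   allocation" scheme.  */
template <typename T>
class lazy_pool_sketch
{
public:
  lazy_pool_sketch (const char *name, size_t num)
    : m_name (name), m_elts_per_block (num), m_elt_size (0),
      m_initialized (false) {}

  T *allocate ()
  {
    /* The first allocation performs the setup that a non-trivial
       constructor would otherwise have done.  */
    if (!m_initialized)
      initialize ();
    return static_cast<T *> (::operator new (m_elt_size));
  }

private:
  void initialize ()
  {
    /* The real pool also accounts for the per-object header and rounds
       the size up; see the patch below.  */
    m_elt_size = sizeof (T);
    m_initialized = true;
  }

  const char *m_name;
  size_t m_elts_per_block;
  size_t m_elt_size;
  bool m_initialized;
};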

>
>
>> Anyway, I would like to commit all these patches at once (one by one).
>> Thus, I'm going to wait for approval for the whole series before I
>> commit the set.
> Quite reasonable -- I was mostly trying to make sure I understood the testing situation.
>
> I think at this point the whole series is approved, so you can move forward.
>
> jeff
>

Comments

Jeff Law May 30, 2015, 4:44 a.m. UTC | #1
On 05/29/2015 07:31 AM, Martin Liška wrote:
> On 05/28/2015 07:15 PM, Jeff Law wrote:
>> On 05/28/2015 06:49 AM, Martin Liška wrote:
>> .
>>>
>>> This mechanism has just been adapted.  I find it quite useful, as we have
>>> examples in the source code where we allocate the same struct/class types
>>> from various pools.  For debugging purposes, it helps to identify whether
>>> a release operation is called on the correct pool.
>> I saw that you were following existing practice for the pools in the
>> removal patch.  I still don't like it, as it makes mixing and matching
>> objects harder when debugging gcc, and if the structure is exposed to
>> plugins, then we've got an unnecessary plugin ABI breakage.
>>
>> I certainly understand how it's useful -- I'm not questioning that.
>> I'm questioning changing the size of structures on ENABLE_CHECKING.
>>
>> My first inclination would be to include all that stuff
>> unconditionally.  If that's too much overhead, then perhaps include
>> the structure member, but not bother with any of the bookkeeping
>> except for ENABLE_CHECKING.
>
> Hi.
>
> The updated version of the patch removes ENABLE_CHECKING from the struct
> definition.
>
> News in the patch set I'm going to re-send:
> + ChangeLog entries have their spacing fixed
> + each patch passes the ./contrib/check_GNU_style.sh script
> + pool_allocator::pool_allocator is a trivial constructor and the first
> allocation triggers initialization
> + some patches are squashed, as was mentioned multiple times
> (ira-color.c, ira-build.c)
>
> The patch set survives an x86_64-linux-pc bootstrap; I'm going to re-run
> the regression tests.
Sounds perfect.   Once the regression testing is done, the whole set is 
fine for the trunk.

Thanks for tackling this.

jeff
Andreas Schwab June 2, 2015, 9:48 a.m. UTC | #2
In file included from ../../gcc/stmt.c:78:0:
../../gcc/alloc-pool.h: In function 'void expand_sjlj_dispatch_table(rtx, vec<tree_node*>)':
../../gcc/alloc-pool.h:303:4: error: 'case_node_pool.pool_allocator<case_node>::m_block_size' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    block = XNEWVEC (char, m_block_size);
    ^
../../gcc/stmt.c:1339:33: note: 'case_node_pool.pool_allocator<case_node>::m_block_size' was declared here
       pool_allocator<case_node> case_node_pool ("struct sjlj_case pool",
                                 ^

Andreas.
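
The warning appears because m_block_size is assigned only in initialize (), and the compiler cannot prove on every path that initialize () has run before the read in allocate ().  One plausible way to quiet it (a sketch of a possible fix, not necessarily the fix that was committed) is to give the size-related members a defined value in the constructor's initializer list:

template <typename T>
inline
pool_allocator<T>::pool_allocator (const char *name, size_t num,
				   size_t extra_size, bool ignore_type_size)
  : m_name (name), m_id (0), m_elts_per_block (num),
    m_returned_free_list (NULL), m_virgin_free_list (NULL),
    m_virgin_elts_remaining (0), m_elts_allocated (0), m_elts_free (0),
    m_blocks_allocated (0), m_block_list (NULL),
    /* Zero-initialize the sizes; initialize () overwrites them on the
       first allocation, and -Wmaybe-uninitialized has nothing to flag.  */
    m_block_size (0), m_elt_size (0),
    m_ignore_type_size (ignore_type_size), m_extra_size (extra_size),
    m_initialized (false) {}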

Patch

From b551d2349b342776213f518de0835ab54fa9fa03 Mon Sep 17 00:00:00 2001
From: mliska <mliska@suse.cz>
Date: Wed, 27 May 2015 15:56:44 +0200
Subject: [PATCH 01/32] Introduce new type-based pool allocator.

Hello.

The following patch set attempts to replace the old-style pool allocator
with a type-based one.  Moreover, as we utilize classes and structs that are
used only by a pool allocator, these types now have their own ctors and dtors.
Thus, using the allocator is much easier and we no longer have to cast types
back and forth.  Another benefit can be achieved in the future, as we will
be able to call class constructors to correctly register the location
where memory is allocated (-fgather-detailed-mem-stats).
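
To make the intended change of usage concrete, here is a small sketch; et_node_example and the pool name are placeholders, and the commented-out "before" side uses the existing alloc_pool API that this series gradually retires:

#include "alloc-pool.h"

/* Placeholder element type, used only for this example.  */
struct et_node_example { int depth; et_node_example *parent; };

static void
pool_usage_example ()
{
  /* Old style, for comparison (untyped pool, casts at the call sites):
       alloc_pool p = create_alloc_pool ("example pool",
					 sizeof (et_node_example), 300);
       et_node_example *n = (et_node_example *) pool_alloc (p);
       pool_free (p, n);
       free_alloc_pool (p);  */

  /* New style: the pool knows its element type, so no casts are needed.  */
  pool_allocator<et_node_example> pool ("example pool", 300);

  et_node_example *n = pool.allocate ();
  n->depth = 0;
  n->parent = NULL;

  pool.remove (n);   /* return the object to the pool */
  pool.release ();   /* free all blocks (also done by the destructor) */
}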

The patch bootstraps on x86_64-linux-gnu and ppc64-linux-gnu and
survives regression tests on x86_64-linux-gnu.

Ready for trunk?
Thanks,
Martin

gcc/ChangeLog:

2015-04-30  Martin Liska  <mliska@suse.cz>

	* alloc-pool.c (struct alloc_pool_descriptor): Move definition
	to header file.
	* alloc-pool.h (pool_allocator::pool_allocator): New function.
	(pool_allocator::release): Likewise.
	(inline pool_allocator::release_if_empty): Likewise.
	(inline pool_allocator::~pool_allocator): Likewise.
	(pool_allocator::allocate): Likewise.
	(pool_allocator::remove): Likewise.
---
 gcc/alloc-pool.c |  33 +----
 gcc/alloc-pool.h | 380 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 383 insertions(+), 30 deletions(-)

diff --git a/gcc/alloc-pool.c b/gcc/alloc-pool.c
index 81909d8..0bea7a6 100644
--- a/gcc/alloc-pool.c
+++ b/gcc/alloc-pool.c
@@ -25,6 +25,8 @@  along with GCC; see the file COPYING3.  If not see
 #include "hash-table.h"
 #include "hash-map.h"
 
+ALLOC_POOL_ID_TYPE last_id;
+
 #define align_eight(x) (((x+7) >> 3) << 3)
 
 /* The internal allocation object.  */
@@ -58,36 +60,10 @@  typedef struct allocation_object_def
 #define USER_PTR_FROM_ALLOCATION_OBJECT_PTR(X)				\
    ((void *) (((allocation_object *) (X))->u.data))
 
-#ifdef ENABLE_CHECKING
-/* Last used ID.  */
-static ALLOC_POOL_ID_TYPE last_id;
-#endif
-
-/* Store information about each particular alloc_pool.  Note that this
-   will underestimate the amount the amount of storage used by a small amount:
-   1) The overhead in a pool is not accounted for.
-   2) The unallocated elements in a block are not accounted for.  Note
-   that this can at worst case be one element smaller that the block
-   size for that pool.  */
-struct alloc_pool_descriptor
-{
-  /* Number of pools allocated.  */
-  unsigned long created;
-  /* Gross allocated storage.  */
-  unsigned long allocated;
-  /* Amount of currently active storage. */
-  unsigned long current;
-  /* Peak amount of storage used.  */
-  unsigned long peak;
-  /* Size of element in the pool.  */
-  int elt_size;
-};
-
 /* Hashtable mapping alloc_pool names to descriptors.  */
-static hash_map<const char *, alloc_pool_descriptor> *alloc_pool_hash;
+hash_map<const char *, alloc_pool_descriptor> *alloc_pool_hash;
 
-/* For given name, return descriptor, create new if needed.  */
-static struct alloc_pool_descriptor *
+struct alloc_pool_descriptor *
 allocate_pool_descriptor (const char *name)
 {
   if (!alloc_pool_hash)
@@ -96,6 +72,7 @@  allocate_pool_descriptor (const char *name)
   return &alloc_pool_hash->get_or_insert (name);
 }
 
+
 /* Create a pool of things of size SIZE, with NUM in each block we
    allocate.  */
 
diff --git a/gcc/alloc-pool.h b/gcc/alloc-pool.h
index 0c30711..ec671dc 100644
--- a/gcc/alloc-pool.h
+++ b/gcc/alloc-pool.h
@@ -20,6 +20,8 @@  along with GCC; see the file COPYING3.  If not see
 #ifndef ALLOC_POOL_H
 #define ALLOC_POOL_H
 
+#include "hash-map.h"
+
 typedef unsigned long ALLOC_POOL_ID_TYPE;
 
 typedef struct alloc_pool_list_def
@@ -31,9 +33,7 @@  typedef struct alloc_pool_list_def
 typedef struct alloc_pool_def
 {
   const char *name;
-#ifdef ENABLE_CHECKING
   ALLOC_POOL_ID_TYPE id;
-#endif
   size_t elts_per_block;
 
   /* These are the elements that have been allocated at least once and freed.  */
@@ -63,4 +63,380 @@  extern void free_alloc_pool_if_empty (alloc_pool *);
 extern void *pool_alloc (alloc_pool) ATTRIBUTE_MALLOC;
 extern void pool_free (alloc_pool, void *);
 extern void dump_alloc_pool_statistics (void);
+
+typedef unsigned long ALLOC_POOL_ID_TYPE;
+
+/* Type based memory pool allocator.  */
+template <typename T>
+class pool_allocator
+{
+public:
+  /* Default constructor for pool allocator called NAME.  Each block
+     has NUM elements.  The allocator supports EXTRA_SIZE and can
+     potentially IGNORE_TYPE_SIZE.  */
+  pool_allocator (const char *name, size_t num, size_t extra_size = 0,
+		  bool ignore_type_size = false);
+
+  /* Default destructor.  */
+  ~pool_allocator ();
+
+  /* Release internal data structures.  */
+  void release ();
+
+  /* Release internal data structures if the pool has not allocated
+     an object.  */
+  void release_if_empty ();
+
+  /* Allocate a new object.  */
+  T *allocate () ATTRIBUTE_MALLOC;
+
+  /* Release OBJECT that must come from the pool.  */
+  void remove (T *object);
+
+private:
+  struct allocation_pool_list
+  {
+    allocation_pool_list *next;
+  };
+
+  /* Initialize a pool allocator.  */
+  void initialize ();
+
+  template <typename U>
+  struct allocation_object
+  {
+    /* The ID of alloc pool which the object was allocated from.  */
+    ALLOC_POOL_ID_TYPE id;
+
+    union
+      {
+	/* The data of the object.  */
+	char data[1];
+
+	/* Because we want any type of data to be well aligned after the ID,
+	   the following elements are here.  They are never accessed so
+	   the allocated object may be even smaller than this structure.
+	   We do not care about alignment for floating-point types.  */
+	char *align_p;
+	int64_t align_i;
+      } u;
+
+    static inline allocation_object<U> *get_instance (void *data_ptr)
+    {
+      return (allocation_object<U> *)(((char *)(data_ptr))
+				      - offsetof (allocation_object<U>,
+						  u.data));
+    }
+
+    static inline U *get_data (void *instance_ptr)
+    {
+      return (U*)(((allocation_object<U> *) instance_ptr)->u.data);
+    }
+  };
+
+  /* Align X to 8.  */
+  size_t align_eight (size_t x)
+  {
+    return (((x+7) >> 3) << 3);
+  }
+
+  const char *m_name;
+  ALLOC_POOL_ID_TYPE m_id;
+  size_t m_elts_per_block;
+
+  /* These are the elements that have been allocated at least once
+     and freed.  */
+  allocation_pool_list *m_returned_free_list;
+
+  /* These are the elements that have not yet been allocated out of
+     the last block obtained from XNEWVEC.  */
+  char* m_virgin_free_list;
+
+  /* The number of elements in the virgin_free_list that can be
+     allocated before needing another block.  */
+  size_t m_virgin_elts_remaining;
+  /* The number of elements that are allocated.  */
+  size_t m_elts_allocated;
+  /* The number of elements that are released.  */
+  size_t m_elts_free;
+  /* The number of allocated blocks.  */
+  size_t m_blocks_allocated;
+  /* List of blocks that are used to allocate new objects.  */
+  allocation_pool_list *m_block_list;
+  /* The number of elements in a block.  */
+  size_t m_block_size;
+  /* Size of a pool element in bytes.  */
+  size_t m_elt_size;
+  /* Flag if we should ignore the size of the type.  */
+  bool m_ignore_type_size;
+  /* Extra size in bytes that should be allocated for each element.  */
+  size_t m_extra_size;
+  /* Flag if a pool allocator is initialized.  */
+  bool m_initialized;
+};
+
+/* Last used ID.  */
+extern ALLOC_POOL_ID_TYPE last_id;
+
+/* Store information about each particular alloc_pool.  Note that this
+   will underestimate the amount of storage used by a small amount:
+   1) The overhead in a pool is not accounted for.
+   2) The unallocated elements in a block are not accounted for.  Note
+   that this can at worst be one element smaller than the block
+   size for that pool.  */
+struct alloc_pool_descriptor
+{
+  /* Number of pools allocated.  */
+  unsigned long created;
+  /* Gross allocated storage.  */
+  unsigned long allocated;
+  /* Amount of currently active storage.  */
+  unsigned long current;
+  /* Peak amount of storage used.  */
+  unsigned long peak;
+  /* Size of element in the pool.  */
+  int elt_size;
+};
+
+
+/* Hashtable mapping alloc_pool names to descriptors.  */
+extern hash_map<const char *, alloc_pool_descriptor> *alloc_pool_hash;
+
+/* For given name, return descriptor, create new if needed.  */
+alloc_pool_descriptor *
+allocate_pool_descriptor (const char *name);
+
+template <typename T>
+inline
+pool_allocator<T>::pool_allocator (const char *name, size_t num,
+				   size_t extra_size, bool ignore_type_size):
+  m_name (name), m_elts_per_block (num), m_returned_free_list (NULL),
+  m_virgin_free_list (NULL), m_virgin_elts_remaining (0), m_elts_allocated (0),
+  m_elts_free (0), m_blocks_allocated (0), m_block_list (NULL),
+  m_ignore_type_size (ignore_type_size), m_extra_size (extra_size),
+  m_initialized (false) {}
+
+/* Initialize a pool allocator.  */
+
+template <typename T>
+void
+pool_allocator<T>::initialize ()
+{
+  gcc_checking_assert (!m_initialized);
+  m_initialized = true;
+
+  size_t header_size;
+  size_t size = (m_ignore_type_size ? 0 : sizeof (T)) + m_extra_size;
+
+  gcc_checking_assert (m_name);
+
+  /* Make size large enough to store the list header.  */
+  if (size < sizeof (allocation_pool_list*))
+    size = sizeof (allocation_pool_list*);
+
+  /* Now align the size to a multiple of 8.  */
+  size = align_eight (size);
+
+  /* Add the aligned size of ID.  */
+  size += offsetof (allocation_object<T>, u.data);
+
+  /* Um, we can't really allocate 0 elements per block.  */
+  gcc_checking_assert (m_elts_per_block);
+
+  m_elt_size = size;
+
+  if (GATHER_STATISTICS)
+    {
+      alloc_pool_descriptor *desc = allocate_pool_descriptor (m_name);
+      desc->elt_size = size;
+      desc->created++;
+    }
+
+  /* List header size should be a multiple of 8.  */
+  header_size = align_eight (sizeof (allocation_pool_list));
+
+  m_block_size = (size * m_elts_per_block) + header_size;
+
+#ifdef ENABLE_CHECKING
+  /* Increase the last used ID and use it for this pool.
+     ID == 0 is used for free elements of pool so skip it.  */
+  last_id++;
+  if (last_id == 0)
+    last_id++;
+
+  m_id = last_id;
+#endif
+
+}
+
+/* Free all memory allocated for the given memory pool.  */
+template <typename T>
+inline void
+pool_allocator<T>::release ()
+{
+  if (!m_initialized)
+    return;
+
+  allocation_pool_list *block, *next_block;
+
+  /* Free each block allocated to the pool.  */
+  for (block = m_block_list; block != NULL; block = next_block)
+    {
+      next_block = block->next;
+      free (block);
+    }
+
+  if (GATHER_STATISTICS && false)
+    {
+      alloc_pool_descriptor *desc = allocate_pool_descriptor (m_name);
+      desc->current -= (m_elts_allocated - m_elts_free) * m_elt_size;
+    }
+
+  m_returned_free_list = NULL;
+  m_virgin_free_list = NULL;
+  m_virgin_elts_remaining = 0;
+  m_elts_allocated = 0;
+  m_elts_free = 0;
+  m_blocks_allocated = 0;
+  m_block_list = NULL;
+}
+
+template <typename T>
+void
+inline pool_allocator<T>::release_if_empty ()
+{
+  if (m_elts_free == m_elts_allocated)
+    release ();
+}
+
+template <typename T>
+inline pool_allocator<T>::~pool_allocator ()
+{
+  release ();
+}
+
+/* Allocates one element from the pool specified.  */
+template <typename T>
+inline T *
+pool_allocator<T>::allocate ()
+{
+  if (!m_initialized)
+    initialize ();
+
+  allocation_pool_list *header;
+#ifdef ENABLE_VALGRIND_ANNOTATIONS
+  int size;
+#endif
+
+  if (GATHER_STATISTICS)
+    {
+      alloc_pool_descriptor *desc = allocate_pool_descriptor (m_name);
+
+      desc->allocated += m_elt_size;
+      desc->current += m_elt_size;
+      if (desc->peak < desc->current)
+	desc->peak = desc->current;
+    }
+
+#ifdef ENABLE_VALGRIND_ANNOTATIONS
+  size = m_elt_size - offsetof (allocation_object<T>, u.data);
+#endif
+
+  /* If there are no more free elements, make some more!  */
+  if (!m_returned_free_list)
+    {
+      char *block;
+      if (!m_virgin_elts_remaining)
+	{
+	  allocation_pool_list *block_header;
+
+	  /* Make the block.  */
+	  block = XNEWVEC (char, m_block_size);
+	  block_header = (allocation_pool_list*) block;
+	  block += align_eight (sizeof (allocation_pool_list));
+
+	  /* Throw it on the block list.  */
+	  block_header->next = m_block_list;
+	  m_block_list = block_header;
+
+	  /* Make the block available for allocation.  */
+	  m_virgin_free_list = block;
+	  m_virgin_elts_remaining = m_elts_per_block;
+
+	  /* Also update the number of elements we have free/allocated, and
+	     increment the allocated block count.  */
+	  m_elts_allocated += m_elts_per_block;
+	  m_elts_free += m_elts_per_block;
+	  m_blocks_allocated += 1;
+	}
+
+      /* We now know that we can take the first elt off the virgin list and
+	 put it on the returned list.  */
+      block = m_virgin_free_list;
+      header = (allocation_pool_list*) allocation_object<T>::get_data (block);
+      header->next = NULL;
+#ifdef ENABLE_CHECKING
+      /* Mark the element to be free.  */
+      ((allocation_object<T> *) block)->id = 0;
+#endif
+      VALGRIND_DISCARD (VALGRIND_MAKE_MEM_NOACCESS (header, size));
+      m_returned_free_list = header;
+      m_virgin_free_list += m_elt_size;
+      m_virgin_elts_remaining--;
+
+    }
+
+  /* Pull the first free element from the free list, and return it.  */
+  header = m_returned_free_list;
+  VALGRIND_DISCARD (VALGRIND_MAKE_MEM_DEFINED (header, sizeof (*header)));
+  m_returned_free_list = header->next;
+  m_elts_free--;
+
+#ifdef ENABLE_CHECKING
+  /* Set the ID for element.  */
+  allocation_object<T>::get_instance (header)->id = m_id;
+#endif
+  VALGRIND_DISCARD (VALGRIND_MAKE_MEM_UNDEFINED (header, size));
+
+  /* Return the element uninitialized; no constructor is called here.  */
+  return (T *)(header);
+}
+
+/* Puts OBJECT back on the pool's free list.  */
+template <typename T>
+void
+pool_allocator<T>::remove (T *object)
+{
+  gcc_checking_assert (m_initialized);
+
+  allocation_pool_list *header;
+  int size;
+  size = m_elt_size - offsetof (allocation_object<T>, u.data);
+
+#ifdef ENABLE_CHECKING
+  gcc_assert (object
+	      /* Check if we free more than we allocated, which is Bad (TM).  */
+	      && m_elts_free < m_elts_allocated
+	      /* Check whether the PTR was allocated from POOL.  */
+	      && m_id == allocation_object<T>::get_instance (object)->id);
+
+  memset (object, 0xaf, size);
+
+  /* Mark the element to be free.  */
+  allocation_object<T>::get_instance (object)->id = 0;
+#endif
+
+  header = (allocation_pool_list*) object;
+  header->next = m_returned_free_list;
+  m_returned_free_list = header;
+  VALGRIND_DISCARD (VALGRIND_MAKE_MEM_NOACCESS (object, size));
+  m_elts_free++;
+
+  if (GATHER_STATISTICS)
+    {
+      alloc_pool_descriptor *desc = allocate_pool_descriptor (m_name);
+      desc->current -= m_elt_size;
+    }
+}
+
 #endif
-- 
2.1.4