
Fix tcache count maximum

Message ID AM6PR08MB5078A52962A76A1598147CB5832B0@AM6PR08MB5078.eurprd08.prod.outlook.com
State New

Commit Message

Wilco Dijkstra April 15, 2019, 2:32 p.m. UTC
There are 2 issues with tcache count: the tcache counts[] array
is a char, which may be signed and has a very small range and thus
may overflow.  When setting it, there is no overflow check.

Eg. export GLIBC_TUNABLES=glibc.malloc.tcache_count=4096
leads to various crashes in benchtests:

Running /build/glibc/benchtests/bench-strcoll
bench-strcoll: malloc.c:2949: tcache_get: Assertion `tcache->counts[tc_idx] > 0' failed.
Aborted

Btw is this kind of crash regarded as a security risk? It's easy to do a
denial of service this way if you're able to set an environment variable.

ChangeLog:
2019-04-15  Wilco Dijkstra  <wdijkstr@arm.com>

        * malloc/malloc.c (tcache_perthread_struct): Use an
        unsigned short for counts array.
        (MAX_TCACHE_COUNT): New define.
        (do_set_tcache_count): Only update if count is small enough.

--

Comments

Carlos O'Donell April 29, 2019, 9 p.m. UTC | #1
On 4/15/19 10:32 AM, Wilco Dijkstra wrote:
> There are 2 issues with tcache count: the tcache counts[] array
> is a char, which may be signed and has a very small range and thus
> may overflow.  When setting it, there is no overflow check.
> 
> Eg. export GLIBC_TUNABLES=glibc.malloc.tcache_count=4096
> leads to various crashes in benchtests:
> 
> Running /build/glibc/benchtests/bench-strcoll
> bench-strcoll: malloc.c:2949: tcache_get: Assertion `tcache->counts[tc_idx] > 0' failed.
> Aborted
> 
> Btw is this kind of crash regarded as a security risk? It's easy to do a
> denial of service this way if you're able to set an environment variable.

I don't think this is a security bug. If you control the environment variables
going into a non-SUID application then you can use LD_PRELOAD to load any other
library you have on the system, or take control of a lot of other glibc options.

> ChangeLog:
> 2019-04-15  Wilco Dijkstra  <wdijkstr@arm.com>
> 

Please create a bug for this.

This is a publicly visible issue with tcache and tunables.

>          * malloc/malloc.c (tcache_perthread_struct): Use an
>          unsigned short for counts array.
>          (MAX_TCACHE_COUNT): New define.
>          (do_set_tcache_count): Only update if count is small enough.
> 
> --
> 
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 801ba1f499b566e677b763fc84f8ba86f4f7ccd0..4db7283cc27118cd7d39410febf7be8f7633780a 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -2915,10 +2915,12 @@ typedef struct tcache_entry
>      time), this is for performance reasons.  */
>   typedef struct tcache_perthread_struct
>   {
> -  char counts[TCACHE_MAX_BINS];
> +  unsigned short counts[TCACHE_MAX_BINS];

This patch conflates two issues.

(a) Adding checking to tunable.

(b) Lifting limit.

Please split into two bugs. One to fix tunables, another to raise the
tcache chunk caching limit.

If you are going to do (b) and change the sign of counts then you need
to go through and fix other code that expects to possibly have a
negative value.

e.g.

2925 /* Caller must ensure that we know tc_idx is valid and there's room
2926    for more chunks.  */
2927 static __always_inline void
2928 tcache_put (mchunkptr chunk, size_t tc_idx)
2929 {
2930   tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
2931   assert (tc_idx < TCACHE_MAX_BINS);
2932
2933   /* Mark this chunk as "in the tcache" so the test in _int_free will
2934      detect a double free.  */
2935   e->key = tcache;
2936
2937   e->next = tcache->entries[tc_idx];
2938   tcache->entries[tc_idx] = e;
2939   ++(tcache->counts[tc_idx]);

        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
        assert (tcache->counts[tc_idx] != 0);
        See below for discussion.

2940 }


2942 /* Caller must ensure that we know tc_idx is valid and there's
2943    available chunks to remove.  */
2944 static __always_inline void *
2945 tcache_get (size_t tc_idx)
2946 {
2947   tcache_entry *e = tcache->entries[tc_idx];
2948   assert (tc_idx < TCACHE_MAX_BINS);
2949   assert (tcache->counts[tc_idx] > 0);

        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        Always true now if counts is only positive.
        Remove?

2950   tcache->entries[tc_idx] = e->next;
2951   --(tcache->counts[tc_idx]);

        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
        May wrap on error, should we check that and assert?
        We expect the caller to check for != NULL entry,
        indicating at least one entry. It's possible the list
        is corrupt and 'e' is pointing to garbage, so an
        assert might be good here?

        assert (tcache->counts[tc_idx] != MAX_TCACHE_COUNT);

2952   e->key = NULL;
2953   return (void *) e;
2954 }

The manual/memory.texi needs updating because you made the
count twice the size, and the rough estimates for size of
tcache should be updated. The manual should also list the
actual limit being imposed.

>     tcache_entry *entries[TCACHE_MAX_BINS];
>   } tcache_perthread_struct;
>   
> +#define MAX_TCACHE_COUNT 65535	/* Maximum value of counts[] entries.  */

OK. But see notes above about a new bug, distinct patch, and
manual update.

> +
>   static __thread bool tcache_shutting_down = false;
>   static __thread tcache_perthread_struct *tcache = NULL;
>   
> @@ -5114,8 +5116,11 @@ do_set_tcache_max (size_t value)
>   static __always_inline int
>   do_set_tcache_count (size_t value)
>   {
> -  LIBC_PROBE (memory_tunable_tcache_count, 2, value, mp_.tcache_count);
> -  mp_.tcache_count = value;
> +  if (value <= MAX_TCACHE_COUNT)
> +    {
> +      LIBC_PROBE (memory_tunable_tcache_count, 2, value, mp_.tcache_count);
> +      mp_.tcache_count = value;

OK.

> +    }
>     return 1;
>   }
>   
>
Wilco Dijkstra May 7, 2019, 2:30 p.m. UTC | #2
Hi Carlos,

> Please create a bug for this.
>
> This is a publicly visible issue with tcache and tunables.

Sure, BZ 24531.

> This patch conflates two issues.
>
> (a) Adding checking to tunable.
>
> (b) Lifting limit.
>
> Please split into two bugs. One to fix tunables, another to raise the
> tcache chunk caching limit.

> If you are going to do (b) and change the sign of counts then you need
> to go through and fix other code that expects to possibly have a
> negative value.

If there is any code that expects it to be negative, that's a serious bug...
Char is neither signed nor unsigned, the valid range for char is 0..127.

2939   ++(tcache->counts[tc_idx]);

        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
        assert (tcache->counts[tc_idx] != 0);
        See below for discussion.

In all cases we already check tcache->counts[tc_idx] < mp_.tcache_count,
so there can be no overflow as long as mp_.tcache_count does not exceed the
maximum value of tcache->counts[] entries. So no extra checks are needed.

2947   tcache_entry *e = tcache->entries[tc_idx];
2948   assert (tc_idx < TCACHE_MAX_BINS);
2949   assert (tcache->counts[tc_idx] > 0);

        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        Always true now if counts is only positive.
        Remove?

Yes, this assert is redundant since we already checked there is a valid entry
(or just added several new entries). So this assert can never trigger; it could
only fail if tcache_put had an overflow bug.

2950   tcache->entries[tc_idx] = e->next;
2951   --(tcache->counts[tc_idx]);

        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
        May wrap on error, should we check that and assert?
        We expect the caller to check for != NULL entry,
        indicating at least one entry. It's possible the list
        is corrupt and 'e' is pointing to garbage, so an
        assert might be good here?

        assert (tcache->counts[tc_idx] != MAX_TCACHE_COUNT);

No, this can't underflow after we fix the overflow bug.

> The manual/memory.texi needs updating because you made the
> count twice the size, and the rough estimates for size of
> tcache should be updated. The manual should also list the
> actual limit being imposed.

Which size do you mean? I can't find anything in manual/memory.texi
referring to tcache. I did update manual/tunables.texi, which incorrectly
stated there is no limit on glibc.malloc.tcache_count.

See updated patch below - this should be simple and safe to backport.

Cheers,
Wilco


[PATCH v2] Fix tcache count maximum (BZ #24531)

The tcache counts[] array is a char, which has a very small range and thus
may overflow.  When setting tcache_count tunable, there is no overflow check.
However the tunable must not be larger than the maximum value of the tcache
counts[] array, otherwise it can overflow when filling the tcache.

Eg. export GLIBC_TUNABLES=glibc.malloc.tcache_count=4096
leads to crashes in benchtests:

Running /build/glibc/benchtests/bench-strcoll
bench-strcoll: malloc.c:2949: tcache_get: Assertion `tcache->counts[tc_idx] > 0' failed.
Aborted

ChangeLog:
2019-05-07  Wilco Dijkstra  <wdijkstr@arm.com>

        [BZ #24531]
        * malloc/malloc.c (MAX_TCACHE_COUNT): New define.
        (tcache_put): Remove redundant assert.
        (do_set_tcache_count): Only update if count is small enough.
        * manual/tunables.texi (glibc.malloc.tcache_count): Document max value.

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 0e3d4dd5163f5fa8fb07b71fb7e318e7b10f5cfd..e03a14aabe5d4a1ca28eb5c0865e03606a70e1d6 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -2905,6 +2905,8 @@ typedef struct tcache_perthread_struct
   tcache_entry *entries[TCACHE_MAX_BINS];
 } tcache_perthread_struct;
 
+#define MAX_TCACHE_COUNT 127	/* Maximum value of counts[] entries.  */
+
 static __thread bool tcache_shutting_down = false;
 static __thread tcache_perthread_struct *tcache = NULL;
 
@@ -2932,7 +2934,6 @@ tcache_get (size_t tc_idx)
 {
   tcache_entry *e = tcache->entries[tc_idx];
   assert (tc_idx < TCACHE_MAX_BINS);
-  assert (tcache->counts[tc_idx] > 0);
   tcache->entries[tc_idx] = e->next;
   --(tcache->counts[tc_idx]);
   e->key = NULL;
@@ -5098,8 +5099,11 @@ do_set_tcache_max (size_t value)
 static __always_inline int
 do_set_tcache_count (size_t value)
 {
-  LIBC_PROBE (memory_tunable_tcache_count, 2, value, mp_.tcache_count);
-  mp_.tcache_count = value;
+  if (value <= MAX_TCACHE_COUNT)
+    {
+      LIBC_PROBE (memory_tunable_tcache_count, 2, value, mp_.tcache_count);
+      mp_.tcache_count = value;
+    }
   return 1;
 }
 
diff --git a/manual/tunables.texi b/manual/tunables.texi
index 749cabff1b003f20e36f793a268f5f77944aafbb..ae638823a21b9cc7aca3684c8e3067cb8cd287e0 100644
--- a/manual/tunables.texi
+++ b/manual/tunables.texi
@@ -189,8 +189,8 @@ per-thread cache.  The default (and maximum) value is 1032 bytes on
 
 @deftp Tunable glibc.malloc.tcache_count
 The maximum number of chunks of each size to cache. The default is 7.
-There is no upper limit, other than available system memory.  If set
-to zero, the per-thread cache is effectively disabled.
+The upper limit is 127.  If set to zero, the per-thread cache is effectively
+disabled.
 
 The approximate maximum overhead of the per-thread cache is thus equal
 to the number of bins times the chunk count in each bin times the size
Carlos O'Donell May 8, 2019, 4:53 p.m. UTC | #3
On 5/7/19 10:30 AM, Wilco Dijkstra wrote:
> Hi Carlos,
> 
>> Please create a bug for this.
>>
>> This is a publicly visible issue with tcache and tunables.
> 
> Sure, BZ 24531.

Thanks.

>> This patch conflates two issues.
>>
>> (a) Adding checking to tunable.
>>
>> (b) Lifting limit.
>>
>> Please split into two bugs. One to fix tunables, another to raise the
>> tcache chunk caching limit.
> 
>> If you are going to do (b) and change the sign of counts then you need
>> to go through and fix other code that expects to possibly have a
>> negative value.
> 
> If there is any code that expects it to be negative, that's a serious bug...
> Char is neither signed nor unsigned, the valid range for char is 0..127.

This is not correct.

Char's sign is implementation defined.

So it's not a serious bug, but it's a non-portable assumption we should fix.

I don't know if gcc makes the sign of char vary by architecture or not.

> 
> 2939   ++(tcache->counts[tc_idx]);
> 
>          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
>          assert (tcache->counts[tc_idx] != 0);
>          See below for discussion.
> 
> In all cases we already check tcache->counts[tc_idx] < mp_.tcache_count,
> so there can be no overflow iff mp_.tcache_count is the maximum value of
> tcache->counts[] entries. So no checks needed.
> 
> 2947   tcache_entry *e = tcache->entries[tc_idx];
> 2948   assert (tc_idx < TCACHE_MAX_BINS);
> 2949   assert (tcache->counts[tc_idx] > 0);
> 
>          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>          Always true now if counts is only positive.
>          Remove?
> 
> Yes this assert is redundant since we already checked there is a valid entry
> (or just added several new entries). So this assert can never trigger, it only
> fails if tcache_put has an overflow bug.
> 
> 2950   tcache->entries[tc_idx] = e->next;
> 2951   --(tcache->counts[tc_idx]);
> 
>          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
>          May wrap on error, should we check that and assert?
>          We expect the caller to check for != NULL entry,
>          indicating at least one entry. It's possible the list
>          is corrupt and 'e' is pointing to garbage, so an
>          assert might be good here?
> 
>          assert (tcache->counts[tc_idx] != MAX_TCACHE_COUNT);
> 
> No this can't underflow after we fix the overflow bug.

OK.

>> The manual/memory.texi needs updating because you made the
>> count twice the size, and the rough estimates for size of
>> tcache should be updated. The manual should also list the
>> actual limit being imposed.
> 
> Which size do you mean? I can't find anything in manual/memory.texi
> referring to tcache. I did update manual/tunables.texi, which incorrectly
> stated there is no limit on glibc.malloc.tcache_count.

When you extend the counts will you need to update the size estimates?

glibc/manual/tunables.texi:

195 The approximate maximum overhead of the per-thread cache is thus equal
196 to the number of bins times the chunk count in each bin times the size
197 of each chunk.  With defaults, the approximate maximum overhead of the
198 per-thread cache is approximately 236 KB on 64-bit systems and 118 KB
199 on 32-bit systems.
200 @end deftp


> See updated patch below - this should be simple and safe to backport.
> 
> Cheers,
> Wilco
> 
> 
> [PATCH v2] Fix tcache count maximum (BZ #24531)
> 
> The tcache counts[] array is a char, which has a very small range and thus
> may overflow.  When setting tcache_count tunable, there is no overflow check.
> However the tunable must not be larger than the maximum value of the tcache
> counts[] array, otherwise it can overflow when filling the tcache.
> 
> Eg. export GLIBC_TUNABLES=glibc.malloc.tcache_count=4096
> leads to crashes in benchtests:
> 
> Running /build/glibc/benchtests/bench-strcoll
> bench-strcoll: malloc.c:2949: tcache_get: Assertion `tcache->counts[tc_idx] > 0' failed.
> Aborted
> 
> ChangeLog:
> 2019-05-07  Wilco Dijkstra  <wdijkstr@arm.com>
> 
>          [BZ #24531]
>          * malloc/malloc.c (MAX_TCACHE_COUNT): New define.
>          (tcache_put): Remove redundant assert.
>          (do_set_tcache_count): Only update if count is small enough.
>          * manual/tunables.texi (glibc.malloc.tcache_count): Document max value.
> 

This looks good to me!

Thank you.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>

> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 0e3d4dd5163f5fa8fb07b71fb7e318e7b10f5cfd..e03a14aabe5d4a1ca28eb5c0865e03606a70e1d6 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -2905,6 +2905,8 @@ typedef struct tcache_perthread_struct
>     tcache_entry *entries[TCACHE_MAX_BINS];
>   } tcache_perthread_struct;
>   
> +#define MAX_TCACHE_COUNT 127	/* Maximum value of counts[] entries.  */
> +

OK.

>   static __thread bool tcache_shutting_down = false;
>   static __thread tcache_perthread_struct *tcache = NULL;
>   
> @@ -2932,7 +2934,6 @@ tcache_get (size_t tc_idx)
>   {
>     tcache_entry *e = tcache->entries[tc_idx];
>     assert (tc_idx < TCACHE_MAX_BINS);
> -  assert (tcache->counts[tc_idx] > 0);

OK.

>     tcache->entries[tc_idx] = e->next;
>     --(tcache->counts[tc_idx]);
>     e->key = NULL;
> @@ -5098,8 +5099,11 @@ do_set_tcache_max (size_t value)
>   static __always_inline int
>   do_set_tcache_count (size_t value)
>   {
> -  LIBC_PROBE (memory_tunable_tcache_count, 2, value, mp_.tcache_count);
> -  mp_.tcache_count = value;
> +  if (value <= MAX_TCACHE_COUNT)
> +    {
> +      LIBC_PROBE (memory_tunable_tcache_count, 2, value, mp_.tcache_count);
> +      mp_.tcache_count = value;
> +    }

OK.

>     return 1;
>   }
>   
> diff --git a/manual/tunables.texi b/manual/tunables.texi
> index 749cabff1b003f20e36f793a268f5f77944aafbb..ae638823a21b9cc7aca3684c8e3067cb8cd287e0 100644
> --- a/manual/tunables.texi
> +++ b/manual/tunables.texi
> @@ -189,8 +189,8 @@ per-thread cache.  The default (and maximum) value is 1032 bytes on
>   
>   @deftp Tunable glibc.malloc.tcache_count
>   The maximum number of chunks of each size to cache. The default is 7.
> -There is no upper limit, other than available system memory.  If set
> -to zero, the per-thread cache is effectively disabled.
> +The upper limit is 127.  If set to zero, the per-thread cache is effectively
> +disabled.

OK.

>   
>   The approximate maximum overhead of the per-thread cache is thus equal
>   to the number of bins times the chunk count in each bin times the size
>
Wilco Dijkstra May 8, 2019, 5:18 p.m. UTC | #4
Hi DJ,

>>> -  assert (tcache->counts[tc_idx] > 0);
>>
>> OK.
>
> Note I still want further justification for this one.

Well I already mentioned that all calls to tcache_get ensure there
is an entry:

  if (tc_idx < mp_.tcache_bins
      /*&& tc_idx < TCACHE_MAX_BINS*/ /* to appease gcc */
      && tcache
      && tcache->entries[tc_idx] != NULL)
    {
      return tcache_get (tc_idx);
    }

Here we've explicitly checked that the linked list contains at least one
element.

      if (return_cached
          && mp_.tcache_unsorted_limit > 0
          && tcache_unsorted_count > mp_.tcache_unsorted_limit)
        {
          return tcache_get (tc_idx);
        }

      if (return_cached)
        {
          return tcache_get (tc_idx);
        }

These cases can only call tcache_get if return_cached is true. This is set
by this code:

             /* Fill cache first, return to user only if cache fills.
                 We may return one of these chunks later.  */
              if (tcache_nb
                  && tcache->counts[tc_idx] < mp_.tcache_count)
                {
                  tcache_put (victim, tc_idx);
                  return_cached = 1;
                  continue;
                }

Now it is of course feasible to overwrite the tcache count or the entries, or to
corrupt the blocks held in the tcache list. If that happens then all bets are off,
since any targeted corruption can be made to look like a valid entry. This is true
for all the malloc data structures: you would need to encrypt all the fields to
reduce the chances of spoofing them, but that is way too much overhead.

Note that these asserts are trivially redundant too:

static __always_inline void
tcache_put (mchunkptr chunk, size_t tc_idx)
{
  tcache_entry *e = (tcache_entry *) chunk2mem (chunk);
  assert (tc_idx < TCACHE_MAX_BINS);
...

static __always_inline void *
tcache_get (size_t tc_idx)
{
  tcache_entry *e = tcache->entries[tc_idx];
  assert (tc_idx < TCACHE_MAX_BINS);

Adding these asserts just makes the tcache slower without making it any safer.

Wilco
Wilco Dijkstra May 8, 2019, 5:35 p.m. UTC | #5
Hi Carlos,

> Char's sign is implementation defined.
> 
> So it's not a serious bug, but it's a non-portable assumption we should fix.
>
> I don't know if gcc makes the sign of char vary by architecture or not.

Char can be signed or unsigned per target, which means portable code using
char can only safely use the intersection of the signed and unsigned ranges
(unless you only use equality comparisons, where signedness is irrelevant).

> When you extend the counts will you need to update the size estimates?
>
> glibc/manual/tunables.texi:
>
> 195 The approximate maximum overhead of the per-thread cache is thus equal
> 196 to the number of bins times the chunk count in each bin times the size
> 197 of each chunk.  With defaults, the approximate maximum overhead of the
> 198 per-thread cache is approximately 236 KB on 64-bit systems and 118 KB
> 199 on 32-bit systems.
> 200 @end deftp

That is the maximum size of the blocks contained in the tcache, not the size
overhead of the tcache data structure itself. My original change would add just 64
bytes, but even if we made the count array a size_t, it would only add 448 bytes on
a 64-bit target, i.e. a tiny fraction of the maximum tcache size of 236KB.

Wilco
Carlos O'Donell May 8, 2019, 7:27 p.m. UTC | #6
On 5/8/19 1:35 PM, Wilco Dijkstra wrote:
> Hi Carlos,
> 
>> Char's sign is implementation defined.
>>
>> So it's not a serious bug, but it's a non-portable assumption we should fix.
>>
>> I don't know if gcc makes the sign of char vary by architecture or not.
> 
> Char can be signed or unsigned per target, so that means portable code using
> char can only use the intersection of the signed and unsigned ranges safely
> (unless you only use equality so signedness is irrelevant).
> 
>> When you extend the counts will you need to update the size estimates?
>>
>> glibc/manual/tunables.texi:
>>
>> 195 The approximate maximum overhead of the per-thread cache is thus equal
>> 196 to the number of bins times the chunk count in each bin times the size
>> 197 of each chunk.  With defaults, the approximate maximum overhead of the
>> 198 per-thread cache is approximately 236 KB on 64-bit systems and 118 KB
>> 199 on 32-bit systems.
>> 200 @end deftp
> 
> That is the maximum size of the blocks contained in the tcache, not the size
> overhead of the tcache datastructure itself. My original change would add just 64
> bytes, but even if we made the count array a size_t, it would add 448 bytes on a
> 64-bit target, ie. a tiny fraction of the maximum tcache size of 236KB.

Thanks for reviewing that. I wonder if we shouldn't just say 256KiB here and
128KiB respectively, giving round, easy-to-understand values which are *higher*
than expected to allow for this kind of change?
Wilco Dijkstra May 8, 2019, 8:33 p.m. UTC | #7
Hi Carlos,

>> glibc/manual/tunables.texi:
>>
>> 195 The approximate maximum overhead of the per-thread cache is thus equal
>> 196 to the number of bins times the chunk count in each bin times the size
>> 197 of each chunk.  With defaults, the approximate maximum overhead of the
>> 198 per-thread cache is approximately 236 KB on 64-bit systems and 118 KB
>> 199 on 32-bit systems.
>> 200 @end deftp
> 
> That is the maximum size of the blocks contained in the tcache, not the size
> overhead of the tcache datastructure itself. My original change would add just 64
> bytes, but even if we made the count array a size_t, it would add 448 bytes on a
> 64-bit target, ie. a tiny fraction of the maximum tcache size of 236KB.

> Thanks for reviewing that. I wonder if we shouldn't just say 256KiB here and
> 128KiB respectively, so give round easy to understand values which are *higher*
> than expected to allow for this kind of change?

Well the text is quite misleading already. Firstly, blocks contained in the tcache
are not "overhead": it's the maximum amount of free memory that the tcache can hold
per thread. However, few applications use blocks of every size from 16, 32, 48 and
64 all the way up to 1KB, so the typical amount held is a tiny fraction of the
maximum. This memory is not leaked, since it is still available to that thread;
there just isn't a mechanism to reclaim it if a thread does no further allocations
but doesn't exit either.

Secondly, a single free block in the tcache can block a whole multi-gigabyte arena
from being freed and returned to the system. That's a much more significant
bug than this maximum "overhead".

Wilco
Carlos O'Donell May 8, 2019, 8:54 p.m. UTC | #8
On 5/8/19 4:33 PM, Wilco Dijkstra wrote:
> Hi Carlos,
> 
>>> glibc/manual/tunables.texi:
>>>
>>> 195 The approximate maximum overhead of the per-thread cache is thus equal
>>> 196 to the number of bins times the chunk count in each bin times the size
>>> 197 of each chunk.  With defaults, the approximate maximum overhead of the
>>> 198 per-thread cache is approximately 236 KB on 64-bit systems and 118 KB
>>> 199 on 32-bit systems.
>>> 200 @end deftp
>>
>> That is the maximum size of the blocks contained in the tcache, not the size
>> overhead of the tcache datastructure itself. My original change would add just 64
>> bytes, but even if we made the count array a size_t, it would add 448 bytes on a
>> 64-bit target, ie. a tiny fraction of the maximum tcache size of 236KB.
> 
>> Thanks for reviewing that. I wonder if we shouldn't just say 256KiB here and
>> 128KiB respectively, so give round easy to understand values which are *higher*
>> than expected to allow for this kind of change?
> 
> Well the text is quite misleading already. Firstly blocks contained in tcache are
> not "overhead". It's the maximum amount of free memory that tcache can hold
> per thread. However few applications use blocks of size 16 and 32 and 48 and 64
> all the way up to 1KB. So the typical amount is a tiny fraction of the maximum.
> This memory is not leaked since it is still available to that thread. It's just that there
> isn't a mechanism to reclaim it if a thread does no further allocations but doesn't
> exit either.

The tcache is a cost to the thread, above and beyond what it might be using,
so in that sense we call it an "overhead." It's overhead because calling free
will not lower the RSS used by the cache, nor will malloc_trim() reclaim it.

The overhead is not free memory, it's not free, it's held by tcache, and not
available for use by any other thread or any other request. Calling it free'd
could be confused with actual free chunks, so I want to avoid that.

You make a statement about application workload patterns; could you expand on
that a bit? I'd like to understand the conclusion you're trying to draw,
i.e. "few applications use".

> Secondly a single free block in tcache can block a whole multi-gigabyte arena
> from being freed and returned to the system. That's a much more significant
> bug than this maximum "overhead".

This is a common complaint with heap-based allocators, and pathological worst
cases can always be found for any allocator. It is a distinct issue from the
issue at hand (though I'm happy to start another thread on the topic).

Lastly, you can reclaim all that RSS by calling malloc_trim(), which provides
non-heap-top reclamation, but this is not automatic (we don't create hidden
threads to do temporal page reclamation).
Carlos O'Donell May 8, 2019, 9 p.m. UTC | #9
On 5/8/19 4:33 PM, Wilco Dijkstra wrote:
> Hi Carlos,
> 
>>> glibc/manual/tunables.texi:
>>>
>>> 195 The approximate maximum overhead of the per-thread cache is thus equal
>>> 196 to the number of bins times the chunk count in each bin times the size
>>> 197 of each chunk.  With defaults, the approximate maximum overhead of the
>>> 198 per-thread cache is approximately 236 KB on 64-bit systems and 118 KB
>>> 199 on 32-bit systems.
>>> 200 @end deftp
>>
>> That is the maximum size of the blocks contained in the tcache, not the size
>> overhead of the tcache datastructure itself. My original change would add just 64
>> bytes, but even if we made the count array a size_t, it would add 448 bytes on a
>> 64-bit target, ie. a tiny fraction of the maximum tcache size of 236KB.
> 
>> Thanks for reviewing that. I wonder if we shouldn't just say 256KiB here and
>> 128KiB respectively, so give round easy to understand values which are *higher*
>> than expected to allow for this kind of change?
> 
> Well the text is quite misleading already. Firstly blocks contained in tcache are
> not "overhead". It's the maximum amount of free memory that tcache can hold
> per thread. However few applications use blocks of size 16 and 32 and 48 and 64
> all the way up to 1KB. So the typical amount is a tiny fraction of the maximum.
> This memory is not leaked since it is still available to that thread. It's just that there
> isn't a mechanism to reclaim it if a thread does no further allocations but doesn't
> exit either.
> 
> Secondly a single free block in tcache can block a whole multi-gigabyte arena
> from being freed and returned to the system. That's a much more significant
> bug than this maximum "overhead".

Just to be clear, your patch was OK for me. The above is a total digression,
sorry for that.

You have one outstanding issue to resolve with DJ.

I suggest iterating with DJ so we can commit this.
Wilco Dijkstra May 9, 2019, 12:29 p.m. UTC | #10
Hi Carlos,

> The tcache is a cost to the thread, above and beyond what it might be using,
> so in that sense we call it an "overhead." It's overhead because calling free
> will not lower the RSS used by the cache, nor will malloc_trim() reclaim it.

But in principle it could. It's an implementation issue, not a fundamental
design issue of per-thread caches.

> You make a statement about application workload patterns, could you expand on
> that a bit, I'd like to understand the conclusion you're trying to draw i.e.
> "few applications use".

Most applications only use a few different block sizes rather than every possible
size. You typically see a huge spike for just 2 or 3 unique sizes, and the rest is
in the noise. Hence the idea of caching blocks of the same sizes to improve
performance.

So it's misleading to quote the maximum amount of memory held in tcache as
a useful figure when no application would ever see that in reality. 

Note I see major speedups when increasing the tcache count. So we could simply
cache more small blocks as that is where the big gains are.

>> Secondly a single free block in tcache can block a whole multi-gigabyte arena
>> from being freed and returned to the system. That's a much more significant
>> bug than this maximum "overhead".
>
> This is a common complaint with heap-based allocators, and pathological worst
> cases can always be found for any allocator. It is a distinct issue from the
> issue at hand (though I'm happy to start another thread on the topic).

Sure, but we're talking about blocks that have been freed, not blocks that are still in
use. The current implementation of tcache means that RSS size can be hugely
inflated compared to switching tcache off (far more than this "overhead"). The
current implementation cannot reclaim these freed blocks, but that could be fixed.

Anyway, this is indeed a digression from the subject, but the key point is that
we can improve the tcache significantly.

Wilco