Message ID: 20150218102900.GF23372@suse.de
State: New
On 02/18/2015 11:29 AM, Mel Gorman wrote:
> diff --git a/malloc/arena.c b/malloc/arena.c
> index 886defb074a2..a78d4835a825 100644
> --- a/malloc/arena.c
> +++ b/malloc/arena.c
> @@ -696,7 +696,7 @@ heap_trim (heap_info *heap, size_t pad)
>      }
>    top_size = chunksize (top_chunk);
>    extra = (top_size - pad - MINSIZE - 1) & ~(pagesz - 1);
> -  if (extra < (long) pagesz)
> +  if (extra < (long) mp_.trim_threshold)
>      return 0;
>
>    /* Try to shrink. */

I am not very familiar with how the allocator works. What now happens with this change when you are creating lots of temporary threads that allocate memory below the trim threshold? As I understand it, each thread has its own arena; as they are now not trimmed as often any more, is that arena memory lost when the thread ends? I recall something along these lines being the original reasoning why the trim was not applied to thread arenas.
On Wed, Feb 18, 2015 at 02:26:20PM +0100, Julian Taylor wrote:
> On 02/18/2015 11:29 AM, Mel Gorman wrote:
> > [malloc/arena.c heap_trim diff snipped]
>
> I am not very familiar with how the allocator works.

This is the first time in years I've looked, so take this with a grain of salt. Others with more experience of the allocator may chime in with corrections.

> What now happens with this change when you are creating lots of
> temporary threads that allocate memory below the trim threshold?

The exact impact would depend on the configured trim threshold. In theory, nothing stops a user making it very large and creating processes with a single thread that achieve the same effect. Assuming the defaults are used and the threads stay below the default trim threshold, each thread can keep up to 128K of useless freed data. It changes with dynamic heap tuning, but let's take the simple case. At that point, the maximum amount of wastage is limited by M_ARENA_MAX, which is 8 by default. So by default the maximum wastage is low even for a carefully crafted application. The worst-case situation can be tuned by setting the trim threshold or arena_max, but there are easier ways to waste memory. Furthermore, as the application should not reference the memory (it's freed), the wasted memory will eventually be swapped out if swap is available. A hostile application could use knowledge of the allocator to use-after-free the memory and keep it resident, but that would be very obvious in top. Overall, it would be easier to interfere with a machine by just calling mmap() for a large anonymous buffer.
This applies to a carefully crafted application or one that happens to free just below the threshold and never mallocs again. Overall, I think the worst-case impact is marginal.

> As I understand it, each thread has its own arena; as they are now not
> trimmed as often any more, is that arena memory lost when the thread ends?

No. When a process exits, the memory is given back to the system unconditionally. If it's a thread that is joined, then the wastage depends on whether another thread reuses that heap or not.

> I recall something along these lines being the original reasoning why
> the trim was not applied to thread arenas.

Is there any chance you can remember keywords from the discussion? The changelog is completely void of information. Worse, it would have been harder to hit this problem originally because new arenas were only created when contention occurred. If there was strong motivation for the pagesz value, then it's possible that it no longer applies.
On 02/18/2015 04:09 PM, Mel Gorman wrote:
> On Wed, Feb 18, 2015 at 02:26:20PM +0100, Julian Taylor wrote:
>> I recall something along these lines being the original reasoning why
>> the trim was not applied to thread arenas.
>
> Is there any chance you can remember keywords from the discussion? The
> changelog is completely void of information. Worse, it would have been
> harder to hit this problem originally because new arenas were only
> created when contention occurred. If there was strong motivation for
> the pagesz value, then it's possible that it no longer applies.

It was probably BZ#11261 that I remembered; it's about excessive virtual memory usage due to arenas. As they are now trimmed less aggressively, are we making it worse?
On Wed, Feb 18, 2015 at 04:43:04PM +0100, Julian Taylor wrote:
> It was probably BZ#11261 that I remembered; it's about excessive
> virtual memory usage due to arenas.
> As they are now trimmed less aggressively, are we making it worse?

By a small margin. The worst case is TRIM_THRESHOLD * ARENA_MAX, and it only applies to a task that hits the exact case and never mallocs again. As the number of arenas is bounded and small by default, I cannot see a case where there would be excessive amounts of memory wastage. Wasting a lot of memory would require the application to be tuned specifically to waste it. The test case in question shows no difference in behaviour that I could see, but it's not actually trying to target the trim threshold case.
On 18.02.2015 11:29, Mel Gorman wrote:
>
> 2015-02-10  Mel Gorman  <mgorman@suse.de>
>
> 	[BZ #17195]
> 	* malloc/arena.c (free): Apply trim threshold to per-thread heaps
> 	as well as the main arena.
> ---
>  malloc/arena.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> [malloc/arena.c heap_trim diff snipped]

The patch does not seem to account for the trim threshold being -1, which is documented to disable trimming.
On Wed, Feb 18, 2015 at 11:08:13PM +0100, Julian Taylor wrote:
> The patch does not seem to account for the trim threshold being -1,
> which is documented to disable trimming.

Well spotted. I will post a v5 shortly that matches the style of the main arena test to catch this case.
On Wed, Feb 18, 2015 at 04:43:04PM +0100, Julian Taylor wrote:
> It was probably BZ#11261 that I remembered; it's about excessive
> virtual memory usage due to arenas.
> As they are now trimmed less aggressively, are we making it worse?

That is a completely different problem. The number of arenas went out of control because there was a race in accounting for the number of arenas.

Siddhesh
diff --git a/malloc/arena.c b/malloc/arena.c
index 886defb074a2..a78d4835a825 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -696,7 +696,7 @@ heap_trim (heap_info *heap, size_t pad)
     }
   top_size = chunksize (top_chunk);
   extra = (top_size - pad - MINSIZE - 1) & ~(pagesz - 1);
-  if (extra < (long) pagesz)
+  if (extra < (long) mp_.trim_threshold)
     return 0;

   /* Try to shrink. */