powerpc: Move 64bit heap above 1TB on machines with 1TB segments

Message ID 20090922025235.GD31801@kryten (mailing list archive)
State Accepted, archived
Commit 8bbde7a7062facf8af35bcc9a64cbafe8f36f3cf
Delegated to: Benjamin Herrenschmidt

Commit Message

Anton Blanchard Sept. 22, 2009, 2:52 a.m. UTC
If we are using 1TB segments and we are allowed to randomise the heap, we can
put it above 1TB so it is backed by a 1TB segment. Otherwise the heap will be
in the bottom 1TB which always uses 256MB segments and this may result in a
performance penalty.

This functionality is disabled when heap randomisation is turned off:

echo 1 > /proc/sys/kernel/randomize_va_space

which may be useful when trying to allocate the maximum amount of 16M or 16G
pages.

On a microbenchmark that repeatedly touches 32GB of memory with a stride of
256MB + 4kB (designed to stress 256MB segments while still mapping nicely into
the L1 cache), we see the improvement:

Force malloc to use heap all the time:
# export MALLOC_MMAP_MAX_=0 MALLOC_TRIM_THRESHOLD_=-1

Disable heap randomization:
# echo 1 > /proc/sys/kernel/randomize_va_space
# time ./test 
12.51s

Enable heap randomization:
# echo 2 > /proc/sys/kernel/randomize_va_space
# time ./test 
1.70s
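
The test program itself is not included in the mail. A minimal sketch of
what such a microbenchmark could look like, in C (only the 32GB working set
and the 256MB + 4kB stride are given above; the iteration count and overall
structure are assumptions):

/* Hypothetical reconstruction of the "test" microbenchmark: repeatedly
 * touch 32GB of heap memory with a 256MB + 4kB stride.  Not the original
 * program from the mail. */
#include <stdlib.h>

#define SIZE       (32UL * 1024 * 1024 * 1024)   /* 32GB working set */
#define STRIDE     (256UL * 1024 * 1024 + 4096)  /* 256MB + 4kB stride */
#define ITERATIONS 10000                         /* assumed repeat count */

int main(void)
{
	unsigned long i, off;
	/* With MALLOC_MMAP_MAX_=0 this allocation comes from the heap (brk),
	 * so its position relative to 1TB decides which segment size backs it. */
	char *p = malloc(SIZE);

	if (!p)
		return 1;

	for (i = 0; i < ITERATIONS; i++)
		for (off = 0; off < SIZE; off += STRIDE)
			p[off]++;

	free(p);
	return 0;
}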

Signed-off-by: Anton Blanchard <anton@samba.org>
---

I've cc-ed Mel on this one. As you can see it definitely helps the base
page size performance, but I'm a bit worried about the impact of taking away
another of our 1TB slices.

Comments

Mel Gorman Sept. 22, 2009, 2:47 p.m. UTC | #1
Anton Blanchard <anton@samba.org> wrote on 22/09/2009 03:52:35:

> If we are using 1TB segments and we are allowed to randomise the heap, we
> can put it above 1TB so it is backed by a 1TB segment. Otherwise the heap
> will be in the bottom 1TB which always uses 256MB segments and this may
> result in a performance penalty.
>
> This functionality is disabled when heap randomisation is turned off:
>
> echo 1 > /proc/sys/kernel/randomize_va_space
>
> which may be useful when trying to allocate the maximum amount of 16M or
> 16G pages.
>
> On a microbenchmark that repeatedly touches 32GB of memory with a stride
> of 256MB + 4kB (designed to stress 256MB segments while still mapping
> nicely into the L1 cache), we see the improvement:
>
> Force malloc to use heap all the time:
> # export MALLOC_MMAP_MAX_=0 MALLOC_TRIM_THRESHOLD_=-1
>
> Disable heap randomization:
> # echo 1 > /proc/sys/kernel/randomize_va_space
> # time ./test
> 12.51s
>
> Enable heap randomization:
> # echo 2 > /proc/sys/kernel/randomize_va_space
> # time ./test
> 1.70s
>
> Signed-off-by: Anton Blanchard <anton@samba.org>
> ---
>
> I've cc-ed Mel on this one. As you can see it definitely helps the base
> page size performance, but I'm a bit worried about the impact of taking away
> another of our 1TB slices.
>

Unfortunately, I am not sensitive to issues surrounding 1TB segments or how
they are currently being used. However, as this clearly helps performance
for large amounts of memory, is it worth providing an option to
libhugetlbfs to locate 16MB pages above 1TB when they are otherwise being
unused?

> Index: linux.trees.git/arch/powerpc/kernel/process.c
> ===================================================================
> --- linux.trees.git.orig/arch/powerpc/kernel/process.c   2009-09-17 15:47:46.000000000 +1000
> +++ linux.trees.git/arch/powerpc/kernel/process.c   2009-09-17 15:49:11.000000000 +1000
> @@ -1165,7 +1165,22 @@ static inline unsigned long brk_rnd(void
>
>  unsigned long arch_randomize_brk(struct mm_struct *mm)
>  {
> -   unsigned long ret = PAGE_ALIGN(mm->brk + brk_rnd());
> +   unsigned long base = mm->brk;
> +   unsigned long ret;
> +
> +#ifdef CONFIG_PPC64
> +   /*
> +    * If we are using 1TB segments and we are allowed to randomise
> +    * the heap, we can put it above 1TB so it is backed by a 1TB
> +    * segment. Otherwise the heap will be in the bottom 1TB
> +    * which always uses 256MB segments and this may result in a
> +    * performance penalty.
> +    */
> +   if (!is_32bit_task() && (mmu_highuser_ssize == MMU_SEGSIZE_1T))
> +      base = max_t(unsigned long, mm->brk, 1UL << SID_SHIFT_1T);
> +#endif
> +
> +   ret = PAGE_ALIGN(base + brk_rnd());
>
>     if (ret < mm->brk)
>        return mm->brk;
Benjamin Herrenschmidt Sept. 22, 2009, 9:08 p.m. UTC | #2
> Unfortunately, I am not sensitive to issues surrounding 1TB segments or how
> they are currently being used. However, as this clearly helps performance
> for large amounts of memory, is it worth providing an option to
> libhugetlbfs to locate 16MB pages above 1TB when they are otherwise being
> unused?

AFAIK, that is already the case, at least the kernel will hand out pages
above 1T preferentially iirc.

There were talks about making huge pages below 1T not even come up
until you ask for them with MAP_FIXED, dunno where that went.

Cheers,
Ben.
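
For reference, the MAP_FIXED path mentioned above might look roughly like
the following hypothetical C sketch for a 16MB huge page below 1TB. The
address is purely illustrative, and MAP_HUGETLB assumes a 2.6.32 or later
kernel; on earlier kernels the mapping would come from a file on a
hugetlbfs mount instead:

/* Hypothetical: explicitly request a 16MB huge page just below the 1TB
 * boundary.  Address, size and flags are illustrative only. */
#include <stdio.h>
#include <sys/mman.h>

#define ONE_TB    (1UL << 40)
#define HPAGE_16M (16UL * 1024 * 1024)

int main(void)
{
	void *want = (void *)(ONE_TB - HPAGE_16M);  /* just under 1TB */
	void *p = mmap(want, HPAGE_16M, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_FIXED,
		       -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("huge page mapped at %p\n", p);
	return 0;
}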
David Gibson Sept. 23, 2009, 12:03 a.m. UTC | #3
On Wed, Sep 23, 2009 at 07:08:22AM +1000, Benjamin Herrenschmidt wrote:
> 
> > Unfortunately, I am not sensitive to issues surrounding 1TB segments or how
> > they are currently being used. However, as this clearly helps performance
> > for large amounts of memory, is it worth providing an option to
> > libhugetlbfs to locate 16MB pages above 1TB when they are otherwise being
> > unused?
> 
> AFAIK, that is already the case, at least the kernel will hand out pages
> above 1T preferentially iirc.
> 
> There were talks about making huge pages below 1T not even come up
> until you ask for them with MAP_FIXED, dunno where that went.

That was already the case as far as I remember.  But it's just
possible that changed when the general slice handling code came in.
Mel Gorman Sept. 23, 2009, 12:50 p.m. UTC | #4
Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote on 22/09/2009 22:08:22:

> > Unfortunately, I am not sensitive to issues surrounding 1TB segments or
> > how they are currently being used. However, as this clearly helps
> > performance for large amounts of memory, is it worth providing an option
> > to libhugetlbfs to locate 16MB pages above 1TB when they are otherwise
> > being unused?
>
> AFAIK, that is already the case, at least the kernel will hand out pages
> above 1T preferentially iirc.
>
> There were talks about making huge pages below 1T not even come up
> until you ask for them with MAP_FIXED, dunno where that went.
>

Confirmed, huge pages are already being placed above the 1TB mark. I hadn't
given thought previously to where hugepages were being placed except within
segment boundaries. The patch works as advertised and doesn't appear to
collide with huge pages in any obvious way as far as I can tell.

Patch

Index: linux.trees.git/arch/powerpc/kernel/process.c
===================================================================
--- linux.trees.git.orig/arch/powerpc/kernel/process.c	2009-09-17 15:47:46.000000000 +1000
+++ linux.trees.git/arch/powerpc/kernel/process.c	2009-09-17 15:49:11.000000000 +1000
@@ -1165,7 +1165,22 @@  static inline unsigned long brk_rnd(void
 
 unsigned long arch_randomize_brk(struct mm_struct *mm)
 {
-	unsigned long ret = PAGE_ALIGN(mm->brk + brk_rnd());
+	unsigned long base = mm->brk;
+	unsigned long ret;
+
+#ifdef CONFIG_PPC64
+	/*
+	 * If we are using 1TB segments and we are allowed to randomise
+	 * the heap, we can put it above 1TB so it is backed by a 1TB
+	 * segment. Otherwise the heap will be in the bottom 1TB
+	 * which always uses 256MB segments and this may result in a
+	 * performance penalty.
+	 */
+	if (!is_32bit_task() && (mmu_highuser_ssize == MMU_SEGSIZE_1T))
+		base = max_t(unsigned long, mm->brk, 1UL << SID_SHIFT_1T);
+#endif
+
+	ret = PAGE_ALIGN(base + brk_rnd());
 
 	if (ret < mm->brk)
 		return mm->brk;
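
A quick userspace sanity check for the patch's effect (not part of the
patch itself): print the program break. With randomize_va_space set to 2 on
a machine using 1TB segments, a 64-bit process should report an address at
or above 1TB (0x10000000000):

/* Sanity check: print the current program break.  With the patch applied
 * and heap randomisation enabled, this should be >= 1TB on a 1TB-segment
 * machine. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	printf("brk: %p\n", sbrk(0));
	return 0;
}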