Patchwork [1/3] powerpc: Move 64bit VDSO to improve context switch performance

Submitter Anton Blanchard
Date July 14, 2009, 6:53 a.m.
Message ID <20090714065425.301516312@samba.org>
Permalink /patch/29757/
State Accepted
Commit 30d0b3682887a81f0335b42f20116fd40d743371
Delegated to: Benjamin Herrenschmidt

Comments

Anton Blanchard - July 14, 2009, 6:53 a.m.
On 64bit applications the VDSO is the only thing in segment 0. Since the VDSO
is position independent we can remove the hint and let get_unmapped_area pick
an area. This means the VDSO will land near the other mmaps and share an SLB
entry with them:

10000000-10001000 r-xp 00000000 08:06 5778459        /root/context_switch_64
10010000-10011000 r--p 00000000 08:06 5778459        /root/context_switch_64
10011000-10012000 rw-p 00001000 08:06 5778459        /root/context_switch_64
fffa92ae000-fffa92b0000 rw-p 00000000 00:00 0 
fffa92b0000-fffa9453000 r-xp 00000000 08:06 4334051  /lib64/power6/libc-2.9.so
fffa9453000-fffa9462000 ---p 001a3000 08:06 4334051  /lib64/power6/libc-2.9.so
fffa9462000-fffa9466000 r--p 001a2000 08:06 4334051  /lib64/power6/libc-2.9.so
fffa9466000-fffa947c000 rw-p 001a6000 08:06 4334051  /lib64/power6/libc-2.9.so
fffa947c000-fffa9480000 rw-p 00000000 00:00 0 
fffa9480000-fffa94a8000 r-xp 00000000 08:06 4333852  /lib64/ld-2.9.so
fffa94b3000-fffa94b4000 rw-p 00000000 00:00 0 

fffa94b4000-fffa94b7000 r-xp 00000000 00:00 0        [vdso] <----- here I am

fffa94b7000-fffa94b8000 r--p 00027000 08:06 4333852  /lib64/ld-2.9.so
fffa94b8000-fffa94bb000 rw-p 00028000 08:06 4333852  /lib64/ld-2.9.so
fffa94bb000-fffa94bc000 rw-p 00000000 00:00 0 
fffe4c10000-fffe4c25000 rw-p 00000000 00:00 0        [stack]

On a microbenchmark that bounces a token between two 64bit processes over pipes
and calls gettimeofday each iteration (to access the VDSO), our context switch
rate goes from 268k to 277k ctx switches/sec (tested on a 4GHz POWER6).

Signed-off-by: Anton Blanchard <anton@samba.org>
---
Anton Blanchard - July 14, 2009, 7:38 a.m.
Hi Ben,

> Don't we lose randomization ? Or do we randomize the whole mem map
> nowadays ?

The start of the top down mmap region is randomized, so the VDSO will be in a
different position each time. A quick example:

run 1:
fffb01f6000-fffb01f9000 r-xp 00000000 00:00 0         [vdso]
fffb01f9000-fffb01fa000 r--p 00027000 08:06 4333852   /lib64/ld-2.9.so
fffb01fa000-fffb01fd000 rw-p 00028000 08:06 4333852   /lib64/ld-2.9.so
fffb01fd000-fffb01fe000 rw-p 00000000 00:00 0 
ffff7c6f000-ffff7c84000 rw-p 00000000 00:00 0         [stack]

run 2:
fff9a094000-fff9a097000 r-xp 00000000 00:00 0         [vdso]  
fff9a097000-fff9a098000 r--p 00027000 08:06 4333852   /lib64/ld-2.9.so
fff9a098000-fff9a09b000 rw-p 00028000 08:06 4333852   /lib64/ld-2.9.so
fff9a09b000-fff9a09c000 rw-p 00000000 00:00 0 
fffea0a6000-fffea0bb000 rw-p 00000000 00:00 0         [stack]

You will notice we aren't randomising each mmap, so the relative offset
between ld.so and the vdso will be consistent. I just checked and it 
looks like x86 does the same.

It might make sense to add a small amount of randomness between mmaps
on both x86 and PowerPC, at least for 64bit applications where we have
enough address space.

Anton
Andreas Schwab - Oct. 2, 2009, 7:14 p.m.
Anton Blanchard <anton@samba.org> writes:

> On 64bit applications the VDSO is the only thing in segment 0. Since the VDSO
> is position independent we can remove the hint and let get_unmapped_area pick
> an area.

This breaks gdb.  The section table in the VDSO image when mapped into
the process no longer contains meaningful values, and gdb rejects it.

Andreas.
Andreas Schwab - Oct. 3, 2009, 2:15 p.m.
Andreas Schwab <schwab@linux-m68k.org> writes:

> Anton Blanchard <anton@samba.org> writes:
>
>> On 64bit applications the VDSO is the only thing in segment 0. Since the VDSO
>> is position independent we can remove the hint and let get_unmapped_area pick
>> an area.
>
> This breaks gdb.  The section table in the VDSO image when mapped into
> the process no longer contains meaningful values, and gdb rejects it.

The problem is that the load segment requires 64k alignment, but the
page allocator of course only provides PAGE_SIZE alignment, causing the
image to be unaligned in memory.

Andreas.

Patch

Index: linux.trees.git/arch/powerpc/include/asm/vdso.h
===================================================================
--- linux.trees.git.orig/arch/powerpc/include/asm/vdso.h	2009-07-14 11:41:52.000000000 +1000
+++ linux.trees.git/arch/powerpc/include/asm/vdso.h	2009-07-14 11:42:59.000000000 +1000
@@ -7,9 +7,8 @@ 
 #define VDSO32_LBASE	0x100000
 #define VDSO64_LBASE	0x100000
 
-/* Default map addresses */
+/* Default map addresses for 32bit vDSO */
 #define VDSO32_MBASE	VDSO32_LBASE
-#define VDSO64_MBASE	VDSO64_LBASE
 
 #define VDSO_VERSION_STRING	LINUX_2.6.15
 
Index: linux.trees.git/arch/powerpc/kernel/vdso.c
===================================================================
--- linux.trees.git.orig/arch/powerpc/kernel/vdso.c	2009-07-14 11:41:46.000000000 +1000
+++ linux.trees.git/arch/powerpc/kernel/vdso.c	2009-07-14 12:03:13.000000000 +1000
@@ -203,7 +203,12 @@ 
 	} else {
 		vdso_pagelist = vdso64_pagelist;
 		vdso_pages = vdso64_pages;
-		vdso_base = VDSO64_MBASE;
+		/*
+		 * On 64bit we don't have a preferred map address. This
+		 * allows get_unmapped_area to find an area near other mmaps
+		 * and most likely share a SLB entry.
+		 */
+		vdso_base = 0;
 	}
 #else
 	vdso_pagelist = vdso32_pagelist;