
powerpc64: use fixed lock token for !CONFIG_PPC_SPLPAR

Message ID 1425727183-30880-1-git-send-email-haokexin@gmail.com
State Superseded

Commit Message

Kevin Hao March 7, 2015, 11:19 a.m. UTC
It makes no sense to use a variable lock token on a platform which
doesn't support shared-processor logical partitions. We can actually
eliminate a memory load by using a fixed lock token on these
platforms.
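
To make the saved load concrete, here is a condensed sketch (based on the
spinlock.h definitions touched below, not the verbatim source) of how the
token feeds the lock word:

#ifdef CONFIG_PPC_SPLPAR
/* per-CPU token: one extra load from the paca on every lock acquisition */
#define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
#else
/* fixed token: folded into the instruction stream as an immediate */
#define LOCK_TOKEN	1
#endif

static inline unsigned long __arch_spin_trylock(arch_spinlock_t *lock)
{
	unsigned long tmp, token;

	token = LOCK_TOKEN;	/* a lwz from the paca, or a li with a constant */
	__asm__ __volatile__(
"1:	lwarx	%0,0,%2\n\
	cmpwi	0,%0,0\n\
	bne-	2f\n\
	stwcx.	%1,0,%2\n\
	bne-	1b\n"
	PPC_ACQUIRE_BARRIER
"2:"
	: "=&r" (tmp)
	: "r" (token), "r" (&lock->slock)
	: "cr0", "memory");

	return tmp;
}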

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
 arch/powerpc/include/asm/spinlock.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

Comments

Benjamin Herrenschmidt March 9, 2015, 6:53 a.m. UTC | #1
On Sat, 2015-03-07 at 19:19 +0800, Kevin Hao wrote:
> It makes no sense to use a variable lock token on a platform which
> doesn't support shared-processor logical partitions. We can actually
> eliminate a memory load by using a fixed lock token on these
> platforms.

Does this provide an actual measurable benefit ? I found that the lock
token was quite handy for debugging ...
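
For illustration, a hypothetical user-space decoder (not kernel code) of
what that token buys you when staring at a stuck lock word:

#include <stdio.h>
#include <stdint.h>

/* 0x800000yy when locked, where yy == CPU number (per the header comment) */
static void decode_lock_word(uint32_t slock)
{
	if (slock == 0)
		printf("lock is free\n");
	else if ((slock >> 16) == 0x8000)
		printf("held by CPU %u\n", slock & 0xffff);
	else
		printf("held, but token 0x%08x identifies no CPU\n", slock);
}

int main(void)
{
	decode_lock_word(0x00000000);	/* unlocked */
	decode_lock_word(0x80000005);	/* per-CPU token: CPU 5 holds it */
	decode_lock_word(0x00000001);	/* fixed token 1: holder unknown */
	return 0;
}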

Cheers,
Ben.

> Signed-off-by: Kevin Hao <haokexin@gmail.com>
> ---
>  arch/powerpc/include/asm/spinlock.h | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 4dbe072eecbe..d303cdad2519 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -30,7 +30,7 @@
>  
>  #define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
>  
> -#ifdef CONFIG_PPC64
> +#ifdef CONFIG_PPC_SPLPAR
>  /* use 0x800000yy when locked, where yy == CPU number */
>  #ifdef __BIG_ENDIAN__
>  #define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
> @@ -187,9 +187,13 @@ extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
>  
>  #ifdef CONFIG_PPC64
>  #define __DO_SIGN_EXTEND	"extsw	%0,%0\n"
> -#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
>  #else
>  #define __DO_SIGN_EXTEND
> +#endif
> +
> +#ifdef CONFIG_PPC_SPLPAR
> +#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
> +#else
>  #define WRLOCK_TOKEN		(-1)
>  #endif
>
Michael Ellerman March 10, 2015, 12:15 a.m. UTC | #2
On Mon, 2015-03-09 at 17:53 +1100, Benjamin Herrenschmidt wrote:
> On Sat, 2015-03-07 at 19:19 +0800, Kevin Hao wrote:
> > It makes no sense to use a variable lock token on a platform which
> > doesn't support shared-processor logical partitions. We can actually
> > eliminate a memory load by using a fixed lock token on these
> > platforms.
> 
> Does this provide an actual measurable benefit ? I found that the lock
> token was quite handy for debugging ...

Yeah. It can be very useful to know which CPU holds a lock when you get
into a deadlock.

Unless you can show this is a performance boost I'm inclined to leave it as-is.

cheers
Kevin Hao March 10, 2015, 12:32 p.m. UTC | #3
On Tue, Mar 10, 2015 at 11:15:18AM +1100, Michael Ellerman wrote:
> On Mon, 2015-03-09 at 17:53 +1100, Benjamin Herrenschmidt wrote:
> > On Sat, 2015-03-07 at 19:19 +0800, Kevin Hao wrote:
> > > It makes no sense to use a variable lock token on a platform which
> > > doesn't support shared-processor logical partitions. We can actually
> > > eliminate a memory load by using a fixed lock token on these
> > > platforms.
> > 
> > Does this provide an actual measurable benefit ? I found that the lock
> > token was quite handy for debugging ...
> 
> Yeah. It can be very useful to know which CPU holds a lock when you get
> into a deadlock.
> 
> Unless you can show this is a performance boost I'm inclined to leave it as-is.

I am not sure if there is a more suitable benchmark for spinlocks. I tried
the perf locking benchmark from [1]. The test was run on a t4240rdb board
with an xfs rootfs. I ran the following commands four times before and
after applying this patch.
    ./perf record ./perf bench locking vfs; ./perf report

Before:
3.06%  locking-vfs  [kernel.kallsyms]   [k] ._raw_spin_lock
3.04%  locking-vfs  [kernel.kallsyms]   [k] ._raw_spin_lock
3.03%  locking-vfs  [kernel.kallsyms]   [k] ._raw_spin_lock
3.00%  locking-vfs  [kernel.kallsyms]   [k] ._raw_spin_lock

After:
3.05%  locking-vfs  [kernel.kallsyms]   [k] ._raw_spin_lock
2.97%  locking-vfs  [kernel.kallsyms]   [k] ._raw_spin_lock
2.96%  locking-vfs  [kernel.kallsyms]   [k] ._raw_spin_lock
2.97%  locking-vfs  [kernel.kallsyms]   [k] ._raw_spin_lock

[1] http://lkml.iu.edu/hypermail/linux/kernel/1412.1/01419.html

Thanks,
Kevin

Patch

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 4dbe072eecbe..d303cdad2519 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -30,7 +30,7 @@
 
 #define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
 
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_SPLPAR
 /* use 0x800000yy when locked, where yy == CPU number */
 #ifdef __BIG_ENDIAN__
 #define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
@@ -187,9 +187,13 @@ extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
 
 #ifdef CONFIG_PPC64
 #define __DO_SIGN_EXTEND	"extsw	%0,%0\n"
-#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
 #else
 #define __DO_SIGN_EXTEND
+#endif
+
+#ifdef CONFIG_PPC_SPLPAR
+#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
+#else
 #define WRLOCK_TOKEN		(-1)
 #endif
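
The WRLOCK_TOKEN hunk relies on the rwlock value convention: readers count
upwards from zero and a writer stores a negative token, which is why both
the per-CPU token (0x800000yy, sign-extended) and the fixed token (-1)
work. A plain-C sketch of that convention (illustrative only, not the
kernel's lockless asm):

typedef struct {
	volatile int lock;	/* 0: free, >0: reader count, <0: write-held */
} sketch_rwlock_t;

static int sketch_read_trylock(sketch_rwlock_t *rw)
{
	if (rw->lock < 0)	/* a writer's negative token blocks readers */
		return 0;
	rw->lock++;		/* one more reader (atomicity elided) */
	return 1;
}

static int sketch_write_trylock(sketch_rwlock_t *rw, int wrlock_token)
{
	if (rw->lock != 0)	/* readers or another writer present */
		return 0;
	rw->lock = wrlock_token;	/* must be negative, e.g. -1 */
	return 1;
}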