seqlock: don't smp_rmb in seqlock reader spin loop

Submitter Milton Miller
Date May 12, 2011, 9:13 a.m.
Message ID <seqlock-rmb@mdm.bga.com>
Permalink /patch/95273/
State Not Applicable

Comments

Milton Miller - May 12, 2011, 9:13 a.m.
Move the smp_rmb to after the cpu_relax loop in read_seqbegin and
add ACCESS_ONCE to make sure the test and return are consistent.

A multi-threaded core in the lab didn't like the update
from 2.6.35 to 2.6.36, to the point it would hang during
boot when multiple threads were active.  Bisection showed
af5ab277ded04bd9bc6b048c5a2f0e7d70ef0867 (clockevents:
Remove the per cpu tick skew) as the culprit, and this is
supported by stack traces showing xtime_lock waits including
tick_do_update_jiffies64 and/or update_vsyscall.

Experimentation showed the combination of cpu_relax and smp_rmb
was significantly slowing the progress of other threads sharing
the core, and this patch is effective in avoiding the hang.

A theory is that the rmb affects the whole core while the
cpu_relax causes a resource rebalance flush; together they
cause an interference cadence that is unbroken when the
seqlock reader has interrupts disabled.

At first I was confused why the refactor in
3c22cd5709e8143444a6d08682a87f4c57902df3 (kernel: optimise
seqlock) didn't affect this patch application, but after some
study I found it affected seqcount, not seqlock.  The new
seqcount was not factored back into the seqlock; I defer that
to the future.

While the removal of the timer interrupt offset created
contention for the xtime lock while a cpu does the
additional work to update the system clock, the seqlock
implementation with the tight rmb spin loop goes back much
further; it was just waiting for the right trigger.
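
For reference, a typical seqlock reader spins like this (just a
sketch; the loop body is a placeholder for whatever data the lock
protects):

	unsigned seq;

	do {
		seq = read_seqbegin(&xtime_lock);
		/* ... read the data protected by xtime_lock ... */
	} while (read_seqretry(&xtime_lock, seq));

The smp_rmb in read_seqbegin is what orders the data reads after
the sequence load, so it can't simply be dropped, only moved out
of the odd-sequence spin.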

Cc: <stable@vger.kernel.org>
Signed-off-by: Milton Miller <miltonm@bga.com>
---

To the readers of [RFC] time: xtime_lock is held too long:

I initially thought x86 would not see this because smp_rmb would
be a nop there, but upon closer inspection X86_PPRO_FENCE will
turn it into an lfence.
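
The relevant definitions are roughly (quoting
arch/x86/include/asm/system.h from memory):

	#ifdef CONFIG_X86_PPRO_FENCE
	# define smp_rmb()	rmb()
	#else
	# define smp_rmb()	barrier()
	#endif

so most x86 configs only pay a compiler barrier per pass through
the spin loop, but X86_PPRO_FENCE kernels execute a real fence
every time.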

milton
Eric Dumazet - May 12, 2011, 9:35 a.m.
On Thu, 2011-05-12 at 04:13 -0500, Milton Miller wrote:
> Move the smp_rmb to after the cpu_relax loop in read_seqbegin and
> add ACCESS_ONCE to make sure the test and return are consistent.
> [...]

I fully agree with your analysis. This is a call to make the change I
suggested earlier [1]. (Use a seqcount object in seqlock_t)

typedef struct {
	seqcount_t seq;
	spinlock_t lock;
} seqlock_t;
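
The seqlock functions then become thin wrappers around the seqcount
ones, something like (untested sketch):

	/* readers use the seqcount, writers take the spinlock around it */
	static inline unsigned read_seqbegin(const seqlock_t *sl)
	{
		return read_seqcount_begin(&sl->seq);
	}

	static inline int read_seqretry(const seqlock_t *sl, unsigned start)
	{
		return read_seqcount_retry(&sl->seq, start);
	}

	static inline void write_seqlock(seqlock_t *sl)
	{
		spin_lock(&sl->lock);
		write_seqcount_begin(&sl->seq);
	}

	static inline void write_sequnlock(seqlock_t *sl)
	{
		write_seqcount_end(&sl->seq);
		spin_unlock(&sl->lock);
	}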

I'll submit a patch for 2.6.40

Acked-by: Eric Dumazet <eric.dumazet@gmail.com>

Thanks

[1] Ref: https://lkml.org/lkml/2011/5/6/351
Andi Kleen - May 12, 2011, 2:08 p.m.
On Thu, May 12, 2011 at 04:13:54AM -0500, Milton Miller wrote:
> 
> Move the smp_rmb to after the cpu_relax loop in read_seqbegin and
> add ACCESS_ONCE to make sure the test and return are consistent.
> 
> A multi-threaded core in the lab didn't like the update

Which core was that?

-Andi

Patch

Index: common/include/linux/seqlock.h
===================================================================
--- common.orig/include/linux/seqlock.h	2011-04-06 03:27:02.000000000 -0500
+++ common/include/linux/seqlock.h	2011-04-06 03:35:02.000000000 -0500
@@ -88,12 +88,12 @@ static __always_inline unsigned read_seq
 	unsigned ret;
 
 repeat:
-	ret = sl->sequence;
-	smp_rmb();
+	ret = ACCESS_ONCE(sl->sequence);
 	if (unlikely(ret & 1)) {
 		cpu_relax();
 		goto repeat;
 	}
+	smp_rmb();
 
 	return ret;
 }