Message ID | 1353305799.3694.9.camel@ThinkPad-T5421.cn.ibm.com (mailing list archive)
---|---
State | Changes Requested
Delegated to | Benjamin Herrenschmidt
On Mon, 2012-11-19 at 14:16 +0800, Li Zhong wrote:
> This patch tries to disable preemption for using smp_processor_id() in
> arch_spin_unlock_wait(), to avoid following report:

.../...

> diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
> index bb7cfec..7a7c31b 100644
> --- a/arch/powerpc/lib/locks.c
> +++ b/arch/powerpc/lib/locks.c
> @@ -72,8 +72,10 @@ void arch_spin_unlock_wait(arch_spinlock_t *lock)
>  {
>  	while (lock->slock) {
>  		HMT_low();
> +		preempt_disable();
>  		if (SHARED_PROCESSOR)
>  			__spin_yield(lock);
> +		preempt_enable();
>  	}

I assume what you are protecting is the PACA access in SHARED_PROCESSOR
or is there more ?

In that case I'd say just make it use local_paca-> directly or something
like that. It doesn't matter if the access is racy, all processors will
have the same value for that field as far as I can tell.

Cheers,
Ben.
On Thu, 2013-01-10 at 17:02 +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2012-11-19 at 14:16 +0800, Li Zhong wrote:
> > This patch tries to disable preemption for using smp_processor_id() in
> > arch_spin_unlock_wait(), to avoid following report:
>
> .../...
>
> > diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
> > index bb7cfec..7a7c31b 100644
> > --- a/arch/powerpc/lib/locks.c
> > +++ b/arch/powerpc/lib/locks.c
> > @@ -72,8 +72,10 @@ void arch_spin_unlock_wait(arch_spinlock_t *lock)
> >  {
> >  	while (lock->slock) {
> >  		HMT_low();
> > +		preempt_disable();
> >  		if (SHARED_PROCESSOR)
> >  			__spin_yield(lock);
> > +		preempt_enable();
> >  	}
>
> I assume what you are protecting is the PACA access in SHARED_PROCESSOR
> or is there more ?

Yes, only the one in SHARED_PROCESSOR.

> In that case I'd say just make it use local_paca-> directly or something
> like that. It doesn't matter if the access is racy, all processors will
> have the same value for that field as far as I can tell.

It also seemed to me that all processors have the same value :). I'll send
an updated version based on your suggestion soon.

Thanks, Zhong

> Cheers,
> Ben.
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index bb7cfec..7a7c31b 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -72,8 +72,10 @@ void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
 	while (lock->slock) {
 		HMT_low();
+		preempt_disable();
 		if (SHARED_PROCESSOR)
 			__spin_yield(lock);
+		preempt_enable();
 	}
 	HMT_medium();
 }
This patch tries to disable preemption for using smp_processor_id() in
arch_spin_unlock_wait(), to avoid following report:

[ 17.279377] BUG: using smp_processor_id() in preemptible [00000000] code: autorun/1024
[ 17.279407] caller is .arch_spin_unlock_wait+0x34/0x80
[ 17.279415] Call Trace:
[ 17.279423] [c00000000911fb30] [c000000000015230] .show_stack+0x70/0x1c0 (unreliable)
[ 17.279441] [c00000000911fbe0] [c0000000003e13b0] .debug_smp_processor_id+0x110/0x130
[ 17.279455] [c00000000911fc80] [c00000000004c2b4] .arch_spin_unlock_wait+0x34/0x80
[ 17.279470] [c00000000911fd00] [c0000000000b0e50] .task_work_run+0x80/0x160
[ 17.279482] [c00000000911fdb0] [c000000000018744] .do_notify_resume+0x94/0xa0
[ 17.279495] [c00000000911fe30] [c000000000009d1c] .ret_from_except_lite+0x48/0x4c

Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
---
 arch/powerpc/lib/locks.c | 2 ++
 1 file changed, 2 insertions(+)