Patchwork [53/84] ARM: 7747/1: pcpu: ensure __my_cpu_offset cannot be re-ordered across barrier()

Submitter Kamal Mostafa
Date June 17, 2013, 5:30 p.m.
Message ID <>
Permalink /patch/252003/
State New


Kamal Mostafa - June 17, 2013, 5:30 p.m.

-stable review patch.  If anyone has any objections, please let me know.


From: Will Deacon <>

commit 509eb76ebf9771abc9fe51859382df2571f11447 upstream.

__my_cpu_offset is non-volatile, since we want its value to be cached
when we access several per-cpu variables in a row with preemption
disabled. This means that we rely on preempt_{en,dis}able to hazard
with the operation via the barrier() macro, so that we can't end up
migrating CPUs without reloading the per-cpu offset.

Unfortunately, GCC doesn't treat a "memory" clobber on a non-volatile
asm block as a side-effect, and will happily re-order it before other
memory clobbers (including those in preempt_disable()) and cache the
value. This has been observed to break the cmpxchg logic in the slub
allocator, leading to livelock in kmem_cache_alloc in mainline kernels.

This patch adds a dummy memory input operand to __my_cpu_offset,
forcing it to be ordered with respect to the barrier() macro.

Cc: Rob Herring <>
Reviewed-by: Nicolas Pitre <>
Signed-off-by: Will Deacon <>
Signed-off-by: Russell King <>
Signed-off-by: Kamal Mostafa <>
---
 arch/arm/include/asm/percpu.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)


diff --git a/arch/arm/include/asm/percpu.h b/arch/arm/include/asm/percpu.h
index 968c0a1..209e650 100644
--- a/arch/arm/include/asm/percpu.h
+++ b/arch/arm/include/asm/percpu.h
@@ -30,8 +30,15 @@ static inline void set_my_cpu_offset(unsigned long off)
 static inline unsigned long __my_cpu_offset(void)
 {
 	unsigned long off;
-	/* Read TPIDRPRW */
-	asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory");
+	register unsigned long *sp asm ("sp");
+
+	/*
+	 * Read TPIDRPRW.
+	 * We want to allow caching the value, so avoid using volatile and
+	 * instead use a fake stack read to hazard against barrier().
+	 */
+	asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp));
+
 	return off;
 }
 #define __my_cpu_offset __my_cpu_offset()