powerpc: Add memory clobber to mtspr()

Submitted by Benjamin Herrenschmidt on June 15, 2009, 2:16 a.m.

Details

Message ID 20090615021645.7299DDDD1B@ozlabs.org
State Accepted, archived
Commit 2fae0a524b193e200b71778407ad29b22417056a
Delegated to: Benjamin Herrenschmidt

Commit Message

Benjamin Herrenschmidt, June 15, 2009, 2:16 a.m.
Without this clobber, mtspr() can be reordered by gcc relative to
surrounding memory accesses. While that may be fine in some cases,
it isn't in others, and I'm not confident that all callers get it
right (in fact, I'm sure some of them don't).

So for now, let's make mtspr() itself contain a memory clobber until
we can audit and fix everything, at which point we can remove it
if we think it's worth doing so.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

 arch/powerpc/include/asm/reg.h |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)


--- linux-work.orig/arch/powerpc/include/asm/reg.h	2009-06-15 12:01:06.000000000 +1000
+++ linux-work/arch/powerpc/include/asm/reg.h	2009-06-15 12:01:17.000000000 +1000
@@ -755,7 +755,8 @@ 
 #define mfspr(rn)	({unsigned long rval; \
 			asm volatile("mfspr %0," __stringify(rn) \
 				: "=r" (rval)); rval;})
-#define mtspr(rn, v)	asm volatile("mtspr " __stringify(rn) ",%0" : : "r" (v))
+#define mtspr(rn, v)	asm volatile("mtspr " __stringify(rn) ",%0" : : "r" (v)\
+				     : "memory")
 
 #ifdef __powerpc64__
 #ifdef CONFIG_PPC_CELL
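
For illustration, here is a minimal standalone sketch (not part of the
patch; the helper names and the choice of SPR 22, the decrementer, are
invented) of the reordering hazard the clobber addresses. It assumes a
powerpc target so that gcc accepts the mtspr mnemonic:

/* Hypothetical sketch, powerpc-only.  mtspr_unordered/mtspr_ordered
 * are invented names; only the clobbered variant matches the patch. */
#define __stringify_1(x)	#x
#define __stringify(x)		__stringify_1(x)

/* No clobber: the asm declares no memory side effects, so gcc is
 * free to move it before or after ordinary loads and stores. */
#define mtspr_unordered(rn, v) \
	asm volatile("mtspr " __stringify(rn) ",%0" : : "r" (v))

/* "memory" clobber: gcc must emit all pending stores before the asm
 * and must reload any memory values it uses afterwards. */
#define mtspr_ordered(rn, v) \
	asm volatile("mtspr " __stringify(rn) ",%0" : : "r" (v) \
		     : "memory")

unsigned long timer_armed;

void arm_decrementer(void)
{
	timer_armed = 1;		/* gcc may sink this store...	*/
	mtspr_unordered(22, 0x7fffffff);/* ...past this SPR write	*/

	timer_armed = 2;		/* with the clobber, this store	*/
	mtspr_ordered(22, 0x7fffffff);	/* must be emitted first	*/
}

Note that the clobber is only a compiler barrier: it does not emit any
hardware synchronization (sync/isync), so callers that need execution
ordering against the SPR write still need explicit barriers.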