
powerpc/tm: do not use r13 for tabort_syscall

Message ID 1469172468-12892-1-git-send-email-npiggin@gmail.com (mailing list archive)
State Superseded

Commit Message

Nicholas Piggin July 22, 2016, 7:27 a.m. UTC
tabort_syscall runs with RI=1, so a nested recoverable machine
check will load the paca into r13 and overwrite what we loaded
it with, because exceptions returning to privileged mode do not
restore r13.

This has survived testing with an sc instruction inside a transaction
(a bare sc, not a glibc syscall, because glibc can tabort before the sc).
Verified that the transaction fails with TM_CAUSE_SYSCALL.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Sam Bobroff <sam.bobroff@au1.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>

---

 arch/powerpc/kernel/entry_64.S | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)
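
The test described in the commit message (a bare sc inside a transaction, then
checking the abort cause) could look roughly like the userspace sketch below.
This is only an illustration, not the patch's own test: the TEXASR SPR number,
the failure-code field, and the TM_CAUSE_SYSCALL value are taken from the ISA
and the kernel's uapi <asm/tm.h>, and it assumes a TM-capable CPU (POWER8
class) with HTM usable from userspace; building it may need -mcpu=power8 so
the assembler accepts the tbegin./tend. mnemonics.

/* Sketch: run a bare sc inside a transaction and report the failure code
 * the kernel supplies via tabort (visible in the top byte of TEXASR).
 */
#include <stdio.h>
#include <sys/syscall.h>		/* SYS_getpid */

#define SPRN_TEXASR		130	/* TM exception and summary register (SPR 0x82) */
#define TM_CAUSE_SYSCALL	0xd8	/* as in the kernel's uapi asm/tm.h */

int main(void)
{
	unsigned long texasr, cause;

	asm volatile(
		"tbegin.		\n\t"	/* start a transaction */
		"beq	1f		\n\t"	/* CR0[EQ] set => resumed after an abort */
		"li	0, %[nr]	\n\t"	/* bare sc, no glibc wrapper in the way */
		"sc			\n\t"
		"tend.			\n\t"	/* never reached: the kernel aborts first */
		"1:			\n\t"
		"mfspr	%0, %[spr]	\n\t"	/* read the abort reason */
		: "=r" (texasr)
		: [nr] "i" (SYS_getpid), [spr] "i" (SPRN_TEXASR)
		: "r0", "r3", "cr0", "memory");

	/* The tabort code lands in TEXASR bits 0:7 (the most significant byte);
	 * mask off TM_CAUSE_PERSISTENT (0x01), which the kernel ORs in. */
	cause = (texasr >> 56) & 0xfe;
	printf("abort cause: 0x%lx (expected TM_CAUSE_SYSCALL = 0x%x)\n",
	       cause, TM_CAUSE_SYSCALL);
	return 0;
}

A more complete test of this behaviour exists among the kernel's TM selftests
under tools/testing/selftests/powerpc/tm.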

Comments

Michael Neuling July 25, 2016, 12:57 a.m. UTC | #1
On Fri, 2016-07-22 at 17:27 +1000, Nicholas Piggin wrote:
> tabort_syscall runs with RI=1, so a nested recoverable machine
> check will load the paca into r13 and overwrite what we loaded
> it with, because exceptions returning to privileged mode do not
> restore r13.
> 
> This has survived testing with an sc instruction inside a transaction
> (a bare sc, not a glibc syscall, because glibc can tabort before the sc).
> Verified that the transaction fails with TM_CAUSE_SYSCALL.
> 
> Signed-off-by: Nick Piggin <npiggin@gmail.com>

Thanks.

This looks good, but it should probably be Cc: stable back to when the syscall
TM abort support went in.

There are some random whitespace changes in here too; avoiding those would make
the patch smaller (and easier to read).

Mikey
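
For reference, the usual form of that request is a tag placed with the other
tags in the commit message; the version below is only a placeholder, since the
exact release where the syscall TM abort support landed is not stated in this
thread:

Cc: stable@vger.kernel.org # v4.x   (placeholder: the release that introduced the syscall abort)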

> Cc: Michael Neuling <mikey@neuling.org>
> Cc: Sam Bobroff <sam.bobroff@au1.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> 
> ---
> 
>  arch/powerpc/kernel/entry_64.S | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 73e461a..387dee3 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -368,13 +368,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
>  tabort_syscall:
>  	/* Firstly we need to enable TM in the kernel */
>  	mfmsr	r10
> -	li	r13, 1
> -	rldimi	r10, r13, MSR_TM_LG, 63-MSR_TM_LG
> -	mtmsrd	r10, 0
> +	li	r9,1
> +	rldimi	r10,r9,MSR_TM_LG,63-MSR_TM_LG
> +	mtmsrd	r10,0
>  
>  	/* tabort, this dooms the transaction, nothing else */
> -	li	r13, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
> -	TABORT(R13)
> +	li	r9,(TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
> +	TABORT(R9)
>  
>  	/*
>  	 * Return directly to userspace. We have corrupted user register state,
> @@ -382,11 +382,11 @@ tabort_syscall:
>  	 * resume after the tbegin of the aborted transaction with the
>  	 * checkpointed register state.
>  	 */
> -	li	r13, MSR_RI
> -	andc	r10, r10, r13
> -	mtmsrd	r10, 1
> -	mtspr	SPRN_SRR0, r11
> -	mtspr	SPRN_SRR1, r12
> +	li	r9,MSR_RI
> +	andc	r10,r10,r9
> +	mtmsrd	r10,1
> +	mtspr	SPRN_SRR0,r11
> +	mtspr	SPRN_SRR1,r12
>  
>  	rfid
>  	b	.	/* prevent speculative execution */
Michael Neuling Aug. 22, 2016, 2:09 a.m. UTC | #2
On Fri, 2016-07-22 at 17:27 +1000, Nicholas Piggin wrote:
> tabort_syscall runs with RI=1, so a nested recoverable machine
> check will load the paca into r13 and overwrite what we loaded
> it with, because exceptions returning to privileged mode do not
> restore r13.
> 
> This has survived testing with an sc instruction inside a transaction
> (a bare sc, not a glibc syscall, because glibc can tabort before the sc).
> Verified that the transaction fails with TM_CAUSE_SYSCALL.
> 
> Signed-off-by: Nick Piggin <npiggin@gmail.com>
> Cc: Michael Neuling <mikey@neuling.org>

FWIW

Acked-by: Michael Neuling <mikey@neuling.org>

> Cc: Sam Bobroff <sam.bobroff@au1.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> 
> ---
> 
>  arch/powerpc/kernel/entry_64.S | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 73e461a..387dee3 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -368,13 +368,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
>  tabort_syscall:
>  	/* Firstly we need to enable TM in the kernel */
>  	mfmsr	r10
> -	li	r13, 1
> -	rldimi	r10, r13, MSR_TM_LG, 63-MSR_TM_LG
> -	mtmsrd	r10, 0
> +	li	r9,1
> +	rldimi	r10,r9,MSR_TM_LG,63-MSR_TM_LG
> +	mtmsrd	r10,0
>  
>  	/* tabort, this dooms the transaction, nothing else */
> -	li	r13, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
> -	TABORT(R13)
> +	li	r9,(TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
> +	TABORT(R9)
>  
>  	/*
>  	 * Return directly to userspace. We have corrupted user register state,
> @@ -382,11 +382,11 @@ tabort_syscall:
>  	 * resume after the tbegin of the aborted transaction with the
>  	 * checkpointed register state.
>  	 */
> -	li	r13, MSR_RI
> -	andc	r10, r10, r13
> -	mtmsrd	r10, 1
> -	mtspr	SPRN_SRR0, r11
> -	mtspr	SPRN_SRR1, r12
> +	li	r9,MSR_RI
> +	andc	r10,r10,r9
> +	mtmsrd	r10,1
> +	mtspr	SPRN_SRR0,r11
> +	mtspr	SPRN_SRR1,r12
>  
>  	rfid
>  	b	.	/* prevent speculative execution */

Patch

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 73e461a..387dee3 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -368,13 +368,13 @@  END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 tabort_syscall:
 	/* Firstly we need to enable TM in the kernel */
 	mfmsr	r10
-	li	r13, 1
-	rldimi	r10, r13, MSR_TM_LG, 63-MSR_TM_LG
-	mtmsrd	r10, 0
+	li	r9,1
+	rldimi	r10,r9,MSR_TM_LG,63-MSR_TM_LG
+	mtmsrd	r10,0
 
 	/* tabort, this dooms the transaction, nothing else */
-	li	r13, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
-	TABORT(R13)
+	li	r9,(TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
+	TABORT(R9)
 
 	/*
 	 * Return directly to userspace. We have corrupted user register state,
@@ -382,11 +382,11 @@  tabort_syscall:
 	 * resume after the tbegin of the aborted transaction with the
 	 * checkpointed register state.
 	 */
-	li	r13, MSR_RI
-	andc	r10, r10, r13
-	mtmsrd	r10, 1
-	mtspr	SPRN_SRR0, r11
-	mtspr	SPRN_SRR1, r12
+	li	r9,MSR_RI
+	andc	r10,r10,r9
+	mtmsrd	r10,1
+	mtspr	SPRN_SRR0,r11
+	mtspr	SPRN_SRR1,r12
 
 	rfid
 	b	.	/* prevent speculative execution */