From: Michael Neuling
To: Benjamin Herrenschmidt
Cc: Michael Neuling, linuxppc-dev@lists.ozlabs.org, Matt Evans
Subject: [PATCH 08/16] powerpc: Add FP/VSX and VMX register load functions for transactional memory
Date: Tue, 27 Nov 2012 13:48:00 +1100
Message-Id: <1353984488-1283-9-git-send-email-mikey@neuling.org>
In-Reply-To: <1353984488-1283-1-git-send-email-mikey@neuling.org>
References: <1353984488-1283-1-git-send-email-mikey@neuling.org>

This adds functions to restore the state of the FP/VSX registers from
what's stored in the thread_struct. Two versions are required for FP/VSX:
one restores the registers from the transactional/checkpointed side of
the thread_struct, and the other restores them from the speculated side.
Similar functions are added for the VMX registers.

Signed-off-by: Matt Evans
Signed-off-by: Michael Neuling
---
 arch/powerpc/kernel/fpu.S    | 54 ++++++++++++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/vector.S | 51 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 105 insertions(+)

diff --git a/arch/powerpc/kernel/fpu.S b/arch/powerpc/kernel/fpu.S
index adb1551..6ab0e87 100644
--- a/arch/powerpc/kernel/fpu.S
+++ b/arch/powerpc/kernel/fpu.S
@@ -62,6 +62,60 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX);					\
 	__REST_32FPVSRS_TRANSACT(n,__REG_##c,__REG_##base)
 #define SAVE_32FPVSRS(n,c,base)	__SAVE_32FPVSRS(n,__REG_##c,__REG_##base)
 
+#ifdef CONFIG_TRANSACTIONAL_MEM
+/*
+ * Wrapper to call load_up_fpu from C.
+ * void do_load_up_fpu(struct pt_regs *regs);
+ */
+_GLOBAL(do_load_up_fpu)
+	mflr	r0
+	std	r0, 16(r1)
+	stdu	r1, -112(r1)
+
+	subi	r6, r3, STACK_FRAME_OVERHEAD
+	/* load_up_fpu expects r12=MSR, r13=PACA, and returns
+	 * with r12 = new MSR.
+	 */
+	ld	r12,_MSR(r6)
+	GET_PACA(r13)
+
+	bl	load_up_fpu
+	std	r12,_MSR(r6)
+
+	ld	r0, 112+16(r1)
+	addi	r1, r1, 112
+	mtlr	r0
+	blr
+
+
+/* void do_load_up_transact_fpu(struct thread_struct *thread)
+ *
+ * This is similar to load_up_fpu but for the transactional version of the FP
+ * register set. It doesn't mess with the task MSR or valid flags.
+ * Furthermore, we don't do lazy FP with TM currently.
+ */
+_GLOBAL(do_load_up_transact_fpu)
+	mfmsr	r6
+	ori	r5,r6,MSR_FP
+#ifdef CONFIG_VSX
+BEGIN_FTR_SECTION
+	oris	r5,r5,MSR_VSX@h
+END_FTR_SECTION_IFSET(CPU_FTR_VSX)
+#endif
+	SYNC
+	MTMSRD(r5)
+
+	lfd	fr0,THREAD_TRANSACT_FPSCR(r3)
+	MTFSF_L(fr0)
+	REST_32FPVSRS_TRANSACT(0, R4, R3)
+
+	/* FP/VSX off again */
+	MTMSRD(r6)
+	SYNC
+
+	blr
+#endif /* CONFIG_TRANSACTIONAL_MEM */
+
 /*
  * This task wants to use the FPU now.
  * On UP, disable FP for the task which had the FPU previously,
diff --git a/arch/powerpc/kernel/vector.S b/arch/powerpc/kernel/vector.S
index e830289..330fc8c 100644
--- a/arch/powerpc/kernel/vector.S
+++ b/arch/powerpc/kernel/vector.S
@@ -7,6 +7,57 @@
 #include <asm/page.h>
 #include <asm/ptrace.h>
 
+#ifdef CONFIG_TRANSACTIONAL_MEM
+/*
+ * Wrapper to call load_up_altivec from C.
+ * void do_load_up_altivec(struct pt_regs *regs);
+ */
+_GLOBAL(do_load_up_altivec)
+	mflr	r0
+	std	r0, 16(r1)
+	stdu	r1, -112(r1)
+
+	subi	r6, r3, STACK_FRAME_OVERHEAD
+	/* load_up_altivec expects r12=MSR, r13=PACA, and returns
+	 * with r12 = new MSR.
+	 */
+	ld	r12,_MSR(r6)
+	GET_PACA(r13)
+	bl	load_up_altivec
+	std	r12,_MSR(r6)
+
+	ld	r0, 112+16(r1)
+	addi	r1, r1, 112
+	mtlr	r0
+	blr
+
+/* void do_load_up_transact_altivec(struct thread_struct *thread)
+ *
+ * This is similar to load_up_altivec but for the transactional version of the
+ * vector regs. It doesn't mess with the task MSR or valid flags.
+ * Furthermore, VEC laziness is not supported with TM currently.
+ */
+_GLOBAL(do_load_up_transact_altivec)
+	mfmsr	r6
+	oris	r5,r6,MSR_VEC@h
+	MTMSRD(r5)
+	isync
+
+	li	r4,1
+	stw	r4,THREAD_USED_VR(r3)
+
+	li	r10,THREAD_TRANSACT_VSCR
+	lvx	vr0,r10,r3
+	mtvscr	vr0
+	REST_32VRS_TRANSACT(0,r4,r3)
+
+	/* Disable VEC again. */
+	MTMSRD(r6)
+	isync
+
+	blr
+#endif
+
 /*
  * load_up_altivec(unused, unused, tsk)
  * Disable VMX for the task which had it previously,
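
For reviewers, a minimal sketch of how a C caller (e.g. a TM recheckpoint
path elsewhere in the series) might drive the two transactional loaders.
The helper name tm_restore_checkpointed_regs and the orig_msr parameter
are made up for illustration; only the do_load_up_transact_* entry points
come from this patch.

/* Hypothetical caller sketch -- NOT part of this patch.  Reloads the
 * checkpointed FP/VSX and VMX state for a thread whose pre-transaction
 * MSR is orig_msr.
 */
#include <asm/processor.h>	/* struct thread_struct */
#include <asm/reg.h>		/* MSR_FP, MSR_VEC */

extern void do_load_up_transact_fpu(struct thread_struct *thread);
extern void do_load_up_transact_altivec(struct thread_struct *thread);

static void tm_restore_checkpointed_regs(struct thread_struct *thread,
					 unsigned long orig_msr)
{
	/* Only reload FP/VSX if the thread actually had FP enabled. */
	if (orig_msr & MSR_FP)
		do_load_up_transact_fpu(thread);
#ifdef CONFIG_ALTIVEC
	/* Likewise for the checkpointed VMX (Altivec) state. */
	if (orig_msr & MSR_VEC)
		do_load_up_transact_altivec(thread);
#endif
}

Gating each call on the relevant MSR bit follows the usual convention of
only touching register sets the thread had in use.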