
KVM: PPC: Book3S HV: Fix regression on big endian hosts

Message ID 20191215094900.46740-1-marcus@mc.pp.se
State Accepted
Series KVM: PPC: Book3S HV: Fix regression on big endian hosts

Commit Message

Marcus Comstedt Dec. 15, 2019, 9:49 a.m. UTC
VCPU_CR is the offset of arch.regs.ccr in kvm_vcpu.
arch/powerpc/include/asm/kvm_host.h defines arch.regs as a struct
pt_regs, and arch/powerpc/include/asm/ptrace.h defines the ccr field
of pt_regs as "unsigned long ccr".  Since unsigned long is 64 bits, a
64-bit load needs to be used to load it, unless an endianness specific
correction offset is added to access the desired subpart.  In this
case there is no reason to _not_ use a 64-bit load, though.

Signed-off-by: Marcus Comstedt <marcus@mc.pp.se>
---
This was tested on 5.4.3 on a Talos II (POWER9 Nimbus DD2.2)
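
To illustrate the endianness hazard described in the commit message, here is a
minimal C sketch (not part of the patch; the struct and names are hypothetical
stand-ins, not the real kernel definitions):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the 64-bit "unsigned long ccr" field of
 * struct pt_regs; the real kernel structures are not reproduced here.
 */
struct regs_sketch {
	uint64_t ccr;
};

int main(void)
{
	struct regs_sketch r = { .ccr = 0x00000000deadbeefULL };
	uint32_t word0;

	/* A 32-bit load at offset 0 (what lwz with the plain VCPU_CR offset
	 * does) returns the low word on little endian hosts but the high
	 * word (zero here) on big endian hosts, so the CR value is lost
	 * unless a +4 correction offset is applied on BE.
	 */
	memcpy(&word0, &r.ccr, sizeof(word0));
	printf("32-bit load at offset 0: 0x%08x\n", word0);

	/* A 64-bit load (ld) reads the whole field regardless of
	 * endianness, which is what the patch switches to.
	 */
	printf("64-bit load:             0x%016llx\n",
	       (unsigned long long)r.ccr);
	return 0;
}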

 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Michael Ellerman Dec. 18, 2019, 4:05 a.m. UTC | #1
On Sun, 2019-12-15 at 09:49:00 UTC, "Marcus Comstedt" wrote:
> VCPU_CR is the offset of arch.regs.ccr in kvm_vcpu.
> arch/powerpc/include/asm/kvm_host.h defines arch.regs as a struct
> pt_regs, and arch/powerpc/include/asm/ptrace.h defines the ccr field
> of pt_regs as "unsigned long ccr".  Since unsigned long is 64 bits, a
> 64-bit load needs to be used to load it, unless an endianness specific
> correction offset is added to access the desired subpart.  In this
> case there is no reason to _not_ use a 64-bit load, though.
> 
> Signed-off-by: Marcus Comstedt <marcus@mc.pp.se>

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/228b607d8ea1b7d4561945058d5692709099d432

cheers

Patch

diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 0496e66aaa56..c6fbbd29bd87 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1117,7 +1117,7 @@  END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	ld	r7, VCPU_GPR(R7)(r4)
 	bne	ret_to_ultra
 
-	lwz	r0, VCPU_CR(r4)
+	ld	r0, VCPU_CR(r4)
 	mtcr	r0
 
 	ld	r0, VCPU_GPR(R0)(r4)
@@ -1137,7 +1137,7 @@  END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
  *   R3 = UV_RETURN
  */
 ret_to_ultra:
-	lwz	r0, VCPU_CR(r4)
+	ld	r0, VCPU_CR(r4)
 	mtcr	r0
 
 	ld	r0, VCPU_GPR(R3)(r4)