| Message ID | 20190518142524.28528-9-cclaudio@linux.ibm.com |
|---|---|
| State | Superseded |
| Series | kvmppc: Paravirtualize KVM to support ultravisor |
On Sat, May 18, 2019 at 11:25:22AM -0300, Claudio Carvalho wrote:
> From: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
>
> All hcalls from a secure VM go to the ultravisor from where they are
> reflected into the HV. When we (HV) complete processing such hcalls,
> we should return to the UV rather than to the guest kernel.

This paragraph in the patch description, and the comment in
book3s_hv_rmhandlers.S, are confusing and possibly misleading in
focusing on returns from hcalls, when the change is needed for any sort
of entry to the guest from the hypervisor, whether it is a return from
an hcall, a return from a hypervisor interrupt, or the first time that
a guest vCPU is run. This paragraph needs to explain that to enter a
secure guest, we have to go through the ultravisor, therefore we do a
ucall when we are entering a secure guest.

[snip]

> +/*
> + * The hcall we just completed was from Ultravisor. Use UV_RETURN
> + * ultra call to return to the Ultravisor. Results from the hcall
> + * are already in the appropriate registers (r3:12), except for
> + * R6,7 which we used as temporary registers above. Restore them,
> + * and set R0 to the ucall number (UV_RETURN).
> + */

This needs to say something like "We are entering a secure guest, so we
have to invoke the ultravisor to do that. If we are returning from a
hcall, the results are already ...".

> +ret_to_ultra:
> +	lwz	r6, VCPU_CR(r4)
> +	mtcr	r6
> +	LOAD_REG_IMMEDIATE(r0, UV_RETURN)
> +	ld	r7, VCPU_GPR(R7)(r4)
> +	ld	r6, VCPU_GPR(R6)(r4)
> +	ld	r4, VCPU_GPR(R4)(r4)
> +	sc	2
>
> /*
>  * Enter the guest on a P9 or later system where we have exactly
> --
> 2.20.1

Paul.
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index e6b5bb012ccb..ba7dd35cb916 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -290,6 +290,7 @@ struct kvm_arch {
 	cpumask_t cpu_in_guest;
 	u8 radix;
 	u8 fwnmi_enabled;
+	u8 secure_guest;
 	bool threads_indep;
 	bool nested_enable;
 	pgd_t *pgtable;
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 24bfb4c1737e..15e6ce77a131 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -19,5 +19,6 @@
 
 /* opcodes */
 #define UV_WRITE_PATE	0xF104
+#define UV_RETURN	0xF11C
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 8e02444e9d3d..44742724513e 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -508,6 +508,7 @@ int main(void)
 	OFFSET(KVM_VRMA_SLB_V, kvm, arch.vrma_slb_v);
 	OFFSET(KVM_RADIX, kvm, arch.radix);
 	OFFSET(KVM_FWNMI, kvm, arch.fwnmi_enabled);
+	OFFSET(KVM_SECURE_GUEST, kvm, arch.secure_guest);
 	OFFSET(VCPU_DSISR, kvm_vcpu, arch.shregs.dsisr);
 	OFFSET(VCPU_DAR, kvm_vcpu, arch.shregs.dar);
 	OFFSET(VCPU_VPA, kvm_vcpu, arch.vpa.pinned_addr);
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 938cfa5dceed..d89efa0783a2 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -36,6 +36,7 @@
 #include <asm/asm-compat.h>
 #include <asm/feature-fixups.h>
 #include <asm/cpuidle.h>
+#include <asm/ultravisor-api.h>
 
 /* Sign-extend HDEC if not on POWER9 */
 #define EXTEND_HDEC(reg)	\
@@ -1112,16 +1113,12 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 
 	ld	r5, VCPU_LR(r4)
-	ld	r6, VCPU_CR(r4)
 	mtlr	r5
-	mtcr	r6
 
 	ld	r1, VCPU_GPR(R1)(r4)
 	ld	r2, VCPU_GPR(R2)(r4)
 	ld	r3, VCPU_GPR(R3)(r4)
 	ld	r5, VCPU_GPR(R5)(r4)
-	ld	r6, VCPU_GPR(R6)(r4)
-	ld	r7, VCPU_GPR(R7)(r4)
 	ld	r8, VCPU_GPR(R8)(r4)
 	ld	r9, VCPU_GPR(R9)(r4)
 	ld	r10, VCPU_GPR(R10)(r4)
@@ -1139,10 +1136,35 @@ BEGIN_FTR_SECTION
 	mtspr	SPRN_HDSISR, r0
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 
+	ld	r6, VCPU_KVM(r4)
+	lbz	r7, KVM_SECURE_GUEST(r6)
+	cmpdi	r7, 0
+	bne	ret_to_ultra
+
+	lwz	r6, VCPU_CR(r4)
+	mtcr	r6
+
+	ld	r7, VCPU_GPR(R7)(r4)
+	ld	r6, VCPU_GPR(R6)(r4)
 	ld	r0, VCPU_GPR(R0)(r4)
 	ld	r4, VCPU_GPR(R4)(r4)
 	HRFI_TO_GUEST
 	b	.
+/*
+ * The hcall we just completed was from Ultravisor. Use UV_RETURN
+ * ultra call to return to the Ultravisor. Results from the hcall
+ * are already in the appropriate registers (r3:12), except for
+ * R6,7 which we used as temporary registers above. Restore them,
+ * and set R0 to the ucall number (UV_RETURN).
+ */
+ret_to_ultra:
+	lwz	r6, VCPU_CR(r4)
+	mtcr	r6
+	LOAD_REG_IMMEDIATE(r0, UV_RETURN)
+	ld	r7, VCPU_GPR(R7)(r4)
+	ld	r6, VCPU_GPR(R6)(r4)
+	ld	r4, VCPU_GPR(R4)(r4)
+	sc	2
 
 /*
  * Enter the guest on a P9 or later system where we have exactly