From patchwork Fri Sep 21 05:33:55 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paul Mackerras
X-Patchwork-Id: 185577
Date: Fri, 21 Sep 2012 15:33:55 +1000
From: Paul Mackerras
To: Alexander Graf
Cc: kvm@vger.kernel.org, kvm-ppc@vger.kernel.org
Subject: [PATCH 01/10] KVM: PPC: Book3S HV: Provide a way for userspace to
 get/set per-vCPU areas
Message-ID: <20120921053354.GB15685@drongo>
References: <20120921051606.GA15685@drongo>
Content-Disposition: inline
In-Reply-To: <20120921051606.GA15685@drongo>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: kvm-ppc@vger.kernel.org

The PAPR paravirtualization interface lets a guest register three
different types of per-vCPU buffer areas in its memory for
communication with the hypervisor.  These are called virtual processor
areas (VPAs).  Currently the hypercalls to register and unregister
VPAs are handled by KVM in the kernel, and userspace has no way to
know about these registrations or to save and restore them across a
migration.  This patch adds get and set ioctls to allow userspace to
see what addresses have been registered, and to register or unregister
them.
This will be needed for guest hibernation and migration, and is also
needed so that userspace can unregister the areas on reset (otherwise
we would corrupt guest memory after reboot by writing to the VPAs
registered by the previous kernel).  We also add a capability to
indicate that the new ioctls are supported.

This also fixes a bug where init_vpa was called unconditionally,
leading to an oops when unregistering the VPA.

Signed-off-by: Paul Mackerras
---
 Documentation/virtual/kvm/api.txt  | 32 ++++++++++++++++++++
 arch/powerpc/include/asm/kvm_ppc.h |  3 ++
 arch/powerpc/kvm/book3s_hv.c       | 54 +++++++++++++++++++++++++++++++++++-
 arch/powerpc/kvm/powerpc.c         | 27 ++++++++++++++++
 include/linux/kvm.h                | 11 +++++++
 5 files changed, 126 insertions(+), 1 deletion(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index a12f4e4..76a07a6 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1992,6 +1992,38 @@
 return the hash table order in the parameter.  (If the guest is using
 the virtualized real-mode area (VRMA) facility, the kernel will
 re-create the VRMA HPTEs on the next KVM_RUN of any vcpu.)
+4.77 KVM_PPC_GET_VPA_INFO
+
+Capability: KVM_CAP_PPC_VPA
+Architectures: powerpc
+Type: vcpu ioctl
+Parameters: Pointer to struct kvm_ppc_vpa (out)
+Returns: 0 on success, -1 on error
+
+This populates and returns a structure containing the guest physical
+addresses and sizes of the three per-virtual-processor areas that the
+guest can register with the hypervisor under the PAPR
+paravirtualization interface, namely the Virtual Processor Area, the
+SLB (Segment Lookaside Buffer) Shadow Area, and the Dispatch Trace
+Log.
+
+4.78 KVM_PPC_SET_VPA_INFO
+
+Capability: KVM_CAP_PPC_VPA
+Architectures: powerpc
+Type: vcpu ioctl
+Parameters: Pointer to struct kvm_ppc_vpa (in)
+Returns: 0 on success, -1 on error
+
+This sets the guest physical addresses and sizes of the three
+per-virtual-processor areas that the guest can register with the
+hypervisor under the PAPR paravirtualization interface, namely the
+Virtual Processor Area, the SLB (Segment Lookaside Buffer) Shadow
+Area, and the Dispatch Trace Log.  Providing an address of zero for
+any of these areas causes the kernel to unregister any previously
+registered area; a non-zero address replaces any previously registered
+area.
+
 
 5. The kvm_run structure
 ------------------------
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 3fb980d..2c94cb3 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -205,6 +205,9 @@ int kvmppc_set_sregs_ivor(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs);
 int kvm_vcpu_ioctl_get_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg);
 int kvm_vcpu_ioctl_set_one_reg(struct kvm_vcpu *vcpu, struct kvm_one_reg *reg);
 
+int kvm_vcpu_get_vpa_info(struct kvm_vcpu *vcpu, struct kvm_ppc_vpa *vpa);
+int kvm_vcpu_set_vpa_info(struct kvm_vcpu *vcpu, struct kvm_ppc_vpa *vpa);
+
 void kvmppc_set_pid(struct kvm_vcpu *vcpu, u32 pid);
 
 #ifdef CONFIG_KVM_BOOK3S_64_HV
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 38c7f1b..bebf9cb 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -143,6 +143,57 @@ static void init_vpa(struct kvm_vcpu *vcpu, struct lppaca *vpa)
 	vpa->yield_count = 1;
 }
 
+int kvm_vcpu_get_vpa_info(struct kvm_vcpu *vcpu, struct kvm_ppc_vpa *vpa)
+{
+	spin_lock(&vcpu->arch.vpa_update_lock);
+	vpa->vpa_addr = vcpu->arch.vpa.next_gpa;
+	vpa->slb_shadow_addr = vcpu->arch.slb_shadow.next_gpa;
+	vpa->slb_shadow_size = vcpu->arch.slb_shadow.len;
+	vpa->dtl_addr = vcpu->arch.dtl.next_gpa;
+	vpa->dtl_size = vcpu->arch.dtl.len;
+	spin_unlock(&vcpu->arch.vpa_update_lock);
+	return 0;
+}
+
+static inline void set_vpa(struct kvmppc_vpa *v, unsigned long addr,
+			   unsigned long len)
+{
+	if (v->next_gpa != addr || v->len != len) {
+		v->next_gpa = addr;
+		v->len = addr ? len : 0;
+		v->update_pending = 1;
+	}
+}
+
+int kvm_vcpu_set_vpa_info(struct kvm_vcpu *vcpu, struct kvm_ppc_vpa *vpa)
+{
+	/* check that addresses are cacheline aligned */
+	if ((vpa->vpa_addr & (L1_CACHE_BYTES - 1)) ||
+	    (vpa->slb_shadow_addr & (L1_CACHE_BYTES - 1)) ||
+	    (vpa->dtl_addr & (L1_CACHE_BYTES - 1)))
+		return -EINVAL;
+
+	/* DTL must be at least 1 entry long, if being set */
+	if (vpa->dtl_addr) {
+		if (vpa->dtl_size < sizeof(struct dtl_entry))
+			return -EINVAL;
+		vpa->dtl_size -= vpa->dtl_size % sizeof(struct dtl_entry);
+	}
+
+	/* DTL and SLB shadow require VPA */
+	if (!vpa->vpa_addr && (vpa->slb_shadow_addr || vpa->dtl_addr))
+		return -EINVAL;
+
+	spin_lock(&vcpu->arch.vpa_update_lock);
+	set_vpa(&vcpu->arch.vpa, vpa->vpa_addr, sizeof(struct lppaca));
+	set_vpa(&vcpu->arch.slb_shadow, vpa->slb_shadow_addr,
+		vpa->slb_shadow_size);
+	set_vpa(&vcpu->arch.dtl, vpa->dtl_addr, vpa->dtl_size);
+	spin_unlock(&vcpu->arch.vpa_update_lock);
+
+	return 0;
+}
+
 /* Length for a per-processor buffer is passed in at offset 4 in the buffer */
 struct reg_vpa {
 	u32 dummy;
@@ -321,7 +372,8 @@ static void kvmppc_update_vpas(struct kvm_vcpu *vcpu)
 	spin_lock(&vcpu->arch.vpa_update_lock);
 	if (vcpu->arch.vpa.update_pending) {
 		kvmppc_update_vpa(vcpu, &vcpu->arch.vpa);
-		init_vpa(vcpu, vcpu->arch.vpa.pinned_addr);
+		if (vcpu->arch.vpa.pinned_addr)
+			init_vpa(vcpu, vcpu->arch.vpa.pinned_addr);
 	}
 	if (vcpu->arch.dtl.update_pending) {
 		kvmppc_update_vpa(vcpu, &vcpu->arch.dtl);
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 8443e23..2b08564 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -349,6 +349,12 @@ int kvm_dev_ioctl_check_extension(long ext)
 		r = 1;
 #else
 		r = 0;
+		break;
+#endif
+#ifdef CONFIG_KVM_BOOK3S_64_HV
+	case KVM_CAP_PPC_VPA:
+		r = 1;
+		break;
 #endif
 		break;
 	case KVM_CAP_NR_VCPUS:
@@ -826,6 +832,27 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		break;
 	}
 #endif
+#ifdef CONFIG_KVM_BOOK3S_64_HV
+	case KVM_PPC_GET_VPA_INFO: {
+		struct kvm_ppc_vpa vpa;
+		r = kvm_vcpu_get_vpa_info(vcpu, &vpa);
+		if (r)
+			break;
+		r = -EFAULT;
+		if (copy_to_user(argp, &vpa, sizeof(vpa)))
+			break;
+		r = 0;
+		break;
+	}
+	case KVM_PPC_SET_VPA_INFO: {
+		struct kvm_ppc_vpa vpa;
+		r = -EFAULT;
+		if (copy_from_user(&vpa, argp, sizeof(vpa)))
+			break;
+		r = kvm_vcpu_set_vpa_info(vcpu, &vpa);
+		break;
+	}
+#endif
 	default:
 		r = -EINVAL;
 	}
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 99c3c50..e7509bd 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -629,6 +629,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_READONLY_MEM 81
 #endif
 #define KVM_CAP_PPC_BOOKE_WATCHDOG 82
+#define KVM_CAP_PPC_VPA 83
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -915,6 +916,8 @@ struct kvm_s390_ucas_mapping {
 #define KVM_SET_ONE_REG		  _IOW(KVMIO,  0xac, struct kvm_one_reg)
 /* VM is being stopped by host */
 #define KVM_KVMCLOCK_CTRL	  _IO(KVMIO,   0xad)
+#define KVM_PPC_GET_VPA_INFO	  _IOR(KVMIO,  0xae, struct kvm_ppc_vpa)
+#define KVM_PPC_SET_VPA_INFO	  _IOW(KVMIO,  0xaf, struct kvm_ppc_vpa)
 
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
@@ -966,4 +969,12 @@ struct kvm_assigned_msix_entry {
 	__u16 padding[3];
 };
 
+struct kvm_ppc_vpa {
+	__u64 vpa_addr;
+	__u64 slb_shadow_addr;
+	__u64 dtl_addr;
+	__u32 slb_shadow_size;
+	__u32 dtl_size;
+};
+
 #endif /* __LINUX_KVM_H */

Note: compared to the version posted earlier, a missing break has been added
after the "r = 0;" in the KVM_PPC_GET_VPA_INFO case so that a successful get
no longer falls through into KVM_PPC_SET_VPA_INFO; the hunk header and
diffstat counts above reflect the extra line.