From patchwork Tue Aug 6 04:23:37 2013
X-Patchwork-Submitter: Paul Mackerras
X-Patchwork-Id: 264865
Date: Tue, 6 Aug 2013 14:23:37 +1000
From: Paul Mackerras
To: Alexander Graf, Benjamin Herrenschmidt
Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 14/23] KVM: PPC: Book3S PR: Delay disabling relocation-on interrupts
Message-ID: <20130806042337.GT19254@iris.ozlabs.ibm.com>
In-Reply-To: <20130806041259.GF19254@iris.ozlabs.ibm.com>
List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org

When we are running a PR KVM guest on POWER8, we have to disable the
new POWER8 feature of taking interrupts with relocation on, that is,
of taking interrupts without disabling the MMU, because the SLB does
not contain the normal kernel SLB entries while in the guest.

Currently we disable relocation-on interrupts when a PR guest is
created, and leave them disabled until there are no more PR guests in
existence.  This patch instead defers the disabling of relocation-on
interrupts until the first time a PR KVM guest vcpu is run.
The reason is that in future we will support both PR and HV guests in
the same kernel, and this will avoid disabling relocation-on
interrupts unnecessarily for guests which turn out to be HV guests, as
we will not know at VM creation time whether the guest will be a PR or
an HV guest.

Signed-off-by: Paul Mackerras
---
 arch/powerpc/include/asm/kvm_host.h |  1 +
 arch/powerpc/kvm/book3s_pr.c        | 71 ++++++++++++++++++++++++++-----------
 2 files changed, 52 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 4d83972..c012db2 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -264,6 +264,7 @@ struct kvm_arch {
 #endif /* CONFIG_KVM_BOOK3S_64_HV */
 #ifdef CONFIG_KVM_BOOK3S_PR
 	struct mutex hpt_mutex;
+	bool relon_disabled;
 #endif
 #ifdef CONFIG_PPC_BOOK3S_64
 	struct list_head spapr_tce_tables;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 5b06a70..2759ddc 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1197,6 +1197,47 @@ void kvmppc_core_vcpu_free(struct kvm_vcpu *vcpu)
 	kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
+/*
+ * On POWER8, we have to disable relocation-on interrupts while
+ * we are in the guest, since the guest doesn't have the normal
+ * kernel SLB contents.  Since disabling relocation-on interrupts
+ * is a fairly heavy-weight operation, we do it once when starting
+ * the first guest vcpu and leave it disabled until the last guest
+ * has been destroyed.
+ */
+static unsigned int kvm_global_user_count = 0;
+static DEFINE_SPINLOCK(kvm_global_user_count_lock);
+
+static void disable_relon_interrupts(struct kvm *kvm)
+{
+	mutex_lock(&kvm->lock);
+	if (!kvm->arch.relon_disabled) {
+		if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
+			spin_lock(&kvm_global_user_count_lock);
+			if (++kvm_global_user_count == 1)
+				pSeries_disable_reloc_on_exc();
+			spin_unlock(&kvm_global_user_count_lock);
+		}
+		/* order disabling above with setting relon_disabled */
+		smp_mb();
+		kvm->arch.relon_disabled = true;
+	}
+	mutex_unlock(&kvm->lock);
+}
+
+static void enable_relon_interrupts(struct kvm *kvm)
+{
+	if (kvm->arch.relon_disabled &&
+	    firmware_has_feature(FW_FEATURE_SET_MODE)) {
+		spin_lock(&kvm_global_user_count_lock);
+		BUG_ON(kvm_global_user_count == 0);
+		if (--kvm_global_user_count == 0)
+			pSeries_enable_reloc_on_exc();
+		spin_unlock(&kvm_global_user_count_lock);
+	}
+	kvm->arch.relon_disabled = false;
+}
+
 int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 {
 	int ret;
@@ -1234,6 +1275,9 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
+	if (!vcpu->kvm->arch.relon_disabled)
+		disable_relon_interrupts(vcpu->kvm);
+
 	/* Save FPU state in stack */
 	if (current->thread.regs->msr & MSR_FP)
 		giveup_fpu(current);
@@ -1400,9 +1444,6 @@ void kvmppc_core_flush_memslot(struct kvm *kvm,
 			      struct kvm_memory_slot *memslot)
 {
 }
 
-static unsigned int kvm_global_user_count = 0;
-static DEFINE_SPINLOCK(kvm_global_user_count_lock);
-
 int kvmppc_core_init_vm(struct kvm *kvm)
 {
 #ifdef CONFIG_PPC64
@@ -1411,28 +1452,18 @@ int kvmppc_core_init_vm(struct kvm *kvm)
 #endif
 	mutex_init(&kvm->arch.hpt_mutex);
 
-	if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
-		spin_lock(&kvm_global_user_count_lock);
-		if (++kvm_global_user_count == 1)
-			pSeries_disable_reloc_on_exc();
-		spin_unlock(&kvm_global_user_count_lock);
-	}
+	/*
+	 * If we don't have relocation-on interrupts at all,
+	 * then we can consider them to be already disabled.
+	 */
+	kvm->arch.relon_disabled = !firmware_has_feature(FW_FEATURE_SET_MODE);
+
 	return 0;
 }
 
 void kvmppc_core_destroy_vm(struct kvm *kvm)
 {
-#ifdef CONFIG_PPC64
-	WARN_ON(!list_empty(&kvm->arch.spapr_tce_tables));
-#endif
-
-	if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
-		spin_lock(&kvm_global_user_count_lock);
-		BUG_ON(kvm_global_user_count == 0);
-		if (--kvm_global_user_count == 0)
-			pSeries_enable_reloc_on_exc();
-		spin_unlock(&kvm_global_user_count_lock);
-	}
+	enable_relon_interrupts(kvm);
 }
 
 static int kvmppc_book3s_init(void)