From patchwork Mon Aug 6 10:08:16 2012
X-Patchwork-Submitter: Paul Mackerras
X-Patchwork-Id: 175310
Date: Mon, 6 Aug 2012 20:08:16 +1000
From: Paul Mackerras
To: Alexander Graf, kvm-ppc@vger.kernel.org
Cc: kvm@vger.kernel.org
Subject: [RFC PATCH 5/5] KVM: PPC: Take the SRCU lock around memslot use
Message-ID: <20120806100816.GF8980@bloggs.ozlabs.ibm.com>
In-Reply-To: <20120806100207.GA8980@bloggs.ozlabs.ibm.com>
References: <20120806100207.GA8980@bloggs.ozlabs.ibm.com>
List-ID: kvm-ppc@vger.kernel.org

The generic KVM code uses SRCU (sleepable RCU) to protect accesses to
the memslots data structures against updates due to userspace adding,
modifying or removing memory slots.  We need to do that too, both to
avoid accessing stale copies of the memslots and to avoid lockdep
warnings.  This therefore adds srcu_read_lock/unlock pairs around code
that accesses and uses memslots in the Book 3S PR code and the Book E
(44x and e500) code.

Signed-off-by: Paul Mackerras
---
Compile-tested only.
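The bracketing discipline the patch applies in each file can be sketched in
plain C with stub functions.  Note these are illustrative stand-ins, not the
kernel API: mock_srcu, srcu_read_lock_mock and srcu_read_unlock_mock only
model the shape of the real srcu_read_lock/srcu_read_unlock calls, in which
the index returned by the lock must be handed back to the matching unlock.

```c
#include <assert.h>

/* Hypothetical stand-in for the kernel's SRCU state, illustration only. */
struct mock_srcu {
	int readers;	/* readers currently inside a read-side section */
};

/* Models srcu_read_lock(): enter a read-side critical section and
 * return a token that must be passed back to the matching unlock. */
static int srcu_read_lock_mock(struct mock_srcu *sp)
{
	sp->readers++;
	return sp->readers - 1;
}

/* Models srcu_read_unlock(): leave the section identified by idx. */
static void srcu_read_unlock_mock(struct mock_srcu *sp, int idx)
{
	(void)idx;
	sp->readers--;
}

/* The pattern the patch adds around each memslot user: take the read
 * lock, keep the returned index, do the memslot lookups, then unlock
 * with that same index on every exit path. */
static int access_memslots(struct mock_srcu *sp)
{
	int srcu_idx = srcu_read_lock_mock(sp);
	int readers_during = sp->readers;	/* memslot lookups happen here */

	srcu_read_unlock_mock(sp, srcu_idx);
	return readers_during;
}
```

The important property, visible in the hunks below, is that the unlock is
placed after the last memslot use on the function's common exit path, so no
path can return while still inside the read-side section.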
 arch/powerpc/kvm/44x_tlb.c   | 6 ++++++
 arch/powerpc/kvm/book3s_pr.c | 6 ++++++
 arch/powerpc/kvm/e500_tlb.c  | 6 ++++++
 3 files changed, 18 insertions(+)

diff --git a/arch/powerpc/kvm/44x_tlb.c b/arch/powerpc/kvm/44x_tlb.c
index 33aa715..7dcce4e 100644
--- a/arch/powerpc/kvm/44x_tlb.c
+++ b/arch/powerpc/kvm/44x_tlb.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -442,6 +443,7 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
 	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
 	struct kvmppc_44x_tlbe *tlbe;
 	unsigned int gtlb_index;
+	int srcu_idx;
 
 	gtlb_index = kvmppc_get_gpr(vcpu, ra);
 	if (gtlb_index >= KVM44x_GUEST_TLB_SIZE) {
@@ -474,6 +476,8 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
 		return EMULATE_FAIL;
 	}
 
+	srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+
 	if (tlbe_is_host_safe(vcpu, tlbe)) {
 		gva_t eaddr;
 		gpa_t gpaddr;
@@ -490,6 +494,8 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
 		kvmppc_mmu_map(vcpu, eaddr, gpaddr, gtlb_index);
 	}
 
+	srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
+
 	trace_kvm_gtlb_write(gtlb_index, tlbe->tid, tlbe->word0,
 			     tlbe->word1, tlbe->word2);
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index c5e0062..a786730 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -290,6 +291,7 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	bool dr = (vcpu->arch.shared->msr & MSR_DR) ? true : false;
 	bool ir = (vcpu->arch.shared->msr & MSR_IR) ? true : false;
 	u64 vsid;
+	int srcu_idx;
 
 	relocated = data ? dr : ir;
@@ -334,6 +336,8 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		pte.may_execute = !data;
 	}
 
+	srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+
 	if (page_found == -ENOENT) {
 		/* Page not found in guest PTE entries */
 		struct kvmppc_book3s_shadow_vcpu *svcpu = svcpu_get(vcpu);
@@ -376,6 +380,8 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		r = RESUME_HOST;
 	}
 
+	srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
+
 	return r;
 }
diff --git a/arch/powerpc/kvm/e500_tlb.c b/arch/powerpc/kvm/e500_tlb.c
index c8f6c58..895dc31 100644
--- a/arch/powerpc/kvm/e500_tlb.c
+++ b/arch/powerpc/kvm/e500_tlb.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "e500.h"
@@ -858,6 +859,7 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
 	struct kvm_book3e_206_tlb_entry *gtlbe, stlbe;
 	int tlbsel, esel, stlbsel, sesel;
 	int recal = 0;
+	int srcu_idx;
 
 	tlbsel = get_tlb_tlbsel(vcpu);
 	esel = get_tlb_esel(vcpu, tlbsel);
@@ -890,6 +892,8 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
 		kvmppc_set_tlb1map_range(vcpu, gtlbe);
 	}
 
+	srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+
 	/* Invalidate shadow mappings for the about-to-be-clobbered TLBE. */
 	if (tlbe_is_host_safe(vcpu, gtlbe)) {
 		u64 eaddr;
@@ -928,6 +932,8 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
 		write_stlbe(vcpu_e500, gtlbe, &stlbe, stlbsel, sesel);
 	}
 
+	srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
+
 	kvmppc_set_exit_type(vcpu, EMULATED_TLBWE_EXITS);
 	return EMULATE_DONE;
 }