From patchwork Mon Oct 7 12:04:47 2013
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 281096
Message-ID: <5252A35F.1000502@redhat.com>
Date: Mon, 07 Oct 2013 14:04:47 +0200
From: Paolo Bonzini
To: Alexander Graf
CC: Bharat Bhushan, Paul Mackerras, Scott Wood, kvm-ppc@vger.kernel.org,
 "kvm@vger.kernel.org mailing list", Bharat Bhushan, Gleb Natapov
Subject: Re: [PATCH 2/2] kvm: ppc: booke: check range page invalidation
 progress on page setup
References: <1375869826-17509-1-git-send-email-Bharat.Bhushan@freescale.com>
 <1375869826-17509-3-git-send-email-Bharat.Bhushan@freescale.com>
X-Mailing-List: kvm-ppc@vger.kernel.org

On 04/10/2013 15:38, Alexander Graf wrote:
> 
> On 07.08.2013, at 12:03, Bharat Bhushan wrote:
> 
>> When the MM code is invalidating a range of pages, it calls the KVM
>> kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
>> kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
>> However, the Linux PTEs for the range being flushed are still valid at
>> that point.  We are not supposed to establish any new references to pages
>> in the range until the ...range_end() notifier gets called.
>> The PPC-specific KVM code doesn't get any explicit notification of that;
>> instead, we are supposed to use mmu_notifier_retry() to test whether we
>> are, or have been, inside a range flush notifier pair while we have been
>> referencing a page.
>>
>> This patch calls mmu_notifier_retry() while mapping the guest
>> page, to ensure we are not referencing a page while a range
>> invalidation is in progress.
>>
>> This call is inside a region locked with kvm->mmu_lock, which is the
>> same lock that is taken by the KVM MMU notifier functions, thus
>> ensuring that no new notification can proceed while we are in the
>> locked region.
>>
>> Signed-off-by: Bharat Bhushan
> 
> Acked-by: Alexander Graf
> 
> Gleb, Paolo, please queue for 3.12 directly.

Here is the backport.  The second hunk has a nontrivial conflict, so
someone please give their {Tested,Reviewed,Compiled}-by.

Paolo

---

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 1c6a9d7..c65593a 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -332,6 +332,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	unsigned long hva;
 	int pfnmap = 0;
 	int tsize = BOOK3E_PAGESZ_4K;
+	int ret = 0;
+	unsigned long mmu_seq;
+	struct kvm *kvm = vcpu_e500->vcpu.kvm;
+
+	/* used to check for invalidations in progress */
+	mmu_seq = kvm->mmu_notifier_seq;
+	smp_rmb();
 
 	/*
 	 * Translate guest physical to true physical, acquiring
@@ -449,6 +456,12 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
 	}
 
+	spin_lock(&kvm->mmu_lock);
+	if (mmu_notifier_retry(kvm, mmu_seq)) {
+		ret = -EAGAIN;
+		goto out;
+	}
+
 	kvmppc_e500_ref_setup(ref, gtlbe, pfn);
 
 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
@@ -457,10 +470,13 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 
 	/* Clear i-cache for new pages */
 	kvmppc_mmu_flush_icache(pfn);
 
+out:
+	spin_unlock(&kvm->mmu_lock);
+
 	/* Drop refcount on page, so that mmu notifiers can clear it */
 	kvm_release_pfn_clean(pfn);
 
-	return 0;
+	return ret;
 }
 
 /* XXX only map the one-one case, for now use TLB0 */
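
For reference, the hunks above follow the standard KVM retry idiom
described in the commit message.  Here is a condensed sketch of that
protocol, not the literal e500 code: shadow_map_one_page() and
kvmppc_install_stlbe() are made-up names standing in for
kvmppc_e500_shadow_map() and the
kvmppc_e500_ref_setup()/kvmppc_e500_setup_stlbe() pair.

	static int shadow_map_one_page(struct kvm *kvm, gfn_t gfn)
	{
		unsigned long mmu_seq;
		pfn_t pfn;
		int ret = 0;

		/*
		 * Snapshot the notifier sequence count *before* touching
		 * the Linux PTEs; the read barrier pairs with the updates
		 * done by the invalidate_range_start/_end notifiers.
		 */
		mmu_seq = kvm->mmu_notifier_seq;
		smp_rmb();

		/*
		 * This may reference a PTE that a concurrent range
		 * invalidation is about to tear down.
		 */
		pfn = gfn_to_pfn(kvm, gfn);
		if (is_error_pfn(pfn))
			return -EFAULT;

		/*
		 * Re-check under mmu_lock: if a range_start/range_end
		 * pair ran (or is still running) since the snapshot, the
		 * pfn may be stale; bail out and let the guest retry the
		 * fault instead of installing a stale translation.
		 */
		spin_lock(&kvm->mmu_lock);
		if (mmu_notifier_retry(kvm, mmu_seq)) {
			ret = -EAGAIN;
			goto out;
		}
		kvmppc_install_stlbe(kvm, gfn, pfn);	/* made-up helper */
	out:
		spin_unlock(&kvm->mmu_lock);
		kvm_release_pfn_clean(pfn);
		return ret;
	}

Holding mmu_lock across both the mmu_notifier_retry() check and the TLB
entry installation is what makes the check meaningful: the notifier side
updates mmu_notifier_count/mmu_notifier_seq under the same lock, so once
we hold it and the sequence still matches, no invalidation can slip in
before the entry is installed.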