From patchwork Thu Dec 6 13:21:09 2018
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 1008787
From: lantianyu1986@gmail.com
X-Google-Original-From: Tianyu.Lan@microsoft.com
Cc: Lan Tianyu, christoffer.dall@arm.com, marc.zyngier@arm.com,
    linux@armlinux.org.uk, catalin.marinas@arm.com, will.deacon@arm.com,
    jhogan@kernel.org, ralf@linux-mips.org, paul.burton@mips.com,
    paulus@ozlabs.org, benh@kernel.crashing.org, mpe@ellerman.id.au,
    pbonzini@redhat.com, rkrcmar@redhat.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, hpa@zytor.com, x86@kernel.org,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, linux-mips@linux-mips.org,
    kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm@vger.kernel.org, michael.h.kelley@microsoft.com, kys@microsoft.com,
    vkuznets@redhat.com
Subject: [Resend PATCH V5 6/10] KVM: Replace old tlb flush function with new one to flush a specified range.
Date: Thu, 6 Dec 2018 21:21:09 +0800
Message-Id: <20181206132113.2691-7-Tianyu.Lan@microsoft.com>
X-Mailer: git-send-email 2.14.4
In-Reply-To: <20181206132113.2691-1-Tianyu.Lan@microsoft.com>
References: <20181206132113.2691-1-Tianyu.Lan@microsoft.com>

From: Lan Tianyu

This patch replaces kvm_flush_remote_tlbs() with
kvm_flush_remote_tlbs_with_address() in several functions, with no change
in logic.

Signed-off-by: Lan Tianyu
---
 arch/x86/kvm/mmu.c         | 31 +++++++++++++++++++++----------
 arch/x86/kvm/paging_tmpl.h |  3 ++-
 2 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 74df957f2f81..1cd3c316a7e6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1485,8 +1485,12 @@ static bool __drop_large_spte(struct kvm *kvm, u64 *sptep)
 
 static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
 {
-	if (__drop_large_spte(vcpu->kvm, sptep))
-		kvm_flush_remote_tlbs(vcpu->kvm);
+	if (__drop_large_spte(vcpu->kvm, sptep)) {
+		struct kvm_mmu_page *sp = page_header(__pa(sptep));
+
+		kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
+			KVM_PAGES_PER_HPAGE(sp->role.level));
+	}
 }
 
 /*
@@ -1954,7 +1958,8 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 	rmap_head = gfn_to_rmap(vcpu->kvm, gfn, sp);
 
 	kvm_unmap_rmapp(vcpu->kvm, rmap_head, NULL, gfn, sp->role.level, 0);
-	kvm_flush_remote_tlbs(vcpu->kvm);
+	kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
+			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
@@ -2470,7 +2475,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		account_shadowed(vcpu->kvm, sp);
 		if (level == PT_PAGE_TABLE_LEVEL &&
 		      rmap_write_protect(vcpu, gfn))
-			kvm_flush_remote_tlbs(vcpu->kvm);
+			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
 
 		if (level > PT_PAGE_TABLE_LEVEL && need_sync)
 			flush |= kvm_sync_pages(vcpu, gfn, &invalid_list);
@@ -2590,7 +2595,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 			return;
 
 		drop_parent_pte(child, sptep);
-		kvm_flush_remote_tlbs(vcpu->kvm);
+		kvm_flush_remote_tlbs_with_address(vcpu->kvm, child->gfn, 1);
 	}
 }
 
@@ -3014,8 +3019,10 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned pte_access,
 			ret = RET_PF_EMULATE;
 		kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
 	}
+
 	if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH || flush)
-		kvm_flush_remote_tlbs(vcpu->kvm);
+		kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn,
+				KVM_PAGES_PER_HPAGE(level));
 
 	if (unlikely(is_mmio_spte(*sptep)))
 		ret = RET_PF_EMULATE;
@@ -5672,7 +5679,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 	 * on PT_WRITABLE_MASK anymore.
 	 */
 	if (flush)
-		kvm_flush_remote_tlbs(kvm);
+		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+			memslot->npages);
 }
 
 static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
@@ -5742,7 +5750,8 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 	 * dirty_bitmap.
 	 */
 	if (flush)
-		kvm_flush_remote_tlbs(kvm);
+		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+				memslot->npages);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_leaf_clear_dirty);
 
@@ -5760,7 +5769,8 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
 	lockdep_assert_held(&kvm->slots_lock);
 
 	if (flush)
-		kvm_flush_remote_tlbs(kvm);
+		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+				memslot->npages);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_largepage_remove_write_access);
 
@@ -5777,7 +5787,8 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
 
 	/* see kvm_mmu_slot_leaf_clear_dirty */
 	if (flush)
-		kvm_flush_remote_tlbs(kvm);
+		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+				memslot->npages);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty);
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 7cf2185b7eb5..6bdca39829bc 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -894,7 +894,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 			pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
 
 			if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
-				kvm_flush_remote_tlbs(vcpu->kvm);
+				kvm_flush_remote_tlbs_with_address(vcpu->kvm,
+					sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
 
 			if (!rmap_can_add(vcpu))
 				break;
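
For context: kvm_flush_remote_tlbs_with_address() is added by an earlier patch
in this series, so only the call sites change here. A minimal sketch of that
helper, reconstructed from the rest of the series rather than from this mail
(the struct kvm_tlb_range fields and the tlb_remote_flush_with_range hook name
in particular are assumptions and should be checked against the actual
patches), would look roughly like:

/*
 * Sketch only -- the real helper lives in arch/x86/kvm/mmu.c and is added
 * by an earlier patch of this series; the names below follow that patch
 * but are reproduced from memory, not from this mail.
 */
struct kvm_tlb_range {
	u64 start_gfn;
	u64 pages;
};

static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
		struct kvm_tlb_range *range)
{
	int ret = -ENOTSUPP;

	/* Hypervisors that support ranged remote flushes hook this callback. */
	if (range && kvm_x86_ops->tlb_remote_flush_with_range)
		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);

	/* Fall back to a full remote TLB flush if the range flush fails. */
	if (ret)
		kvm_flush_remote_tlbs(kvm);
}

void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
		u64 start_gfn, u64 pages)
{
	struct kvm_tlb_range range;

	range.start_gfn = start_gfn;
	range.pages = pages;

	kvm_flush_remote_tlbs_with_range(kvm, &range);
}

With such a helper, each call site above only has to supply a base gfn and a
page count: the memslot-wide flushes pass (memslot->base_gfn, memslot->npages),
while the per-spte paths pass the mapped gfn and KVM_PAGES_PER_HPAGE() for the
spte's level, so a hypervisor that implements ranged remote flushes (such as
Hyper-V, which the rest of this series targets) can avoid flushing the whole
guest address space.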