From patchwork Thu Dec 6 13:21:04 2018
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 1008784
From: Lan Tianyu <Tianyu.Lan@microsoft.com>
Subject: [Resend PATCH V5 1/10] KVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops
Date: Thu, 6 Dec 2018 21:21:04 +0800
Message-Id: <20181206132113.2691-2-Tianyu.Lan@microsoft.com>

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

Add a flush range callback to kvm_x86_ops so that a platform can register
its associated function. The parameter "kvm_tlb_range" describes a single
range to flush (a start gfn and a number of pages).

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
Change since v1:
       Change "end_gfn" to "pages" to avoid confusion as to whether
       "end_gfn" is inclusive or exclusive.
---
 arch/x86/include/asm/kvm_host.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fbda5a917c5b..fc7513ecfc13 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -439,6 +439,11 @@ struct kvm_mmu {
 	u64 pdptrs[4]; /* pae */
 };
 
+struct kvm_tlb_range {
+	u64 start_gfn;
+	u64 pages;
+};
+
 enum pmc_type {
 	KVM_PMC_GP = 0,
 	KVM_PMC_FIXED,
@@ -1042,6 +1047,8 @@ struct kvm_x86_ops {
 
 	void (*tlb_flush)(struct kvm_vcpu *vcpu, bool invalidate_gpa);
 	int (*tlb_remote_flush)(struct kvm *kvm);
+	int (*tlb_remote_flush_with_range)(struct kvm *kvm,
+			struct kvm_tlb_range *range);
 
 	/*
 	 * Flush any TLB entries associated with the given GVA.
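A quick, illustrative sketch (not part of the patch) of how the new hook is
meant to be used. The flush_gfn_range() helper is hypothetical; the actual
callers and the Hyper-V implementation arrive in later patches of this series.

	/*
	 * Platform side (sketch): in its setup code a platform assigns
	 *   kvm_x86_ops->tlb_remote_flush_with_range = <its implementation>;
	 *
	 * Common side (sketch): build a kvm_tlb_range and fall back to a
	 * full remote flush when the op is absent or returns an error.
	 */
	static void flush_gfn_range(struct kvm *kvm, u64 start_gfn, u64 pages)
	{
		struct kvm_tlb_range range = {
			.start_gfn = start_gfn,
			.pages = pages,
		};

		if (!kvm_x86_ops->tlb_remote_flush_with_range ||
		    kvm_x86_ops->tlb_remote_flush_with_range(kvm, &range))
			kvm_flush_remote_tlbs(kvm);
	}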
From patchwork Thu Dec 6 13:21:05 2018
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 1008782
From: Lan Tianyu <Tianyu.Lan@microsoft.com>
Subject: [Resend PATCH V5 2/10] x86/hyper-v: Add HvFlushGuestAddressList hypercall support
Date: Thu, 6 Dec 2018 21:21:05 +0800
Message-Id: <20181206132113.2691-3-Tianyu.Lan@microsoft.com>

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

Hyper-V provides the HvFlushGuestAddressList() hypercall to flush EPT TLB
entries for specified ranges. This patch adds support for the hypercall.

Reviewed-by: Michael Kelley
Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
Change since v4:
       - Expose hyperv_fill_flush_guest_mapping_list() outside the hyperv file
       - Adjust the parameters of hyperv_flush_guest_mapping_range()
Change since v2:
       Fix some coding style issues
	- Move HV_MAX_FLUSH_PAGES and HV_MAX_FLUSH_REP_COUNT to hyperv-tlfs.h.
	- Calculate HV_MAX_FLUSH_REP_COUNT in the macro definition
	- Use HV_MAX_FLUSH_REP_COUNT to define the length of gpa_list in
	  struct hv_guest_mapping_flush_list.
Change since v1:
       Add a Hyper-V TLB flush struct to avoid using the KVM TLB flush
       struct in the hyperv file.
---
 arch/x86/hyperv/nested.c           | 79 ++++++++++++++++++++++++++++++++++++++
 arch/x86/include/asm/hyperv-tlfs.h | 32 +++++++++++++++
 arch/x86/include/asm/mshyperv.h    | 15 ++++++++
 3 files changed, 126 insertions(+)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index b8e60cc50461..3d0f31e46954 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -7,6 +7,7 @@
  *
  * Author : Lan Tianyu
  */
+#define pr_fmt(fmt)  "Hyper-V: " fmt
 
 #include
 
@@ -54,3 +55,81 @@ int hyperv_flush_guest_mapping(u64 as)
 	return ret;
 }
 EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
+
+int hyperv_fill_flush_guest_mapping_list(
+		struct hv_guest_mapping_flush_list *flush,
+		u64 start_gfn, u64 pages)
+{
+	u64 cur = start_gfn;
+	u64 additional_pages;
+	int gpa_n = 0;
+
+	do {
+		/*
+		 * If flush requests exceed max flush count, go back to
+		 * flush tlbs without range.
+		 */
+		if (gpa_n >= HV_MAX_FLUSH_REP_COUNT)
+			return -ENOSPC;
+
+		additional_pages = min_t(u64, pages, HV_MAX_FLUSH_PAGES) - 1;
+
+		flush->gpa_list[gpa_n].page.additional_pages = additional_pages;
+		flush->gpa_list[gpa_n].page.largepage = false;
+		flush->gpa_list[gpa_n].page.basepfn = cur;
+
+		pages -= additional_pages + 1;
+		cur += additional_pages + 1;
+		gpa_n++;
+	} while (pages > 0);
+
+	return gpa_n;
+}
+EXPORT_SYMBOL_GPL(hyperv_fill_flush_guest_mapping_list);
+
+int hyperv_flush_guest_mapping_range(u64 as,
+		hyperv_fill_flush_list_func fill_flush_list_func, void *data)
+{
+	struct hv_guest_mapping_flush_list **flush_pcpu;
+	struct hv_guest_mapping_flush_list *flush;
+	u64 status = 0;
+	unsigned long flags;
+	int ret = -ENOTSUPP;
+	int gpa_n = 0;
+
+	if (!hv_hypercall_pg || !fill_flush_list_func)
+		goto fault;
+
+	local_irq_save(flags);
+
+	flush_pcpu = (struct hv_guest_mapping_flush_list **)
+		this_cpu_ptr(hyperv_pcpu_input_arg);
+
+	flush = *flush_pcpu;
+	if (unlikely(!flush)) {
+		local_irq_restore(flags);
+		goto fault;
+	}
+
+	flush->address_space = as;
+	flush->flags = 0;
+
+	gpa_n = fill_flush_list_func(flush, data);
+	if (gpa_n < 0) {
+		local_irq_restore(flags);
+		goto fault;
+	}
+
+	status = hv_do_rep_hypercall(HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST,
+				     gpa_n, 0, flush, NULL);
+
+	local_irq_restore(flags);
+
+	if (!(status & HV_HYPERCALL_RESULT_MASK))
+		ret = 0;
+	else
+		ret = status;
+fault:
+	return ret;
+}
+EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping_range);
diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 4139f7650fe5..405a378e1c62 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -10,6 +10,7 @@
 #define _ASM_X86_HYPERV_TLFS_H
 
 #include
+#include
 
 /*
  * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
@@ -358,6 +359,7 @@ struct hv_tsc_emulation_status {
 #define HVCALL_POST_MESSAGE			0x005c
 #define HVCALL_SIGNAL_EVENT			0x005d
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
+#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
 
 #define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE	0x00000001
 #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT	12
@@ -757,6 +759,36 @@ struct hv_guest_mapping_flush {
 	u64 flags;
 };
 
+/*
+ * HV_MAX_FLUSH_PAGES = "additional_pages" + 1. It's limited
+ * by the bitwidth of "additional_pages" in union hv_gpa_page_range.
+ */
+#define HV_MAX_FLUSH_PAGES (2048)
+
+/* HvFlushGuestPhysicalAddressList hypercall */
+union hv_gpa_page_range {
+	u64 address_space;
+	struct {
+		u64 additional_pages:11;
+		u64 largepage:1;
+		u64 basepfn:52;
+	} page;
+};
+
+/*
+ * All input flush parameters must fit in a single page. The max flush
+ * count is equal to the number of entries of union hv_gpa_page_range
+ * that can be populated into the input parameter page.
+ */
+#define HV_MAX_FLUSH_REP_COUNT ((PAGE_SIZE - 2 * sizeof(u64)) /	\
+				sizeof(union hv_gpa_page_range))
+
+struct hv_guest_mapping_flush_list {
+	u64 address_space;
+	u64 flags;
+	union hv_gpa_page_range gpa_list[HV_MAX_FLUSH_REP_COUNT];
+};
+
 /* HvFlushVirtualAddressSpace, HvFlushVirtualAddressList hypercalls */
 struct hv_tlb_flush {
 	u64 address_space;
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 1d0a7778e163..7da712f65398 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -22,6 +22,11 @@ struct ms_hyperv_info {
 
 extern struct ms_hyperv_info ms_hyperv;
 
+
+typedef int (*hyperv_fill_flush_list_func)(
+		struct hv_guest_mapping_flush_list *flush,
+		void *data);
+
 /*
  * Generate the guest ID.
  */
@@ -348,6 +353,11 @@ void set_hv_tscchange_cb(void (*cb)(void));
 void clear_hv_tscchange_cb(void);
 void hyperv_stop_tsc_emulation(void);
 int hyperv_flush_guest_mapping(u64 as);
+int hyperv_flush_guest_mapping_range(u64 as,
+		hyperv_fill_flush_list_func fill_func, void *data);
+int hyperv_fill_flush_guest_mapping_list(
+		struct hv_guest_mapping_flush_list *flush,
+		u64 start_gfn, u64 pages);
 
 #ifdef CONFIG_X86_64
 void hv_apic_init(void);
@@ -370,6 +380,11 @@ static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
 	return NULL;
 }
 static inline int hyperv_flush_guest_mapping(u64 as) { return -1; }
+static inline int hyperv_flush_guest_mapping_range(u64 as,
+		hyperv_fill_flush_list_func fill_func, void *data)
+{
+	return -1;
+}
 #endif /* CONFIG_HYPERV */
 
 #ifdef CONFIG_HYPERV_TSCPAGE
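An illustrative sketch (not part of the patch) of the intended calling
convention: the fill callback packs the caller's range into the per-cpu
input page with hyperv_fill_flush_guest_mapping_list(), and
hyperv_flush_guest_mapping_range() then issues the rep hypercall with the
returned entry count. The struct my_range type and fill_one_range() helper
are hypothetical caller-side names.

	/*
	 * Example packing for start_gfn = 0x1000, pages = 5000:
	 *   gpa_list[0]: basepfn = 0x1000, additional_pages = 2047  (2048 pages)
	 *   gpa_list[1]: basepfn = 0x1800, additional_pages = 2047  (2048 pages)
	 *   gpa_list[2]: basepfn = 0x2000, additional_pages =  903  ( 904 pages)
	 * so the hypercall is issued with a rep count of 3.
	 */
	struct my_range {
		u64 start_gfn;
		u64 pages;
	};

	static int fill_one_range(struct hv_guest_mapping_flush_list *flush,
				  void *data)
	{
		struct my_range *r = data;

		return hyperv_fill_flush_guest_mapping_list(flush, r->start_gfn,
							    r->pages);
	}

	/* ret = hyperv_flush_guest_mapping_range(root & PAGE_MASK,
	 *					   fill_one_range, &range); */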
From patchwork Thu Dec 6 13:21:06 2018
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 1008783
From: Lan Tianyu <Tianyu.Lan@microsoft.com>
Subject: [Resend PATCH V5 3/10] x86/Hyper-v: Add trace in the hyperv_nested_flush_guest_mapping_range()
Date: Thu, 6 Dec 2018 21:21:06 +0800
Message-Id: <20181206132113.2691-4-Tianyu.Lan@microsoft.com>

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch adds a trace log for hyperv_nested_flush_guest_mapping_range().
Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/hyperv/nested.c            |  1 +
 arch/x86/include/asm/trace/hyperv.h | 14 ++++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index 3d0f31e46954..dd0a843f766d 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -130,6 +130,7 @@ int hyperv_flush_guest_mapping_range(u64 as,
 	else
 		ret = status;
 fault:
+	trace_hyperv_nested_flush_guest_mapping_range(as, ret);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping_range);
diff --git a/arch/x86/include/asm/trace/hyperv.h b/arch/x86/include/asm/trace/hyperv.h
index 2e6245a023ef..ace464f09681 100644
--- a/arch/x86/include/asm/trace/hyperv.h
+++ b/arch/x86/include/asm/trace/hyperv.h
@@ -42,6 +42,20 @@ TRACE_EVENT(hyperv_nested_flush_guest_mapping,
 	    TP_printk("address space %llx ret %d", __entry->as, __entry->ret)
 	);
 
+TRACE_EVENT(hyperv_nested_flush_guest_mapping_range,
+	    TP_PROTO(u64 as, int ret),
+	    TP_ARGS(as, ret),
+
+	    TP_STRUCT__entry(
+		    __field(u64, as)
+		    __field(int, ret)
+		    ),
+	    TP_fast_assign(__entry->as = as;
+			   __entry->ret = ret;
+		    ),
+	    TP_printk("address space %llx ret %d", __entry->as, __entry->ret)
+	);
+
 TRACE_EVENT(hyperv_send_ipi_mask,
 	    TP_PROTO(const struct cpumask *cpus,
 		     int vector),
From patchwork Thu Dec 6 13:21:07 2018
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 1008785
From: Lan Tianyu <Tianyu.Lan@microsoft.com>
Subject: [Resend PATCH V5 4/10] KVM/VMX: Add hv tlb range flush support
Date: Thu, 6 Dec 2018 21:21:07 +0800
Message-Id: <20181206132113.2691-5-Tianyu.Lan@microsoft.com>

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch registers the tlb_remote_flush_with_range callback with the
Hyper-V TLB range flush interface.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
Change since v4:
       - Use the new function kvm_fill_hv_flush_list_func() to fill the
         flush request.
Change since v3:
       - Merge Vitaly's fix not to pass EPT configuration info to
         vmx_hv_remote_flush_tlb().
Change since v1:
       - Pass the flush range with the new Hyper-V TLB flush struct rather
         than the KVM TLB flush struct.
---
 arch/x86/kvm/vmx.c | 63 +++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 46 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 2356118ea440..bad2aa6c5ca1 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1569,7 +1569,34 @@ static void check_ept_pointer_match(struct kvm *kvm)
 	to_kvm_vmx(kvm)->ept_pointers_match = EPT_POINTERS_MATCH;
 }
 
-static int vmx_hv_remote_flush_tlb(struct kvm *kvm)
+int kvm_fill_hv_flush_list_func(struct hv_guest_mapping_flush_list *flush,
+		void *data)
+{
+	struct kvm_tlb_range *range = data;
+
+	return hyperv_fill_flush_guest_mapping_list(flush, range->start_gfn,
+			range->pages);
+}
+
+static inline int __hv_remote_flush_tlb_with_range(struct kvm *kvm,
+		struct kvm_vcpu *vcpu, struct kvm_tlb_range *range)
+{
+	u64 ept_pointer = to_vmx(vcpu)->ept_pointer;
+
+	/*
+	 * FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE hypercall needs address
+	 * of the base of EPT PML4 table, strip off EPT configuration
+	 * information.
+	 */
+	if (range)
+		return hyperv_flush_guest_mapping_range(ept_pointer & PAGE_MASK,
+				kvm_fill_hv_flush_list_func, (void *)range);
+	else
+		return hyperv_flush_guest_mapping(ept_pointer & PAGE_MASK);
+}
+
+static int hv_remote_flush_tlb_with_range(struct kvm *kvm,
+		struct kvm_tlb_range *range)
 {
 	struct kvm_vcpu *vcpu;
 	int ret = -ENOTSUPP, i;
@@ -1579,29 +1606,26 @@ static int vmx_hv_remote_flush_tlb(struct kvm *kvm)
 	if (to_kvm_vmx(kvm)->ept_pointers_match == EPT_POINTERS_CHECK)
 		check_ept_pointer_match(kvm);
 
-	/*
-	 * FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE hypercall needs the address of the
-	 * base of EPT PML4 table, strip off EPT configuration information.
-	 * If ept_pointer is invalid pointer, bypass the flush request.
-	 */
 	if (to_kvm_vmx(kvm)->ept_pointers_match != EPT_POINTERS_MATCH) {
 		kvm_for_each_vcpu(i, vcpu, kvm) {
-			u64 ept_pointer = to_vmx(vcpu)->ept_pointer;
-
-			if (!VALID_PAGE(ept_pointer))
-				continue;
-
-			ret |= hyperv_flush_guest_mapping(
-				ept_pointer & PAGE_MASK);
+			/* If ept_pointer is invalid pointer, bypass flush request. */
+			if (VALID_PAGE(to_vmx(vcpu)->ept_pointer))
+				ret |= __hv_remote_flush_tlb_with_range(
+					kvm, vcpu, range);
 		}
 	} else {
-		ret = hyperv_flush_guest_mapping(
-			to_vmx(kvm_get_vcpu(kvm, 0))->ept_pointer & PAGE_MASK);
+		ret = __hv_remote_flush_tlb_with_range(kvm,
+				kvm_get_vcpu(kvm, 0), range);
 	}
 
 	spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
 	return ret;
 }
+
+static int hv_remote_flush_tlb(struct kvm *kvm)
+{
+	return hv_remote_flush_tlb_with_range(kvm, NULL);
+}
 #else /* !IS_ENABLED(CONFIG_HYPERV) */
 static inline void evmcs_write64(unsigned long field, u64 value) {}
 static inline void evmcs_write32(unsigned long field, u32 value) {}
@@ -7971,8 +7995,11 @@ static __init int hardware_setup(void)
 
 #if IS_ENABLED(CONFIG_HYPERV)
 	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
-	    && enable_ept)
-		kvm_x86_ops->tlb_remote_flush = vmx_hv_remote_flush_tlb;
+	    && enable_ept) {
+		kvm_x86_ops->tlb_remote_flush = hv_remote_flush_tlb;
+		kvm_x86_ops->tlb_remote_flush_with_range =
+				hv_remote_flush_tlb_with_range;
+	}
 #endif
 
 	if (!cpu_has_vmx_ple()) {
@@ -11612,6 +11639,8 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	vmx->nested.posted_intr_nv = -1;
 	vmx->nested.current_vmptr = -1ull;
 
+	vmx->ept_pointer = INVALID_PAGE;
+
 	vmx->msr_ia32_feature_control_valid_bits = FEATURE_CONTROL_LOCKED;
 
 	/*
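To make the layering concrete, a rough sketch (commentary only, not part of
the patch) of the call path this patch creates, using the names introduced in
patches 1, 2 and 4 of this series:

	/*
	 * kvm_x86_ops->tlb_remote_flush_with_range(kvm, range)
	 *   -> hv_remote_flush_tlb_with_range(kvm, range)
	 *        for each vCPU (or once, if all EPT pointers match):
	 *          -> __hv_remote_flush_tlb_with_range(kvm, vcpu, range)
	 *               -> hyperv_flush_guest_mapping_range(ept_pointer & PAGE_MASK,
	 *                      kvm_fill_hv_flush_list_func, range)
	 *                    -> kvm_fill_hv_flush_list_func() packs
	 *                       range->start_gfn / range->pages into the
	 *                       per-cpu input page
	 *                    -> HvFlushGuestPhysicalAddressList rep hypercall
	 *
	 * With range == NULL the same path falls back to the existing
	 * HvFlushGuestPhysicalAddressSpace flush via hyperv_flush_guest_mapping().
	 */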
From patchwork Thu Dec 6 13:21:08 2018
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 1008786
From: Lan Tianyu <Tianyu.Lan@microsoft.com>
Subject: [Resend PATCH V5 5/10] KVM/MMU: Add tlb flush with range helper function
Date: Thu, 6 Dec 2018 21:21:08 +0800
Message-Id: <20181206132113.2691-6-Tianyu.Lan@microsoft.com>

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch adds wrapper functions for the tlb_remote_flush_with_range
callback and flushes the TLB directly in kvm_mmu_zap_collapsible_spte().
kvm_mmu_zap_collapsible_spte() currently returns a flush request to
slot_handle_leaf(), which then flushes on demand.
When range flush is available, kvm_mmu_zap_collapsible_spte() flushes the
TLB with a range directly instead of returning the request to
slot_handle_leaf().

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
Change since V4:
       Remove the flush list interface and flush the TLB directly in
       kvm_mmu_zap_collapsible_spte().
Change since V3:
       Remove the code that updates "tlbs_dirty".
Change since V2:
       Fix the comment in kvm_flush_remote_tlbs_with_range().
---
 arch/x86/kvm/mmu.c | 37 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7c03c0f35444..74df957f2f81 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -264,6 +264,35 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
 static union kvm_mmu_page_role
 kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
 
+
+static inline bool kvm_available_flush_tlb_with_range(void)
+{
+	return kvm_x86_ops->tlb_remote_flush_with_range;
+}
+
+static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+		struct kvm_tlb_range *range)
+{
+	int ret = -ENOTSUPP;
+
+	if (range && kvm_x86_ops->tlb_remote_flush_with_range)
+		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
+
+	if (ret)
+		kvm_flush_remote_tlbs(kvm);
+}
+
+static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
+		u64 start_gfn, u64 pages)
+{
+	struct kvm_tlb_range range;
+
+	range.start_gfn = start_gfn;
+	range.pages = pages;
+
+	kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
 {
 	BUG_ON((mmio_mask & mmio_value) != mmio_value);
@@ -5671,7 +5700,13 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		    !kvm_is_reserved_pfn(pfn) &&
 		    PageTransCompoundMap(pfn_to_page(pfn))) {
 			pte_list_remove(rmap_head, sptep);
-			need_tlb_flush = 1;
+
+			if (kvm_available_flush_tlb_with_range())
+				kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
+					KVM_PAGES_PER_HPAGE(sp->role.level));
+			else
+				need_tlb_flush = 1;
+
 			goto restart;
 		}
 	}
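A usage sketch for the new helper (illustrative, mirroring the
kvm_mmu_zap_collapsible_spte() hunk above): flushing the guest-physical range
covered by one shadow page, with automatic fallback to a full flush.

	/*
	 * For a 2MB mapping KVM_PAGES_PER_HPAGE(sp->role.level) is 512, so
	 * this asks the hypervisor to flush gfn .. gfn + 511; if the ranged
	 * op is unavailable or fails, kvm_flush_remote_tlbs_with_range()
	 * falls back to kvm_flush_remote_tlbs().
	 */
	kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
					   KVM_PAGES_PER_HPAGE(sp->role.level));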
From patchwork Thu Dec 6 13:21:09 2018
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 1008787
From: Lan Tianyu <Tianyu.Lan@microsoft.com>
Subject: [Resend PATCH V5 6/10] KVM: Replace old tlb flush function with new one to flush a specified range.
Date: Thu, 6 Dec 2018 21:21:09 +0800
Message-Id: <20181206132113.2691-7-Tianyu.Lan@microsoft.com>

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch replaces kvm_flush_remote_tlbs() with
kvm_flush_remote_tlbs_with_address() in some functions, without any change
in logic.
Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
---
 arch/x86/kvm/mmu.c         | 31 +++++++++++++++++++++----------
 arch/x86/kvm/paging_tmpl.h |  3 ++-
 2 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 74df957f2f81..1cd3c316a7e6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1485,8 +1485,12 @@ static bool __drop_large_spte(struct kvm *kvm, u64 *sptep)
 
 static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
 {
-	if (__drop_large_spte(vcpu->kvm, sptep))
-		kvm_flush_remote_tlbs(vcpu->kvm);
+	if (__drop_large_spte(vcpu->kvm, sptep)) {
+		struct kvm_mmu_page *sp = page_header(__pa(sptep));
+
+		kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
+			KVM_PAGES_PER_HPAGE(sp->role.level));
+	}
 }
 
 /*
@@ -1954,7 +1958,8 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 	rmap_head = gfn_to_rmap(vcpu->kvm, gfn, sp);
 
 	kvm_unmap_rmapp(vcpu->kvm, rmap_head, NULL, gfn, sp->role.level, 0);
-	kvm_flush_remote_tlbs(vcpu->kvm);
+	kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
+			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
@@ -2470,7 +2475,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			account_shadowed(vcpu->kvm, sp);
 			if (level == PT_PAGE_TABLE_LEVEL &&
 			      rmap_write_protect(vcpu, gfn))
-				kvm_flush_remote_tlbs(vcpu->kvm);
+				kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
 
 			if (level > PT_PAGE_TABLE_LEVEL && need_sync)
 				flush |= kvm_sync_pages(vcpu, gfn, &invalid_list);
@@ -2590,7 +2595,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 			return;
 
 		drop_parent_pte(child, sptep);
-		kvm_flush_remote_tlbs(vcpu->kvm);
+		kvm_flush_remote_tlbs_with_address(vcpu->kvm, child->gfn, 1);
 	}
 }
 
@@ -3014,8 +3019,10 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned pte_access,
 		ret = RET_PF_EMULATE;
 		kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
 	}
+
 	if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH || flush)
-		kvm_flush_remote_tlbs(vcpu->kvm);
+		kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn,
+				KVM_PAGES_PER_HPAGE(level));
 
 	if (unlikely(is_mmio_spte(*sptep)))
 		ret = RET_PF_EMULATE;
@@ -5672,7 +5679,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 	 * on PT_WRITABLE_MASK anymore.
 	 */
 	if (flush)
-		kvm_flush_remote_tlbs(kvm);
+		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+			memslot->npages);
 }
 
 static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
@@ -5742,7 +5750,8 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 	 * dirty_bitmap.
 	 */
 	if (flush)
-		kvm_flush_remote_tlbs(kvm);
+		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+			memslot->npages);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_leaf_clear_dirty);
 
@@ -5760,7 +5769,8 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
 	lockdep_assert_held(&kvm->slots_lock);
 
 	if (flush)
-		kvm_flush_remote_tlbs(kvm);
+		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+			memslot->npages);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_largepage_remove_write_access);
 
@@ -5777,7 +5787,8 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
 
 	/* see kvm_mmu_slot_leaf_clear_dirty */
 	if (flush)
-		kvm_flush_remote_tlbs(kvm);
+		kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+			memslot->npages);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty);
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 7cf2185b7eb5..6bdca39829bc 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -894,7 +894,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 		pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
 
 		if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
-			kvm_flush_remote_tlbs(vcpu->kvm);
+			kvm_flush_remote_tlbs_with_address(vcpu->kvm,
+				sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
 
 		if (!rmap_can_add(vcpu))
 			break;
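For reference, the range sizes passed by the hunks above (commentary only,
not part of the patch; x86 with 4KB base pages):

	/*
	 * Single-PTE sites pass 1 page (kvm_mmu_get_page(),
	 * validate_direct_spte()).  Sites that cover a shadow page pass
	 * KVM_PAGES_PER_HPAGE(sp->role.level):
	 *   PT_PAGE_TABLE_LEVEL (4KB SPTE) ->      1 page
	 *   PT_DIRECTORY_LEVEL  (2MB SPTE) ->    512 pages
	 *   PT_PDPE_LEVEL       (1GB SPTE) -> 262144 pages
	 * Memslot-wide sites pass memslot->base_gfn and memslot->npages.
	 */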
From patchwork Thu Dec 6 13:21:10 2018
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 1008788
From: Lan Tianyu <Tianyu.Lan@microsoft.com>
Subject: [Resend PATCH V5 7/10] KVM: Make kvm_set_spte_hva() return int
Date: Thu, 6 Dec 2018 21:21:10 +0800
Message-Id: <20181206132113.2691-8-Tianyu.Lan@microsoft.com>

From: Lan Tianyu <Tianyu.Lan@microsoft.com>

This patch makes kvm_set_spte_hva() return int so that the caller can check
the return value to determine whether to flush the TLB.
Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Acked-by: Paul Mackerras
---
 arch/arm/include/asm/kvm_host.h     | 2 +-
 arch/arm64/include/asm/kvm_host.h   | 2 +-
 arch/mips/include/asm/kvm_host.h    | 2 +-
 arch/mips/kvm/mmu.c                 | 3 ++-
 arch/powerpc/include/asm/kvm_host.h | 2 +-
 arch/powerpc/kvm/book3s.c           | 3 ++-
 arch/powerpc/kvm/e500_mmu_host.c    | 3 ++-
 arch/x86/include/asm/kvm_host.h     | 2 +-
 arch/x86/kvm/mmu.c                  | 3 ++-
 virt/kvm/arm/mmu.c                  | 6 ++++--
 10 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 5ca5d9af0c26..fb62b500e590 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -225,7 +225,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 52fbc823ff8c..b3df7b38ba7d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -360,7 +360,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 2c1c53d12179..71c3f21d80d5 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -933,7 +933,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm,
 			unsigned long start, unsigned long end);
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index d8dcdb350405..97e538a8c1be 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -551,7 +551,7 @@ static int kvm_set_spte_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
 		   (pte_dirty(old_pte) && !pte_dirty(hva_pte));
 }
 
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 {
 	unsigned long end = hva + PAGE_SIZE;
 	int ret;
@@ -559,6 +559,7 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 	ret = handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &pte);
 	if (ret)
 		kvm_mips_callbacks->flush_shadow_all(kvm);
+	return 0;
 }
 
 static int kvm_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index fac6f631ed29..ab23379c53a9 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -72,7 +72,7 @@ extern int kvm_unmap_hva_range(struct kvm *kvm,
 			       unsigned long start, unsigned long end);
 extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
 extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
-extern void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 #define HPTEG_CACHE_NUM			(1 << 15)
 #define HPTEG_HASH_BITS_PTE		13
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index fd9893bc7aa1..437613bb609a 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -850,9 +850,10 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
 	return kvm->arch.kvm_ops->test_age_hva(kvm, hva);
 }
 
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 {
 	kvm->arch.kvm_ops->set_spte_hva(kvm, hva, pte);
+	return 0;
 }
 
 void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 8f2985e46f6f..c3f312b2bcb3 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -757,10 +757,11 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
 	return 0;
 }
 
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 {
 	/* The page will get remapped properly on its next fault */
 	kvm_unmap_hva(kvm, hva);
+	return 0;
 }
 
 /*****************************************/
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fc7513ecfc13..d3bdeda92e89 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1510,7 +1510,7 @@ asmlinkage void kvm_spurious_fault(void);
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
 int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 1cd3c316a7e6..dfef49f1e7a6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1913,9 +1913,10 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 	return kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp);
 }
 
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 {
 	kvm_handle_hva(kvm, hva, (unsigned long)&pte, kvm_set_pte_rmapp);
+	return 0;
 }
 
 static int kvm_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 5eca48bdb1a6..b8b20c033fcf 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1849,14 +1849,14 @@ static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data
 }
 
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 {
 	unsigned long end = hva + PAGE_SIZE;
 	kvm_pfn_t pfn = pte_pfn(pte);
 	pte_t stage2_pte;
 
 	if (!kvm->arch.pgd)
-		return;
+		return 0;
 
 	trace_kvm_set_spte_hva(hva);
 
@@ -1867,6 +1867,8 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 	clean_dcache_guest_page(pfn, PAGE_SIZE);
 	stage2_pte = pfn_pte(pfn, PAGE_S2);
 	handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &stage2_pte);
+
+	return 0;
 }
 
 static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
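An illustrative sketch (not part of this patch) of the intent behind the new
return value: the generic MMU-notifier path can flush only when an
architecture asks for it. The change_pte_sketch() name is hypothetical; the
actual caller is updated in a later patch of this series.

	static void change_pte_sketch(struct kvm *kvm, unsigned long hva, pte_t pte)
	{
		/* Flush only if the architecture reports that a flush is needed. */
		if (kvm_set_spte_hva(kvm, hva, pte))
			kvm_flush_remote_tlbs(kvm);
	}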