From patchwork Thu Dec 8 19:38:21 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713896
Date: Thu, 8 Dec 2022 11:38:21 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
Message-ID: <20221208193857.4090582-2-dmatlack@google.com>
Subject: [RFC PATCH 01/37] KVM: x86/mmu: Store the address space ID directly in kvm_mmu_page_role
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Oliver Upton, Huacai Chen, Aleksandar Markovic, Anup Patel, Atish Patra, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Morton, David Matlack, Anshuman Khandual, Nadav Amit, "Matthew Wilcox (Oracle)", Vlastimil Babka, "Liam R. Howlett", Suren Baghdasaryan, Peter Xu, xu xin, Arnd Bergmann, Yu Zhao, Colin Cross, Hugh Dickins, Ben Gardon, Mingwei Zhang, Krish Sadhukhan, Ricardo Koller, Jing Zhang, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org

Rename kvm_mmu_page_role.smm to kvm_mmu_page_role.as_id and use it
directly as the address space ID throughout the KVM MMU code. This
eliminates a needless level of indirection, kvm_mmu_role_as_id(), and
prepares for making kvm_mmu_page_role architecture-neutral.

Signed-off-by: David Matlack
---
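A reviewer-facing note, outside the commit message: the effect of
dropping kvm_mmu_role_as_id() is easiest to see in miniature. The
snippet below is a stand-alone toy (the toy_* names are invented, the
union is heavily abbreviated, and the bit placement assumes the usual
little-endian bitfield allocation), not kernel code:

#include <assert.h>
#include <stdint.h>

/* Abbreviated stand-in for union kvm_mmu_page_role after this patch. */
union toy_role {
	uint32_t word;
	struct {
		uint32_t level:4;
		uint32_t other_bits:20;
		uint32_t as_id:8;	/* was "smm", now a generic AS ID */
	};
};

int main(void)
{
	union toy_role role = { 0 };

	role.as_id = 1;		/* e.g. x86's SMM address space */

	/*
	 * Old code: __kvm_memslots(kvm, kvm_mmu_role_as_id(role)), where
	 * the helper computed role.smm ? 1 : 0. New code indexes with
	 * role.as_id directly, no helper in between.
	 */
	assert(role.as_id == 1);
	return 0;
}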
 arch/x86/include/asm/kvm_host.h |  4 ++--
 arch/x86/kvm/mmu/mmu.c          |  6 +++---
 arch/x86/kvm/mmu/mmu_internal.h | 10 ----------
 arch/x86/kvm/mmu/tdp_iter.c     |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      | 12 ++++++------
 5 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index aa4eb8cfcd7e..0a819d40131a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -348,7 +348,7 @@ union kvm_mmu_page_role {
 		 * simple shift. While there is room, give it a whole
 		 * byte so it is also faster to load it from memory.
 		 */
-		unsigned smm:8;
+		unsigned as_id:8;
 	};
 };

@@ -2056,7 +2056,7 @@ enum {
 # define __KVM_VCPU_MULTIPLE_ADDRESS_SPACE
 # define KVM_ADDRESS_SPACE_NUM 2
 # define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
-# define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
+# define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).as_id)
 #else
 # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, 0)
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4d188f056933..f375b719f565 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5056,7 +5056,7 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
 	union kvm_cpu_role role = {0};

 	role.base.access = ACC_ALL;
-	role.base.smm = is_smm(vcpu);
+	role.base.as_id = is_smm(vcpu);
 	role.base.guest_mode = is_guest_mode(vcpu);
 	role.ext.valid = 1;

@@ -5112,7 +5112,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
 	role.access = ACC_ALL;
 	role.cr0_wp = true;
 	role.efer_nx = true;
-	role.smm = cpu_role.base.smm;
+	role.as_id = cpu_role.base.as_id;
 	role.guest_mode = cpu_role.base.guest_mode;
 	role.ad_disabled = !kvm_ad_enabled();
 	role.level = kvm_mmu_get_tdp_level(vcpu);
@@ -5233,7 +5233,7 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
 	/*
 	 * KVM does not support SMM transfer monitors, and consequently does not
-	 * support the "entry to SMM" control either. role.base.smm is always 0.
+	 * support the "entry to SMM" control either. role.base.as_id is always 0.
 	 */
 	WARN_ON_ONCE(is_smm(vcpu));

 	role.base.level = level;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ac00bfbf32f6..5427f65117b4 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -133,16 +133,6 @@ struct kvm_mmu_page {

 extern struct kmem_cache *mmu_page_header_cache;

-static inline int kvm_mmu_role_as_id(union kvm_mmu_page_role role)
-{
-	return role.smm ? 1 : 0;
-}
-
-static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
-{
-	return kvm_mmu_role_as_id(sp->role);
-}
-
 static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
 {
 	/*
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index 39b48e7d7d1a..4a7d58bf81c4 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -52,7 +52,7 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
 	iter->root_level = root_level;
 	iter->min_level = min_level;
 	iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root->spt;
-	iter->as_id = kvm_mmu_page_as_id(root);
+	iter->as_id = root->role.as_id;

 	tdp_iter_restart(iter);
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 764f7c87286f..7ccac1aa8df6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -237,7 +237,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 	     _root;							\
 	     _root = tdp_mmu_next_root(_kvm, _root, _shared, _only_valid))	\
 		if (kvm_lockdep_assert_mmu_lock_held(_kvm, _shared) &&	\
-		    kvm_mmu_page_as_id(_root) != _as_id) {		\
+		    _root->role.as_id != _as_id) {			\
 		} else

 #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)	\
@@ -256,7 +256,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 #define for_each_tdp_mmu_root(_kvm, _root, _as_id)			\
 	list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link)	\
 		if (kvm_lockdep_assert_mmu_lock_held(_kvm, false) &&	\
-		    kvm_mmu_page_as_id(_root) != _as_id) {		\
+		    _root->role.as_id != _as_id) {			\
 		} else

 static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
@@ -310,7 +310,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 	 * Check for an existing root before allocating a new one. Note, the
 	 * role check prevents consuming an invalid root.
 	 */
-	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
+	for_each_tdp_mmu_root(kvm, root, role.as_id) {
 		if (root->role.word == role.word &&
 		    kvm_tdp_mmu_get_root(root))
 			goto out;
@@ -496,8 +496,8 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 			old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte,
 							  REMOVED_SPTE, level);
 		}
-		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
-				    old_spte, REMOVED_SPTE, level, shared);
+		handle_changed_spte(kvm, sp->role.as_id, gfn, old_spte,
+				    REMOVED_SPTE, level, shared);
 	}

 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
@@ -923,7 +923,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
 		return false;

-	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
+	__tdp_mmu_set_spte(kvm, sp->role.as_id, sp->ptep, old_spte, 0,
 			   sp->gfn, sp->role.level + 1, true, true);

 	return true;
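One more aside on the field's placement (illustrative, not from the
patch): the comment retained in kvm_host.h says as_id deliberately
occupies the top byte of the role word so the memslot set can be
selected with a simple shift. A toy demonstration, assuming the common
little-endian bitfield layout:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint32_t word = 0;

	/* as_id lives in bits 24..31 of the 32-bit role word. */
	word |= (uint32_t)1 << 24;

	/*
	 * kvm_memslots_for_spte_role() can therefore reduce to a shift
	 * plus an array index, and the byte-sized field is also cheap
	 * to load from memory.
	 */
	assert((word >> 24) == 1);
	return 0;
}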

From patchwork Thu Dec 8 19:38:22 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713899
Date: Thu, 8 Dec 2022 11:38:22 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
Message-ID: <20221208193857.4090582-3-dmatlack@google.com>
Subject: [RFC PATCH 02/37] KVM: MMU: Move struct kvm_mmu_page_role into common code
From: David Matlack
To: Paolo Bonzini

Move struct kvm_mmu_page_role into common code, and move all
x86-specific fields into a separate sub-struct within the role,
kvm_mmu_page_role_arch.

Signed-off-by: David Matlack
---
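A reviewer-facing sketch of the split, outside the patch proper: the
snippet below condenses the two headers this patch adds (toy_* names
are invented, field widths abbreviated) so the shape is visible at a
glance. It is illustrative only; the real definitions are in the diff
below.

#include <stdint.h>

/* Condensed stand-in for x86's kvm_mmu_page_role_arch (fields elided). */
struct toy_role_arch {
	uint16_t direct:1;
	uint16_t access:3;
	uint16_t guest_mode:1;
	uint16_t other_x86_bits:11;
};

/* Condensed stand-in for the common union kvm_mmu_page_role. */
union toy_role {
	uint32_t word;
	struct {
		struct {
			uint16_t as_id:8;	/* common: address space ID */
			uint16_t level:4;	/* common: page table level */
			uint16_t invalid:1;	/* common: pending destruction */
			uint16_t unused:3;
		};
		struct toy_role_arch arch;	/* everything x86-specific */
	};
};

/* The common half plus the arch half must still fill one 32-bit word. */
_Static_assert(sizeof(union toy_role) == sizeof(uint32_t),
	       "role must stay a single 32-bit word");

The payoff, visible throughout the diff, is that common code keeps
comparing whole roles via role.word while only x86 code reaches into
role.arch.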
 MAINTAINERS                          |   4 +-
 arch/x86/include/asm/kvm/mmu_types.h |  56 ++++++++++
 arch/x86/include/asm/kvm_host.h      |  68 +-----------
 arch/x86/kvm/mmu/mmu.c               | 156 +++++++++++++--------------
 arch/x86/kvm/mmu/mmu_internal.h      |   4 +-
 arch/x86/kvm/mmu/mmutrace.h          |  12 +--
 arch/x86/kvm/mmu/paging_tmpl.h       |  20 ++--
 arch/x86/kvm/mmu/spte.c              |   4 +-
 arch/x86/kvm/mmu/spte.h              |   2 +-
 arch/x86/kvm/x86.c                   |   8 +-
 include/kvm/mmu_types.h              |  37 +++++++
 11 files changed, 202 insertions(+), 169 deletions(-)
 create mode 100644 arch/x86/include/asm/kvm/mmu_types.h
 create mode 100644 include/kvm/mmu_types.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 89672a59c0c3..7e586d7ba78c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11198,7 +11198,8 @@ W: http://www.linux-kvm.org
 T:	git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
 F:	Documentation/virt/kvm/
 F:	include/asm-generic/kvm*
-F:	include/kvm/iodev.h
+F:	include/kvm/
+X:	include/kvm/arm_*
 F:	include/linux/kvm*
 F:	include/trace/events/kvm.h
 F:	include/uapi/asm-generic/kvm*
@@ -11285,6 +11286,7 @@ L: kvm@vger.kernel.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
 F:	arch/x86/include/asm/kvm*
+F:	arch/x86/include/asm/kvm/
 F:	arch/x86/include/asm/svm.h
 F:	arch/x86/include/asm/vmx*.h
 F:	arch/x86/include/uapi/asm/kvm*
diff --git a/arch/x86/include/asm/kvm/mmu_types.h b/arch/x86/include/asm/kvm/mmu_types.h
new file mode 100644
index 000000000000..35f893ebab5a
--- /dev/null
+++ b/arch/x86/include/asm/kvm/mmu_types.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KVM_MMU_TYPES_H
+#define __ASM_KVM_MMU_TYPES_H
+
+#include
+
+/*
+ * This is a subset of the overall kvm_cpu_role to minimize the size of
+ * kvm_memory_slot.arch.gfn_track, i.e. allows allocating 2 bytes per gfn
+ * instead of 4 bytes per gfn.
+ *
+ * Upper-level shadow pages having gptes are tracked for write-protection via
+ * gfn_track. As above, gfn_track is a 16 bit counter, so KVM must not create
+ * more than 2^16-1 upper-level shadow pages at a single gfn, otherwise
+ * gfn_track will overflow and explosions will ensue.
+ *
+ * A unique shadow page (SP) for a gfn is created if and only if an existing SP
+ * cannot be reused. The ability to reuse a SP is tracked by its role, which
+ * incorporates various mode bits and properties of the SP. Roughly speaking,
+ * the number of unique SPs that can theoretically be created is 2^n, where n
+ * is the number of bits that are used to compute the role.
+ *
+ * Note, not all combinations of modes and flags below are possible:
+ *
+ * - invalid shadow pages are not accounted, so the bits are effectively 18
+ *
+ * - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging);
+ *   execonly and ad_disabled are only used for nested EPT which has
+ *   has_4_byte_gpte=0. Therefore, 2 bits are always unused.
+ *
+ * - the 4 bits of level are effectively limited to the values 2/3/4/5,
+ *   as 4k SPs are not tracked (allowed to go unsync). In addition non-PAE
+ *   paging has exactly one upper level, making level completely redundant
+ *   when has_4_byte_gpte=1.
+ *
+ * - on top of this, smep_andnot_wp and smap_andnot_wp are only set if
+ *   cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
+ *
+ * Therefore, the maximum number of possible upper-level shadow pages for a
+ * single gfn is a bit less than 2^13.
+ */
+struct kvm_mmu_page_role_arch {
+	u16 has_4_byte_gpte:1;
+	u16 quadrant:2;
+	u16 direct:1;
+	u16 access:3;
+	u16 efer_nx:1;
+	u16 cr0_wp:1;
+	u16 smep_andnot_wp:1;
+	u16 smap_andnot_wp:1;
+	u16 ad_disabled:1;
+	u16 guest_mode:1;
+	u16 passthrough:1;
+};
+
+#endif /* !__ASM_KVM_MMU_TYPES_H */
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0a819d40131a..ebcd7a0dabef 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -37,6 +37,8 @@
 #include
 #include

+#include
+
 #define __KVM_HAVE_ARCH_VCPU_DEBUGFS

 #define KVM_MAX_VCPUS 1024
@@ -286,72 +288,6 @@ enum x86_intercept_stage;

 struct kvm_kernel_irq_routing_entry;

-/*
- * kvm_mmu_page_role tracks the properties of a shadow page (where shadow page
- * also includes TDP pages) to determine whether or not a page can be used in
- * the given MMU context. This is a subset of the overall kvm_cpu_role to
- * minimize the size of kvm_memory_slot.arch.gfn_track, i.e. allows allocating
- * 2 bytes per gfn instead of 4 bytes per gfn.
- *
- * Upper-level shadow pages having gptes are tracked for write-protection via
- * gfn_track. As above, gfn_track is a 16 bit counter, so KVM must not create
- * more than 2^16-1 upper-level shadow pages at a single gfn, otherwise
- * gfn_track will overflow and explosions will ensure.
- *
- * A unique shadow page (SP) for a gfn is created if and only if an existing SP
- * cannot be reused. The ability to reuse a SP is tracked by its role, which
- * incorporates various mode bits and properties of the SP. Roughly speaking,
- * the number of unique SPs that can theoretically be created is 2^n, where n
- * is the number of bits that are used to compute the role.
- * - * But, even though there are 19 bits in the mask below, not all combinations - * of modes and flags are possible: - * - * - invalid shadow pages are not accounted, so the bits are effectively 18 - * - * - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging); - * execonly and ad_disabled are only used for nested EPT which has - * has_4_byte_gpte=0. Therefore, 2 bits are always unused. - * - * - the 4 bits of level are effectively limited to the values 2/3/4/5, - * as 4k SPs are not tracked (allowed to go unsync). In addition non-PAE - * paging has exactly one upper level, making level completely redundant - * when has_4_byte_gpte=1. - * - * - on top of this, smep_andnot_wp and smap_andnot_wp are only set if - * cr0_wp=0, therefore these three bits only give rise to 5 possibilities. - * - * Therefore, the maximum number of possible upper-level shadow pages for a - * single gfn is a bit less than 2^13. - */ -union kvm_mmu_page_role { - u32 word; - struct { - unsigned level:4; - unsigned has_4_byte_gpte:1; - unsigned quadrant:2; - unsigned direct:1; - unsigned access:3; - unsigned invalid:1; - unsigned efer_nx:1; - unsigned cr0_wp:1; - unsigned smep_andnot_wp:1; - unsigned smap_andnot_wp:1; - unsigned ad_disabled:1; - unsigned guest_mode:1; - unsigned passthrough:1; - unsigned :5; - - /* - * This is left at the top of the word so that - * kvm_memslots_for_spte_role can extract it with a - * simple shift. While there is room, give it a whole - * byte so it is also faster to load it from memory. - */ - unsigned as_id:8; - }; -}; - /* * kvm_mmu_extended_role complements kvm_mmu_page_role, tracking properties * relevant to the current MMU configuration. When loading CR0, CR4, or EFER, diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f375b719f565..355548603960 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -210,13 +210,13 @@ static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu) \ { \ return !!(mmu->cpu_role. base_or_ext . 
reg##_##name); \ } -BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp); +BUILD_MMU_ROLE_ACCESSOR(base.arch, cr0, wp); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57); -BUILD_MMU_ROLE_ACCESSOR(base, efer, nx); +BUILD_MMU_ROLE_ACCESSOR(base.arch, efer, nx); BUILD_MMU_ROLE_ACCESSOR(ext, efer, lma); static inline bool is_cr0_pg(struct kvm_mmu *mmu) @@ -226,7 +226,7 @@ static inline bool is_cr0_pg(struct kvm_mmu *mmu) static inline bool is_cr4_pae(struct kvm_mmu *mmu) { - return !mmu->cpu_role.base.has_4_byte_gpte; + return !mmu->cpu_role.base.arch.has_4_byte_gpte; } static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu) @@ -618,7 +618,7 @@ static bool mmu_spte_age(u64 *sptep) static inline bool is_tdp_mmu_active(struct kvm_vcpu *vcpu) { - return tdp_mmu_enabled && vcpu->arch.mmu->root_role.direct; + return tdp_mmu_enabled && vcpu->arch.mmu->root_role.arch.direct; } static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu) @@ -695,10 +695,10 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp); static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index) { - if (sp->role.passthrough) + if (sp->role.arch.passthrough) return sp->gfn; - if (!sp->role.direct) + if (!sp->role.arch.direct) return sp->shadowed_translation[index] >> PAGE_SHIFT; return sp->gfn + (index << ((sp->role.level - 1) * SPTE_LEVEL_BITS)); @@ -727,7 +727,7 @@ static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index) * * In both cases, sp->role.access contains the correct access bits. */ - return sp->role.access; + return sp->role.arch.access; } static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index, @@ -739,14 +739,14 @@ static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index, } WARN_ONCE(access != kvm_mmu_page_get_access(sp, index), - "access mismatch under %s page %llx (expected %u, got %u)\n", - sp->role.passthrough ? "passthrough" : "direct", - sp->gfn, kvm_mmu_page_get_access(sp, index), access); + "access mismatch under %s page %llx (expected %u, got %u)\n", + sp->role.arch.passthrough ? "passthrough" : "direct", + sp->gfn, kvm_mmu_page_get_access(sp, index), access); WARN_ONCE(gfn != kvm_mmu_page_get_gfn(sp, index), - "gfn mismatch under %s page %llx (expected %llx, got %llx)\n", - sp->role.passthrough ? "passthrough" : "direct", - sp->gfn, kvm_mmu_page_get_gfn(sp, index), gfn); + "gfn mismatch under %s page %llx (expected %llx, got %llx)\n", + sp->role.arch.passthrough ? 
"passthrough" : "direct", + sp->gfn, kvm_mmu_page_get_gfn(sp, index), gfn); } static void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index, @@ -1723,7 +1723,7 @@ static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp) hlist_del(&sp->hash_link); list_del(&sp->link); free_page((unsigned long)sp->spt); - if (!sp->role.direct) + if (!sp->role.arch.direct) free_page((unsigned long)sp->shadowed_translation); kmem_cache_free(mmu_page_header_cache, sp); } @@ -1884,10 +1884,10 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm, static bool sp_has_gptes(struct kvm_mmu_page *sp) { - if (sp->role.direct) + if (sp->role.arch.direct) return false; - if (sp->role.passthrough) + if (sp->role.arch.passthrough) return false; return true; @@ -2065,7 +2065,7 @@ static void clear_sp_write_flooding_count(u64 *spte) * The vCPU is required when finding indirect shadow pages; the shadow * page may already exist and syncing it needs the vCPU pointer in * order to read guest page tables. Direct shadow pages are never - * unsync, thus @vcpu can be NULL if @role.direct is true. + * unsync, thus @vcpu can be NULL if @role.arch.direct is true. */ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm, struct kvm_vcpu *vcpu, @@ -2101,7 +2101,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm, } /* unsync and write-flooding only apply to indirect SPs. */ - if (sp->role.direct) + if (sp->role.arch.direct) goto out; if (sp->unsync) { @@ -2162,7 +2162,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm, sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache); sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache); - if (!role.direct) + if (!role.arch.direct) sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); @@ -2187,7 +2187,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm, return sp; } -/* Note, @vcpu may be NULL if @role.direct is true; see kvm_mmu_find_shadow_page. */ +/* Note, @vcpu may be NULL if @role.arch.direct is true; see kvm_mmu_find_shadow_page. */ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm, struct kvm_vcpu *vcpu, struct shadow_page_caches *caches, @@ -2231,9 +2231,9 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, role = parent_sp->role; role.level--; - role.access = access; - role.direct = direct; - role.passthrough = 0; + role.arch.access = access; + role.arch.direct = direct; + role.arch.passthrough = 0; /* * If the guest has 4-byte PTEs then that means it's using 32-bit, @@ -2261,9 +2261,9 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, * covers bit 21 (see above), thus the quadrant is calculated from the * _least_ significant bit of the PDE index. 
*/ - if (role.has_4_byte_gpte) { + if (role.arch.has_4_byte_gpte) { WARN_ON_ONCE(role.level != PG_LEVEL_4K); - role.quadrant = spte_index(sptep) & 1; + role.arch.quadrant = spte_index(sptep) & 1; } return role; @@ -2292,7 +2292,7 @@ static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterato if (iterator->level >= PT64_ROOT_4LEVEL && vcpu->arch.mmu->cpu_role.base.level < PT64_ROOT_4LEVEL && - !vcpu->arch.mmu->root_role.direct) + !vcpu->arch.mmu->root_role.arch.direct) iterator->level = PT32E_ROOT_LEVEL; if (iterator->level == PT32E_ROOT_LEVEL) { @@ -2391,7 +2391,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep, * a new sp with the correct access. */ child = spte_to_child_sp(*sptep); - if (child->role.access == direct_access) + if (child->role.arch.access == direct_access) return; drop_parent_pte(child, sptep); @@ -2420,7 +2420,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp, * avoids retaining a large number of stale nested SPs. */ if (tdp_enabled && invalid_list && - child->role.guest_mode && !child->parent_ptes.val) + child->role.arch.guest_mode && !child->parent_ptes.val) return kvm_mmu_prepare_zap_page(kvm, child, invalid_list); } @@ -2689,7 +2689,7 @@ static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva) gpa_t gpa; int r; - if (vcpu->arch.mmu->root_role.direct) + if (vcpu->arch.mmu->root_role.arch.direct) return 0; gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, NULL); @@ -2900,7 +2900,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu, { struct page *pages[PTE_PREFETCH_NUM]; struct kvm_memory_slot *slot; - unsigned int access = sp->role.access; + unsigned int access = sp->role.arch.access; int i, ret; gfn_t gfn; @@ -2928,7 +2928,7 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *spte, *start = NULL; int i; - WARN_ON(!sp->role.direct); + WARN_ON(!sp->role.arch.direct); i = spte_index(sptep) & ~(PTE_PREFETCH_NUM - 1); spte = sp->spt + i; @@ -3549,7 +3549,7 @@ void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu) * This should not be called while L2 is active, L2 can't invalidate * _only_ its own roots, e.g. INVVPID unconditionally exits. */ - WARN_ON_ONCE(mmu->root_role.guest_mode); + WARN_ON_ONCE(mmu->root_role.arch.guest_mode); for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) { root_hpa = mmu->prev_roots[i].hpa; @@ -3557,7 +3557,7 @@ void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu) continue; if (!to_shadow_page(root_hpa) || - to_shadow_page(root_hpa)->role.guest_mode) + to_shadow_page(root_hpa)->role.arch.guest_mode) roots_to_free |= KVM_MMU_ROOT_PREVIOUS(i); } @@ -3585,10 +3585,10 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant, struct kvm_mmu_page *sp; role.level = level; - role.quadrant = quadrant; + role.arch.quadrant = quadrant; - WARN_ON_ONCE(quadrant && !role.has_4_byte_gpte); - WARN_ON_ONCE(role.direct && role.has_4_byte_gpte); + WARN_ON_ONCE(quadrant && !role.arch.has_4_byte_gpte); + WARN_ON_ONCE(role.arch.direct && role.arch.has_4_byte_gpte); sp = kvm_mmu_get_shadow_page(vcpu, gfn, role); ++sp->root_count; @@ -3834,7 +3834,7 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu) * equivalent level in the guest's NPT to shadow. Allocate the tables * on demand, as running a 32-bit L1 VMM on 64-bit KVM is very rare. 
*/ - if (mmu->root_role.direct || + if (mmu->root_role.arch.direct || mmu->cpu_role.base.level >= PT64_ROOT_4LEVEL || mmu->root_role.level < PT64_ROOT_4LEVEL) return 0; @@ -3932,7 +3932,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu) int i; struct kvm_mmu_page *sp; - if (vcpu->arch.mmu->root_role.direct) + if (vcpu->arch.mmu->root_role.arch.direct) return; if (!VALID_PAGE(vcpu->arch.mmu->root.hpa)) @@ -4161,7 +4161,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, arch.token = alloc_apf_token(vcpu); arch.gfn = gfn; - arch.direct_map = vcpu->arch.mmu->root_role.direct; + arch.direct_map = vcpu->arch.mmu->root_role.arch.direct; arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu); return kvm_setup_async_pf(vcpu, cr2_or_gpa, @@ -4172,7 +4172,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) { int r; - if ((vcpu->arch.mmu->root_role.direct != work->arch.direct_map) || + if ((vcpu->arch.mmu->root_role.arch.direct != work->arch.direct_map) || work->wakeup_all) return; @@ -4180,7 +4180,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) if (unlikely(r)) return; - if (!vcpu->arch.mmu->root_role.direct && + if (!vcpu->arch.mmu->root_role.arch.direct && work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu)) return; @@ -4456,7 +4456,7 @@ static void nonpaging_init_context(struct kvm_mmu *context) static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd, union kvm_mmu_page_role role) { - return (role.direct || pgd == root->pgd) && + return (role.arch.direct || pgd == root->pgd) && VALID_PAGE(root->hpa) && role.word == to_shadow_page(root->hpa)->role.word; } @@ -4576,7 +4576,7 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd) * If this is a direct root page, it doesn't have a write flooding * count. Otherwise, clear the write flooding count. */ - if (!new_role.direct) + if (!new_role.arch.direct) __clear_sp_write_flooding_count( to_shadow_page(vcpu->arch.mmu->root.hpa)); } @@ -4803,7 +4803,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, shadow_zero_check = &context->shadow_zero_check; __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(), context->root_role.level, - context->root_role.efer_nx, + context->root_role.arch.efer_nx, guest_can_use_gbpages(vcpu), is_pse, is_amd); if (!shadow_me_mask) @@ -5055,21 +5055,21 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) { union kvm_cpu_role role = {0}; - role.base.access = ACC_ALL; role.base.as_id = is_smm(vcpu); - role.base.guest_mode = is_guest_mode(vcpu); + role.base.arch.access = ACC_ALL; + role.base.arch.guest_mode = is_guest_mode(vcpu); role.ext.valid = 1; if (!____is_cr0_pg(regs)) { - role.base.direct = 1; + role.base.arch.direct = 1; return role; } - role.base.efer_nx = ____is_efer_nx(regs); - role.base.cr0_wp = ____is_cr0_wp(regs); - role.base.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs); - role.base.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs); - role.base.has_4_byte_gpte = !____is_cr4_pae(regs); + role.base.arch.efer_nx = ____is_efer_nx(regs); + role.base.arch.cr0_wp = ____is_cr0_wp(regs); + role.base.arch.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs); + role.base.arch.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs); + role.base.arch.has_4_byte_gpte = !____is_cr4_pae(regs); if (____is_efer_lma(regs)) role.base.level = ____is_cr4_la57(regs) ? 
PT64_ROOT_5LEVEL @@ -5109,15 +5109,15 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, { union kvm_mmu_page_role role = {0}; - role.access = ACC_ALL; - role.cr0_wp = true; - role.efer_nx = true; role.as_id = cpu_role.base.as_id; - role.guest_mode = cpu_role.base.guest_mode; - role.ad_disabled = !kvm_ad_enabled(); role.level = kvm_mmu_get_tdp_level(vcpu); - role.direct = true; - role.has_4_byte_gpte = false; + role.arch.access = ACC_ALL; + role.arch.cr0_wp = true; + role.arch.efer_nx = true; + role.arch.guest_mode = cpu_role.base.arch.guest_mode; + role.arch.ad_disabled = !kvm_ad_enabled(); + role.arch.direct = true; + role.arch.has_4_byte_gpte = false; return role; } @@ -5194,7 +5194,7 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, * NX can be used by any non-nested shadow MMU to avoid having to reset * MMU contexts. */ - root_role.efer_nx = true; + root_role.arch.efer_nx = true; shadow_mmu_init_context(vcpu, context, cpu_role, root_role); } @@ -5212,13 +5212,13 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, union kvm_mmu_page_role root_role; /* NPT requires CR0.PG=1. */ - WARN_ON_ONCE(cpu_role.base.direct); + WARN_ON_ONCE(cpu_role.base.arch.direct); root_role = cpu_role.base; root_role.level = kvm_mmu_get_tdp_level(vcpu); if (root_role.level == PT64_ROOT_5LEVEL && cpu_role.base.level == PT64_ROOT_4LEVEL) - root_role.passthrough = 1; + root_role.arch.passthrough = 1; shadow_mmu_init_context(vcpu, context, cpu_role, root_role); kvm_mmu_new_pgd(vcpu, nested_cr3); @@ -5237,11 +5237,11 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty, */ WARN_ON_ONCE(is_smm(vcpu)); role.base.level = level; - role.base.has_4_byte_gpte = false; - role.base.direct = false; - role.base.ad_disabled = !accessed_dirty; - role.base.guest_mode = true; - role.base.access = ACC_ALL; + role.base.arch.has_4_byte_gpte = false; + role.base.arch.direct = false; + role.base.arch.ad_disabled = !accessed_dirty; + role.base.arch.guest_mode = true; + role.base.arch.access = ACC_ALL; role.ext.word = 0; role.ext.execonly = execonly; @@ -5385,13 +5385,13 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu) { int r; - r = mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->root_role.direct); + r = mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->root_role.arch.direct); if (r) goto out; r = mmu_alloc_special_roots(vcpu); if (r) goto out; - if (vcpu->arch.mmu->root_role.direct) + if (vcpu->arch.mmu->root_role.arch.direct) r = mmu_alloc_direct_roots(vcpu); else r = mmu_alloc_shadow_roots(vcpu); @@ -5526,7 +5526,7 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa, gpa, bytes, sp->role.word); offset = offset_in_page(gpa); - pte_size = sp->role.has_4_byte_gpte ? 4 : 8; + pte_size = sp->role.arch.has_4_byte_gpte ? 
4 : 8; /* * Sometimes, the OS only writes the last one bytes to update status @@ -5550,7 +5550,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte) page_offset = offset_in_page(gpa); level = sp->role.level; *nspte = 1; - if (sp->role.has_4_byte_gpte) { + if (sp->role.arch.has_4_byte_gpte) { page_offset <<= 1; /* 32->64 */ /* * A 32-bit pde maps 4MB while the shadow pdes map @@ -5564,7 +5564,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte) } quadrant = page_offset >> PAGE_SHIFT; page_offset &= ~PAGE_MASK; - if (quadrant != sp->role.quadrant) + if (quadrant != sp->role.arch.quadrant) return NULL; } @@ -5628,7 +5628,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err void *insn, int insn_len) { int r, emulation_type = EMULTYPE_PF; - bool direct = vcpu->arch.mmu->root_role.direct; + bool direct = vcpu->arch.mmu->root_role.arch.direct; if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root.hpa))) return RET_PF_RETRY; @@ -5659,7 +5659,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err * paging in both guests. If true, we simply unprotect the page * and resume the guest. */ - if (vcpu->arch.mmu->root_role.direct && + if (vcpu->arch.mmu->root_role.arch.direct && (error_code & PFERR_NESTED_GUEST_PAGE) == PFERR_NESTED_GUEST_PAGE) { kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa)); return 1; @@ -6321,7 +6321,7 @@ static void shadow_mmu_split_huge_page(struct kvm *kvm, spte = make_huge_page_split_spte(kvm, huge_spte, sp->role, index); mmu_spte_set(sptep, spte); - __rmap_add(kvm, cache, slot, sptep, gfn, sp->role.access); + __rmap_add(kvm, cache, slot, sptep, gfn, sp->role.arch.access); } __link_shadow_page(kvm, cache, huge_sptep, sp, flush); @@ -6380,7 +6380,7 @@ static bool shadow_mmu_try_split_huge_pages(struct kvm *kvm, sp = sptep_to_sp(huge_sptep); /* TDP MMU is enabled, so rmap only contains nested MMU SPs. */ - if (WARN_ON_ONCE(!sp->role.guest_mode)) + if (WARN_ON_ONCE(!sp->role.arch.guest_mode)) continue; /* The rmaps should never contain non-leaf SPTEs. */ @@ -6502,7 +6502,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, * the guest, and the guest page table is using 4K page size * mapping if the indirect sp has level = 1. */ - if (sp->role.direct && + if (sp->role.arch.direct && sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn, PG_LEVEL_NUM)) { kvm_zap_one_rmap_spte(kvm, rmap_head, sptep); @@ -6942,7 +6942,7 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm) struct kvm_mmu_page, possible_nx_huge_page_link); WARN_ON_ONCE(!sp->nx_huge_page_disallowed); - WARN_ON_ONCE(!sp->role.direct); + WARN_ON_ONCE(!sp->role.arch.direct); /* * Unaccount and do not attempt to recover any NX Huge Pages diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 5427f65117b4..c19a80fdeb8d 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -143,7 +143,7 @@ static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp) * being enabled is mandatory as the bits used to denote WP-only SPTEs * are reserved for PAE paging (32-bit KVM). 
*/ - return kvm_x86_ops.cpu_dirty_log_size && sp->role.guest_mode; + return kvm_x86_ops.cpu_dirty_log_size && sp->role.arch.guest_mode; } int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, @@ -270,7 +270,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, }; int r; - if (vcpu->arch.mmu->root_role.direct) { + if (vcpu->arch.mmu->root_role.arch.direct) { fault.gfn = fault.addr >> PAGE_SHIFT; fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn); } diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index ae86820cef69..6a4a43b90780 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -35,13 +35,13 @@ " %snxe %sad root %u %s%c", \ __entry->mmu_valid_gen, \ __entry->gfn, role.level, \ - role.has_4_byte_gpte ? 4 : 8, \ - role.quadrant, \ - role.direct ? " direct" : "", \ - access_str[role.access], \ + role.arch.has_4_byte_gpte ? 4 : 8, \ + role.arch.quadrant, \ + role.arch.direct ? " direct" : "", \ + access_str[role.arch.access], \ role.invalid ? " invalid" : "", \ - role.efer_nx ? "" : "!", \ - role.ad_disabled ? "!" : "", \ + role.arch.efer_nx ? "" : "!", \ + role.arch.ad_disabled ? "!" : "", \ __entry->root_count, \ __entry->unsync ? "unsync" : "sync", 0); \ saved_ptr; \ diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index e5662dbd519c..e15ec1c473da 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -55,7 +55,7 @@ #define PT_LEVEL_BITS 9 #define PT_GUEST_DIRTY_SHIFT 9 #define PT_GUEST_ACCESSED_SHIFT 8 - #define PT_HAVE_ACCESSED_DIRTY(mmu) (!(mmu)->cpu_role.base.ad_disabled) + #define PT_HAVE_ACCESSED_DIRTY(mmu) (!(mmu)->cpu_role.base.arch.ad_disabled) #define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL #else #error Invalid PTTYPE value @@ -532,7 +532,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte); gfn = gpte_to_gfn(gpte); - pte_access = sp->role.access & FNAME(gpte_access)(gpte); + pte_access = sp->role.arch.access & FNAME(gpte_access)(gpte); FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte); slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, @@ -592,7 +592,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw, if (unlikely(vcpu->kvm->mmu_invalidate_in_progress)) return; - if (sp->role.direct) + if (sp->role.arch.direct) return __direct_pte_prefetch(vcpu, sp, sptep); i = spte_index(sptep) & ~(PTE_PREFETCH_NUM - 1); @@ -884,7 +884,7 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp) WARN_ON(sp->role.level != PG_LEVEL_4K); if (PTTYPE == 32) - offset = sp->role.quadrant << SPTE_LEVEL_BITS; + offset = sp->role.arch.quadrant << SPTE_LEVEL_BITS; return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t); } @@ -1003,9 +1003,11 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) */ const union kvm_mmu_page_role sync_role_ign = { .level = 0xf, - .access = 0x7, - .quadrant = 0x3, - .passthrough = 0x1, + .arch = { + .access = 0x7, + .quadrant = 0x3, + .passthrough = 0x1, + }, }; /* @@ -1014,7 +1016,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the * reserved bits checks will be wrong, etc... 
*/ - if (WARN_ON_ONCE(sp->role.direct || + if (WARN_ON_ONCE(sp->role.arch.direct || (sp->role.word ^ root_role.word) & ~sync_role_ign.word)) return -1; @@ -1043,7 +1045,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) } gfn = gpte_to_gfn(gpte); - pte_access = sp->role.access; + pte_access = sp->role.arch.access; pte_access &= FNAME(gpte_access)(gpte); FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte); diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index c0fd7e049b4e..fe4b626cb431 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -146,7 +146,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, WARN_ON_ONCE(!pte_access && !shadow_present_mask); - if (sp->role.ad_disabled) + if (sp->role.arch.ad_disabled) spte |= SPTE_TDP_AD_DISABLED_MASK; else if (kvm_mmu_page_ad_need_write_protect(sp)) spte |= SPTE_TDP_AD_WRPROT_ONLY_MASK; @@ -301,7 +301,7 @@ u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, union kvm_mmu_page * the page executable as the NX hugepage mitigation no longer * applies. */ - if ((role.access & ACC_EXEC_MASK) && is_nx_huge_page_enabled(kvm)) + if ((role.arch.access & ACC_EXEC_MASK) && is_nx_huge_page_enabled(kvm)) child_spte = make_spte_executable(child_spte); } diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 1f03701b943a..ad84c549fe96 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -260,7 +260,7 @@ static inline bool kvm_ad_enabled(void) static inline bool sp_ad_disabled(struct kvm_mmu_page *sp) { - return sp->role.ad_disabled; + return sp->role.arch.ad_disabled; } static inline bool spte_ad_enabled(u64 spte) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 9b2da8c8f30a..2bfe060768fc 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8442,7 +8442,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, WARN_ON_ONCE(!(emulation_type & EMULTYPE_PF))) return false; - if (!vcpu->arch.mmu->root_role.direct) { + if (!vcpu->arch.mmu->root_role.arch.direct) { /* * Write permission should be allowed since only * write access need to be emulated. @@ -8475,7 +8475,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, kvm_release_pfn_clean(pfn); /* The instructions are well-emulated on direct mmu. */ - if (vcpu->arch.mmu->root_role.direct) { + if (vcpu->arch.mmu->root_role.arch.direct) { unsigned int indirect_shadow_pages; write_lock(&vcpu->kvm->mmu_lock); @@ -8543,7 +8543,7 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt, vcpu->arch.last_retry_eip = ctxt->eip; vcpu->arch.last_retry_addr = cr2_or_gpa; - if (!vcpu->arch.mmu->root_role.direct) + if (!vcpu->arch.mmu->root_role.arch.direct) gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL); kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)); @@ -8846,7 +8846,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, ctxt->exception.address = cr2_or_gpa; /* With shadow page tables, cr2 contains a GVA or nGPA. 
 	 */
-	if (vcpu->arch.mmu->root_role.direct) {
+	if (vcpu->arch.mmu->root_role.arch.direct) {
 		ctxt->gpa_available = true;
 		ctxt->gpa_val = cr2_or_gpa;
 	}
diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h
new file mode 100644
index 000000000000..3f35a924e031
--- /dev/null
+++ b/include/kvm/mmu_types.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_MMU_TYPES_H
+#define __KVM_MMU_TYPES_H
+
+#include
+#include
+#include
+
+#include
+
+/*
+ * kvm_mmu_page_role tracks the properties of a shadow page (where shadow page
+ * also includes TDP pages) to determine whether or not a page can be used in
+ * the given MMU context.
+ */
+union kvm_mmu_page_role {
+	u32 word;
+	struct {
+		struct {
+			/* The address space ID mapped by the page. */
+			u16 as_id:8;
+
+			/* The level of the page in the page table hierarchy. */
+			u16 level:4;
+
+			/* Whether the page is invalid, i.e. pending destruction. */
+			u16 invalid:1;
+		};
+
+		/* Architecture-specific properties. */
+		struct kvm_mmu_page_role_arch arch;
+	};
+};
+
+static_assert(sizeof(union kvm_mmu_page_role) == sizeof_field(union kvm_mmu_page_role, word));
+
+#endif /* !__KVM_MMU_TYPES_H */
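An aside on portability (hypothetical, not part of this series): with
the common union above, wiring up another architecture only requires
providing its own kvm_mmu_page_role_arch. The fields below are invented
purely to show the shape; the size check mirrors the static_assert in
include/kvm/mmu_types.h.

#include <stdint.h>

/* A made-up arch sub-struct for some non-x86 architecture. */
struct other_arch_role {
	uint16_t stage2:1;	/* invented example bit */
	uint16_t unused:15;
};

union other_role {
	uint32_t word;
	struct {
		struct {
			uint16_t as_id:8;
			uint16_t level:4;
			uint16_t invalid:1;
		};
		struct other_arch_role arch;
	};
};

/* Same invariant the common header enforces with static_assert(). */
_Static_assert(sizeof(union other_role) == sizeof(uint32_t),
	       "arch sub-struct must not grow the role word");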

From patchwork Thu Dec 8 19:38:23 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713897
Date: Thu, 8 Dec 2022 11:38:23 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
Message-ID: <20221208193857.4090582-4-dmatlack@google.com>
Subject: [RFC PATCH 03/37] KVM: MMU: Move tdp_ptep_t into common code
From: David Matlack
To: Paolo Bonzini

Move the definition of tdp_ptep_t into kvm/mmu_types.h so it can be used
from common code in future commits.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu_internal.h | 2 --
 include/kvm/mmu_types.h         | 2 ++
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index c19a80fdeb8d..e32379c5b1ad 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -44,8 +44,6 @@ extern bool dbg;
 #define INVALID_PAE_ROOT 0
 #define IS_VALID_PAE_ROOT(x) (!!(x))
 
-typedef u64 __rcu *tdp_ptep_t;
-
 struct kvm_mmu_page {
 	/*
 	 * Note, "link" through "spt" fit in a single 64 byte cache line on
diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h
index 3f35a924e031..14099956fdac 100644
--- a/include/kvm/mmu_types.h
+++ b/include/kvm/mmu_types.h
@@ -34,4 +34,6 @@ union kvm_mmu_page_role {
 static_assert(sizeof(union kvm_mmu_page_role) == sizeof_field(union kvm_mmu_page_role, word));
 
+typedef u64 __rcu *tdp_ptep_t;
+
 #endif /* !__KVM_MMU_TYPES_H */
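
The __rcu annotation on tdp_ptep_t is the interesting part once the type is
shared: it tells sparse that the pointer may only be dereferenced inside an
RCU read-side critical section, e.g. while walking a TDP page table that a
parallel zap may be tearing down. A hedged sketch of the intended access
pattern (example_read_spte is a made-up helper for illustration, not a
function from this series; it assumes the kernel's RCU primitives):

	/* Sketch only: read one SPTE through a tdp_ptep_t. */
	static u64 example_read_spte(tdp_ptep_t sptep)
	{
		u64 spte;

		rcu_read_lock();
		/* rcu_dereference() pairs with the writer's rcu_assign_pointer(). */
		spte = READ_ONCE(*rcu_dereference(sptep));
		rcu_read_unlock();

		return spte;
	}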
From patchwork Thu Dec 8 19:38:24 2022
From: David Matlack
Date: Thu, 8 Dec 2022 11:38:24 -0800
Subject: [RFC PATCH 04/37] KVM: x86/mmu: Invert sp->tdp_mmu_page to sp->shadow_mmu_page
Message-ID: <20221208193857.4090582-5-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org

Invert the meaning of sp->tdp_mmu_page and rename it accordingly. This
allows the TDP MMU code to not care about this field, which will be used
in a subsequent commit to move the TDP MMU to common code.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 1 +
 arch/x86/kvm/mmu/mmu_internal.h | 2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      | 3 ---
 arch/x86/kvm/mmu/tdp_mmu.h      | 5 ++++-
 4 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 355548603960..f7668a32721d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2180,6 +2180,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
 
 	sp->gfn = gfn;
 	sp->role = role;
+	sp->shadow_mmu_page = true;
 	hlist_add_head(&sp->hash_link, sp_list);
 	if (sp_has_gptes(sp))
 		account_shadowed(kvm, sp);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index e32379c5b1ad..c1a379fba24d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -52,7 +52,7 @@ struct kvm_mmu_page {
 	struct list_head link;
 	struct hlist_node hash_link;
 
-	bool tdp_mmu_page;
+	bool shadow_mmu_page;
 	bool unsync;
 	u8 mmu_valid_gen;
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7ccac1aa8df6..fc0b87ceb1ea 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -133,8 +133,6 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 	if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
 		return;
 
-	WARN_ON(!is_tdp_mmu_page(root));
-
 	/*
 	 * The root now has refcount=0.  It is valid, but readers already
 	 * cannot acquire a reference to it because kvm_tdp_mmu_get_root()
@@ -279,7 +277,6 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
 	sp->role = role;
 	sp->gfn = gfn;
 	sp->ptep = sptep;
-	sp->tdp_mmu_page = true;
 
 	trace_kvm_mmu_get_page(sp, true);
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 0a63b1afabd3..18d3719f14ea 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -71,7 +71,10 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 					u64 *spte);
 
 #ifdef CONFIG_X86_64
-static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
+static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp)
+{
+	return !sp->shadow_mmu_page;
+}
 #else
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
 #endif
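
The inversion works because page headers come out of zeroed allocations (an
assumption about the page-header cache, consistent with the TDP MMU never
writing the old flag after this patch): a page that never sets
shadow_mmu_page reads as a TDP MMU page by default, so only the Shadow MMU
allocation path touches the field. A standalone toy model (user-space C
with simplified stand-in types; calloc() models the zeroed allocation):

	#include <assert.h>
	#include <stdbool.h>
	#include <stdlib.h>

	struct mmu_page {
		bool shadow_mmu_page; /* set only on the Shadow MMU path */
	};

	static bool is_tdp_mmu_page(const struct mmu_page *sp)
	{
		return !sp->shadow_mmu_page;
	}

	int main(void)
	{
		/* calloc() stands in for the zeroed page-header cache. */
		struct mmu_page *tdp_sp = calloc(1, sizeof(*tdp_sp));
		struct mmu_page *shadow_sp = calloc(1, sizeof(*shadow_sp));

		shadow_sp->shadow_mmu_page = true; /* Shadow MMU opts in */

		assert(is_tdp_mmu_page(tdp_sp));   /* TDP path never touches the flag */
		assert(!is_tdp_mmu_page(shadow_sp));

		free(tdp_sp);
		free(shadow_sp);
		return 0;
	}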
From patchwork Thu Dec 8 19:38:25 2022
From: David Matlack
Date: Thu, 8 Dec 2022 11:38:25 -0800
Subject: [RFC PATCH 05/37] KVM: x86/mmu: Unify TDP MMU and Shadow MMU root refcounts
Message-ID: <20221208193857.4090582-6-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org

Use the same field for refcounting roots in the TDP MMU and Shadow MMU.

The atomicity provided by refcount_t is overkill for the Shadow MMU, since
it holds the write-lock. But converging this field will enable a future
commit to more easily move struct kvm_mmu_page to common code.

Note, refcount_dec_and_test() returns true if the resulting refcount is 0.
Hence the check in mmu_free_root_page() is inverted to check if the shadow
root refcount is 0.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 14 +++++++-------
 arch/x86/kvm/mmu/mmu_internal.h |  6 ++----
 arch/x86/kvm/mmu/mmutrace.h     |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      |  8 ++++----
 arch/x86/kvm/mmu/tdp_mmu.h      |  2 +-
 5 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f7668a32721d..11cef930d5ed 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2498,14 +2498,14 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 	if (sp->unsync)
 		kvm_unlink_unsync_page(kvm, sp);
 
-	if (!sp->root_count) {
+	if (!refcount_read(&sp->root_refcount)) {
 		/* Count self */
 		(*nr_zapped)++;
 
 		/*
 		 * Already invalid pages (previously active roots) are not on
 		 * the active page list.  See list_del() in the "else" case of
-		 * !sp->root_count.
+		 * !sp->root_refcount.
 		 */
 		if (sp->role.invalid)
 			list_add(&sp->link, invalid_list);
@@ -2515,7 +2515,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 	} else {
 		/*
 		 * Remove the active root from the active page list, the root
-		 * will be explicitly freed when the root_count hits zero.
+		 * will be explicitly freed when the root_refcount hits zero.
 		 */
 		list_del(&sp->link);
 
@@ -2570,7 +2570,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	kvm_flush_remote_tlbs(kvm);
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
-		WARN_ON(!sp->role.invalid || sp->root_count);
+		WARN_ON(!sp->role.invalid || refcount_read(&sp->root_refcount));
 		kvm_mmu_free_shadow_page(sp);
 	}
 }
@@ -2593,7 +2593,7 @@ static unsigned long kvm_mmu_zap_oldest_mmu_pages(struct kvm *kvm,
 		 * Don't zap active root pages, the page itself can't be freed
 		 * and zapping it will just force vCPUs to realloc and reload.
 		 */
-		if (sp->root_count)
+		if (refcount_read(&sp->root_refcount))
 			continue;
 
 		unstable = __kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list,
@@ -3481,7 +3481,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 
 	if (is_tdp_mmu_page(sp))
 		kvm_tdp_mmu_put_root(kvm, sp, false);
-	else if (!--sp->root_count && sp->role.invalid)
+	else if (refcount_dec_and_test(&sp->root_refcount) && sp->role.invalid)
 		kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
 
 	*root_hpa = INVALID_PAGE;
@@ -3592,7 +3592,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
 	WARN_ON_ONCE(role.arch.direct && role.arch.has_4_byte_gpte);
 
 	sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
-	++sp->root_count;
+	refcount_inc(&sp->root_refcount);
 
 	return __pa(sp->spt);
 }
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index c1a379fba24d..fd4990c8b0e9 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -87,10 +87,8 @@ struct kvm_mmu_page {
 	u64 *shadowed_translation;
 
 	/* Currently serving as active root */
-	union {
-		int root_count;
-		refcount_t tdp_mmu_root_count;
-	};
+	refcount_t root_refcount;
+
 	unsigned int unsync_children;
 	union {
 		struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index 6a4a43b90780..ffd10ce3eae3 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -19,7 +19,7 @@
 	__entry->mmu_valid_gen = sp->mmu_valid_gen;	\
 	__entry->gfn = sp->gfn;				\
 	__entry->role = sp->role.word;			\
-	__entry->root_count = sp->root_count;		\
+	__entry->root_count = refcount_read(&sp->root_refcount);	\
 	__entry->unsync = sp->unsync;
 
 #define KVM_MMU_PAGE_PRINTK() ({				\
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index fc0b87ceb1ea..34d674080170 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -130,7 +130,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 {
 	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
 
-	if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
+	if (!refcount_dec_and_test(&root->root_refcount))
 		return;
 
 	/*
@@ -158,7 +158,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 	 * zap the root because a root cannot go from invalid to valid.
 	 */
 	if (!kvm_tdp_root_mark_invalid(root)) {
-		refcount_set(&root->tdp_mmu_root_count, 1);
+		refcount_set(&root->root_refcount, 1);
 
 		/*
 		 * Zapping the root in a worker is not just "nice to have";
@@ -316,7 +316,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 	root = tdp_mmu_alloc_sp(vcpu);
 	tdp_mmu_init_sp(root, NULL, 0, role);
 
-	refcount_set(&root->tdp_mmu_root_count, 1);
+	refcount_set(&root->root_refcount, 1);
 
 	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
 	list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots);
@@ -883,7 +883,7 @@ static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 	 * and lead to use-after-free as zapping a SPTE triggers "writeback" of
 	 * dirty accessed bits to the SPTE's associated struct page.
 	 */
-	WARN_ON_ONCE(!refcount_read(&root->tdp_mmu_root_count));
+	WARN_ON_ONCE(!refcount_read(&root->root_refcount));
 
 	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 18d3719f14ea..19d3153051a3 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -14,7 +14,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
 {
-	return refcount_inc_not_zero(&root->tdp_mmu_root_count);
+	return refcount_inc_not_zero(&root->root_refcount);
 }
 
 void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
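
The inverted check in mmu_free_root_page() follows directly from the return
conventions: "!--count" is true exactly when the decrement reaches zero, and
refcount_dec_and_test() likewise returns true only on the transition to
zero, so both forms free the root at the same moment. A toy model (plain
ints, no atomics, illustrative only):

	#include <assert.h>
	#include <stdbool.h>

	static int count;

	/* Models the old "!--sp->root_count" check. */
	static bool old_style_put(void) { return !--count; }

	/* Models refcount_dec_and_test(): true only when the new value is 0. */
	static bool new_style_put(void) { return --count == 0; }

	int main(void)
	{
		count = 2;
		assert(!old_style_put()); /* 2 -> 1: keep the root */
		assert(old_style_put());  /* 1 -> 0: free the root */

		count = 2;
		assert(!new_style_put()); /* same transitions, same answers */
		assert(new_style_put());
		return 0;
	}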
From patchwork Thu Dec 8 19:38:26 2022
From: David Matlack
Date: Thu, 8 Dec 2022 11:38:26 -0800
Subject: [RFC PATCH 06/37] KVM: MMU: Move struct kvm_mmu_page to common code
Message-ID: <20221208193857.4090582-7-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-mips@vger.kernel.org

Move struct kvm_mmu_page to common code and all x86-specific fields into
kvm_mmu_page_arch.

This commit increases the size of struct kvm_mmu_page by 64 bytes on
x86_64 (184 bytes -> 248 bytes). The size of this struct can be reduced in
future commits by moving TDP MMU root fields into a separate struct and by
dynamically allocating fields only used by the Shadow MMU.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm/mmu_types.h |  62 ++++++++++++++
 arch/x86/include/asm/kvm_host.h      |   4 -
 arch/x86/kvm/mmu/mmu.c               | 122 ++++++++++++++-------------
 arch/x86/kvm/mmu/mmu_internal.h      |  83 ------------------
 arch/x86/kvm/mmu/mmutrace.h          |   4 +-
 arch/x86/kvm/mmu/paging_tmpl.h       |  10 +--
 arch/x86/kvm/mmu/tdp_mmu.c           |   8 +-
 arch/x86/kvm/mmu/tdp_mmu.h           |   2 +-
 include/kvm/mmu_types.h              |  32 ++++++-
 9 files changed, 167 insertions(+), 160 deletions(-)

diff --git a/arch/x86/include/asm/kvm/mmu_types.h b/arch/x86/include/asm/kvm/mmu_types.h
index 35f893ebab5a..affcb520b482 100644
--- a/arch/x86/include/asm/kvm/mmu_types.h
+++ b/arch/x86/include/asm/kvm/mmu_types.h
@@ -2,6 +2,8 @@
 #ifndef __ASM_KVM_MMU_TYPES_H
 #define __ASM_KVM_MMU_TYPES_H
 
+#include
+#include
 #include
 
 /*
@@ -53,4 +55,64 @@ struct kvm_mmu_page_role_arch {
 	u16 passthrough:1;
 };
 
+struct kvm_rmap_head {
+	unsigned long val;
+};
+
+struct kvm_mmu_page_arch {
+	struct hlist_node hash_link;
+
+	bool shadow_mmu_page;
+	bool unsync;
+	u8 mmu_valid_gen;
+
+	/*
+	 * The shadow page can't be replaced by an equivalent huge page
+	 * because it is being used to map an executable page in the guest
+	 * and the NX huge page mitigation is enabled.
+	 */
+	bool nx_huge_page_disallowed;
+
+	/*
+	 * Stores the result of the guest translation being shadowed by each
+	 * SPTE. KVM shadows two types of guest translations: nGPA -> GPA
+	 * (shadow EPT/NPT) and GVA -> GPA (traditional shadow paging). In both
+	 * cases the result of the translation is a GPA and a set of access
+	 * constraints.
+	 *
+	 * The GFN is stored in the upper bits (PAGE_SHIFT) and the shadowed
+	 * access permissions are stored in the lower bits. Note, for
+	 * convenience and uniformity across guests, the access permissions are
+	 * stored in KVM format (e.g. ACC_EXEC_MASK) not the raw guest format.
+	 */
+	u64 *shadowed_translation;
+
+	unsigned int unsync_children;
+
+	/* Rmap pointers to all parent sptes. */
+	struct kvm_rmap_head parent_ptes;
+
+	DECLARE_BITMAP(unsync_child_bitmap, 512);
+
+	/*
+	 * Tracks shadow pages that, if zapped, would allow KVM to create an NX
+	 * huge page. A shadow page will have nx_huge_page_disallowed set but
+	 * not be on the list if a huge page is disallowed for other reasons,
+	 * e.g. because KVM is shadowing a PTE at the same gfn, the memslot
+	 * isn't properly aligned, etc...
+	 */
+	struct list_head possible_nx_huge_page_link;
+
+#ifdef CONFIG_X86_32
+	/*
+	 * Used out of the mmu-lock to avoid reading spte values while an
+	 * update is in progress; see the comments in __get_spte_lockless().
+	 */
+	int clear_spte_count;
+#endif
+
+	/* Number of writes since the last time traversal visited this page. */
+	atomic_t write_flooding_count;
+};
+
 #endif /* !__ASM_KVM_MMU_TYPES_H */
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ebcd7a0dabef..f5743a652e10 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -329,10 +329,6 @@ union kvm_cpu_role {
 	};
 };
 
-struct kvm_rmap_head {
-	unsigned long val;
-};
-
 struct kvm_pio_request {
 	unsigned long linear_rip;
 	unsigned long count;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 11cef930d5ed..e47f35878ab5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -350,7 +350,7 @@ static void count_spte_clear(u64 *sptep, u64 spte)
 
 	/* Ensure the spte is completely set before we increase the count */
 	smp_wmb();
-	sp->clear_spte_count++;
+	sp->arch.clear_spte_count++;
 }
 
 static void __set_spte(u64 *sptep, u64 spte)
@@ -432,7 +432,7 @@ static u64 __get_spte_lockless(u64 *sptep)
 	int count;
 
 retry:
-	count = sp->clear_spte_count;
+	count = sp->arch.clear_spte_count;
 	smp_rmb();
 
 	spte.spte_low = orig->spte_low;
@@ -442,7 +442,7 @@ static u64 __get_spte_lockless(u64 *sptep)
 	smp_rmb();
 
 	if (unlikely(spte.spte_low != orig->spte_low ||
-		     count != sp->clear_spte_count))
+		     count != sp->arch.clear_spte_count))
 		goto retry;
 
 	return spte.spte;
@@ -699,7 +699,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 		return sp->gfn;
 
 	if (!sp->role.arch.direct)
-		return sp->shadowed_translation[index] >> PAGE_SHIFT;
+		return sp->arch.shadowed_translation[index] >> PAGE_SHIFT;
 
 	return sp->gfn + (index << ((sp->role.level - 1) * SPTE_LEVEL_BITS));
 }
@@ -713,7 +713,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
 {
 	if (sp_has_gptes(sp))
-		return sp->shadowed_translation[index] & ACC_ALL;
+		return sp->arch.shadowed_translation[index] & ACC_ALL;
 
 	/*
 	 * For direct MMUs (e.g. TDP or non-paging guests) or passthrough SPs,
@@ -734,7 +734,7 @@ static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
 					 gfn_t gfn, unsigned int access)
 {
 	if (sp_has_gptes(sp)) {
-		sp->shadowed_translation[index] = (gfn << PAGE_SHIFT) | access;
+		sp->arch.shadowed_translation[index] = (gfn << PAGE_SHIFT) | access;
 		return;
 	}
 
@@ -825,18 +825,18 @@ void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 	 * on the list if KVM is reusing an existing shadow page, i.e. if KVM
 	 * links a shadow page at multiple points.
 	 */
-	if (!list_empty(&sp->possible_nx_huge_page_link))
+	if (!list_empty(&sp->arch.possible_nx_huge_page_link))
 		return;
 
 	++kvm->stat.nx_lpage_splits;
-	list_add_tail(&sp->possible_nx_huge_page_link,
+	list_add_tail(&sp->arch.possible_nx_huge_page_link,
 		      &kvm->arch.possible_nx_huge_pages);
 }
 
 static void account_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 				 bool nx_huge_page_possible)
 {
-	sp->nx_huge_page_disallowed = true;
+	sp->arch.nx_huge_page_disallowed = true;
 
 	if (nx_huge_page_possible)
 		track_possible_nx_huge_page(kvm, sp);
@@ -861,16 +861,16 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	if (list_empty(&sp->possible_nx_huge_page_link))
+	if (list_empty(&sp->arch.possible_nx_huge_page_link))
 		return;
 
 	--kvm->stat.nx_lpage_splits;
-	list_del_init(&sp->possible_nx_huge_page_link);
+	list_del_init(&sp->arch.possible_nx_huge_page_link);
 }
 
 static void unaccount_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	sp->nx_huge_page_disallowed = false;
+	sp->arch.nx_huge_page_disallowed = false;
 
 	untrack_possible_nx_huge_page(kvm, sp);
 }
@@ -1720,11 +1720,11 @@ static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
 	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
-	hlist_del(&sp->hash_link);
+	hlist_del(&sp->arch.hash_link);
 	list_del(&sp->link);
 	free_page((unsigned long)sp->spt);
 	if (!sp->role.arch.direct)
-		free_page((unsigned long)sp->shadowed_translation);
+		free_page((unsigned long)sp->arch.shadowed_translation);
 	kmem_cache_free(mmu_page_header_cache, sp);
 }
 
@@ -1739,13 +1739,13 @@ static void mmu_page_add_parent_pte(struct kvm_mmu_memory_cache *cache,
 	if (!parent_pte)
 		return;
 
-	pte_list_add(cache, parent_pte, &sp->parent_ptes);
+	pte_list_add(cache, parent_pte, &sp->arch.parent_ptes);
 }
 
 static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp,
 				       u64 *parent_pte)
 {
-	pte_list_remove(parent_pte, &sp->parent_ptes);
+	pte_list_remove(parent_pte, &sp->arch.parent_ptes);
 }
 
 static void drop_parent_pte(struct kvm_mmu_page *sp,
@@ -1761,7 +1761,7 @@ static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 	u64 *sptep;
 	struct rmap_iterator iter;
 
-	for_each_rmap_spte(&sp->parent_ptes, &iter, sptep) {
+	for_each_rmap_spte(&sp->arch.parent_ptes, &iter, sptep) {
 		mark_unsync(sptep);
 	}
 }
@@ -1771,9 +1771,9 @@ static void mark_unsync(u64 *spte)
 	struct kvm_mmu_page *sp;
 
 	sp = sptep_to_sp(spte);
-	if (__test_and_set_bit(spte_index(spte), sp->unsync_child_bitmap))
+	if (__test_and_set_bit(spte_index(spte), sp->arch.unsync_child_bitmap))
 		return;
-	if (sp->unsync_children++)
+	if (sp->arch.unsync_children++)
 		return;
 	kvm_mmu_mark_parents_unsync(sp);
 }
@@ -1799,7 +1799,7 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp,
 {
 	int i;
 
-	if (sp->unsync)
+	if (sp->arch.unsync)
 		for (i=0; i < pvec->nr; i++)
 			if (pvec->page[i].sp == sp)
 				return 0;
@@ -1812,9 +1812,9 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp,
 
 static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
 {
-	--sp->unsync_children;
-	WARN_ON((int)sp->unsync_children < 0);
-	__clear_bit(idx, sp->unsync_child_bitmap);
+	--sp->arch.unsync_children;
+	WARN_ON((int)sp->arch.unsync_children < 0);
+	__clear_bit(idx, sp->arch.unsync_child_bitmap);
 }
 
 static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
@@ -1822,7 +1822,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 {
 	int i, ret, nr_unsync_leaf = 0;
 
-	for_each_set_bit(i, sp->unsync_child_bitmap, 512) {
+	for_each_set_bit(i, sp->arch.unsync_child_bitmap, 512) {
 		struct kvm_mmu_page *child;
 		u64 ent = sp->spt[i];
 
@@ -1833,7 +1833,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 
 		child = spte_to_child_sp(ent);
 
-		if (child->unsync_children) {
+		if (child->arch.unsync_children) {
 			if (mmu_pages_add(pvec, child, i))
 				return -ENOSPC;
 
@@ -1845,7 +1845,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 				nr_unsync_leaf += ret;
 			} else
 				return ret;
-		} else if (child->unsync) {
+		} else if (child->arch.unsync) {
 			nr_unsync_leaf++;
 			if (mmu_pages_add(pvec, child, i))
 				return -ENOSPC;
@@ -1862,7 +1862,7 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp,
 {
 	pvec->nr = 0;
-	if (!sp->unsync_children)
+	if (!sp->arch.unsync_children)
 		return 0;
 
 	mmu_pages_add(pvec, sp, INVALID_INDEX);
@@ -1871,9 +1871,9 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp,
 
 static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	WARN_ON(!sp->unsync);
+	WARN_ON(!sp->arch.unsync);
 	trace_kvm_mmu_sync_page(sp);
-	sp->unsync = 0;
+	sp->arch.unsync = 0;
 	--kvm->stat.mmu_unsync;
 }
 
@@ -1894,7 +1894,7 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp)
 }
 
 #define for_each_valid_sp(_kvm, _sp, _list)				\
-	hlist_for_each_entry(_sp, _list, hash_link)			\
+	hlist_for_each_entry(_sp, _list, arch.hash_link)		\
 		if (is_obsolete_sp((_kvm), (_sp))) {			\
 		} else
 
@@ -1934,7 +1934,7 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 	/* TDP MMU pages do not use the MMU generation. */
 	return !is_tdp_mmu_page(sp) &&
-	       unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
+	       unlikely(sp->arch.mmu_valid_gen != kvm->arch.mmu_valid_gen);
 }
 
 struct mmu_page_path {
@@ -2006,7 +2006,7 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 		WARN_ON(idx == INVALID_INDEX);
 		clear_unsync_child_bit(sp, idx);
 		level++;
-	} while (!sp->unsync_children);
+	} while (!sp->arch.unsync_children);
 }
 
 static int mmu_sync_children(struct kvm_vcpu *vcpu,
@@ -2053,7 +2053,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 
 static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
 {
-	atomic_set(&sp->write_flooding_count, 0);
+	atomic_set(&sp->arch.write_flooding_count, 0);
 }
 
 static void clear_sp_write_flooding_count(u64 *spte)
@@ -2094,7 +2094,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm,
 			 * Unsync pages must not be left as is, because the new
 			 * upper-level page will be write-protected.
 			 */
-			if (role.level > PG_LEVEL_4K && sp->unsync)
+			if (role.level > PG_LEVEL_4K && sp->arch.unsync)
 				kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
 			continue;
 
@@ -2104,7 +2104,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm,
 		if (sp->role.arch.direct)
 			goto out;
 
-		if (sp->unsync) {
+		if (sp->arch.unsync) {
 			if (KVM_BUG_ON(!vcpu, kvm))
 				break;
 
@@ -2163,25 +2163,26 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
 	sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
 	if (!role.arch.direct)
-		sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
+		sp->arch.shadowed_translation =
+			kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
 
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
-	INIT_LIST_HEAD(&sp->possible_nx_huge_page_link);
+	INIT_LIST_HEAD(&sp->arch.possible_nx_huge_page_link);
 
 	/*
 	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
 	 * depends on valid pages being added to the head of the list.  See
 	 * comments in kvm_zap_obsolete_pages().
 	 */
-	sp->mmu_valid_gen = kvm->arch.mmu_valid_gen;
+	sp->arch.mmu_valid_gen = kvm->arch.mmu_valid_gen;
 	list_add(&sp->link, &kvm->arch.active_mmu_pages);
 	kvm_account_mmu_page(kvm, sp);
 
 	sp->gfn = gfn;
 	sp->role = role;
-	sp->shadow_mmu_page = true;
-	hlist_add_head(&sp->hash_link, sp_list);
+	sp->arch.shadow_mmu_page = true;
+	hlist_add_head(&sp->arch.hash_link, sp_list);
 	if (sp_has_gptes(sp))
 		account_shadowed(kvm, sp);
 
@@ -2368,7 +2369,7 @@ static void __link_shadow_page(struct kvm *kvm,
 
 	mmu_page_add_parent_pte(cache, sp, sptep);
 
-	if (sp->unsync_children || sp->unsync)
+	if (sp->arch.unsync_children || sp->arch.unsync)
 		mark_unsync(sptep);
 }
 
@@ -2421,7 +2422,8 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 			 * avoids retaining a large number of stale nested SPs.
 			 */
 			if (tdp_enabled && invalid_list &&
-			    child->role.arch.guest_mode && !child->parent_ptes.val)
+			    child->role.arch.guest_mode &&
+			    !child->arch.parent_ptes.val)
 				return kvm_mmu_prepare_zap_page(kvm, child,
 								invalid_list);
 		}
@@ -2449,7 +2451,7 @@ static void kvm_mmu_unlink_parents(struct kvm_mmu_page *sp)
 	u64 *sptep;
 	struct rmap_iterator iter;
 
-	while ((sptep = rmap_get_first(&sp->parent_ptes, &iter)))
+	while ((sptep = rmap_get_first(&sp->arch.parent_ptes, &iter)))
 		drop_parent_pte(sp, sptep);
 }
 
@@ -2496,7 +2498,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 	if (!sp->role.invalid && sp_has_gptes(sp))
 		unaccount_shadowed(kvm, sp);
 
-	if (sp->unsync)
+	if (sp->arch.unsync)
 		kvm_unlink_unsync_page(kvm, sp);
 	if (!refcount_read(&sp->root_refcount)) {
 		/* Count self */
@@ -2527,7 +2529,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 		zapped_root = !is_obsolete_sp(kvm, sp);
 	}
 
-	if (sp->nx_huge_page_disallowed)
+	if (sp->arch.nx_huge_page_disallowed)
 		unaccount_nx_huge_page(kvm, sp);
 
 	sp->role.invalid = 1;
@@ -2704,7 +2706,7 @@ static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	trace_kvm_mmu_unsync_page(sp);
 	++kvm->stat.mmu_unsync;
-	sp->unsync = 1;
+	sp->arch.unsync = 1;
 
 	kvm_mmu_mark_parents_unsync(sp);
 }
@@ -2739,7 +2741,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 		if (!can_unsync)
 			return -EPERM;
 
-		if (sp->unsync)
+		if (sp->arch.unsync)
 			continue;
 
 		if (prefetch)
@@ -2764,7 +2766,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 			 * for write, i.e. unsync cannot transition from 0->1
 			 * while this CPU holds mmu_lock for read (or write).
 			 */
-			if (READ_ONCE(sp->unsync))
+			if (READ_ONCE(sp->arch.unsync))
 				continue;
 		}
 
@@ -2796,8 +2798,8 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 	 *                      2.2 Guest issues TLB flush.
 	 *                          That causes a VM Exit.
 	 *
-	 *                      2.3 Walking of unsync pages sees sp->unsync is
-	 *                          false and skips the page.
+	 *                      2.3 Walking of unsync pages sees sp->arch.unsync
+	 *                          is false and skips the page.
 	 *
 	 *                      2.4 Guest accesses GVA X.
 	 *                          Since the mapping in the SP was not updated,
@@ -2805,7 +2807,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 	 *                          gets used.
 	 *     1.1 Host marks SP
 	 *         as unsync
-	 *         (sp->unsync = true)
+	 *         (sp->arch.unsync = true)
 	 *
 	 * The write barrier below ensures that 1.1 happens before 1.2 and thus
 	 * the situation in 2.4 does not arise.  It pairs with the read barrier
@@ -3126,7 +3128,7 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
 	    cur_level == fault->goal_level &&
 	    is_shadow_present_pte(spte) &&
 	    !is_large_pte(spte) &&
-	    spte_to_child_sp(spte)->nx_huge_page_disallowed) {
+	    spte_to_child_sp(spte)->arch.nx_huge_page_disallowed) {
 		/*
 		 * A small SPTE exists for this pfn, but FNAME(fetch),
 		 * direct_map(), or kvm_tdp_mmu_map() would like to create a
@@ -3902,7 +3904,7 @@ static bool is_unsync_root(hpa_t root)
 
 	/*
 	 * The read barrier orders the CPU's read of SPTE.W during the page table
-	 * walk before the reads of sp->unsync/sp->unsync_children here.
+	 * walk before the reads of sp->arch.{unsync,unsync_children} here.
 	 *
 	 * Even if another CPU was marking the SP as unsync-ed simultaneously,
	 * any guest page table changes are not guaranteed to be visible anyway
@@ -3922,7 +3924,7 @@ static bool is_unsync_root(hpa_t root)
 	if (WARN_ON_ONCE(!sp))
 		return false;
 
-	if (sp->unsync || sp->unsync_children)
+	if (sp->arch.unsync || sp->arch.unsync_children)
 		return true;
 
 	return false;
@@ -5510,8 +5512,8 @@ static bool detect_write_flooding(struct kvm_mmu_page *sp)
 	if (sp->role.level == PG_LEVEL_4K)
 		return false;
 
-	atomic_inc(&sp->write_flooding_count);
-	return atomic_read(&sp->write_flooding_count) >= 3;
+	atomic_inc(&sp->arch.write_flooding_count);
+	return atomic_read(&sp->arch.write_flooding_count) >= 3;
 }
 
 /*
@@ -6389,7 +6391,7 @@ static bool shadow_mmu_try_split_huge_pages(struct kvm *kvm,
 			continue;
 
 		/* SPs with level >PG_LEVEL_4K should never by unsync. */
-		if (WARN_ON_ONCE(sp->unsync))
+		if (WARN_ON_ONCE(sp->arch.unsync))
 			continue;
 
 		/* Don't bother splitting huge pages on invalid SPs. */
@@ -6941,8 +6943,8 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
 		 */
 		sp = list_first_entry(&kvm->arch.possible_nx_huge_pages,
 				      struct kvm_mmu_page,
-				      possible_nx_huge_page_link);
-		WARN_ON_ONCE(!sp->nx_huge_page_disallowed);
+				      arch.possible_nx_huge_page_link);
+		WARN_ON_ONCE(!sp->arch.nx_huge_page_disallowed);
 		WARN_ON_ONCE(!sp->role.arch.direct);
 
 		/*
@@ -6977,7 +6979,7 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
 			flush |= kvm_tdp_mmu_zap_sp(kvm, sp);
 		else
 			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
-		WARN_ON_ONCE(sp->nx_huge_page_disallowed);
+		WARN_ON_ONCE(sp->arch.nx_huge_page_disallowed);
 
 		if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
 			kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index fd4990c8b0e9..af2ae4887e35 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -44,89 +44,6 @@ extern bool dbg;
 #define INVALID_PAE_ROOT 0
 #define IS_VALID_PAE_ROOT(x) (!!(x))
 
-struct kvm_mmu_page {
-	/*
-	 * Note, "link" through "spt" fit in a single 64 byte cache line on
-	 * 64-bit kernels, keep it that way unless there's a reason not to.
-	 */
-	struct list_head link;
-	struct hlist_node hash_link;
-
-	bool shadow_mmu_page;
-	bool unsync;
-	u8 mmu_valid_gen;
-
-	/*
-	 * The shadow page can't be replaced by an equivalent huge page
-	 * because it is being used to map an executable page in the guest
-	 * and the NX huge page mitigation is enabled.
-	 */
-	bool nx_huge_page_disallowed;
-
-	/*
-	 * The following two entries are used to key the shadow page in the
-	 * hash table.
-	 */
-	union kvm_mmu_page_role role;
-	gfn_t gfn;
-
-	u64 *spt;
-
-	/*
-	 * Stores the result of the guest translation being shadowed by each
-	 * SPTE. KVM shadows two types of guest translations: nGPA -> GPA
-	 * (shadow EPT/NPT) and GVA -> GPA (traditional shadow paging). In both
-	 * cases the result of the translation is a GPA and a set of access
-	 * constraints.
-	 *
-	 * The GFN is stored in the upper bits (PAGE_SHIFT) and the shadowed
-	 * access permissions are stored in the lower bits. Note, for
-	 * convenience and uniformity across guests, the access permissions are
-	 * stored in KVM format (e.g. ACC_EXEC_MASK) not the raw guest format.
-	 */
-	u64 *shadowed_translation;
-
-	/* Currently serving as active root */
-	refcount_t root_refcount;
-
-	unsigned int unsync_children;
-	union {
-		struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
-		tdp_ptep_t ptep;
-	};
-	union {
-		DECLARE_BITMAP(unsync_child_bitmap, 512);
-		struct {
-			struct work_struct tdp_mmu_async_work;
-			void *tdp_mmu_async_data;
-		};
-	};
-
-	/*
-	 * Tracks shadow pages that, if zapped, would allow KVM to create an NX
-	 * huge page. A shadow page will have nx_huge_page_disallowed set but
-	 * not be on the list if a huge page is disallowed for other reasons,
-	 * e.g. because KVM is shadowing a PTE at the same gfn, the memslot
-	 * isn't properly aligned, etc...
-	 */
-	struct list_head possible_nx_huge_page_link;
-#ifdef CONFIG_X86_32
-	/*
-	 * Used out of the mmu-lock to avoid reading spte values while an
-	 * update is in progress; see the comments in __get_spte_lockless().
-	 */
-	int clear_spte_count;
-#endif
-
-	/* Number of writes since the last time traversal visited this page. */
-	atomic_t write_flooding_count;
-
-#ifdef CONFIG_X86_64
-	/* Used for freeing the page asynchronously if it is a TDP MMU page. */
*/ - struct rcu_head rcu_head; -#endif -}; - extern struct kmem_cache *mmu_page_header_cache; static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp) diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index ffd10ce3eae3..335f26dabdf3 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -16,11 +16,11 @@ __field(bool, unsync) #define KVM_MMU_PAGE_ASSIGN(sp) \ - __entry->mmu_valid_gen = sp->mmu_valid_gen; \ + __entry->mmu_valid_gen = sp->arch.mmu_valid_gen; \ __entry->gfn = sp->gfn; \ __entry->role = sp->role.word; \ __entry->root_count = refcount_read(&sp->root_refcount); \ - __entry->unsync = sp->unsync; + __entry->unsync = sp->arch.unsync; #define KVM_MMU_PAGE_PRINTK() ({ \ const char *saved_ptr = trace_seq_buffer_ptr(p); \ diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index e15ec1c473da..18bb92b70a01 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -671,7 +671,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, * KVM_REQ_MMU_SYNC is not necessary but it * expedites the process. */ - if (sp->unsync_children && + if (sp->arch.unsync_children && mmu_sync_children(vcpu, sp, false)) return RET_PF_RETRY; } @@ -921,7 +921,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa) pt_element_t gpte; gpa_t pte_gpa; - if (!sp->unsync) + if (!sp->arch.unsync) break; pte_gpa = FNAME(get_level1_sp_gpa)(sp); @@ -942,7 +942,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa) FNAME(prefetch_gpte)(vcpu, sp, sptep, gpte, false); } - if (!sp->unsync_children) + if (!sp->arch.unsync_children) break; } write_unlock(&vcpu->kvm->mmu_lock); @@ -974,8 +974,8 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, } /* - * Using the information in sp->shadowed_translation (kvm_mmu_page_get_gfn()) is - * safe because: + * Using the information in sp->arch.shadowed_translation + * (kvm_mmu_page_get_gfn()) is safe because: * - The spte has a reference to the struct page, so the pfn for a given gfn * can't change unless all sptes pointing to it are nuked first. 
* diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 34d674080170..66231c7ed31e 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -270,7 +270,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, gfn_t gfn, union kvm_mmu_page_role role) { - INIT_LIST_HEAD(&sp->possible_nx_huge_page_link); + INIT_LIST_HEAD(&sp->arch.possible_nx_huge_page_link); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); @@ -385,7 +385,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, { tdp_unaccount_mmu_page(kvm, sp); - if (!sp->nx_huge_page_disallowed) + if (!sp->arch.nx_huge_page_disallowed) return; if (shared) @@ -393,7 +393,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, else lockdep_assert_held_write(&kvm->mmu_lock); - sp->nx_huge_page_disallowed = false; + sp->arch.nx_huge_page_disallowed = false; untrack_possible_nx_huge_page(kvm, sp); if (shared) @@ -1181,7 +1181,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) sp = tdp_mmu_alloc_sp(vcpu); tdp_mmu_init_child_sp(sp, &iter); - sp->nx_huge_page_disallowed = fault->huge_page_disallowed; + sp->arch.nx_huge_page_disallowed = fault->huge_page_disallowed; if (is_shadow_present_pte(iter.old_spte)) r = tdp_mmu_split_huge_page(kvm, &iter, sp, true); diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 19d3153051a3..e6a929089715 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -73,7 +73,7 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr, #ifdef CONFIG_X86_64 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { - return !sp->shadow_mmu_page; + return !sp->arch.shadow_mmu_page; } #else static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; } diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h index 14099956fdac..a9da33d4baa8 100644 --- a/include/kvm/mmu_types.h +++ b/include/kvm/mmu_types.h @@ -3,8 +3,11 @@ #define __KVM_MMU_TYPES_H #include -#include +#include +#include #include +#include +#include #include @@ -36,4 +39,31 @@ static_assert(sizeof(union kvm_mmu_page_role) == sizeof_field(union kvm_mmu_page typedef u64 __rcu *tdp_ptep_t; +struct kvm_mmu_page { + struct list_head link; + + union kvm_mmu_page_role role; + + /* The start of the GFN region mapped by this shadow page. */ + gfn_t gfn; + + /* The page table page. */ + u64 *spt; + + /* The PTE that points to this shadow page. */ + tdp_ptep_t ptep; + + /* Used for freeing TDP MMU pages asynchronously. */ + struct rcu_head rcu_head; + + /* The number of references to this shadow page as a root. */ + refcount_t root_refcount; + + /* Used for tearing down an entire page table tree. 
*/ + struct work_struct tdp_mmu_async_work; + void *tdp_mmu_async_data; + + struct kvm_mmu_page_arch arch; +}; + #endif /* !__KVM_MMU_TYPES_H */
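The hunks above complete the split of struct kvm_mmu_page: arch-neutral bookkeeping stays in the common struct in include/kvm/mmu_types.h, while everything x86-specific (unsync, unsync_children, nx_huge_page_disallowed, write_flooding_count, and so on) moves behind an embedded arch member. A minimal stand-alone model of the pattern, illustrative only and not the kernel definitions:

#include <stdbool.h>
#include <stdio.h>

struct kvm_mmu_page_arch {		/* arch-private state (x86 here) */
	bool unsync;
	bool nx_huge_page_disallowed;
};

struct kvm_mmu_page {			/* arch-neutral state */
	unsigned long gfn;
	struct kvm_mmu_page_arch arch;
};

int main(void)
{
	struct kvm_mmu_page sp = { .gfn = 0x1000 };

	/* Only arch code reaches through sp.arch; common code never does. */
	sp.arch.unsync = true;
	printf("gfn=%#lx unsync=%d\n", sp.gfn, sp.arch.unsync);
	return 0;
}

Common code can then walk, link, and free shadow pages through the shared fields while remaining oblivious to the contents of arch.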
From patchwork Thu Dec 8 19:38:27 2022
Subject: [RFC PATCH 07/37] mm: Introduce architecture-neutral PG_LEVEL macros
From: David Matlack
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:27 -0800
Message-ID: <20221208193857.4090582-8-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Introduce architecture-neutral versions of the x86 macros PG_LEVEL_4K,
PG_LEVEL_2M, etc. The x86 macros are used extensively by KVM/x86's page
table management code. Introducing architecture-neutral versions of
these macros paves the way for porting KVM/x86's page table management
code to architecture-neutral code.
Signed-off-by: David Matlack
--- arch/x86/include/asm/pgtable_types.h | 12 ++++-------- include/linux/mm_types.h | 9 +++++++++ 2 files changed, 13 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index aa174fed3a71..bdf41325f089 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -518,14 +518,10 @@ extern void native_pagetable_init(void); struct seq_file; extern void arch_report_meminfo(struct seq_file *m); -enum pg_level { - PG_LEVEL_NONE, - PG_LEVEL_4K, - PG_LEVEL_2M, - PG_LEVEL_1G, - PG_LEVEL_512G, - PG_LEVEL_NUM -}; +#define PG_LEVEL_4K PG_LEVEL_PTE +#define PG_LEVEL_2M PG_LEVEL_PMD +#define PG_LEVEL_1G PG_LEVEL_PUD +#define PG_LEVEL_512G PG_LEVEL_P4D #ifdef CONFIG_PROC_FS extern void update_page_count(int level, unsigned long pages); diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 500e536796ca..0445d0673afe 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -1003,4 +1003,13 @@ enum fault_flag { typedef unsigned int __bitwise zap_flags_t; +enum pg_level { + PG_LEVEL_NONE, + PG_LEVEL_PTE, + PG_LEVEL_PMD, + PG_LEVEL_PUD, + PG_LEVEL_P4D, + PG_LEVEL_NUM +}; + #endif /* _LINUX_MM_TYPES_H */
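To make the aliasing in the two hunks above concrete, here is the same mapping as a stand-alone sketch (illustrative only, compressed from the kernel headers):

#include <stdio.h>

enum pg_level {			/* generic, as added to mm_types.h */
	PG_LEVEL_NONE,
	PG_LEVEL_PTE,
	PG_LEVEL_PMD,
	PG_LEVEL_PUD,
	PG_LEVEL_P4D,
	PG_LEVEL_NUM
};

/* x86 keeps its familiar names as aliases, per the pgtable_types.h hunk. */
#define PG_LEVEL_4K	PG_LEVEL_PTE
#define PG_LEVEL_2M	PG_LEVEL_PMD
#define PG_LEVEL_1G	PG_LEVEL_PUD
#define PG_LEVEL_512G	PG_LEVEL_P4D

int main(void)
{
	/* Both spellings name the same level, so existing x86 code keeps
	 * compiling while common code can stay arch-neutral. */
	printf("%d %d\n", PG_LEVEL_4K == PG_LEVEL_PTE,	/* prints 1 */
	       PG_LEVEL_2M == PG_LEVEL_PMD);		/* prints 1 */
	return 0;
}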
From patchwork Thu Dec 8 19:38:28 2022
Subject: [RFC PATCH 08/37] KVM: selftests: Stop assuming stats are contiguous in kvm_binary_stats_test
From: David Matlack
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:28 -0800
Message-ID: <20221208193857.4090582-9-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>

From: Jing Zhang

Remove the assumption from kvm_binary_stats_test that all stats are
laid out contiguously in memory. The KVM stats ABI specifically allows
holes in the stats data, since each stat exposes its own offset.

While here, drop the check that each stat's offset is less than
size_data, as that is now always true by construction.

Fixes: 0b45d58738cd ("KVM: selftests: Add selftest for KVM statistics data binary interface")
Signed-off-by: Jing Zhang
Signed-off-by: David Matlack
[Re-worded the commit message.]
Signed-off-by: David Matlack
--- tools/testing/selftests/kvm/kvm_binary_stats_test.c | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/tools/testing/selftests/kvm/kvm_binary_stats_test.c b/tools/testing/selftests/kvm/kvm_binary_stats_test.c index 894417c96f70..46a66ece29fd 100644 --- a/tools/testing/selftests/kvm/kvm_binary_stats_test.c +++ b/tools/testing/selftests/kvm/kvm_binary_stats_test.c @@ -134,7 +134,8 @@ static void stats_test(int stats_fd) "Bucket size of stats (%s) is not zero", pdesc->name); } - size_data += pdesc->size * sizeof(*stats_data); + size_data = max(size_data, + pdesc->offset + pdesc->size * sizeof(*stats_data)); } /* @@ -149,14 +150,6 @@ static void stats_test(int stats_fd) TEST_ASSERT(size_data >= header.num_desc * sizeof(*stats_data), "Data size is not correct"); - /* Check stats offset */ - for (i = 0; i < header.num_desc; ++i) { - pdesc = get_stats_descriptor(stats_desc, i, &header); - TEST_ASSERT(pdesc->offset < size_data, - "Invalid offset (%u) for stats: %s", - pdesc->offset, pdesc->name); - } - /* Allocate memory for stats data */ stats_data = malloc(size_data); TEST_ASSERT(stats_data, "Allocate memory for stats data");
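The essential change above is that the required buffer size is now the maximum of offset plus extent over all descriptors, not the sum of the sizes. A stand-alone sketch of that computation with a deliberately holey layout (illustrative; the descriptor type is simplified, not the real stats descriptor):

#include <stdint.h>
#include <stdio.h>

struct desc {
	uint32_t offset;	/* byte offset of this stat's data */
	uint32_t size;		/* number of u64 values */
};

int main(void)
{
	/* Hole between bytes 8 and 31: allowed by the stats ABI. */
	struct desc descs[] = { { 0, 1 }, { 32, 2 } };
	size_t size_data = 0;

	for (size_t i = 0; i < sizeof(descs) / sizeof(descs[0]); i++) {
		size_t end = descs[i].offset +
			     descs[i].size * sizeof(uint64_t);
		if (end > size_data)
			size_data = end;
	}

	/* Summing sizes would give 24 and under-allocate; the
	 * offset-based maximum correctly gives 48. */
	printf("size_data = %zu\n", size_data);
	return 0;
}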
From patchwork Thu Dec 8 19:38:29 2022
Subject: [RFC PATCH 09/37] KVM: Move page size stats into common code
From: David Matlack
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:29 -0800
Message-ID: <20221208193857.4090582-10-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Move the page size stats into common code. This will be used in a
future commit to move the TDP MMU, which populates these stats, into
common code. Architectures can also start populating these stats if
they wish, and export different stats depending on the page size.

Continue to only expose these stats on x86, since that's currently the
only architecture that populates them.
Signed-off-by: David Matlack
--- arch/x86/include/asm/kvm_host.h | 8 -------- arch/x86/kvm/mmu.h | 5 ----- arch/x86/kvm/x86.c | 6 +++--- include/linux/kvm_host.h | 5 +++++ include/linux/kvm_types.h | 9 +++++++++ 5 files changed, 17 insertions(+), 16 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index f5743a652e10..9cf8f956bac3 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1363,14 +1363,6 @@ struct kvm_vm_stat { u64 mmu_recycled; u64 mmu_cache_miss; u64 mmu_unsync; - union { - struct { - atomic64_t pages_4k; - atomic64_t pages_2m; - atomic64_t pages_1g; - }; - atomic64_t pages[KVM_NR_PAGE_SIZES]; - }; u64 nx_lpage_splits; u64 max_mmu_page_hash_collisions; u64 max_mmu_rmap_size; diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 168c46fd8dd1..ec662108d2eb 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -261,11 +261,6 @@ kvm_mmu_slot_lpages(struct kvm_memory_slot *slot, int level) return __kvm_mmu_slot_lpages(slot, slot->npages, level); } -static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count) -{ - atomic64_add(count, &kvm->stat.pages[level - 1]); -} - gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access, struct x86_exception *exception); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 2bfe060768fc..517c8ed33542 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -231,6 +231,9 @@ EXPORT_SYMBOL_GPL(host_xss); const struct _kvm_stats_desc kvm_vm_stats_desc[] = { KVM_GENERIC_VM_STATS(), + STATS_DESC_ICOUNTER(VM_GENERIC, pages_4k), + STATS_DESC_ICOUNTER(VM_GENERIC, pages_2m), + STATS_DESC_ICOUNTER(VM_GENERIC, pages_1g), STATS_DESC_COUNTER(VM, mmu_shadow_zapped), STATS_DESC_COUNTER(VM, mmu_pte_write), STATS_DESC_COUNTER(VM, mmu_pde_zapped), @@ -238,9 +241,6 @@ const struct _kvm_stats_desc kvm_vm_stats_desc[] = { STATS_DESC_COUNTER(VM, mmu_recycled), STATS_DESC_COUNTER(VM, mmu_cache_miss), STATS_DESC_ICOUNTER(VM, mmu_unsync), - STATS_DESC_ICOUNTER(VM, pages_4k), - STATS_DESC_ICOUNTER(VM, pages_2m), - STATS_DESC_ICOUNTER(VM, pages_1g), STATS_DESC_ICOUNTER(VM, nx_lpage_splits), STATS_DESC_PCOUNTER(VM, max_mmu_rmap_size), STATS_DESC_PCOUNTER(VM, max_mmu_page_hash_collisions) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index f16c4689322b..22ecb7ce4d31 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2280,4 +2280,9 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr) /* Max number of entries allowed for each kvm dirty ring */ #define KVM_DIRTY_RING_MAX_ENTRIES 65536 +static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count) +{ + atomic64_add(count, &kvm->stat.generic.pages[level - 1]); +} + #endif diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h index 76de36e56cdf..59cf958d69df 100644 --- a/include/linux/kvm_types.h +++ b/include/linux/kvm_types.h @@ -20,6 +20,7 @@ enum kvm_mr_change; #include #include +#include #include #include @@ -105,6 +106,14 @@ struct kvm_mmu_memory_cache { struct kvm_vm_stat_generic { u64 remote_tlb_flush; u64 remote_tlb_flush_requests; + union { + struct { + atomic64_t pages_4k; + atomic64_t pages_2m; + atomic64_t pages_1g; + }; + atomic64_t pages[PG_LEVEL_NUM - 1]; + }; }; struct kvm_vcpu_stat_generic {
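After this patch the helper is generic and indexes pages[] by level - 1, so PG_LEVEL_PTE maps to pages_4k on x86, PG_LEVEL_PMD to pages_2m, and so on. A stand-alone model of the indexing (illustrative; plain counters instead of atomic64_t):

#include <stdio.h>

enum pg_level { PG_LEVEL_NONE, PG_LEVEL_PTE, PG_LEVEL_PMD,
		PG_LEVEL_PUD, PG_LEVEL_P4D, PG_LEVEL_NUM };

/* Stands in for the atomic64_t pages[PG_LEVEL_NUM - 1] union member. */
static long pages[PG_LEVEL_NUM - 1];

static void update_page_stats(int level, int count)
{
	pages[level - 1] += count;	/* the kernel uses atomic64_add() */
}

int main(void)
{
	update_page_stats(PG_LEVEL_PTE, 1);	/* a 4K mapping created */
	update_page_stats(PG_LEVEL_PMD, 1);	/* a 2M mapping created */
	update_page_stats(PG_LEVEL_PMD, -1);	/* ...and torn down */

	printf("4k=%ld 2m=%ld\n", pages[0], pages[1]);	/* 4k=1 2m=0 */
	return 0;
}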
From patchwork Thu Dec 8 19:38:30 2022
Subject: [RFC PATCH 10/37] KVM: MMU: Move struct kvm_page_fault to common code
From: David Matlack
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:30 -0800
Message-ID: <20221208193857.4090582-11-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_193933_488482_386B0DF5 X-CRM114-Status: GOOD ( 22.17 ) X-Spam-Score: -9.6 (---------) X-Spam-Report: SpamAssassin version 3.4.6 on casper.infradead.org summary: Content analysis details: (-9.6 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at https://www.dnswl.org/, no trust [2607:f8b0:4864:20:0:0:0:1149 listed in] [list.dnswl.org] -1.9 BAYES_00 BODY: Bayes spam probability is 0 to 1% [score: 0.0000] -7.5 USER_IN_DEF_DKIM_WL From: address is in the default DKIM welcome-list -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain -0.1 DKIM_VALID_EF Message has a valid DKIM or DK signature from envelope-from domain -0.0 DKIMWL_WL_MED DKIMwl.org - Medium trust sender X-BeenThere: kvm-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "kvm-riscv" Errors-To: kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org Move struct kvm_page_fault to common code. This will be used in a future commit to move the TDP MMU to common code. No functional change intended. Signed-off-by: David Matlack --- arch/x86/include/asm/kvm/mmu_types.h | 20 +++++++ arch/x86/kvm/mmu/mmu.c | 19 +++---- arch/x86/kvm/mmu/mmu_internal.h | 79 ++++++---------------------- arch/x86/kvm/mmu/mmutrace.h | 2 +- arch/x86/kvm/mmu/paging_tmpl.h | 14 ++--- arch/x86/kvm/mmu/tdp_mmu.c | 6 +-- include/kvm/mmu_types.h | 44 ++++++++++++++++ 7 files changed, 100 insertions(+), 84 deletions(-) diff --git a/arch/x86/include/asm/kvm/mmu_types.h b/arch/x86/include/asm/kvm/mmu_types.h index affcb520b482..59d1be85f4b7 100644 --- a/arch/x86/include/asm/kvm/mmu_types.h +++ b/arch/x86/include/asm/kvm/mmu_types.h @@ -5,6 +5,7 @@ #include #include #include +#include /* * This is a subset of the overall kvm_cpu_role to minimize the size of @@ -115,4 +116,23 @@ struct kvm_mmu_page_arch { atomic_t write_flooding_count; }; +struct kvm_page_fault_arch { + const u32 error_code; + + /* x86-specific error code bits */ + const bool present; + const bool rsvd; + const bool user; + + /* Derived from mmu and global state. */ + const bool is_tdp; + const bool nx_huge_page_workaround_enabled; + + /* + * Whether a >4KB mapping can be created or is forbidden due to NX + * hugepages. 
+ */ + bool huge_page_disallowed; +}; + #endif /* !__ASM_KVM_MMU_TYPES_H */ diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e47f35878ab5..0593d4a60139 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3092,7 +3092,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault struct kvm_memory_slot *slot = fault->slot; kvm_pfn_t mask; - fault->huge_page_disallowed = fault->exec && fault->nx_huge_page_workaround_enabled; + fault->arch.huge_page_disallowed = + fault->exec && fault->arch.nx_huge_page_workaround_enabled; if (unlikely(fault->max_level == PG_LEVEL_4K)) return; @@ -3109,7 +3110,7 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault */ fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot, fault->gfn, fault->max_level); - if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed) + if (fault->req_level == PG_LEVEL_4K || fault->arch.huge_page_disallowed) return; /* @@ -3158,7 +3159,7 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) * We cannot overwrite existing page tables with an NX * large page, as the leaf could be executable. */ - if (fault->nx_huge_page_workaround_enabled) + if (fault->arch.nx_huge_page_workaround_enabled) disallowed_hugepage_adjust(fault, *it.sptep, it.level); base_gfn = fault->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1); @@ -3170,7 +3171,7 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) continue; link_shadow_page(vcpu, it.sptep, sp); - if (fault->huge_page_disallowed) + if (fault->arch.huge_page_disallowed) account_nx_huge_page(vcpu->kvm, sp, fault->req_level >= it.level); } @@ -3221,7 +3222,7 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, unsigned int access) { - gva_t gva = fault->is_tdp ? 0 : fault->addr; + gva_t gva = fault->arch.is_tdp ? 0 : fault->addr; vcpu_cache_mmio_info(vcpu, gva, fault->gfn, access & shadow_mmio_access_mask); @@ -3255,7 +3256,7 @@ static bool page_fault_can_be_fast(struct kvm_page_fault *fault) * generation number. Refreshing the MMIO generation needs to go down * the slow path. Note, EPT Misconfigs do NOT set the PRESENT flag! */ - if (fault->rsvd) + if (fault->arch.rsvd) return false; /* @@ -3273,7 +3274,7 @@ static bool page_fault_can_be_fast(struct kvm_page_fault *fault) * SPTE is MMU-writable (determined later), the fault can be fixed * by setting the Writable bit, which can be done out of mmu_lock. */ - if (!fault->present) + if (!fault->arch.present) return !kvm_ad_enabled(); /* @@ -4119,10 +4120,10 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct) static bool page_fault_handle_page_track(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { - if (unlikely(fault->rsvd)) + if (unlikely(fault->arch.rsvd)) return false; - if (!fault->present || !fault->write) + if (!fault->arch.present || !fault->write) return false; /* diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index af2ae4887e35..4abb80a3bd01 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -77,60 +77,6 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm) return READ_ONCE(nx_huge_pages) && !kvm->arch.disable_nx_huge_pages; } -struct kvm_page_fault { - /* arguments to kvm_mmu_do_page_fault. */ - const gpa_t addr; - const u32 error_code; - const bool prefetch; - - /* Derived from error_code. 
*/ - const bool exec; - const bool write; - const bool present; - const bool rsvd; - const bool user; - - /* Derived from mmu and global state. */ - const bool is_tdp; - const bool nx_huge_page_workaround_enabled; - - /* - * Whether a >4KB mapping can be created or is forbidden due to NX - * hugepages. - */ - bool huge_page_disallowed; - - /* - * Maximum page size that can be created for this fault; input to - * FNAME(fetch), direct_map() and kvm_tdp_mmu_map(). - */ - u8 max_level; - - /* - * Page size that can be created based on the max_level and the - * page size used by the host mapping. - */ - u8 req_level; - - /* - * Page size that will be created based on the req_level and - * huge_page_disallowed. - */ - u8 goal_level; - - /* Shifted addr, or result of guest page table walk if addr is a gva. */ - gfn_t gfn; - - /* The memslot containing gfn. May be NULL. */ - struct kvm_memory_slot *slot; - - /* Outputs of kvm_faultin_pfn. */ - unsigned long mmu_seq; - kvm_pfn_t pfn; - hva_t hva; - bool map_writable; -}; - int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); /* @@ -164,22 +110,27 @@ enum { static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u32 err, bool prefetch) { + bool is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault); struct kvm_page_fault fault = { .addr = cr2_or_gpa, - .error_code = err, - .exec = err & PFERR_FETCH_MASK, - .write = err & PFERR_WRITE_MASK, - .present = err & PFERR_PRESENT_MASK, - .rsvd = err & PFERR_RSVD_MASK, - .user = err & PFERR_USER_MASK, .prefetch = prefetch, - .is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault), - .nx_huge_page_workaround_enabled = - is_nx_huge_page_enabled(vcpu->kvm), + + .write = err & PFERR_WRITE_MASK, + .exec = err & PFERR_FETCH_MASK, .max_level = KVM_MAX_HUGEPAGE_LEVEL, .req_level = PG_LEVEL_4K, .goal_level = PG_LEVEL_4K, + + .arch = { + .error_code = err, + .present = err & PFERR_PRESENT_MASK, + .rsvd = err & PFERR_RSVD_MASK, + .user = err & PFERR_USER_MASK, + .is_tdp = is_tdp, + .nx_huge_page_workaround_enabled = + is_nx_huge_page_enabled(vcpu->kvm), + }, }; int r; @@ -196,7 +147,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, if (!prefetch) vcpu->stat.pf_taken++; - if (IS_ENABLED(CONFIG_RETPOLINE) && fault.is_tdp) + if (IS_ENABLED(CONFIG_RETPOLINE) && fault.arch.is_tdp) r = kvm_tdp_page_fault(vcpu, &fault); else r = vcpu->arch.mmu->page_fault(vcpu, &fault); diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index 335f26dabdf3..b01767acf073 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -270,7 +270,7 @@ TRACE_EVENT( TP_fast_assign( __entry->vcpu_id = vcpu->vcpu_id; __entry->cr2_or_gpa = fault->addr; - __entry->error_code = fault->error_code; + __entry->error_code = fault->arch.error_code; __entry->sptep = sptep; __entry->old_spte = old_spte; __entry->new_spte = *sptep; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 18bb92b70a01..daf9c7731edc 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -698,7 +698,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, * We cannot overwrite existing page tables with an NX * large page, as the leaf could be executable. 
*/ - if (fault->nx_huge_page_workaround_enabled) + if (fault->arch.nx_huge_page_workaround_enabled) disallowed_hugepage_adjust(fault, *it.sptep, it.level); base_gfn = fault->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1); @@ -713,7 +713,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, continue; link_shadow_page(vcpu, it.sptep, sp); - if (fault->huge_page_disallowed) + if (fault->arch.huge_page_disallowed) account_nx_huge_page(vcpu->kvm, sp, fault->req_level >= it.level); } @@ -793,8 +793,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault int r; bool is_self_change_mapping; - pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code); - WARN_ON_ONCE(fault->is_tdp); + pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->arch.error_code); + WARN_ON_ONCE(fault->arch.is_tdp); /* * Look up the guest pte for the faulting address. @@ -802,7 +802,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault * The bit needs to be cleared before walking guest page tables. */ r = FNAME(walk_addr)(&walker, vcpu, fault->addr, - fault->error_code & ~PFERR_RSVD_MASK); + fault->arch.error_code & ~PFERR_RSVD_MASK); /* * The page is not mapped by the guest. Let the guest handle it. @@ -830,7 +830,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault vcpu->arch.write_fault_to_shadow_pgtable = false; is_self_change_mapping = FNAME(is_self_change_mapping)(vcpu, - &walker, fault->user, &vcpu->arch.write_fault_to_shadow_pgtable); + &walker, fault->arch.user, &vcpu->arch.write_fault_to_shadow_pgtable); if (is_self_change_mapping) fault->max_level = PG_LEVEL_4K; @@ -846,7 +846,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault * we will cache the incorrect access into mmio spte. 
*/ if (fault->write && !(walker.pte_access & ACC_WRITE_MASK) && - !is_cr0_wp(vcpu->arch.mmu) && !fault->user && fault->slot) { + !is_cr0_wp(vcpu->arch.mmu) && !fault->arch.user && fault->slot) { walker.pte_access |= ACC_WRITE_MASK; walker.pte_access &= ~ACC_USER_MASK; diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 66231c7ed31e..4940413d3767 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1156,7 +1156,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) { int r; - if (fault->nx_huge_page_workaround_enabled) + if (fault->arch.nx_huge_page_workaround_enabled) disallowed_hugepage_adjust(fault, iter.old_spte, iter.level); if (iter.level == fault->goal_level) @@ -1181,7 +1181,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) sp = tdp_mmu_alloc_sp(vcpu); tdp_mmu_init_child_sp(sp, &iter); - sp->arch.nx_huge_page_disallowed = fault->huge_page_disallowed; + sp->arch.nx_huge_page_disallowed = fault->arch.huge_page_disallowed; if (is_shadow_present_pte(iter.old_spte)) r = tdp_mmu_split_huge_page(kvm, &iter, sp, true); @@ -1197,7 +1197,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) goto retry; } - if (fault->huge_page_disallowed && + if (fault->arch.huge_page_disallowed && fault->req_level >= iter.level) { spin_lock(&kvm->arch.tdp_mmu_pages_lock); track_possible_nx_huge_page(kvm, sp); diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h index a9da33d4baa8..9f0ca920bf68 100644 --- a/include/kvm/mmu_types.h +++ b/include/kvm/mmu_types.h @@ -66,4 +66,48 @@ struct kvm_mmu_page { struct kvm_mmu_page_arch arch; }; +struct kvm_page_fault { + /* The raw faulting address. */ + const gpa_t addr; + + /* Whether the fault was synthesized to prefetch a mapping. */ + const bool prefetch; + + /* Information about the cause of the fault. */ + const bool write; + const bool exec; + + /* Shifted addr, or result of guest page table walk if shadow paging. */ + gfn_t gfn; + + /* The memslot that contains @gfn. May be NULL. */ + struct kvm_memory_slot *slot; + + /* Maximum page size that can be created for this fault. */ + u8 max_level; + + /* + * Page size that can be created based on the max_level and the page + * size used by the host mapping. + */ + u8 req_level; + + /* Final page size that will be created. */ + u8 goal_level; + + /* + * The value of kvm->mmu_invalidate_seq before fetching the host + * mapping. Used to verify that the host mapping has not changed + * after grabbing the MMU lock. + */ + unsigned long mmu_seq; + + /* Information about the host mapping. 
+	kvm_pfn_t pfn;
+	hva_t hva;
+	bool map_writable;
+
+	struct kvm_page_fault_arch arch;
+};
+
 #endif /* !__KVM_MMU_TYPES_H */

From patchwork Thu Dec 8 19:38:31 2022
Subject: [RFC PATCH 11/37] KVM: MMU: Move RET_PF_* into common code
From: David Matlack
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:31 -0800
Message-ID: <20221208193857.4090582-12-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>

Move the RET_PF_* enum into common code in preparation for moving the
TDP MMU into common code.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu_internal.h | 28 ----------------------------
 include/kvm/mmu_types.h         | 26 ++++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 4abb80a3bd01..d3c1d08002af 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -79,34 +79,6 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
 
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 
-/*
- * Return values of handle_mmio_page_fault(), mmu.page_fault(), fast_page_fault(),
- * and of course kvm_mmu_do_page_fault().
- *
- * RET_PF_CONTINUE: So far, so good, keep handling the page fault.
- * RET_PF_RETRY: let CPU fault again on the address.
- * RET_PF_EMULATE: mmio page fault, emulate the instruction directly.
- * RET_PF_INVALID: the spte is invalid, let the real page fault path update it.
- * RET_PF_FIXED: The faulting entry has been fixed.
- * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU.
- *
- * Any names added to this enum should be exported to userspace for use in
- * tracepoints via TRACE_DEFINE_ENUM() in mmutrace.h
- *
- * Note, all values must be greater than or equal to zero so as not to encroach
- * on -errno return values. Somewhat arbitrarily use '0' for CONTINUE, which
- * will allow for efficient machine code when checking for CONTINUE, e.g.
- * "TEST %rax, %rax, JNZ", as all "stop!" values are non-zero.
- */
-enum {
-	RET_PF_CONTINUE = 0,
-	RET_PF_RETRY,
-	RET_PF_EMULATE,
-	RET_PF_INVALID,
-	RET_PF_FIXED,
-	RET_PF_SPURIOUS,
-};
-
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 					u32 err, bool prefetch)
 {
diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h
index 9f0ca920bf68..07c9962f9aea 100644
--- a/include/kvm/mmu_types.h
+++ b/include/kvm/mmu_types.h
@@ -110,4 +110,30 @@ struct kvm_page_fault {
 	struct kvm_page_fault_arch arch;
 };
 
+/*
+ * Return values for page fault handling routines.
+ *
+ * RET_PF_CONTINUE: So far, so good, keep handling the page fault.
+ * RET_PF_RETRY: let CPU fault again on the address.
+ * RET_PF_EMULATE: mmio page fault, emulate the instruction directly.
+ * RET_PF_INVALID: the spte is invalid, let the real page fault path update it.
+ * RET_PF_FIXED: The faulting entry has been fixed.
+ * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU.
+ *
+ * Any names added to this enum should be exported to userspace for use in
+ * tracepoints via TRACE_DEFINE_ENUM() in arch/x86/kvm/mmu/mmutrace.h.
+ *
+ * Note, all values must be greater than or equal to zero so as not to encroach
+ * on -errno return values. Somewhat arbitrarily use '0' for CONTINUE, which
+ * will allow for efficient machine code when checking for CONTINUE.
+ */
+enum {
+	RET_PF_CONTINUE = 0,
+	RET_PF_RETRY,
+	RET_PF_EMULATE,
+	RET_PF_INVALID,
+	RET_PF_FIXED,
+	RET_PF_SPURIOUS,
+};
+
 #endif /* !__KVM_MMU_TYPES_H */

From patchwork Thu Dec 8 19:38:32 2022
Subject: [RFC PATCH 12/37] KVM: x86/mmu: Use PG_LEVEL_{PTE,PMD,PUD} in the TDP MMU
From: David Matlack
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:32 -0800
Message-ID: <20221208193857.4090582-13-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>

Use PG_LEVEL_{PTE,PMD,PUD} in the TDP MMU instead of the x86-specific
PG_LEVEL_{4K,2M,1G} aliases. This prepares for moving the TDP MMU to
common code, where not all architectures will have 4K PAGE_SIZE.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/tdp_iter.h |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c  | 16 ++++++++--------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index f0af385c56e0..892c078aab58 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -106,7 +106,7 @@ struct tdp_iter {
 	     tdp_iter_next(&iter))
 
 #define for_each_tdp_pte(iter, root, start, end) \
-	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end)
+	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_PTE, start, end)
 
 tdp_ptep_t spte_to_child_pt(u64 pte, int level);
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 4940413d3767..bce0566f2d94 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -347,7 +347,7 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
 	bool pfn_changed;
 	struct kvm_memory_slot *slot;
 
-	if (level > PG_LEVEL_4K)
+	if (level > PG_LEVEL_PTE)
 		return;
 
 	pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
@@ -526,7 +526,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
 
 	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
-	WARN_ON(level < PG_LEVEL_4K);
+	WARN_ON(level < PG_LEVEL_PTE);
 	WARN_ON(gfn & (KVM_PAGES_PER_HPAGE(level) - 1));
 
 	/*
@@ -897,9 +897,9 @@ static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 	 * inducing a stall to allow in-place replacement with a 1gb hugepage.
 	 *
 	 * Because zapping a SP recurses on its children, stepping down to
-	 * PG_LEVEL_4K in the iterator itself is unnecessary.
+	 * PG_LEVEL_PTE in the iterator itself is unnecessary.
 	 */
-	__tdp_mmu_zap_root(kvm, root, shared, PG_LEVEL_1G);
+	__tdp_mmu_zap_root(kvm, root, shared, PG_LEVEL_PUD);
 	__tdp_mmu_zap_root(kvm, root, shared, root->role.level);
 
 	rcu_read_unlock();
@@ -944,7 +944,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 
 	rcu_read_lock();
 
-	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end) {
+	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_PTE, start, end) {
 		if (can_yield &&
 		    tdp_mmu_iter_cond_resched(kvm, &iter, flush, false)) {
 			flush = false;
@@ -1303,7 +1303,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	/* Huge pages aren't expected to be modified without first being zapped. */
 	WARN_ON(pte_huge(range->pte) || range->start + 1 != range->end);
 
-	if (iter->level != PG_LEVEL_4K ||
+	if (iter->level != PG_LEVEL_PTE ||
 	    !is_shadow_present_pte(iter->old_spte))
 		return false;
 
@@ -1672,7 +1672,7 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
 		if (!mask)
 			break;
 
-		if (iter.level > PG_LEVEL_4K ||
+		if (iter.level > PG_LEVEL_PTE ||
 		    !(mask & (1UL << (iter.gfn - gfn))))
 			continue;
 
@@ -1726,7 +1726,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 
 	rcu_read_lock();
 
-	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_2M, start, end) {
+	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_PMD, start, end) {
retry:
 		if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
 			continue;

From patchwork Thu Dec 8 19:38:33 2022
Subject: [RFC PATCH 13/37] KVM: MMU: Move sptep_to_sp() to common code
From: David Matlack
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:33 -0800
Message-ID: <20221208193857.4090582-14-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>

Move sptep_to_sp() to common code in preparation for moving the TDP MMU
to common code.

No functional change intended.
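For context on why the move is safe: to_shadow_page() relies only on the
fact that KVM stashes a back-pointer to the owning struct kvm_mmu_page in
the page_private() field of the struct page backing each page table, which
is not x86-specific. A minimal usage sketch of the relocated helpers
(spte_level() is a hypothetical illustration, not part of this series):

	static int spte_level(u64 *sptep)
	{
		/* Recover the shadow page that contains this SPTE... */
		struct kvm_mmu_page *sp = sptep_to_sp(sptep);

		/* ...its role tracks the level of the page table it backs. */
		return sp->role.level;
	}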
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/spte.h | 14 ++------------
 include/kvm/mmu.h       | 19 +++++++++++++++++++
 2 files changed, 21 insertions(+), 12 deletions(-)
 create mode 100644 include/kvm/mmu.h

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index ad84c549fe96..4c5d518e3ac6 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -3,6 +3,8 @@
 #ifndef KVM_X86_MMU_SPTE_H
 #define KVM_X86_MMU_SPTE_H
 
+#include <kvm/mmu.h>
+
 #include "mmu_internal.h"
 
 /*
@@ -219,23 +221,11 @@ static inline int spte_index(u64 *sptep)
  */
 extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 
-static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
-{
-	struct page *page = pfn_to_page((shadow_page) >> PAGE_SHIFT);
-
-	return (struct kvm_mmu_page *)page_private(page);
-}
-
 static inline struct kvm_mmu_page *spte_to_child_sp(u64 spte)
 {
 	return to_shadow_page(spte & SPTE_BASE_ADDR_MASK);
 }
 
-static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
-{
-	return to_shadow_page(__pa(sptep));
-}
-
 static inline bool is_mmio_spte(u64 spte)
 {
 	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
diff --git a/include/kvm/mmu.h b/include/kvm/mmu.h
new file mode 100644
index 000000000000..425db8e4f8e9
--- /dev/null
+++ b/include/kvm/mmu.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_MMU_H
+#define __KVM_MMU_H
+
+#include
+
+static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
+{
+	struct page *page = pfn_to_page((shadow_page) >> PAGE_SHIFT);
+
+	return (struct kvm_mmu_page *)page_private(page);
+}
+
+static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
+{
+	return to_shadow_page(__pa(sptep));
+}
+
+#endif /* !__KVM_MMU_H */

From patchwork Thu Dec 8 19:38:34 2022
Subject: [RFC PATCH 14/37] KVM: MMU: Introduce common macros for TDP page tables
From: David Matlack
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:34 -0800
Message-ID: <20221208193857.4090582-15-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>

Introduce macros in common KVM code for dealing with TDP page tables.
TDP page tables are assumed to be PAGE_SIZE with 64-bit PTEs. ARM will
have some nuance, e.g. for root page table concatenation, but that will
be handled separately when the time comes. Furthermore, we can add
arch-specific overrides for any of these macros in the future on a case
by case basis.
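As a sanity check on the arithmetic: with 4KiB pages, TDP_PTES_PER_PAGE is
4096 / 8 = 512, so TDP_LEVEL_BITS is 9 and each level consumes 9 bits of
the gfn. Below is a self-contained userspace sketch of the index
calculation that mirrors the macros added by this patch (illustration
only; the sample gfn is arbitrary):

	#include <stdio.h>

	#define TDP_LEVEL_BITS		9	/* ilog2(512), i.e. 4KiB pages */
	#define TDP_LEVEL_MASK		((1UL << TDP_LEVEL_BITS) - 1)
	#define TDP_LEVEL_SHIFT(level)	(((level) - 1) * TDP_LEVEL_BITS)
	#define TDP_PTE_INDEX(gfn, level) \
		(((gfn) >> TDP_LEVEL_SHIFT(level)) & TDP_LEVEL_MASK)

	int main(void)
	{
		unsigned long gfn = 0x12345;	/* arbitrary sample gfn */

		/* Index of the entry mapping this gfn at each level. */
		printf("PTE index: %lu\n", TDP_PTE_INDEX(gfn, 1));	/* 325 */
		printf("PMD index: %lu\n", TDP_PTE_INDEX(gfn, 2));	/* 145 */
		printf("PUD index: %lu\n", TDP_PTE_INDEX(gfn, 3));	/* 0 */
		return 0;
	}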
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/tdp_iter.c | 14 +++++++-------
 arch/x86/kvm/mmu/tdp_iter.h |  3 ++-
 arch/x86/kvm/mmu/tdp_mmu.c  | 24 +++++++++++++-----------
 include/kvm/tdp_pgtable.h   | 21 +++++++++++++++++++++
 4 files changed, 43 insertions(+), 19 deletions(-)
 create mode 100644 include/kvm/tdp_pgtable.h

diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index 4a7d58bf81c4..d6328dac9cd3 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -10,14 +10,15 @@
  */
 static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
 {
-	iter->sptep = iter->pt_path[iter->level - 1] +
-		SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level);
+	int pte_index = TDP_PTE_INDEX(iter->gfn, iter->level);
+
+	iter->sptep = iter->pt_path[iter->level - 1] + pte_index;
 	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
 }
 
 static gfn_t round_gfn_for_level(gfn_t gfn, int level)
 {
-	return gfn & -KVM_PAGES_PER_HPAGE(level);
+	return gfn & -TDP_PAGES_PER_LEVEL(level);
 }
 
 /*
@@ -46,7 +47,7 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
 	int root_level = root->role.level;
 
 	WARN_ON(root_level < 1);
-	WARN_ON(root_level > PT64_ROOT_MAX_LEVEL);
+	WARN_ON(root_level > TDP_ROOT_MAX_LEVEL);
 
 	iter->next_last_level_gfn = next_last_level_gfn;
 	iter->root_level = root_level;
@@ -116,11 +117,10 @@ static bool try_step_side(struct tdp_iter *iter)
 	 * Check if the iterator is already at the end of the current page
 	 * table.
 	 */
-	if (SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level) ==
-	    (SPTE_ENT_PER_PAGE - 1))
+	if (TDP_PTE_INDEX(iter->gfn, iter->level) == (TDP_PTES_PER_PAGE - 1))
 		return false;
 
-	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
+	iter->gfn += TDP_PAGES_PER_LEVEL(iter->level);
 	iter->next_last_level_gfn = iter->gfn;
 	iter->sptep++;
 	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index 892c078aab58..bfac83ab52db 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -4,6 +4,7 @@
 #define __KVM_X86_MMU_TDP_ITER_H
 
 #include <linux/kvm_host.h>
+#include <kvm/tdp_pgtable.h>
 
 #include "mmu.h"
 #include "spte.h"
@@ -68,7 +69,7 @@ struct tdp_iter {
 	 */
 	gfn_t yielded_gfn;
 	/* Pointers to the page tables traversed to reach the current SPTE */
-	tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL];
+	tdp_ptep_t pt_path[TDP_ROOT_MAX_LEVEL];
 	/* A pointer to the current SPTE */
 	tdp_ptep_t sptep;
 	/* The lowest GFN mapped by the current SPTE */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bce0566f2d94..a6d6e393c009 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -7,6 +7,8 @@
 #include "tdp_mmu.h"
 #include "spte.h"
 
+#include <kvm/tdp_pgtable.h>
+
 #include <asm/cmpxchg.h>
 
 #include <trace/events/kvm.h>
 
@@ -428,9 +430,9 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 
 	tdp_mmu_unlink_sp(kvm, sp, shared);
 
-	for (i = 0; i < SPTE_ENT_PER_PAGE; i++) {
+	for (i = 0; i < TDP_PTES_PER_PAGE; i++) {
 		tdp_ptep_t sptep = pt + i;
-		gfn_t gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
+		gfn_t gfn = base_gfn + i * TDP_PAGES_PER_LEVEL(level);
 		u64 old_spte;
 
 		if (shared) {
@@ -525,9 +527,9 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
 
-	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
+	WARN_ON(level > TDP_ROOT_MAX_LEVEL);
 	WARN_ON(level < PG_LEVEL_PTE);
-	WARN_ON(gfn & (KVM_PAGES_PER_HPAGE(level) - 1));
+	WARN_ON(gfn & (TDP_PAGES_PER_LEVEL(level) - 1));
 
 	/*
	 * If this warning were to trigger it would indicate that there was a
@@ -677,7 +679,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 		return ret;
 
 	kvm_flush_remote_tlbs_with_address(kvm, iter->gfn,
-					   KVM_PAGES_PER_HPAGE(iter->level));
+					   TDP_PAGES_PER_LEVEL(iter->level));
 
 	/*
 	 * No other thread can overwrite the removed SPTE as they must either
@@ -1075,7 +1077,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	else if (is_shadow_present_pte(iter->old_spte) &&
 		 !is_last_spte(iter->old_spte, iter->level))
 		kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
-						   KVM_PAGES_PER_HPAGE(iter->level + 1));
+						   TDP_PAGES_PER_LEVEL(iter->level + 1));
 
 	/*
 	 * If the page fault was caused by a write but the page is write
@@ -1355,7 +1357,7 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 
 	rcu_read_lock();
 
-	BUG_ON(min_level > KVM_MAX_HUGEPAGE_LEVEL);
+	BUG_ON(min_level > TDP_MAX_HUGEPAGE_LEVEL);
 
 	for_each_tdp_pte_min_level(iter, root, min_level, start, end) {
retry:
@@ -1469,7 +1471,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
 	 * No need for atomics when writing to sp->spt since the page table has
 	 * not been linked in yet and thus is not reachable from any other CPU.
 	 */
-	for (i = 0; i < SPTE_ENT_PER_PAGE; i++)
+	for (i = 0; i < TDP_PTES_PER_PAGE; i++)
 		sp->spt[i] = make_huge_page_split_spte(kvm, huge_spte, sp->role, i);
 
 	/*
@@ -1489,7 +1491,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
 	 * are overwriting from the page stats. But we have to manually update
 	 * the page stats with the new present child pages.
 	 */
-	kvm_update_page_stats(kvm, level - 1, SPTE_ENT_PER_PAGE);
+	kvm_update_page_stats(kvm, level - 1, TDP_PTES_PER_PAGE);
 
 out:
 	trace_kvm_mmu_split_huge_page(iter->gfn, huge_spte, level, ret);
@@ -1731,7 +1733,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 		if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
 			continue;
 
-		if (iter.level > KVM_MAX_HUGEPAGE_LEVEL ||
+		if (iter.level > TDP_MAX_HUGEPAGE_LEVEL ||
 		    !is_shadow_present_pte(iter.old_spte))
 			continue;
 
@@ -1793,7 +1795,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
 	u64 new_spte;
 	bool spte_set = false;
 
-	BUG_ON(min_level > KVM_MAX_HUGEPAGE_LEVEL);
+	BUG_ON(min_level > TDP_MAX_HUGEPAGE_LEVEL);
 
 	rcu_read_lock();
 
diff --git a/include/kvm/tdp_pgtable.h b/include/kvm/tdp_pgtable.h
new file mode 100644
index 000000000000..968be8d92350
--- /dev/null
+++ b/include/kvm/tdp_pgtable.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_TDP_PGTABLE_H
+#define __KVM_TDP_PGTABLE_H
+
+#include
+#include
+
+#define TDP_ROOT_MAX_LEVEL	5
+#define TDP_MAX_HUGEPAGE_LEVEL	PG_LEVEL_PUD
+#define TDP_PTES_PER_PAGE	(PAGE_SIZE / sizeof(u64))
+#define TDP_LEVEL_BITS		ilog2(TDP_PTES_PER_PAGE)
+#define TDP_LEVEL_MASK		((1UL << TDP_LEVEL_BITS) - 1)
+
+#define TDP_LEVEL_SHIFT(level) (((level) - 1) * TDP_LEVEL_BITS)
+
+#define TDP_PAGES_PER_LEVEL(level) (1UL << TDP_LEVEL_SHIFT(level))
+
+#define TDP_PTE_INDEX(gfn, level) \
+	(((gfn) >> TDP_LEVEL_SHIFT(level)) & TDP_LEVEL_MASK)
+
+#endif /* !__KVM_TDP_PGTABLE_H */

From patchwork Thu Dec 8 19:38:35 2022
Subject: [RFC PATCH 15/37] KVM: x86/mmu: Add a common API for inspecting/modifying TDP PTEs
From: David Matlack
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:35 -0800
Message-ID: <20221208193857.4090582-16-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>

Introduce an API for inspecting and modifying TDP PTEs from common
code. This will be used in future commits to move the TDP MMU to common
code.

Specifically, introduce the following API that can be used in common
code:

  /* Inspection API */
  tdp_pte_is_present()
  tdp_pte_is_writable()
  tdp_pte_is_huge()
  tdp_pte_is_leaf()
  tdp_pte_is_accessed()
  tdp_pte_is_dirty()
  tdp_pte_is_mmio()
  tdp_pte_is_volatile()
  tdp_pte_to_pfn()
  tdp_pte_check_leaf_invariants()

  /* Modification API */
  tdp_pte_clear_writable()
  tdp_pte_clear_mmu_writable()
  tdp_pte_clear_dirty()
  tdp_pte_clear_accessed()

Note that this does not cover constructing PTEs from scratch (e.g.
during page fault handling). This will be added in a subsequent commit.

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm/tdp_pgtable.h |  58 +++++++++
 arch/x86/kvm/Makefile                  |   2 +-
 arch/x86/kvm/mmu/spte.c                |   3 +-
 arch/x86/kvm/mmu/spte.h                |  22 ----
 arch/x86/kvm/mmu/tdp_iter.c            |   4 +-
 arch/x86/kvm/mmu/tdp_iter.h            |   5 +-
 arch/x86/kvm/mmu/tdp_mmu.c             | 171 +++++++++++--------------
 arch/x86/kvm/mmu/tdp_pgtable.c         |  72 +++++++++++
 include/kvm/tdp_pgtable.h              |  18 +++
 9 files changed, 231 insertions(+), 124 deletions(-)
 create mode 100644 arch/x86/include/asm/kvm/tdp_pgtable.h
 create mode 100644 arch/x86/kvm/mmu/tdp_pgtable.c

diff --git a/arch/x86/include/asm/kvm/tdp_pgtable.h b/arch/x86/include/asm/kvm/tdp_pgtable.h
new file mode 100644
index 000000000000..cebc4bc44b49
--- /dev/null
+++ b/arch/x86/include/asm/kvm/tdp_pgtable.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KVM_TDP_PGTABLE_H
+#define __ASM_KVM_TDP_PGTABLE_H
+
+#include
+#include
+
+/*
+ * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
+ * both AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
+ * vulnerability. Use only low bits to avoid 64-bit immediates.
+ */
+#define REMOVED_TDP_PTE		0x5a0ULL
+
+#define TDP_PTE_WRITABLE_MASK	BIT_ULL(1)
+#define TDP_PTE_HUGE_PAGE_MASK	BIT_ULL(7)
+#define TDP_PTE_PRESENT_MASK	BIT_ULL(11)
+
+static inline bool tdp_pte_is_writable(u64 pte)
+{
+	return pte & TDP_PTE_WRITABLE_MASK;
+}
+
+static inline bool tdp_pte_is_huge(u64 pte)
+{
+	return pte & TDP_PTE_HUGE_PAGE_MASK;
+}
+
+static inline bool tdp_pte_is_present(u64 pte)
+{
+	return pte & TDP_PTE_PRESENT_MASK;
+}
+
+bool tdp_pte_is_accessed(u64 pte);
+bool tdp_pte_is_dirty(u64 pte);
+bool tdp_pte_is_mmio(u64 pte);
+bool tdp_pte_is_volatile(u64 pte);
+
+static inline u64 tdp_pte_clear_writable(u64 pte)
+{
+	return pte & ~TDP_PTE_WRITABLE_MASK;
+}
+
+static inline u64 tdp_pte_clear_mmu_writable(u64 pte)
+{
+	extern u64 __read_mostly shadow_mmu_writable_mask;
+
+	return pte & ~(TDP_PTE_WRITABLE_MASK | shadow_mmu_writable_mask);
+}
+
+u64 tdp_pte_clear_dirty(u64 pte, bool force_wrprot);
+u64 tdp_pte_clear_accessed(u64 pte);
+
+kvm_pfn_t tdp_pte_to_pfn(u64 pte);
+
+void tdp_pte_check_leaf_invariants(u64 pte);
+
+#endif /* !__ASM_KVM_TDP_PGTABLE_H */
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 80e3fe184d17..c294ae51caba 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -18,7 +18,7 @@ ifdef CONFIG_HYPERV
 kvm-y += kvm_onhyperv.o
 endif
 
-kvm-$(CONFIG_X86_64) += mmu/tdp_iter.o mmu/tdp_mmu.o
+kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o mmu/tdp_iter.o mmu/tdp_mmu.o
 
 kvm-$(CONFIG_KVM_XEN) += xen.o
 kvm-$(CONFIG_KVM_SMM) += smm.o
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index fe4b626cb431..493e109f1105 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -10,6 +10,7 @@
 
 #include <linux/kvm_host.h>
 
+#include <asm/kvm/tdp_pgtable.h>
 #include "mmu.h"
 #include "mmu_internal.h"
 #include "x86.h"
@@ -401,7 +402,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 	 * not set any RWX bits.
 	 */
 	if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
-	    WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
+	    WARN_ON(mmio_value && (REMOVED_TDP_PTE & mmio_mask) == mmio_value))
 		mmio_value = 0;
 
 	if (!mmio_value)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 4c5d518e3ac6..a1b7d7730583 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -183,28 +183,6 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
  */
 #define SHADOW_NONPRESENT_OR_RSVD_MASK_LEN 5
 
-/*
- * If a thread running without exclusive control of the MMU lock must perform a
- * multi-part operation on an SPTE, it can set the SPTE to REMOVED_SPTE as a
- * non-present intermediate value. Other threads which encounter this value
- * should not modify the SPTE.
- *
- * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
- * both AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
- * vulnerability. Use only low bits to avoid 64-bit immediates.
- *
- * Only used by the TDP MMU.
- */
-#define REMOVED_SPTE	0x5a0ULL
-
-/* Removed SPTEs must not be misconstrued as shadow present PTEs. */
-static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
-
-static inline bool is_removed_spte(u64 spte)
-{
-	return spte == REMOVED_SPTE;
-}
-
 /* Get an SPTE's index into its parent's page table (and the spt array). */
 static inline int spte_index(u64 *sptep)
 {
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index d6328dac9cd3..d5f024b7f6e4 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -69,10 +69,10 @@ tdp_ptep_t spte_to_child_pt(u64 spte, int level)
 	 * There's no child entry if this entry isn't present or is a
 	 * last-level entry.
 	 */
-	if (!is_shadow_present_pte(spte) || is_last_spte(spte, level))
+	if (!tdp_pte_is_present(spte) || tdp_pte_is_leaf(spte, level))
 		return NULL;
 
-	return (tdp_ptep_t)__va(spte_to_pfn(spte) << PAGE_SHIFT);
+	return (tdp_ptep_t)__va(tdp_pte_to_pfn(spte) << PAGE_SHIFT);
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index bfac83ab52db..6e3c38532d1d 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -45,8 +45,9 @@ static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
 	 * logic needs to be reassessed if KVM were to use non-leaf Accessed
 	 * bits, e.g. to skip stepping down into child SPTEs when aging SPTEs.
 	 */
-	if (is_shadow_present_pte(old_spte) && is_last_spte(old_spte, level) &&
-	    spte_has_volatile_bits(old_spte))
+	if (tdp_pte_is_present(old_spte) &&
+	    tdp_pte_is_leaf(old_spte, level) &&
+	    tdp_pte_is_volatile(old_spte))
 		return kvm_tdp_mmu_write_spte_atomic(sptep, new_spte);
 
 	__kvm_tdp_mmu_write_spte(sptep, new_spte);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index a6d6e393c009..fea42bbac984 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -334,13 +334,13 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
 {
-	if (!is_shadow_present_pte(old_spte) || !is_last_spte(old_spte, level))
+	if (!tdp_pte_is_present(old_spte) || !tdp_pte_is_leaf(old_spte, level))
 		return;
 
-	if (is_accessed_spte(old_spte) &&
-	    (!is_shadow_present_pte(new_spte) || !is_accessed_spte(new_spte) ||
-	     spte_to_pfn(old_spte) != spte_to_pfn(new_spte)))
-		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
+	if (tdp_pte_is_accessed(old_spte) &&
+	    (!tdp_pte_is_present(new_spte) || !tdp_pte_is_accessed(new_spte) ||
+	     tdp_pte_to_pfn(old_spte) != tdp_pte_to_pfn(new_spte)))
+		kvm_set_pfn_accessed(tdp_pte_to_pfn(old_spte));
 }
 
 static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
@@ -352,10 +352,10 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
 	if (level > PG_LEVEL_PTE)
 		return;
 
-	pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+	pfn_changed = tdp_pte_to_pfn(old_spte) != tdp_pte_to_pfn(new_spte);
 
-	if ((!is_writable_pte(old_spte) || pfn_changed) &&
-	    is_writable_pte(new_spte)) {
+	if ((!tdp_pte_is_writable(old_spte) || pfn_changed) &&
+	    tdp_pte_is_writable(new_spte)) {
 		slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn);
 		mark_page_dirty_in_slot(kvm, slot, gfn);
 	}
@@ -445,8 +445,8 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 		 * value to the removed SPTE value.
 		 */
 		for (;;) {
-			old_spte = kvm_tdp_mmu_write_spte_atomic(sptep, REMOVED_SPTE);
-			if (!is_removed_spte(old_spte))
+			old_spte = kvm_tdp_mmu_write_spte_atomic(sptep, REMOVED_TDP_PTE);
+			if (!tdp_pte_is_removed(old_spte))
 				break;
 			cpu_relax();
 		}
@@ -461,7 +461,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 		 * unreachable.
 		 */
 		old_spte = kvm_tdp_mmu_read_spte(sptep);
-		if (!is_shadow_present_pte(old_spte))
+		if (!tdp_pte_is_present(old_spte))
 			continue;
 
 		/*
@@ -481,7 +481,8 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 			 * strictly necessary for the same reason, but using
 			 * the remove SPTE value keeps the shared/exclusive
 			 * paths consistent and allows the handle_changed_spte()
-			 * call below to hardcode the new value to REMOVED_SPTE.
+			 * call below to hardcode the new value to
+			 * REMOVED_TDP_PTE.
 			 *
 			 * Note, even though dropping a Dirty bit is the only
 			 * scenario where a non-atomic update could result in a
@@ -493,10 +494,11 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
 			 * it here.
 			 */
 			old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte,
-							  REMOVED_SPTE, level);
+							  REMOVED_TDP_PTE,
+							  level);
 		}
 		handle_changed_spte(kvm, sp->role.as_id, gfn, old_spte,
-				    REMOVED_SPTE, level, shared);
+				    REMOVED_TDP_PTE, level, shared);
 	}
 
 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
@@ -521,11 +523,11 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 				  u64 old_spte, u64 new_spte, int level,
 				  bool shared)
 {
-	bool was_present = is_shadow_present_pte(old_spte);
-	bool is_present = is_shadow_present_pte(new_spte);
-	bool was_leaf = was_present && is_last_spte(old_spte, level);
-	bool is_leaf = is_present && is_last_spte(new_spte, level);
-	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+	bool was_present = tdp_pte_is_present(old_spte);
+	bool is_present = tdp_pte_is_present(new_spte);
+	bool was_leaf = was_present && tdp_pte_is_leaf(old_spte, level);
+	bool is_leaf = is_present && tdp_pte_is_leaf(new_spte, level);
+	bool pfn_changed = tdp_pte_to_pfn(old_spte) != tdp_pte_to_pfn(new_spte);
 
 	WARN_ON(level > TDP_ROOT_MAX_LEVEL);
 	WARN_ON(level < PG_LEVEL_PTE);
@@ -560,7 +562,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	trace_kvm_tdp_mmu_spte_changed(as_id, gfn, level, old_spte, new_spte);
 
 	if (is_leaf)
-		check_spte_writable_invariants(new_spte);
+		tdp_pte_check_leaf_invariants(new_spte);
 
 	/*
 	 * The only times a SPTE should be changed from a non-present to
@@ -574,9 +576,9 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	 * impact the guest since both the former and current SPTEs
 	 * are nonpresent.
 	 */
-	if (WARN_ON(!is_mmio_spte(old_spte) &&
-		    !is_mmio_spte(new_spte) &&
-		    !is_removed_spte(new_spte)))
+	if (WARN_ON(!tdp_pte_is_mmio(old_spte) &&
+		    !tdp_pte_is_mmio(new_spte) &&
+		    !tdp_pte_is_removed(new_spte)))
 		pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
 		       "should not be replaced with another,\n"
 		       "different nonpresent SPTE, unless one or both\n"
@@ -590,9 +592,9 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	if (is_leaf != was_leaf)
 		kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);
 
-	if (was_leaf && is_dirty_spte(old_spte) &&
-	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
-		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+	if (was_leaf && tdp_pte_is_dirty(old_spte) &&
+	    (!is_present || !tdp_pte_is_dirty(new_spte) || pfn_changed))
+		kvm_set_pfn_dirty(tdp_pte_to_pfn(old_spte));
 
 	/*
 	 * Recursively handle child PTs if the change removed a subtree from
@@ -645,7 +647,7 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 	 * and pre-checking before inserting a new SPTE is advantageous as it
 	 * avoids unnecessary work.
*/ - WARN_ON_ONCE(iter->yielded || is_removed_spte(iter->old_spte)); + WARN_ON_ONCE(iter->yielded || tdp_pte_is_removed(iter->old_spte)); lockdep_assert_held_read(&kvm->mmu_lock); @@ -674,7 +676,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm, * immediately installing a present entry in its place * before the TLBs are flushed. */ - ret = tdp_mmu_set_spte_atomic(kvm, iter, REMOVED_SPTE); + ret = tdp_mmu_set_spte_atomic(kvm, iter, REMOVED_TDP_PTE); if (ret) return ret; @@ -730,7 +732,7 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep, * should be used. If operating under the MMU lock in write mode, the * use of the removed SPTE should not be necessary. */ - WARN_ON(is_removed_spte(old_spte) || is_removed_spte(new_spte)); + WARN_ON(tdp_pte_is_removed(old_spte) || tdp_pte_is_removed(new_spte)); old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level); @@ -781,8 +783,8 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm, #define tdp_root_for_each_leaf_pte(_iter, _root, _start, _end) \ tdp_root_for_each_pte(_iter, _root, _start, _end) \ - if (!is_shadow_present_pte(_iter.old_spte) || \ - !is_last_spte(_iter.old_spte, _iter.level)) \ + if (!tdp_pte_is_present(_iter.old_spte) || \ + !tdp_pte_is_leaf(_iter.old_spte, _iter.level)) \ continue; \ else @@ -858,7 +860,7 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root, if (tdp_mmu_iter_cond_resched(kvm, &iter, false, shared)) continue; - if (!is_shadow_present_pte(iter.old_spte)) + if (!tdp_pte_is_present(iter.old_spte)) continue; if (iter.level > zap_level) @@ -919,7 +921,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp) return false; old_spte = kvm_tdp_mmu_read_spte(sp->ptep); - if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte))) + if (WARN_ON_ONCE(!tdp_pte_is_present(old_spte))) return false; __tdp_mmu_set_spte(kvm, sp->role.as_id, sp->ptep, old_spte, 0, @@ -953,8 +955,8 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root, continue; } - if (!is_shadow_present_pte(iter.old_spte) || - !is_last_spte(iter.old_spte, iter.level)) + if (!tdp_pte_is_present(iter.old_spte) || + !tdp_pte_is_leaf(iter.old_spte, iter.level)) continue; tdp_mmu_set_spte(kvm, &iter, 0); @@ -1074,8 +1076,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, ret = RET_PF_SPURIOUS; else if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte)) return RET_PF_RETRY; - else if (is_shadow_present_pte(iter->old_spte) && - !is_last_spte(iter->old_spte, iter->level)) + else if (tdp_pte_is_present(iter->old_spte) && + !tdp_pte_is_leaf(iter->old_spte, iter->level)) kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn, TDP_PAGES_PER_LEVEL(iter->level + 1)); @@ -1090,7 +1092,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, } /* If a MMIO SPTE is installed, the MMIO will need to be emulated. */ - if (unlikely(is_mmio_spte(new_spte))) { + if (unlikely(tdp_pte_is_mmio(new_spte))) { vcpu->stat.pf_mmio_spte_created++; trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn, new_spte); @@ -1168,12 +1170,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) * If SPTE has been frozen by another thread, just give up and * retry, avoiding unnecessary page table allocation and free. */ - if (is_removed_spte(iter.old_spte)) + if (tdp_pte_is_removed(iter.old_spte)) goto retry; /* Step down into the lower level page table if it exists. 
*/ - if (is_shadow_present_pte(iter.old_spte) && - !is_large_pte(iter.old_spte)) + if (tdp_pte_is_present(iter.old_spte) && + !tdp_pte_is_huge(iter.old_spte)) continue; /* @@ -1185,7 +1187,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) sp->arch.nx_huge_page_disallowed = fault->arch.huge_page_disallowed; - if (is_shadow_present_pte(iter.old_spte)) + if (tdp_pte_is_present(iter.old_spte)) r = tdp_mmu_split_huge_page(kvm, &iter, sp, true); else r = tdp_mmu_link_sp(kvm, &iter, sp, true); @@ -1207,6 +1209,15 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) } } + /* + * Force the guest to retry the access if the upper level SPTEs aren't + * in place, or if the target leaf SPTE is frozen by another CPU. + */ + if (iter.level != fault->goal_level || tdp_pte_is_removed(iter.old_spte)) { + rcu_read_unlock(); + return RET_PF_RETRY; + } + ret = tdp_mmu_map_handle_target_level(vcpu, fault, &iter); retry: @@ -1255,27 +1266,13 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm, static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter, struct kvm_gfn_range *range) { - u64 new_spte = 0; + u64 new_spte; /* If we have a non-accessed entry we don't need to change the pte. */ - if (!is_accessed_spte(iter->old_spte)) + if (!tdp_pte_is_accessed(iter->old_spte)) return false; - new_spte = iter->old_spte; - - if (spte_ad_enabled(new_spte)) { - new_spte &= ~shadow_accessed_mask; - } else { - /* - * Capture the dirty status of the page, so that it doesn't get - * lost when the SPTE is marked for access tracking. - */ - if (is_writable_pte(new_spte)) - kvm_set_pfn_dirty(spte_to_pfn(new_spte)); - - new_spte = mark_spte_for_access_track(new_spte); - } - + new_spte = tdp_pte_clear_accessed(iter->old_spte); tdp_mmu_set_spte_no_acc_track(kvm, iter, new_spte); return true; @@ -1289,7 +1286,7 @@ bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter, struct kvm_gfn_range *range) { - return is_accessed_spte(iter->old_spte); + return tdp_pte_is_accessed(iter->old_spte); } bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) @@ -1306,7 +1303,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter, WARN_ON(pte_huge(range->pte) || range->start + 1 != range->end); if (iter->level != PG_LEVEL_PTE || - !is_shadow_present_pte(iter->old_spte)) + !tdp_pte_is_present(iter->old_spte)) return false; /* @@ -1364,12 +1361,12 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; - if (!is_shadow_present_pte(iter.old_spte) || - !is_last_spte(iter.old_spte, iter.level) || - !(iter.old_spte & PT_WRITABLE_MASK)) + if (!tdp_pte_is_present(iter.old_spte) || + !tdp_pte_is_leaf(iter.old_spte, iter.level) || + !tdp_pte_is_writable(iter.old_spte)) continue; - new_spte = iter.old_spte & ~PT_WRITABLE_MASK; + new_spte = tdp_pte_clear_writable(iter.old_spte); if (tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) goto retry; @@ -1525,7 +1522,7 @@ static int tdp_mmu_split_huge_pages_root(struct kvm *kvm, if (tdp_mmu_iter_cond_resched(kvm, &iter, false, shared)) continue; - if (!is_shadow_present_pte(iter.old_spte) || !is_large_pte(iter.old_spte)) + if (!tdp_pte_is_present(iter.old_spte) || !tdp_pte_is_huge(iter.old_spte)) continue; if (!sp) { @@ -1607,20 +1604,12 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, if 
(tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; - if (!is_shadow_present_pte(iter.old_spte)) + if (!tdp_pte_is_present(iter.old_spte)) continue; - if (spte_ad_need_write_protect(iter.old_spte)) { - if (is_writable_pte(iter.old_spte)) - new_spte = iter.old_spte & ~PT_WRITABLE_MASK; - else - continue; - } else { - if (iter.old_spte & shadow_dirty_mask) - new_spte = iter.old_spte & ~shadow_dirty_mask; - else - continue; - } + new_spte = tdp_pte_clear_dirty(iter.old_spte, false); + if (new_spte == iter.old_spte) + continue; if (tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) goto retry; @@ -1680,17 +1669,9 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root, mask &= ~(1UL << (iter.gfn - gfn)); - if (wrprot || spte_ad_need_write_protect(iter.old_spte)) { - if (is_writable_pte(iter.old_spte)) - new_spte = iter.old_spte & ~PT_WRITABLE_MASK; - else - continue; - } else { - if (iter.old_spte & shadow_dirty_mask) - new_spte = iter.old_spte & ~shadow_dirty_mask; - else - continue; - } + new_spte = tdp_pte_clear_dirty(iter.old_spte, wrprot); + if (new_spte == iter.old_spte) + continue; tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte); } @@ -1734,7 +1715,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm, continue; if (iter.level > TDP_MAX_HUGEPAGE_LEVEL || - !is_shadow_present_pte(iter.old_spte)) + !tdp_pte_is_present(iter.old_spte)) continue; /* @@ -1742,7 +1723,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm, * a large page size, then its parent would have been zapped * instead of stepping down. */ - if (is_last_spte(iter.old_spte, iter.level)) + if (tdp_pte_is_leaf(iter.old_spte, iter.level)) continue; /* @@ -1800,13 +1781,11 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root, rcu_read_lock(); for_each_tdp_pte_min_level(iter, root, min_level, gfn, gfn + 1) { - if (!is_shadow_present_pte(iter.old_spte) || - !is_last_spte(iter.old_spte, iter.level)) + if (!tdp_pte_is_present(iter.old_spte) || + !tdp_pte_is_leaf(iter.old_spte, iter.level)) continue; - new_spte = iter.old_spte & - ~(PT_WRITABLE_MASK | shadow_mmu_writable_mask); - + new_spte = tdp_pte_clear_mmu_writable(iter.old_spte); if (new_spte == iter.old_spte) break; diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c new file mode 100644 index 000000000000..cf3b692d8e21 --- /dev/null +++ b/arch/x86/kvm/mmu/tdp_pgtable.c @@ -0,0 +1,72 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include + +#include "mmu.h" +#include "spte.h" + +/* Removed SPTEs must not be misconstrued as shadow present PTEs. 
*/ +static_assert(!(REMOVED_TDP_PTE & SPTE_MMU_PRESENT_MASK)); + +static_assert(TDP_PTE_WRITABLE_MASK == PT_WRITABLE_MASK); +static_assert(TDP_PTE_HUGE_PAGE_MASK == PT_PAGE_SIZE_MASK); +static_assert(TDP_PTE_PRESENT_MASK == SPTE_MMU_PRESENT_MASK); + +bool tdp_pte_is_accessed(u64 pte) +{ + return is_accessed_spte(pte); +} + +bool tdp_pte_is_dirty(u64 pte) +{ + return is_dirty_spte(pte); +} + +bool tdp_pte_is_mmio(u64 pte) +{ + return is_mmio_spte(pte); +} + +bool tdp_pte_is_volatile(u64 pte) +{ + return spte_has_volatile_bits(pte); +} + +u64 tdp_pte_clear_dirty(u64 pte, bool force_wrprot) +{ + if (force_wrprot || spte_ad_need_write_protect(pte)) { + if (tdp_pte_is_writable(pte)) + pte &= ~PT_WRITABLE_MASK; + } else if (pte & shadow_dirty_mask) { + pte &= ~shadow_dirty_mask; + } + + return pte; +} + +u64 tdp_pte_clear_accessed(u64 old_spte) +{ + if (spte_ad_enabled(old_spte)) + return old_spte & ~shadow_accessed_mask; + + /* + * Capture the dirty status of the page, so that it doesn't get lost + * when the SPTE is marked for access tracking. + */ + if (tdp_pte_is_writable(old_spte)) + kvm_set_pfn_dirty(tdp_pte_to_pfn(old_spte)); + + return mark_spte_for_access_track(old_spte); +} + +kvm_pfn_t tdp_pte_to_pfn(u64 pte) +{ + return spte_to_pfn(pte); +} + +void tdp_pte_check_leaf_invariants(u64 pte) +{ + check_spte_writable_invariants(pte); +} + diff --git a/include/kvm/tdp_pgtable.h b/include/kvm/tdp_pgtable.h index 968be8d92350..a24c45ac7765 100644 --- a/include/kvm/tdp_pgtable.h +++ b/include/kvm/tdp_pgtable.h @@ -5,6 +5,8 @@ #include #include +#include + #define TDP_ROOT_MAX_LEVEL 5 #define TDP_MAX_HUGEPAGE_LEVEL PG_LEVEL_PUD #define TDP_PTES_PER_PAGE (PAGE_SIZE / sizeof(u64)) @@ -18,4 +20,20 @@ #define TDP_PTE_INDEX(gfn, level) \ (((gfn) >> TDP_LEVEL_SHIFT(level)) & TDP_LEVEL_MASK) +/* + * If a thread running without exclusive control of the MMU lock must perform a + * multi-part operation on a PTE, it can set the PTE to REMOVED_TDP_PTE as a + * non-present intermediate value. Other threads which encounter this value + * should not modify the PTE. 
+ */
+static inline bool tdp_pte_is_removed(u64 pte)
+{
+	return pte == REMOVED_TDP_PTE;
+}
+
+static inline bool tdp_pte_is_leaf(u64 pte, int level)
+{
+	return tdp_pte_is_huge(pte) || level == PG_LEVEL_PTE;
+}
+
 #endif /* !__KVM_TDP_PGTABLE_H */
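
As a rough illustration of where this is headed (a hypothetical sketch,
not part of this patch): common code should be able to reason about
PTEs through this API alone, with no x86-specific knowledge. Only the
tdp_pte_*() accessors and TDP_PTES_PER_PAGE are real; the helper name
is made up.

  /*
   * Hypothetical arch-neutral helper: count the present leaf PTEs in a
   * single page-table page, using only the inspection API above.
   */
  static int count_present_leaf_ptes(const u64 *ptes, int level)
  {
          int i, count = 0;

          for (i = 0; i < TDP_PTES_PER_PAGE; i++) {
                  if (tdp_pte_is_present(ptes[i]) &&
                      tdp_pte_is_leaf(ptes[i], level))
                          count++;
          }
          return count;
  }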
From patchwork Thu Dec 8 19:38:36 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713860
Date: Thu, 8 Dec 2022 11:38:36 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-17-dmatlack@google.com>
Subject: [RFC PATCH 16/37] KVM: x86/mmu: Abstract away TDP MMU root lookup
From: David Matlack
To: Paolo Bonzini

Abstract the code that looks up the TDP MMU root from vcpu->arch.mmu
behind a function, tdp_mmu_root().

This will be used in a future commit to allow the TDP MMU to be moved
to common code, where vcpu->arch.mmu cannot be accessed directly.

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm/tdp_pgtable.h |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c             | 17 +++++++----------
 arch/x86/kvm/mmu/tdp_pgtable.c         |  5 +++++
 3 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm/tdp_pgtable.h b/arch/x86/include/asm/kvm/tdp_pgtable.h
index cebc4bc44b49..c5c4e4cab24a 100644
--- a/arch/x86/include/asm/kvm/tdp_pgtable.h
+++ b/arch/x86/include/asm/kvm/tdp_pgtable.h
@@ -5,6 +5,8 @@
 #include
 #include
 
+struct kvm_mmu_page *tdp_mmu_root(struct kvm_vcpu *vcpu);
+
 /*
  * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
  * both AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index fea42bbac984..8155a9e79203 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -788,9 +788,6 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
			continue;					\
		else
 
-#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)	\
-	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
-
 /*
  * Yield if the MMU lock is contended or this thread needs to return control
  * to the scheduler.
@@ -1145,7 +1142,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
  */
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
-	struct kvm_mmu *mmu = vcpu->arch.mmu;
+	struct kvm_mmu_page *root = tdp_mmu_root(vcpu);
 	struct kvm *kvm = vcpu->kvm;
 	struct tdp_iter iter;
 	struct kvm_mmu_page *sp;
@@ -1157,7 +1154,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 
 	rcu_read_lock();
 
-	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
+	for_each_tdp_pte(iter, root, fault->gfn, fault->gfn + 1) {
 		int r;
 
 		if (fault->arch.nx_huge_page_workaround_enabled)
@@ -1826,14 +1823,14 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 			 int *root_level)
 {
+	struct kvm_mmu_page *root = tdp_mmu_root(vcpu);
 	struct tdp_iter iter;
-	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	gfn_t gfn = addr >> PAGE_SHIFT;
 	int leaf = -1;
 
-	*root_level = vcpu->arch.mmu->root_role.level;
+	*root_level = root->role.level;
 
-	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+	for_each_tdp_pte(iter, root, gfn, gfn + 1) {
 		leaf = iter.level;
 		sptes[leaf] = iter.old_spte;
 	}
@@ -1855,12 +1852,12 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 					u64 *spte)
 {
+	struct kvm_mmu_page *root = tdp_mmu_root(vcpu);
 	struct tdp_iter iter;
-	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	gfn_t gfn = addr >> PAGE_SHIFT;
 	tdp_ptep_t sptep = NULL;
 
-	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+	for_each_tdp_pte(iter, root, gfn, gfn + 1) {
 		*spte = iter.old_spte;
 		sptep = iter.sptep;
 	}
diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c
index cf3b692d8e21..97cc900e8818 100644
--- a/arch/x86/kvm/mmu/tdp_pgtable.c
+++ b/arch/x86/kvm/mmu/tdp_pgtable.c
@@ -13,6 +13,11 @@ static_assert(TDP_PTE_WRITABLE_MASK == PT_WRITABLE_MASK);
 static_assert(TDP_PTE_HUGE_PAGE_MASK == PT_PAGE_SIZE_MASK);
 static_assert(TDP_PTE_PRESENT_MASK == SPTE_MMU_PRESENT_MASK);
 
+struct kvm_mmu_page *tdp_mmu_root(struct kvm_vcpu *vcpu)
+{
+	return to_shadow_page(vcpu->arch.mmu->root.hpa);
+}
+
 bool tdp_pte_is_accessed(u64 pte)
 {
 	return is_accessed_spte(pte);
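
A sketch of what callers gain (illustrative only; dump_tdp_walk() is
hypothetical, and this assumes kvm_tdp_mmu_get_walk() returns the
lowest level reached, as in the existing x86 implementation):

  /* Hypothetical: print the SPTE at each level mapping @addr. */
  static void dump_tdp_walk(struct kvm_vcpu *vcpu, u64 addr)
  {
          u64 sptes[TDP_ROOT_MAX_LEVEL + 1];
          int root_level, level, leaf;

          leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root_level);
          if (leaf < 0)
                  return; /* nothing mapped at this address */

          for (level = root_level; level >= leaf; level--)
                  pr_info("level %d: spte = 0x%llx\n", level, sptes[level]);
  }

Because the root lookup is now internal to the walk, code like this
would compile unchanged once vcpu->arch.mmu is no longer visible to
common code.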
From patchwork Thu Dec 8 19:38:37 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713869
Date: Thu, 8 Dec 2022 11:38:37 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-18-dmatlack@google.com>
Subject: [RFC PATCH 17/37] KVM: Move struct kvm_gfn_range to kvm_types.h
From: David Matlack
To: Paolo Bonzini

Move struct kvm_gfn_range to kvm_types.h so that its definition can be
accessed in a future commit by arch/x86/include/asm/kvm/tdp_pgtable.h
without needing to include the mega-header kvm_host.h.

No functional change intended.
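
For illustration (not part of this patch): once the struct lives in
kvm_types.h, a lightweight header can consume it without dragging in
kvm_host.h. The helper below is hypothetical; note that "end" is
exclusive, as implied by the single-gfn check
"range->start + 1 != range->end" elsewhere in the series.

  #include <linux/kvm_types.h>

  /* Hypothetical: does @gfn fall inside @range? ("end" is exclusive.) */
  static inline bool kvm_gfn_range_contains(const struct kvm_gfn_range *range,
                                            gfn_t gfn)
  {
          return gfn >= range->start && gfn < range->end;
  }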
Signed-off-by: David Matlack
Reviewed-by: Ben Gardon
---
 include/linux/kvm_host.h  | 7 -------
 include/linux/kvm_types.h | 8 ++++++++
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 22ecb7ce4d31..469ff4202a0d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -256,13 +256,6 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 #endif
 
 #ifdef KVM_ARCH_WANT_MMU_NOTIFIER
-struct kvm_gfn_range {
-	struct kvm_memory_slot *slot;
-	gfn_t start;
-	gfn_t end;
-	pte_t pte;
-	bool may_block;
-};
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 59cf958d69df..001aad9ea987 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -132,4 +132,12 @@ struct kvm_vcpu_stat_generic {
 
 #define KVM_STATS_NAME_SIZE	48
 
+struct kvm_gfn_range {
+	struct kvm_memory_slot *slot;
+	gfn_t start;
+	gfn_t end;
+	pte_t pte;
+	bool may_block;
+};
+
 #endif /* __KVM_TYPES_H__ */

From patchwork Thu Dec 8 19:38:38 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713903
Date: Thu, 8 Dec 2022 11:38:38 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-19-dmatlack@google.com>
Subject: [RFC PATCH 18/37] KVM: x86/mmu: Add common API for creating TDP PTEs
From: David Matlack
To: Paolo Bonzini

Introduce an API for construction of TDP PTEs.

- tdp_mmu_make_leaf_pte()
- tdp_mmu_make_nonleaf_pte()
- tdp_mmu_make_huge_page_split_pte()
- tdp_mmu_make_changed_pte_notifier_pte()

This will be used in a future commit to move the TDP MMU to common
code, while PTE construction will stay in the architecture-specific
code.

No functional change intended.
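
Illustrative sketch (not part of this patch) of the resulting split:
generic code decides when a PTE changes, the architecture decides what
it contains. A simplified shape of the fault path, with the MMIO-stat,
TLB-flush and write-protection handling elided; the function name is
made up:

  /* Sketch only: the real handler does more, see the diff below. */
  static int map_target_level(struct kvm_vcpu *vcpu,
                              struct kvm_page_fault *fault,
                              struct tdp_iter *iter)
  {
          bool wrprot = false;
          u64 new_spte = tdp_mmu_make_leaf_pte(vcpu, fault, iter, &wrprot);

          if (new_spte == iter->old_spte)
                  return RET_PF_SPURIOUS;   /* nothing to update */
          if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte))
                  return RET_PF_RETRY;      /* lost a race with another CPU */
          return RET_PF_FIXED;
  }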
Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm/tdp_pgtable.h | 10 +++++++
 arch/x86/kvm/mmu/tdp_mmu.c             | 18 +++++--------
 arch/x86/kvm/mmu/tdp_pgtable.c         | 36 ++++++++++++++++++++++++++
 3 files changed, 52 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm/tdp_pgtable.h b/arch/x86/include/asm/kvm/tdp_pgtable.h
index c5c4e4cab24a..ff2691ced38b 100644
--- a/arch/x86/include/asm/kvm/tdp_pgtable.h
+++ b/arch/x86/include/asm/kvm/tdp_pgtable.h
@@ -4,6 +4,7 @@
 
 #include
 #include
+#include
 
 struct kvm_mmu_page *tdp_mmu_root(struct kvm_vcpu *vcpu);
 
@@ -57,4 +58,13 @@ kvm_pfn_t tdp_pte_to_pfn(u64 pte);
 
 void tdp_pte_check_leaf_invariants(u64 pte);
 
+struct tdp_iter;
+
+u64 tdp_mmu_make_leaf_pte(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
+			  struct tdp_iter *iter, bool *wrprot);
+u64 tdp_mmu_make_nonleaf_pte(struct kvm_mmu_page *sp);
+u64 tdp_mmu_make_changed_pte_notifier_pte(struct tdp_iter *iter,
+					  struct kvm_gfn_range *range);
+u64 tdp_mmu_make_huge_page_split_pte(struct kvm *kvm, u64 huge_spte,
+				     struct kvm_mmu_page *sp, int index);
 #endif /* !__ASM_KVM_TDP_PGTABLE_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 8155a9e79203..0172b0e44817 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1057,17 +1057,13 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
					   struct tdp_iter *iter)
 {
	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(iter->sptep));
-	u64 new_spte;
	int ret = RET_PF_FIXED;
	bool wrprot = false;
+	u64 new_spte;
 
	WARN_ON(sp->role.level != fault->goal_level);
-	if (unlikely(!fault->slot))
-		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
-	else
-		wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
-				   fault->pfn, iter->old_spte, fault->prefetch, true,
-				   fault->map_writable, &new_spte);
+
+	new_spte = tdp_mmu_make_leaf_pte(vcpu, fault, iter, &wrprot);
 
	if (new_spte == iter->old_spte)
		ret = RET_PF_SPURIOUS;
@@ -1117,7 +1113,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
			   struct kvm_mmu_page *sp, bool shared)
 {
-	u64 spte = make_nonleaf_spte(sp->spt, !kvm_ad_enabled());
+	u64 spte = tdp_mmu_make_nonleaf_pte(sp);
	int ret = 0;
 
	if (shared) {
@@ -1312,9 +1308,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
	tdp_mmu_set_spte(kvm, iter, 0);
 
	if (!pte_write(range->pte)) {
-		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
-								  pte_pfn(range->pte));
-
+		new_spte = tdp_mmu_make_changed_pte_notifier_pte(iter, range);
		tdp_mmu_set_spte(kvm, iter, new_spte);
	}
 
@@ -1466,7 +1460,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
	 * not been linked in yet and thus is not reachable from any other CPU.
	 */
	for (i = 0; i < TDP_PTES_PER_PAGE; i++)
-		sp->spt[i] = make_huge_page_split_spte(kvm, huge_spte, sp->role, i);
+		sp->spt[i] = tdp_mmu_make_huge_page_split_pte(kvm, huge_spte, sp, i);
 
	/*
	 * Replace the huge spte with a pointer to the populated lower level
diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c
index 97cc900e8818..e036ba0c6bee 100644
--- a/arch/x86/kvm/mmu/tdp_pgtable.c
+++ b/arch/x86/kvm/mmu/tdp_pgtable.c
@@ -5,6 +5,7 @@
 
 #include "mmu.h"
 #include "spte.h"
+#include "tdp_iter.h"
 
 /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
 static_assert(!(REMOVED_TDP_PTE & SPTE_MMU_PRESENT_MASK));
 
@@ -75,3 +76,38 @@ void tdp_pte_check_leaf_invariants(u64 pte)
 	check_spte_writable_invariants(pte);
 }
 
+u64 tdp_mmu_make_leaf_pte(struct kvm_vcpu *vcpu,
+			  struct kvm_page_fault *fault,
+			  struct tdp_iter *iter,
+			  bool *wrprot)
+{
+	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(iter->sptep));
+	u64 new_spte;
+
+	if (unlikely(!fault->slot))
+		return make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
+
+	*wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
+			    fault->pfn, iter->old_spte, fault->prefetch, true,
+			    fault->map_writable, &new_spte);
+
+	return new_spte;
+}
+
+u64 tdp_mmu_make_nonleaf_pte(struct kvm_mmu_page *sp)
+{
+	return make_nonleaf_spte(sp->spt, !kvm_ad_enabled());
+}
+
+u64 tdp_mmu_make_changed_pte_notifier_pte(struct tdp_iter *iter,
+					  struct kvm_gfn_range *range)
+{
+	return kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
+						      pte_pfn(range->pte));
+}
+
+u64 tdp_mmu_make_huge_page_split_pte(struct kvm *kvm, u64 huge_spte,
+				     struct kvm_mmu_page *sp, int index)
+{
+	return make_huge_page_split_spte(kvm, huge_spte, sp->role, index);
+}
bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MqC-00A0vu-Lj for kvm-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:45:58 +0000 Received: by mail-oo1-xc49.google.com with SMTP id j2-20020a4a7502000000b0049bdd099de9so777893ooc.0 for ; Thu, 08 Dec 2022 11:45:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=z1OT4qzC8agmu6kAkGnJ2Z35e3wCX96lD5z5dcXDmZ8=; b=mYob1xjdrgICrvdwJ/y+dLEYrXUw8yHkANGFMt6kNW2zwoh86yOA5IqdKm5On/HK71 rCr+r9ZI6b6AUTyVpRLi2HPUhb4KBRes2A7dYxEjzKSFKujmjFxrPF/BYUNv2Dy6Czkl f0L60qH/T+Z7MgC6YZCM/qFdYGB6ww2JwcmxKxWSiQbNX520z0l4UVo1vs6DH+jCVtRB Ica965ONYXFaPKJ0DJgySM/ZwSZbyoeXM3nEtFV/eijoma5iS8BYP7SnArJIeoWtwwAs /dEWQtSRXEmOgV2EdezDQjp58v5mEPq257Ptg1IsS9acpYvYjSy/WRIyS1Ye2wh2Jdug JTKQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=z1OT4qzC8agmu6kAkGnJ2Z35e3wCX96lD5z5dcXDmZ8=; b=t8O3cT6V5wXU44KMxWLFkoQG3erF9fXQufwKl1LNfvV4/HatNjRMJB6KzrGGKf9tUq KZch7jZz9UPVEj4W3CTl+wYp3SjzbVGsayD4jJHwzUjKvz1UM3VcIRoyyeNvplEpmiq7 kBsKYGEkIzHQ7cNGIvQNWmj76VaCs1H9JWm0TkbJnebTtd34+4BrpEsXwtGETT7IRuXK jW8NTwPcTy8+RgXnZNDv3GePi4P8O8AHxarrcD1cT6qmdFfXBNZyQGRsN8bkQXGWFqOb y7EpusWXfeg1/tgAmH4YAVDzdFBCdMxRi1Zfb+XMTQup5bej3OEHOqJJBTW50QVnYfiR Rjiw== X-Gm-Message-State: ANoB5pla9I1Dl3lvqBiolPhyVLsocf/CiNcVFdT3NSOWrOVYSY4iWk8a 6Kh5HdUwOBUKdO67EiZO/EJigMm8uyG+uA== X-Google-Smtp-Source: AA0mqf4wDINWYdyDuh5bcErKdq4AXe0SreaRGxGTgiHfTgQy11e7AFp7261AXIzWuvopVOIy1EidXNtJ/FLxDQ== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a17:90a:a78c:b0:219:ef00:9ffe with SMTP id f12-20020a17090aa78c00b00219ef009ffemr14658083pjq.106.1670528378506; Thu, 08 Dec 2022 11:39:38 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:39 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-20-dmatlack@google.com> Subject: [RFC PATCH 19/37] KVM: x86/mmu: Add arch hooks for NX Huge Pages From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114556_756507_3859C6FA X-CRM114-Status: GOOD ( 13.40 ) X-Spam-Score: -7.7 (-------) X-Spam-Report: Spam detection software, running on the system "bombadil.infradead.org", has NOT identified this incoming email as spam. 
Abstract away the handling for NX Huge Pages behind arch-specific hooks.
This will be used in a future commit to move the TDP MMU to common code
despite NX Huge Pages, which is x86-specific. NX Huge Pages is by far
the most disruptive feature in this respect, requiring more arch hooks
than anything else in the TDP MMU.

No functional change intended.
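For readers unfamiliar with the hook mechanism this patch relies on:
the kernel's __weak annotation expands to the compiler's weak-symbol
attribute, so common code can supply a default (often empty) definition
that an architecture overrides simply by linking in a strong definition
with the same name. A minimal standalone sketch of the pattern (not
series code; the struct and function names below are invented for
illustration):

/* common.c -- build with: gcc common.c arch.c -o demo && ./demo */
#include <stdio.h>

struct mmu_page { int nx_huge_page_disallowed; };

/* Default no-op hook, analogous to the __weak stubs in tdp_mmu.c. */
__attribute__((weak)) void tdp_arch_init_page(struct mmu_page *p)
{
}

int main(void)
{
	struct mmu_page p = { 0 };

	/* Common code calls the hook unconditionally; without an
	 * override this is just a call to the empty default. */
	tdp_arch_init_page(&p);
	printf("nx_huge_page_disallowed = %d\n", p.nx_huge_page_disallowed);
	return 0;
}

/* arch.c -- the "architecture" override. Delete this file and the
 * program still links, falling back to the weak no-op above. */
struct mmu_page { int nx_huge_page_disallowed; };

void tdp_arch_init_page(struct mmu_page *p)
{
	p->nx_huge_page_disallowed = 1;
}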
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/tdp_mmu.c     | 57 +++++++++++++++++++---------------
 arch/x86/kvm/mmu/tdp_pgtable.c | 52 +++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 0172b0e44817..7670fbd8e72d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -269,17 +269,21 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 	return sp;
 }
 
+__weak void tdp_mmu_arch_init_sp(struct kvm_mmu_page *sp)
+{
+}
+
 static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
 			    gfn_t gfn, union kvm_mmu_page_role role)
 {
-	INIT_LIST_HEAD(&sp->arch.possible_nx_huge_page_link);
-
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
 	sp->role = role;
 	sp->gfn = gfn;
 	sp->ptep = sptep;
 
+	tdp_mmu_arch_init_sp(sp);
+
 	trace_kvm_mmu_get_page(sp, true);
 }
 
@@ -373,6 +377,11 @@ static void tdp_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 	atomic64_dec(&kvm->arch.tdp_mmu_pages);
 }
 
+__weak void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
+				   bool shared)
+{
+}
+
 /**
  * tdp_mmu_unlink_sp() - Remove a shadow page from the list of used pages
  *
@@ -386,20 +395,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
 			      bool shared)
 {
 	tdp_unaccount_mmu_page(kvm, sp);
-
-	if (!sp->arch.nx_huge_page_disallowed)
-		return;
-
-	if (shared)
-		spin_lock(&kvm->arch.tdp_mmu_pages_lock);
-	else
-		lockdep_assert_held_write(&kvm->mmu_lock);
-
-	sp->arch.nx_huge_page_disallowed = false;
-	untrack_possible_nx_huge_page(kvm, sp);
-
-	if (shared)
-		spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+	tdp_mmu_arch_unlink_sp(kvm, sp, shared);
 }
 
 /**
@@ -1129,6 +1125,23 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
 	return 0;
 }
 
+__weak void tdp_mmu_arch_adjust_map_level(struct kvm_page_fault *fault,
+					  struct tdp_iter *iter)
+{
+}
+
+__weak void tdp_mmu_arch_pre_link_sp(struct kvm *kvm,
+				     struct kvm_mmu_page *sp,
+				     struct kvm_page_fault *fault)
+{
+}
+
+__weak void tdp_mmu_arch_post_link_sp(struct kvm *kvm,
+				      struct kvm_mmu_page *sp,
+				      struct kvm_page_fault *fault)
+{
+}
+
 static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
 				   struct kvm_mmu_page *sp, bool shared);
 
@@ -1153,8 +1166,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	for_each_tdp_pte(iter, root, fault->gfn, fault->gfn + 1) {
 		int r;
 
-		if (fault->arch.nx_huge_page_workaround_enabled)
-			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
+		tdp_mmu_arch_adjust_map_level(fault, &iter);
 
 		if (iter.level == fault->goal_level)
 			break;
@@ -1178,7 +1190,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		sp = tdp_mmu_alloc_sp(vcpu);
 		tdp_mmu_init_child_sp(sp, &iter);
 
-		sp->arch.nx_huge_page_disallowed = fault->arch.huge_page_disallowed;
+		tdp_mmu_arch_pre_link_sp(kvm, sp, fault);
 
 		if (tdp_pte_is_present(iter.old_spte))
 			r = tdp_mmu_split_huge_page(kvm, &iter, sp, true);
@@ -1194,12 +1206,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			goto retry;
 		}
 
-		if (fault->arch.huge_page_disallowed &&
-		    fault->req_level >= iter.level) {
-			spin_lock(&kvm->arch.tdp_mmu_pages_lock);
-			track_possible_nx_huge_page(kvm, sp);
-			spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
-		}
+		tdp_mmu_arch_post_link_sp(kvm, sp, fault);
 	}
 
 	/*
diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c
index e036ba0c6bee..b07ed99b4ab1 100644
--- a/arch/x86/kvm/mmu/tdp_pgtable.c
+++ b/arch/x86/kvm/mmu/tdp_pgtable.c
@@ -111,3 +111,55 @@ u64 tdp_mmu_make_huge_page_split_pte(struct kvm *kvm, u64 huge_spte,
 {
 	return make_huge_page_split_spte(kvm, huge_spte, sp->role, index);
 }
+
+void tdp_mmu_arch_adjust_map_level(struct kvm_page_fault *fault,
+				   struct tdp_iter *iter)
+{
+	if (fault->arch.nx_huge_page_workaround_enabled)
+		disallowed_hugepage_adjust(fault, iter->old_spte, iter->level);
+}
+
+void tdp_mmu_arch_init_sp(struct kvm_mmu_page *sp)
+{
+	INIT_LIST_HEAD(&sp->arch.possible_nx_huge_page_link);
+}
+
+void tdp_mmu_arch_pre_link_sp(struct kvm *kvm,
+			      struct kvm_mmu_page *sp,
+			      struct kvm_page_fault *fault)
+{
+	sp->arch.nx_huge_page_disallowed = fault->arch.huge_page_disallowed;
+}
+
+void tdp_mmu_arch_post_link_sp(struct kvm *kvm,
+			       struct kvm_mmu_page *sp,
+			       struct kvm_page_fault *fault)
+{
+	if (!fault->arch.huge_page_disallowed)
+		return;
+
+	if (fault->req_level < sp->role.level)
+		return;
+
+	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
+	track_possible_nx_huge_page(kvm, sp);
+	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+}
+
+void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
+			    bool shared)
+{
+	if (!sp->arch.nx_huge_page_disallowed)
+		return;
+
+	if (shared)
+		spin_lock(&kvm->arch.tdp_mmu_pages_lock);
+	else
+		lockdep_assert_held_write(&kvm->mmu_lock);
+
+	sp->arch.nx_huge_page_disallowed = false;
+	untrack_possible_nx_huge_page(kvm, sp);
+
+	if (shared)
+		spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+}

From patchwork Thu Dec 8 19:38:40 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713862
Date: Thu, 8 Dec 2022 11:38:40 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-21-dmatlack@google.com>
Subject: [RFC PATCH 20/37] KVM: x86/mmu: Abstract away computing the max mapping level
From: David Matlack
To: Paolo Bonzini
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114535_256177_52FB8ED5 X-CRM114-Status: GOOD ( 12.82 ) X-Spam-Score: -7.7 (-------) X-Spam-Report: Spam detection software, running on the system "bombadil.infradead.org", has NOT identified this incoming email as spam. The original message has been attached to this so you can view it or label similar future email. If you have any questions, see the administrator of that system for details. Content preview: Abstract away kvm_mmu_max_mapping_level(), which is an x86-specific function for computing the max level that a given GFN can be mapped in KVM's page tables. This will be used in a future commit to en [...] Content analysis details: (-7.7 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at https://www.dnswl.org/, no trust [2607:f8b0:4864:20:0:0:0:114a listed in] [list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -7.5 USER_IN_DEF_DKIM_WL From: address is in the default DKIM white-list -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID_EF Message has a valid DKIM or DK signature from envelope-from domain -0.0 DKIMWL_WL_MED DKIMwl.org - Medium trust sender X-BeenThere: kvm-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "kvm-riscv" Errors-To: kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org Abstract away kvm_mmu_max_mapping_level(), which is an x86-specific function for computing the max level that a given GFN can be mapped in KVM's page tables. This will be used in a future commit to enable moving the TDP MMU to common code. Provide a default implementation for non-x86 architectures that just returns the max level. This will result in more zapping than necessary when disabling dirty logging (i.e. less than optimal performance) but no correctness issues. 
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/tdp_mmu.c     | 14 ++++++++++----
 arch/x86/kvm/mmu/tdp_pgtable.c |  7 +++++++
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7670fbd8e72d..24d1dbd0a1ec 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1696,6 +1696,13 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 		clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot);
 }
 
+__weak int tdp_mmu_max_mapping_level(struct kvm *kvm,
+				     const struct kvm_memory_slot *slot,
+				     struct tdp_iter *iter)
+{
+	return TDP_MAX_HUGEPAGE_LEVEL;
+}
+
 static void zap_collapsible_spte_range(struct kvm *kvm,
 				       struct kvm_mmu_page *root,
 				       const struct kvm_memory_slot *slot)
@@ -1727,15 +1734,14 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 		/*
 		 * If iter.gfn resides outside of the slot, i.e. the page for
 		 * the current level overlaps but is not contained by the slot,
-		 * then the SPTE can't be made huge.  More importantly, trying
-		 * to query that info from slot->arch.lpage_info will cause an
+		 * then the SPTE can't be made huge.  On x86, trying to query
+		 * that info from slot->arch.lpage_info will cause an
 		 * out-of-bounds access.
 		 */
 		if (iter.gfn < start || iter.gfn >= end)
 			continue;
 
-		max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot,
-							      iter.gfn, PG_LEVEL_NUM);
+		max_mapping_level = tdp_mmu_max_mapping_level(kvm, slot, &iter);
 		if (max_mapping_level < iter.level)
 			continue;
 
diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c
index b07ed99b4ab1..840d063c45b8 100644
--- a/arch/x86/kvm/mmu/tdp_pgtable.c
+++ b/arch/x86/kvm/mmu/tdp_pgtable.c
@@ -163,3 +163,10 @@ void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
 	if (shared)
 		spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
 }
+
+int tdp_mmu_max_mapping_level(struct kvm *kvm,
+			      const struct kvm_memory_slot *slot,
+			      struct tdp_iter *iter)
+{
+	return kvm_mmu_max_mapping_level(kvm, slot, iter->gfn, PG_LEVEL_NUM);
+}

From patchwork Thu Dec 8 19:38:41 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713868
Date: Thu, 8 Dec 2022 11:38:41 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-22-dmatlack@google.com>
Subject: [RFC PATCH 21/37] KVM: Introduce CONFIG_HAVE_TDP_MMU
From: David Matlack
To: Paolo Bonzini
Introduce a new config option to gate support for the common TDP MMU.
This will be used in future commits to avoid compiling the TDP MMU code
and avoid adding fields to common structs (e.g. struct kvm) on
architectures that do not support the TDP MMU yet.

No functional change intended.
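For context (a sketch, not code from this patch): HAVE_TDP_MMU is a
hidden Kconfig bool, so it has no prompt and is only enabled when an
architecture's KVM Kconfig selects it. Common headers can then gate
both compilation and struct layout on it, roughly like this (the struct
and header names here are illustrative; the series' own gating of
struct kvm appears in a later patch):

/* Fields exist only on architectures that selected HAVE_TDP_MMU, so
 * all other arches pay no memory or compile cost. */
#ifdef CONFIG_HAVE_TDP_MMU
#include <kvm/mmu_types.h>	/* illustrative: where struct tdp_mmu lives */
#endif

struct kvm_example {		/* stand-in for struct kvm */
	unsigned long always_present;
#ifdef CONFIG_HAVE_TDP_MMU
	struct tdp_mmu tdp_mmu;	/* compiled out elsewhere */
#endif
};

Object files are typically gated the same way in kbuild, e.g. with a
conditional object list such as kvm-$(CONFIG_HAVE_TDP_MMU) += tdp_mmu.o.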
Signed-off-by: David Matlack
---
 virt/kvm/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 9fb1ff6f19e5..75d86794d6cf 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -92,3 +92,6 @@ config KVM_XFER_TO_GUEST_WORK
 
 config HAVE_KVM_PM_NOTIFIER
 	bool
+
+config HAVE_TDP_MMU
+	bool

From patchwork Thu Dec 8 19:38:42 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713854
Date: Thu, 8 Dec 2022 11:38:42 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-23-dmatlack@google.com>
Subject: [RFC PATCH 22/37] KVM: x86: Select HAVE_TDP_MMU if X86_64
From: David Matlack
To: Paolo Bonzini
In preparation for moving the TDP MMU implementation into virt/kvm/
guarded behind HAVE_TDP_MMU, ensure that HAVE_TDP_MMU is selected in
X86_64 builds that enable KVM. This matches the existing behavior of
the TDP MMU on x86.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fbeaa9ddef59..849185d5020d 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -49,6 +49,7 @@ config KVM
 	select SRCU
 	select INTERVAL_TREE
 	select HAVE_KVM_PM_NOTIFIER if PM
+	select HAVE_TDP_MMU if X86_64
 	help
 	  Support hosting fully virtualized guest machines using hardware
 	  virtualization extensions.  You will need a fairly recent

From patchwork Thu Dec 8 19:38:43 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713859
Date: Thu, 8 Dec 2022 11:38:43 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-24-dmatlack@google.com>
Subject: [RFC PATCH 23/37] KVM: MMU: Move VM-level TDP MMU state to struct kvm
From: David Matlack
To: Paolo Bonzini
Move VM-level TDP MMU state to struct kvm so it can be accessed by
common code in a future commit.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h | 39 --------------------------------
 arch/x86/kvm/mmu/tdp_mmu.c      | 40 ++++++++++++++++-----------------
 arch/x86/kvm/mmu/tdp_pgtable.c  |  8 +++----
 include/kvm/mmu_types.h         | 40 +++++++++++++++++++++++++++++++++
 include/linux/kvm_host.h        |  8 +++++++
 5 files changed, 72 insertions(+), 63 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9cf8f956bac3..95c731028452 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1272,45 +1272,6 @@ struct kvm_arch {
 	struct kvm_pmu_event_filter __rcu *pmu_event_filter;
 	struct task_struct *nx_huge_page_recovery_thread;
 
-#ifdef CONFIG_X86_64
-	/* The number of TDP MMU pages across all roots. */
-	atomic64_t tdp_mmu_pages;
-
-	/*
-	 * List of struct kvm_mmu_pages being used as roots.
-	 * All struct kvm_mmu_pages in the list should have
-	 * tdp_mmu_page set.
-	 *
-	 * For reads, this list is protected by:
-	 *	the MMU lock in read mode + RCU or
-	 *	the MMU lock in write mode
-	 *
-	 * For writes, this list is protected by:
-	 *	the MMU lock in read mode + the tdp_mmu_pages_lock or
-	 *	the MMU lock in write mode
-	 *
-	 * Roots will remain in the list until their tdp_mmu_root_count
-	 * drops to zero, at which point the thread that decremented the
-	 * count to zero should removed the root from the list and clean
-	 * it up, freeing the root after an RCU grace period.
-	 */
-	struct list_head tdp_mmu_roots;
-
-	/*
-	 * Protects accesses to the following fields when the MMU lock
-	 * is held in read mode:
-	 *  - tdp_mmu_roots (above)
-	 *  - the link field of kvm_mmu_page structs used by the TDP MMU
-	 *  - possible_nx_huge_pages;
-	 *  - the possible_nx_huge_page_link field of kvm_mmu_page structs used
-	 *    by the TDP MMU
-	 * It is acceptable, but not necessary, to acquire this lock when
-	 * the thread holds the MMU lock in write mode.
-	 */
-	spinlock_t tdp_mmu_pages_lock;
-	struct workqueue_struct *tdp_mmu_zap_wq;
-#endif /* CONFIG_X86_64 */
-
 	/*
 	 * If set, at least one shadow root has been allocated. This flag
 	 * is used as one input when determining whether certain memslot
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 24d1dbd0a1ec..b997f84c0ea7 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -21,9 +21,9 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 	if (!wq)
 		return -ENOMEM;
 
-	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
-	spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
-	kvm->arch.tdp_mmu_zap_wq = wq;
+	INIT_LIST_HEAD(&kvm->tdp_mmu.roots);
+	spin_lock_init(&kvm->tdp_mmu.pages_lock);
+	kvm->tdp_mmu.zap_wq = wq;
 
 	return 1;
 }
@@ -42,10 +42,10 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 {
 	/* Also waits for any queued work items. */
-	destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
+	destroy_workqueue(kvm->tdp_mmu.zap_wq);
 
-	WARN_ON(atomic64_read(&kvm->arch.tdp_mmu_pages));
-	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
+	WARN_ON(atomic64_read(&kvm->tdp_mmu.pages));
+	WARN_ON(!list_empty(&kvm->tdp_mmu.roots));
 
 	/*
 	 * Ensure that all the outstanding RCU callbacks to free shadow pages
@@ -114,7 +114,7 @@ static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root)
 {
 	root->tdp_mmu_async_data = kvm;
 	INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
-	queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
+	queue_work(kvm->tdp_mmu.zap_wq, &root->tdp_mmu_async_work);
 }
 
 static inline bool kvm_tdp_root_mark_invalid(struct kvm_mmu_page *page)
@@ -173,9 +173,9 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 		return;
 	}
 
-	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
+	spin_lock(&kvm->tdp_mmu.pages_lock);
 	list_del_rcu(&root->link);
-	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+	spin_unlock(&kvm->tdp_mmu.pages_lock);
 	call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback);
 }
 
@@ -198,11 +198,11 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 	rcu_read_lock();
 
 	if (prev_root)
-		next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots,
+		next_root = list_next_or_null_rcu(&kvm->tdp_mmu.roots,
 						  &prev_root->link,
 						  typeof(*prev_root), link);
 	else
-		next_root = list_first_or_null_rcu(&kvm->arch.tdp_mmu_roots,
+		next_root = list_first_or_null_rcu(&kvm->tdp_mmu.roots,
 						   typeof(*next_root), link);
 
 	while (next_root) {
@@ -210,7 +210,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 		    kvm_tdp_mmu_get_root(next_root))
 			break;
 
-		next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots,
+		next_root = list_next_or_null_rcu(&kvm->tdp_mmu.roots,
 				&next_root->link, typeof(*next_root), link);
 	}
 
@@ -254,7 +254,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
  * is guaranteed to be stable.
  */
 #define for_each_tdp_mmu_root(_kvm, _root, _as_id)			\
-	list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link)	\
+	list_for_each_entry(_root, &_kvm->tdp_mmu.roots, link)		\
 		if (kvm_lockdep_assert_mmu_lock_held(_kvm, false) &&	\
 		    _root->role.as_id != _as_id) {			\
 		} else
@@ -324,9 +324,9 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 
 	refcount_set(&root->root_refcount, 1);
 
-	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
-	list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots);
-	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+	spin_lock(&kvm->tdp_mmu.pages_lock);
+	list_add_rcu(&root->link, &kvm->tdp_mmu.roots);
+	spin_unlock(&kvm->tdp_mmu.pages_lock);
 
 out:
 	return __pa(root->spt);
@@ -368,13 +368,13 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
 static void tdp_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	kvm_account_pgtable_pages((void *)sp->spt, +1);
-	atomic64_inc(&kvm->arch.tdp_mmu_pages);
+	atomic64_inc(&kvm->tdp_mmu.pages);
 }
 
 static void tdp_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	kvm_account_pgtable_pages((void *)sp->spt, -1);
-	atomic64_dec(&kvm->arch.tdp_mmu_pages);
+	atomic64_dec(&kvm->tdp_mmu.pages);
 }
 
 __weak void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
@@ -1010,7 +1010,7 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
  */
 void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
 {
-	flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
+	flush_workqueue(kvm->tdp_mmu.zap_wq);
 }
 
 /*
@@ -1035,7 +1035,7 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
 	struct kvm_mmu_page *root;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
+	list_for_each_entry(root, &kvm->tdp_mmu.roots, link) {
 		if (!root->role.invalid &&
 		    !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) {
 			root->role.invalid = true;
diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c
index 840d063c45b8..cc7b10f703e1 100644
--- a/arch/x86/kvm/mmu/tdp_pgtable.c
+++ b/arch/x86/kvm/mmu/tdp_pgtable.c
@@ -141,9 +141,9 @@ void tdp_mmu_arch_post_link_sp(struct kvm *kvm,
 	if (fault->req_level < sp->role.level)
 		return;
 
-	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
+	spin_lock(&kvm->tdp_mmu.pages_lock);
 	track_possible_nx_huge_page(kvm, sp);
-	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+	spin_unlock(&kvm->tdp_mmu.pages_lock);
 }
 
 void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
@@ -153,7 +153,7 @@ void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
 		return;
 
 	if (shared)
-		spin_lock(&kvm->arch.tdp_mmu_pages_lock);
+		spin_lock(&kvm->tdp_mmu.pages_lock);
 	else
 		lockdep_assert_held_write(&kvm->mmu_lock);
 
@@ -161,7 +161,7 @@ void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
 	untrack_possible_nx_huge_page(kvm, sp);
 
 	if (shared)
-		spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+		spin_unlock(&kvm->tdp_mmu.pages_lock);
 }
 
 int tdp_mmu_max_mapping_level(struct kvm *kvm,
diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h
index 07c9962f9aea..8ccc48a1cd4c 100644
--- a/include/kvm/mmu_types.h
+++ b/include/kvm/mmu_types.h
@@ -136,4 +136,44 @@ enum {
 	RET_PF_SPURIOUS,
 };
 
+struct tdp_mmu {
+	/* The number of TDP MMU pages across all roots. */
+	atomic64_t pages;
+
+	/*
+	 * List of kvm_mmu_page structs being used as roots.
+	 * All kvm_mmu_page structs in the list should have
+	 * tdp_mmu_page set.
+	 *
+	 * For reads, this list is protected by:
+	 *	the MMU lock in read mode + RCU or
+	 *	the MMU lock in write mode
+	 *
+	 * For writes, this list is protected by:
+	 *	the MMU lock in read mode + the pages_lock or
+	 *	the MMU lock in write mode
+	 *
+	 * Roots will remain in the list until their root_refcount
+	 * drops to zero, at which point the thread that decremented the
+	 * count to zero should remove the root from the list and clean
+	 * it up, freeing the root after an RCU grace period.
+	 */
+	struct list_head roots;
+
+	/*
+	 * Protects accesses to the following fields when the MMU lock
+	 * is held in read mode:
+	 *  - roots (above)
+	 *  - the link field of kvm_mmu_page structs used by the TDP MMU
+	 *  - (x86-only) possible_nx_huge_pages;
+	 *  - (x86-only) the arch.possible_nx_huge_page_link field of
+	 *    kvm_mmu_page structs used by the TDP MMU
+	 * It is acceptable, but not necessary, to acquire this lock when
+	 * the thread holds the MMU lock in write mode.
+	 */
+	spinlock_t pages_lock;
+
+	struct workqueue_struct *zap_wq;
+};
+
 #endif /* !__KVM_MMU_TYPES_H */
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 469ff4202a0d..242eaed55320 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -45,6 +45,10 @@
 #include
 #include
 
+#ifdef CONFIG_HAVE_TDP_MMU
+#include
+#endif
+
 #ifndef KVM_MAX_VCPU_IDS
 #define KVM_MAX_VCPU_IDS KVM_MAX_VCPUS
 #endif
@@ -797,6 +801,10 @@ struct kvm {
 	struct notifier_block pm_notifier;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
+
+#ifdef CONFIG_HAVE_TDP_MMU
+	struct tdp_mmu tdp_mmu;
+#endif
 };
 
 #define kvm_err(fmt, ...) \
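A sketch of what the move buys (illustrative, not from the series):
once the state lives in struct kvm rather than struct kvm_arch, a
common-code helper holding only a struct kvm pointer can manipulate TDP
MMU state directly, with no x86 headers involved, on any architecture
that selects CONFIG_HAVE_TDP_MMU. The function name below is
hypothetical:

/* Hypothetical common-code helper enabled by the field move. */
static void tdp_mmu_unlink_root(struct kvm *kvm, struct kvm_mmu_page *root)
{
	/* kvm->tdp_mmu.* is now reachable from arch-neutral code. */
	spin_lock(&kvm->tdp_mmu.pages_lock);
	list_del_rcu(&root->link);
	spin_unlock(&kvm->tdp_mmu.pages_lock);
}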
From patchwork Thu Dec 8 19:38:44 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1713874
Date: Thu, 8 Dec 2022 11:38:44 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-25-dmatlack@google.com>
Subject: [RFC PATCH 24/37] KVM: x86/mmu: Move kvm_mmu_hugepage_adjust() up to fault handler
From: David Matlack
To: Paolo Bonzini

Move the call to kvm_mmu_hugepage_adjust() up to the fault handler
rather than calling it from kvm_tdp_mmu_map(). Also make the same
change to direct_map() for consistency. This reduces the TDP MMU's
dependency on an x86-specific function, so that it can be moved into
common code.

No functional change intended.
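The shape of the refactor, reduced to a sketch (all names other than
kvm_mmu_hugepage_adjust() are hypothetical stand-ins, not series code):

/* Before: the x86-only helper was buried inside the map path. */
static int tdp_mmu_map_before(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
	kvm_mmu_hugepage_adjust(vcpu, fault);	/* x86-specific */
	return install_sptes(vcpu, fault);	/* generic walk (stand-in) */
}

/* After: the map path contains only generic work... */
static int tdp_mmu_map_after(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
	return install_sptes(vcpu, fault);	/* candidate for common code */
}

/* ...because the arch fault handler adjusts the fault before mapping. */
static int x86_page_fault_after(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
	kvm_mmu_hugepage_adjust(vcpu, fault);	/* stays in arch code */
	return tdp_mmu_map_after(vcpu, fault);
}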
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c     | 6 ++++--
 arch/x86/kvm/mmu/tdp_mmu.c | 2 --
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0593d4a60139..9307608ae975 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3151,8 +3151,6 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	int ret;
 	gfn_t base_gfn = fault->gfn;

-	kvm_mmu_hugepage_adjust(vcpu, fault);
-
 	trace_kvm_mmu_spte_requested(fault);
 	for_each_shadow_entry(vcpu, fault->addr, it) {
 		/*
@@ -4330,6 +4328,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (r)
 		goto out_unlock;

+	kvm_mmu_hugepage_adjust(vcpu, fault);
+
 	r = direct_map(vcpu, fault);

 out_unlock:
@@ -4408,6 +4408,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 	if (is_page_fault_stale(vcpu, fault))
 		goto out_unlock;

+	kvm_mmu_hugepage_adjust(vcpu, fault);
+
 	r = kvm_tdp_mmu_map(vcpu, fault);

 out_unlock:
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b997f84c0ea7..e6708829714c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1157,8 +1157,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	struct kvm_mmu_page *sp;
 	int ret = RET_PF_RETRY;

-	kvm_mmu_hugepage_adjust(vcpu, fault);
-
 	trace_kvm_mmu_spte_requested(fault);

 	rcu_read_lock();
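To see the resulting ordering outside of diff context, here is a minimal
standalone C sketch of the layering after this change. The struct and
function names below are stand-ins for the real KVM types and are not the
kernel code; the sketch only models who calls what:

#include <stdio.h>

struct page_fault { int goal_level; };

/* Stand-in for kvm_mmu_hugepage_adjust(): x86-specific policy. */
static void hugepage_adjust(struct page_fault *fault)
{
	fault->goal_level = 1;	/* e.g. demote to 4K under dirty logging */
}

/* Stand-in for kvm_tdp_mmu_map(): consumes fault->goal_level as-is. */
static void tdp_mmu_map(const struct page_fault *fault)
{
	printf("mapping at level %d\n", fault->goal_level);
}

/*
 * After this patch the fault handler, not the mapping routine, applies
 * the x86-specific adjustment, so the mapping routine itself carries no
 * x86 dependency and can become common code.
 */
static void page_fault_handler(struct page_fault *fault)
{
	hugepage_adjust(fault);	/* stays in x86 code */
	tdp_mmu_map(fault);	/* candidate for common code */
}

int main(void)
{
	struct page_fault fault = { .goal_level = 2 };

	page_fault_handler(&fault);
	return 0;
}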
From patchwork Thu Dec 8 19:38:45 2022
Date: Thu, 8 Dec 2022 11:38:45 -0800
From: David Matlack
To: Paolo Bonzini
Subject: [RFC PATCH 25/37] KVM: x86/mmu: Pass root role to kvm_tdp_mmu_get_vcpu_root_hpa()
Message-ID: <20221208193857.4090582-26-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>

Pass the root role from the caller rather than grabbing it from
vcpu->arch.mmu. This will enable the TDP MMU to be moved to common code
in a future commit by removing a dependency on vcpu->arch.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c     | 2 +-
 arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
 arch/x86/kvm/mmu/tdp_mmu.h | 3 ++-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9307608ae975..aea7df3c2dcb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3612,7 +3612,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 		goto out_unlock;

 	if (tdp_mmu_enabled) {
-		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
+		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, mmu->root_role);
 		mmu->root.hpa = root;
 	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
 		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index e6708829714c..c5d1c9010d21 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -301,9 +301,9 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *child_sp,
 	tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role);
 }

-hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
+hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu,
+				    union kvm_mmu_page_role role)
 {
-	union kvm_mmu_page_role role = vcpu->arch.mmu->root_role;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_mmu_page *root;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index e6a929089715..897608be7f75 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -10,7 +10,8 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);

-hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
+hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu,
+				    union kvm_mmu_page_role role);

 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
 {
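The pattern here is plain parameter injection: rather than reaching into
an arch-specific field itself, the callee takes the value as an argument
and the one remaining arch access moves to the caller. A minimal
standalone C sketch of the idea, using hypothetical types rather than the
kernel definitions:

#include <stdio.h>

union page_role { unsigned level; };

struct arch { union page_role root_role; };
struct vcpu { struct arch arch; };

/*
 * Before: the callee dereferenced vcpu->arch itself, tying it to the
 * arch-specific layout. After: the caller passes the role in, so the
 * callee only needs the common parts of struct vcpu.
 */
static void get_root(struct vcpu *vcpu, union page_role role)
{
	(void)vcpu;
	printf("allocating root at level %u\n", role.level);
}

int main(void)
{
	struct vcpu vcpu = { .arch = { .root_role = { .level = 4 } } };

	/* The only vcpu.arch access is now on the caller's side. */
	get_root(&vcpu, vcpu.arch.root_role);
	return 0;
}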
From patchwork Thu Dec 8 19:38:46 2022
Date: Thu, 8 Dec 2022 11:38:46 -0800
From: David Matlack
To: Paolo Bonzini
Subject: [RFC PATCH 26/37] KVM: Move page table cache to struct kvm_vcpu
Message-ID: <20221208193857.4090582-27-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>

Add a kvm_mmu_memory_cache to struct kvm_vcpu for allocating pages for
page tables, replacing the existing architecture-specific page table
caches. Purposely do not make management of this cache
architecture-neutral, though: that would add churn, and not all
architectures configure it the same way (MIPS does not set __GFP_ZERO).

This eliminates a dependency of the TDP MMU on an architecture-specific
field, paving the way for a future commit to move the TDP MMU to common
code.
Signed-off-by: David Matlack
---
 arch/arm64/include/asm/kvm_host.h | 3 ---
 arch/arm64/kvm/arm.c              | 4 ++--
 arch/arm64/kvm/mmu.c              | 2 +-
 arch/mips/include/asm/kvm_host.h  | 3 ---
 arch/mips/kvm/mmu.c               | 4 ++--
 arch/riscv/include/asm/kvm_host.h | 3 ---
 arch/riscv/kvm/mmu.c              | 2 +-
 arch/riscv/kvm/vcpu.c             | 4 ++--
 arch/x86/include/asm/kvm_host.h   | 1 -
 arch/x86/kvm/mmu/mmu.c            | 8 ++++----
 arch/x86/kvm/mmu/tdp_mmu.c        | 2 +-
 include/linux/kvm_host.h          | 5 +++++
 12 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 001c8abe87fc..da519d6c09a5 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -473,9 +473,6 @@ struct kvm_vcpu_arch {
 	/* vcpu power state */
 	struct kvm_mp_state mp_state;

-	/* Cache some mmu pages needed inside spinlock regions */
-	struct kvm_mmu_memory_cache mmu_page_cache;
-
 	/* Target CPU and feature flags */
 	int target;
 	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9c5573bc4614..0e0d4c4f79a2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.target = -1;
 	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);

-	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
+	vcpu->mmu_page_table_cache.gfp_zero = __GFP_ZERO;

 	/*
 	 * Default value for the FP state, will be overloaded at load
@@ -375,7 +375,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm)))
 		static_branch_dec(&userspace_irqchip_in_use);

-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 31d7fa4c7c14..d431c5cdb26a 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1196,7 +1196,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	bool device = false;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+	struct kvm_mmu_memory_cache *memcache = &vcpu->mmu_page_table_cache;
 	struct vm_area_struct *vma;
 	short vma_shift;
 	gfn_t gfn;
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 5cedb28e8a40..b7f276331583 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -344,9 +344,6 @@ struct kvm_vcpu_arch {
 	/* Bitmask of pending exceptions to be cleared */
 	unsigned long pending_exceptions_clr;

-	/* Cache some mmu pages needed inside spinlock regions */
-	struct kvm_mmu_memory_cache mmu_page_cache;
-
 	/* vcpu's vzguestid is different on each host cpu in an smp system */
 	u32 vzguestid[NR_CPUS];
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 74cd64a24d05..638f728d0bbe 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -27,7 +27,7 @@
 void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache);
 }

 /**
@@ -589,7 +589,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
 			     pte_t *out_entry, pte_t *out_buddy)
 {
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+	struct kvm_mmu_memory_cache *memcache = &vcpu->mmu_page_table_cache;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	int srcu_idx, err;
 	kvm_pfn_t pfn;
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index dbbf43d52623..82e5d80347cc 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -219,9 +219,6 @@ struct kvm_vcpu_arch {
 	/* SBI context */
 	struct kvm_sbi_context sbi_context;

-	/* Cache pages needed to program page tables with spinlock held */
-	struct kvm_mmu_memory_cache mmu_page_cache;
-
 	/* VCPU power-off state */
 	bool power_off;
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 3620ecac2fa1..a8281a65cb3d 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -625,7 +625,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	struct vm_area_struct *vma;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_mmu_memory_cache *pcache = &vcpu->arch.mmu_page_cache;
+	struct kvm_mmu_memory_cache *pcache = &vcpu->mmu_page_table_cache;
 	bool logging = (memslot->dirty_bitmap &&
 			!(memslot->flags & KVM_MEM_READONLY)) ? true : false;
 	unsigned long vma_pagesize, mmu_seq;
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 71ebbc4821f0..9a1001ada936 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -160,7 +160,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	/* Mark this VCPU never ran */
 	vcpu->arch.ran_atleast_once = false;
-	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
+	vcpu->mmu_page_table_cache.gfp_zero = __GFP_ZERO;
 	bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);

 	/* Setup ISA features available to VCPU */
@@ -211,7 +211,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	kvm_riscv_vcpu_timer_deinit(vcpu);

 	/* Free unused pages pre-allocated for G-stage page table mappings */
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache);
 }

 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 95c731028452..8cac8ec29324 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -714,7 +714,6 @@ struct kvm_vcpu_arch {
 	struct kvm_mmu *walk_mmu;

 	struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
-	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
 	struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
 	struct kvm_mmu_memory_cache mmu_page_header_cache;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index aea7df3c2dcb..a845e9141ad4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -664,7 +664,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
+	r = kvm_mmu_topup_memory_cache(&vcpu->mmu_page_table_cache,
 				       PT64_ROOT_MAX_LEVEL);
 	if (r)
 		return r;
@@ -681,7 +681,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
+	kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
@@ -2218,7 +2218,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 {
 	struct shadow_page_caches caches = {
 		.page_header_cache = &vcpu->arch.mmu_page_header_cache,
-		.shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
+		.shadow_page_cache = &vcpu->mmu_page_table_cache,
 		.shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
 	};
@@ -5920,7 +5920,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
 	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;

-	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	vcpu->mmu_page_table_cache.gfp_zero = __GFP_ZERO;

 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index c5d1c9010d21..922815407b7e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -264,7 +264,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 	struct kvm_mmu_page *sp;

 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->mmu_page_table_cache);

 	return sp;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 242eaed55320..0a9baa493760 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -387,6 +387,11 @@
 	 */
 	struct kvm_memory_slot *last_used_slot;
 	u64 last_used_slot_gen;
+
+#ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
+	/* Cache used to allocate pages for use as page tables. */
+	struct kvm_mmu_memory_cache mmu_page_table_cache;
+#endif
 };

 /*
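The kvm_mmu_memory_cache pattern this patch consolidates is simple: fill
the cache while sleeping is still allowed, then draw from it under the
MMU lock where allocation must not sleep. A standalone C model of that
life cycle, simplified from the real cache (which also supports a
kmem_cache backing and per-arch capacities; the names below are
illustrative only):

#include <stdlib.h>
#include <string.h>

#define CACHE_CAPACITY 40	/* stand-in for the per-arch capacity */

struct memory_cache {
	int gfp_zero;		/* models the gfp_zero (__GFP_ZERO) flag */
	int nobjs;
	void *objects[CACHE_CAPACITY];
};

/* Called before taking the lock: may sleep, may fail gracefully. */
static int cache_topup(struct memory_cache *mc, int min)
{
	while (mc->nobjs < min) {
		void *obj = malloc(4096);

		if (!obj)
			return -1;
		if (mc->gfp_zero)	/* MIPS notably leaves this unset */
			memset(obj, 0, 4096);
		mc->objects[mc->nobjs++] = obj;
	}
	return 0;
}

/* Called with the lock held: never allocates, just pops. */
static void *cache_alloc(struct memory_cache *mc)
{
	return mc->nobjs ? mc->objects[--mc->nobjs] : NULL;
}

int main(void)
{
	struct memory_cache mc = { .gfp_zero = 1 };

	if (cache_topup(&mc, 5))
		return 1;
	free(cache_alloc(&mc));	/* this step is spinlock-safe */
	while (mc.nobjs)
		free(mc.objects[--mc.nobjs]);
	return 0;
}

Because only the field moves to struct kvm_vcpu while topup, alloc, and
free remain per-architecture, each architecture keeps its own policy
(such as whether to zero pages) with no behavior change.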
From patchwork Thu Dec 8 19:38:47 2022
Date: Thu, 8 Dec 2022 11:38:47 -0800
From: David Matlack
To: Paolo Bonzini
Subject: [RFC PATCH 27/37] KVM: MMU: Move mmu_page_header_cache to common code
Message-ID: <20221208193857.4090582-28-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>

Move vcpu->arch.mmu_page_header_cache and its backing kmem_cache to
common code in preparation for moving the TDP MMU to common code in a
future commit.

The kmem_cache is still only initialized and used on x86 for the time
being to avoid affecting other architectures. A future commit can move
the code to manage this cache to common code.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 11 +++++------
 arch/x86/kvm/mmu/mmu_internal.h |  2 --
 arch/x86/kvm/mmu/tdp_mmu.c      |  2 +-
 include/kvm/mmu.h               |  2 ++
 include/linux/kvm_host.h        |  3 +++
 virt/kvm/kvm_main.c             |  1 +
 6 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a845e9141ad4..f01ee01f3509 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -163,7 +163,6 @@ struct kvm_shadow_walk_iterator {
 		__shadow_walk_next(&(_walker), spte))

 static struct kmem_cache *pte_list_desc_cache;
-struct kmem_cache *mmu_page_header_cache;
 static struct percpu_counter kvm_total_used_mmu_pages;

 static void mmu_spte_set(u64 *sptep, u64 spte);
@@ -674,7 +673,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 		if (r)
 			return r;
 	}
-	return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
+	return kvm_mmu_topup_memory_cache(&vcpu->mmu_page_header_cache,
 					  PT64_ROOT_MAX_LEVEL);
 }

@@ -683,7 +682,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
 	kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
+	kvm_mmu_free_memory_cache(&vcpu->mmu_page_header_cache);
 }
@@ -2217,7 +2216,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 					union kvm_mmu_page_role role)
 {
 	struct shadow_page_caches caches = {
-		.page_header_cache = &vcpu->arch.mmu_page_header_cache,
+		.page_header_cache = &vcpu->mmu_page_header_cache,
 		.shadow_page_cache = &vcpu->mmu_page_table_cache,
 		.shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache,
 	};
@@ -5917,8 +5916,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
 	vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;

-	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
-	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
+	vcpu->mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
+	vcpu->mmu_page_header_cache.gfp_zero = __GFP_ZERO;

 	vcpu->mmu_page_table_cache.gfp_zero = __GFP_ZERO;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d3c1d08002af..4aa60d5d87b0 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -44,8 +44,6 @@ extern bool dbg;
 #define INVALID_PAE_ROOT	0
 #define IS_VALID_PAE_ROOT(x)	(!!(x))

-extern struct kmem_cache *mmu_page_header_cache;
-
 static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
 {
 	/*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 922815407b7e..891877a6fb78 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -263,7 +263,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu_page *sp;

-	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
+	sp = kvm_mmu_memory_cache_alloc(&vcpu->mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->mmu_page_table_cache);

 	return sp;
diff --git a/include/kvm/mmu.h b/include/kvm/mmu.h
index 425db8e4f8e9..f1416828f8fe 100644
--- a/include/kvm/mmu.h
+++ b/include/kvm/mmu.h
@@ -4,6 +4,8 @@
 #include

+extern struct kmem_cache *mmu_page_header_cache;
+
 static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
 {
 	struct page *page = pfn_to_page((shadow_page) >> PAGE_SHIFT);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 0a9baa493760..ec3a6de6d54e 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -391,6 +391,9 @@ struct kvm_vcpu {
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 	/* Cache used to allocate pages for use as page tables. */
 	struct kvm_mmu_memory_cache mmu_page_table_cache;
+
+	/* Cache used to allocate kvm_mmu_page structs. */
+	struct kvm_mmu_memory_cache mmu_page_header_cache;
 #endif
 };
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 954ab969f55e..0f1d48ed7d57 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -108,6 +108,7 @@
 static int kvm_usage_count;
 static atomic_t hardware_enable_failed;

 static struct kmem_cache *kvm_vcpu_cache;
+struct kmem_cache *mmu_page_header_cache;

 static __read_mostly struct preempt_ops kvm_preempt_ops;
 static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_running_vcpu);
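The opt-in mechanism is worth spelling out: the common struct only grows
the new fields on architectures that define
KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE, so architectures without a
memory-cache-based MMU pay nothing for them. A compile-time sketch of
that pattern, with illustrative macro and struct names that are not the
kernel's:

/* In the arch header, an architecture opts in by defining the knob: */
#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40

struct memory_cache { int nobjs; };

/* The common struct then conditionally carries the shared caches. */
struct vcpu {
	long common_state;
#ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
	struct memory_cache page_table_cache;
	struct memory_cache page_header_cache;
#endif
};

int main(void)
{
	/* Without the #define above, this field would not even exist. */
	struct vcpu vcpu = { .page_header_cache = { .nobjs = 0 } };

	return vcpu.page_header_cache.nobjs;
}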
From patchwork Thu Dec 8 19:38:48 2022

Date: Thu, 8 Dec 2022 11:38:48 -0800
From: David Matlack
To: Paolo Bonzini
Subject: [RFC PATCH 28/37] KVM: MMU: Stub out tracepoints on non-x86 architectures
Message-ID: <20221208193857.4090582-29-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Create stub tracepoints outside of x86. The KVM MMU tracepoints can be
moved to common code, but doing so is deferred to a future commit.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/tdp_mmu.c |  2 +-
 include/kvm/mmutrace.h     | 17 +++++++++++++++++
 2 files changed, 18 insertions(+), 1 deletion(-)
 create mode 100644 include/kvm/mmutrace.h

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 891877a6fb78..72746b645e99 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -2,12 +2,12 @@
 #include "mmu.h"
 #include "mmu_internal.h"
-#include "mmutrace.h"
 #include "tdp_iter.h"
 #include "tdp_mmu.h"
 #include "spte.h"

 #include
+#include <kvm/mmutrace.h>
 #include
 #include
diff --git a/include/kvm/mmutrace.h b/include/kvm/mmutrace.h
new file mode 100644
index 000000000000..e95a3cb47479
--- /dev/null
+++ b/include/kvm/mmutrace.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#if !defined(_TRACE_KVM_MMUTRACE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_KVM_MMUTRACE_H
+
+#ifdef CONFIG_X86
+#include "../../arch/x86/kvm/mmu/mmutrace.h"
+#else
+#define trace_mark_mmio_spte(...) do {} while (0)
+#define trace_kvm_mmu_get_page(...) do {} while (0)
+#define trace_kvm_mmu_prepare_zap_page(...) do {} while (0)
+#define trace_kvm_mmu_set_spte(...) do {} while (0)
+#define trace_kvm_mmu_spte_requested(...) do {} while (0)
+#define trace_kvm_mmu_split_huge_page(...) do {} while (0)
+#define trace_kvm_tdp_mmu_spte_changed(...) do {} while (0)
+#endif
+
+#endif /* _TRACE_KVM_MMUTRACE_H */
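The stubbing technique is the usual variadic-macro trick: on
configurations without the real tracepoints, each trace_*() call expands
to a single empty statement, so call sites in (future) common code
compile everywhere without #ifdefs. A self-contained illustration of why
the do {} while (0) form is used:

#include <stdio.h>

#ifdef CONFIG_X86
/* The real tracepoint headers would be pulled in here. */
#else
/*
 * Swallow any argument list and expand to one empty statement, which
 * stays well-formed even as the body of an unbraced if ().
 */
#define trace_kvm_mmu_spte_requested(...) do {} while (0)
#endif

int main(void)
{
	int spte = 42;

	/* Compiles (and vanishes) even though no tracing exists here. */
	if (spte)
		trace_kvm_mmu_spte_requested(spte);
	printf("done\n");
	return 0;
}

One subtlety of this approach: the macro arguments are not evaluated at
all, so any side effects inside a trace call would silently disappear on
non-x86 builds.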
From patchwork Thu Dec 8 19:38:49 2022

Date: Thu, 8 Dec 2022 11:38:49 -0800
From: David Matlack
To: Paolo Bonzini
Subject: [RFC PATCH 29/37] KVM: x86/mmu: Collapse kvm_flush_remote_tlbs_with_{range,address}() together
Message-ID: <20221208193857.4090582-30-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114535_225107_42DFD0E9 X-CRM114-Status: GOOD ( 10.01 ) X-Spam-Score: -7.7 (-------) X-Spam-Report: Spam detection software, running on the system "bombadil.infradead.org", has NOT identified this incoming email as spam. The original message has been attached to this so you can view it or label similar future email. If you have any questions, see the administrator of that system for details. Content preview: Collapse kvm_flush_remote_tlbs_with_range() and kvm_flush_remote_tlbs_with_address() into a single function. This eliminates some lines of code and a useless NULL check on the range struct. Opportunistically switch from ENOTSUPP to EOPNOTSUPP to make checkpatch happy. Content analysis details: (-7.7 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at https://www.dnswl.org/, no trust [2607:f8b0:4864:20:0:0:0:1149 listed in] [list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -7.5 USER_IN_DEF_DKIM_WL From: address is in the default DKIM white-list -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID_EF Message has a valid DKIM or DK signature from envelope-from domain -0.0 DKIMWL_WL_MED DKIMwl.org - Medium trust sender X-BeenThere: kvm-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "kvm-riscv" Errors-To: kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org Collapse kvm_flush_remote_tlbs_with_range() and kvm_flush_remote_tlbs_with_address() into a single function. This eliminates some lines of code and a useless NULL check on the range struct. Opportunistically switch from ENOTSUPP to EOPNOTSUPP to make checkpatch happy. 
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f01ee01f3509..b7bbabac9127 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -244,27 +244,20 @@ static inline bool kvm_available_flush_tlb_with_range(void)
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
-					     struct kvm_tlb_range *range)
-{
-	int ret = -ENOTSUPP;
-
-	if (range && kvm_x86_ops.tlb_remote_flush_with_range)
-		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, range);
-
-	if (ret)
-		kvm_flush_remote_tlbs(kvm);
-}
-
 void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
 		u64 start_gfn, u64 pages)
 {
 	struct kvm_tlb_range range;
+	int ret = -EOPNOTSUPP;
 
 	range.start_gfn = start_gfn;
 	range.pages = pages;
 
-	kvm_flush_remote_tlbs_with_range(kvm, &range);
+	if (kvm_x86_ops.tlb_remote_flush_with_range)
+		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, &range);
+
+	if (ret)
+		kvm_flush_remote_tlbs(kvm);
 }
 
 static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
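
The collapsed function is a common "optional fast path" idiom: try the backend's range-based flush if one is registered, and treat any nonzero return, including the initial -EOPNOTSUPP, as a cue to fall back to the unconditional full flush. A stand-alone sketch of the idiom with hypothetical names (flush_ops, flush_range_or_all(), flush_all() are not KVM APIs):

#include <linux/errno.h>

/* Hypothetical backend hook: returns 0 on success, nonzero on failure. */
struct flush_ops {
	int (*flush_range)(unsigned long start, unsigned long npages);
};

static void flush_all(void)
{
	/* ...full, unconditional invalidation... */
}

static void flush_range_or_all(const struct flush_ops *ops,
			       unsigned long start, unsigned long npages)
{
	int ret = -EOPNOTSUPP;

	/* Use the precise (cheaper) flush only if the backend provides it. */
	if (ops->flush_range)
		ret = ops->flush_range(start, npages);

	/* Any failure, including "no backend", degrades to a full flush. */
	if (ret)
		flush_all();
}
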
From patchwork Thu Dec 8 19:38:50 2022
Date: Thu, 8 Dec 2022 11:38:50 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-31-dmatlack@google.com>
Subject: [RFC PATCH 30/37] KVM: x86/mmu: Rename kvm_flush_remote_tlbs_with_address()
From: David Matlack
To: Paolo Bonzini

Rename kvm_flush_remote_tlbs_with_address() to
kvm_flush_remote_tlbs_range(). This name is shorter, which reduces the
number of callsites that need to be broken up across multiple lines,
and more readable since it conveys a range of memory is being flushed
rather than a single address.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 32 ++++++++++++++------------------
 arch/x86/kvm/mmu/mmu_internal.h |  3 +--
 arch/x86/kvm/mmu/paging_tmpl.h  |  4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c      |  7 +++----
 4 files changed, 20 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b7bbabac9127..4a28adaa92b4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -244,8 +244,7 @@ static inline bool kvm_available_flush_tlb_with_range(void)
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
-		u64 start_gfn, u64 pages)
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages)
 {
 	struct kvm_tlb_range range;
 	int ret = -EOPNOTSUPP;
@@ -804,7 +803,7 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	kvm_mmu_gfn_disallow_lpage(slot, gfn);
 
 	if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
-		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+		kvm_flush_remote_tlbs_range(kvm, gfn, 1);
 }
 
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -1178,7 +1177,7 @@ static void drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush)
 	drop_spte(kvm, sptep);
 
 	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
+		kvm_flush_remote_tlbs_range(kvm, sp->gfn,
 			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
@@ -1460,7 +1459,7 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	}
 
 	if (need_flush && kvm_available_flush_tlb_with_range()) {
-		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+		kvm_flush_remote_tlbs_range(kvm, gfn, 1);
 		return false;
 	}
@@ -1630,8 +1629,8 @@ static void __rmap_add(struct kvm,
 		kvm->stat.max_mmu_rmap_size = rmap_count;
 	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
 		kvm_zap_all_rmap_sptes(kvm, rmap_head);
-		kvm_flush_remote_tlbs_with_address(
-				kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+		kvm_flush_remote_tlbs_range(kvm, sp->gfn,
+					    KVM_PAGES_PER_HPAGE(sp->role.level));
 	}
 }
@@ -2389,7 +2388,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 			return;
 
 		drop_parent_pte(child, sptep);
-		kvm_flush_remote_tlbs_with_address(vcpu->kvm, child->gfn, 1);
+		kvm_flush_remote_tlbs_range(vcpu->kvm, child->gfn, 1);
 	}
 }
@@ -2873,8 +2872,8 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	if (flush)
-		kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn,
-				KVM_PAGES_PER_HPAGE(level));
+		kvm_flush_remote_tlbs_range(vcpu->kvm, gfn,
+					    KVM_PAGES_PER_HPAGE(level));
 
 	pgprintk("%s: setting spte %llx\n", __func__, *sptep);
@@ -5809,9 +5808,8 @@ slot_handle_level_range(struct kvm *kvm, const struct kvm_memory_slot *memslot,
 		if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
 			if (flush && flush_on_yield) {
-				kvm_flush_remote_tlbs_with_address(kvm,
-						start_gfn,
-						iterator.gfn - start_gfn + 1);
+				kvm_flush_remote_tlbs_range(kvm, start_gfn,
+							    iterator.gfn - start_gfn + 1);
 				flush = false;
 			}
 			cond_resched_rwlock_write(&kvm->mmu_lock);
@@ -6166,8 +6164,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	}
 
 	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
-						   gfn_end - gfn_start);
+		kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start);
 
 	kvm_mmu_invalidate_end(kvm, gfn_start, gfn_end);
@@ -6506,7 +6503,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		kvm_zap_one_rmap_spte(kvm, rmap_head, sptep);
 
 		if (kvm_available_flush_tlb_with_range())
-			kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
+			kvm_flush_remote_tlbs_range(kvm, sp->gfn,
 				KVM_PAGES_PER_HPAGE(sp->role.level));
 		else
 			need_tlb_flush = 1;
@@ -6557,8 +6554,7 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
 	 * is observed by any other operation on the same memslot.
 	 */
 	lockdep_assert_held(&kvm->slots_lock);
-	kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
-					   memslot->npages);
+	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
 }
 
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 4aa60d5d87b0..d35a5b408b98 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -65,8 +65,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
-void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
-					u64 start_gfn, u64 pages);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index daf9c7731edc..bfee5e0d1ee1 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -929,8 +929,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 			mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
 			if (is_shadow_present_pte(old_spte))
-				kvm_flush_remote_tlbs_with_address(vcpu->kvm,
-					sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+				kvm_flush_remote_tlbs_range(vcpu->kvm, sp->gfn,
+					KVM_PAGES_PER_HPAGE(sp->role.level));
 
 			if (!rmap_can_add(vcpu))
 				break;

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 72746b645e99..1f1f511cd1a0 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -676,8 +676,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 	if (ret)
 		return ret;
 
-	kvm_flush_remote_tlbs_with_address(kvm, iter->gfn,
-					   TDP_PAGES_PER_LEVEL(iter->level));
+	kvm_flush_remote_tlbs_range(kvm, iter->gfn, TDP_PAGES_PER_LEVEL(iter->level));
 
 	/*
 	 * No other thread can overwrite the removed SPTE as they must either
@@ -1067,8 +1066,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		return RET_PF_RETRY;
 	else if (tdp_pte_is_present(iter->old_spte) &&
 		 !tdp_pte_is_leaf(iter->old_spte, iter->level))
-		kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
-				TDP_PAGES_PER_LEVEL(iter->level + 1));
+		kvm_flush_remote_tlbs_range(vcpu->kvm, sp->gfn,
+					    TDP_PAGES_PER_LEVEL(iter->level + 1));
 
 	/*
 	 * If the page fault was caused by a write but the page is write
From patchwork Thu Dec 8 19:38:51 2022
Date: Thu, 8 Dec 2022 11:38:51 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-32-dmatlack@google.com>
Subject: [RFC PATCH 31/37] KVM: x86/MMU: Use gfn_t in kvm_flush_remote_tlbs_range()
From: David Matlack
To: Paolo Bonzini

Use gfn_t instead of u64 for the start_gfn parameter to
kvm_flush_remote_tlbs_range(), since that is the standard type for GFNs
throughout KVM.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/mmu/mmu_internal.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4a28adaa92b4..19963ed83484 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -244,7 +244,7 @@ static inline bool kvm_available_flush_tlb_with_range(void)
 	return kvm_x86_ops.tlb_remote_flush_with_range;
 }
 
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages)
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
 {
 	struct kvm_tlb_range range;
 	int ret = -EOPNOTSUPP;

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d35a5b408b98..e44fe7ad3cfb 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -65,7 +65,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
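
For readers outside KVM: gfn_t is KVM's typedef for a guest frame number (a u64 in include/linux/kvm_types.h), so this is purely a readability change at the type level. A small sketch of what the dedicated typedef buys; example_flush() is hypothetical, not a function from the series:

typedef u64 gfn_t;	/* mirrors the definition in include/linux/kvm_types.h */

/*
 * With two raw u64 parameters it is easy to transpose the GFN and the
 * page count at a callsite; the typedef documents which argument is
 * which.
 */
static void example_flush(struct kvm *kvm, gfn_t start_gfn, u64 pages)
{
	kvm_flush_remote_tlbs_range(kvm, start_gfn, pages);
}
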
From patchwork Thu Dec 8 19:38:52 2022
Date: Thu, 8 Dec 2022 11:38:52 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-33-dmatlack@google.com>
Subject: [RFC PATCH 32/37] KVM: Allow range-based TLB invalidation from common code
From: David Matlack
To: Paolo Bonzini

Make kvm_flush_remote_tlbs_range() visible in common code and create a
default implementation that just invalidates the whole TLB.

This will be used in future commits to clean up
kvm_arch_flush_remote_tlbs_memslot() and to move the KVM/x86 TDP MMU to
common code.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu_internal.h | 1 -
 include/linux/kvm_host.h        | 1 +
 virt/kvm/kvm_main.c             | 9 +++++++++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index e44fe7ad3cfb..df815cb84bd2 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -65,7 +65,6 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
-void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ec3a6de6d54e..d9a7f559d2c5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1365,6 +1365,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
+void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0f1d48ed7d57..662ca280c0cf 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -379,6 +379,15 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 #endif
 
+/*
+ * Architectures that support range-based TLB invalidation can override this
+ * function.
+ */
+void __weak kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+{
+	kvm_flush_remote_tlbs(kvm);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
 	kvm_arch_flush_shadow_all(kvm);
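
The __weak annotation is what makes the kvm_main.c definition a default rather than the implementation: if an architecture links in a strong (non-weak) definition of the same symbol, the linker picks that one and the common-code fallback is discarded. A sketch of the arch-side override; the body is illustrative only, arch_issue_range_flush() is a hypothetical helper, and the real x86 override is introduced elsewhere in the series:

/* Arch code: a strong definition replaces the __weak fallback at link time. */
void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
{
	/*
	 * Issue a precise invalidation covering the GFN range
	 * [start_gfn, start_gfn + pages). Falling back to
	 * kvm_flush_remote_tlbs(kvm) remains correct whenever the
	 * precise flush cannot be performed.
	 */
	arch_issue_range_flush(kvm, start_gfn, pages);	/* hypothetical */
}
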
From patchwork Thu Dec 8 19:38:53 2022
Date: Thu, 8 Dec 2022 11:38:53 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-34-dmatlack@google.com>
Subject: [RFC PATCH 33/37] KVM: Move kvm_arch_flush_remote_tlbs_memslot() to common code
From: David Matlack
To: Paolo Bonzini

Move kvm_arch_flush_remote_tlbs_memslot() to common code and drop
"arch_" from the name. kvm_arch_flush_remote_tlbs_memslot() is just a
range-based TLB invalidation where the range is defined by the memslot.
Now that kvm_flush_remote_tlbs_range() can be called from common code
we can just use that and drop a bunch of duplicate code from the arch
directories.

Note this adds a lockdep assertion for slots_lock being held when
calling kvm_flush_remote_tlbs_memslot(), which was previously only
asserted on x86.
Signed-off-by: David Matlack
---
 arch/arm64/kvm/arm.c     |  6 ------
 arch/mips/kvm/mips.c     | 10 ++--------
 arch/riscv/kvm/mmu.c     |  6 ------
 arch/x86/kvm/mmu/mmu.c   | 16 +---------------
 arch/x86/kvm/x86.c       |  2 +-
 include/linux/kvm_host.h |  7 +++----
 virt/kvm/kvm_main.c      | 17 +++++++++++++++--
 7 files changed, 22 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 0e0d4c4f79a2..4f1549c1d2d2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1430,12 +1430,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	kvm_flush_remote_tlbs(kvm);
-}
-
 static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
 					struct kvm_arm_device_addr *dev_addr)
 {

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index a25e0b73ee70..ecd8a051fd6b 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -209,7 +209,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 	/* Flush slot from GPA */
 	kvm_mips_flush_gpa_pt(kvm, slot->base_gfn,
 			      slot->base_gfn + slot->npages - 1);
-	kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+	kvm_flush_remote_tlbs_memslot(kvm, slot);
 	spin_unlock(&kvm->mmu_lock);
 }
@@ -245,7 +245,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 		needs_flush = kvm_mips_mkclean_gpa_pt(kvm, new->base_gfn,
 					new->base_gfn + new->npages - 1);
 		if (needs_flush)
-			kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+			kvm_flush_remote_tlbs_memslot(kvm, new);
 		spin_unlock(&kvm->mmu_lock);
 	}
 }
@@ -997,12 +997,6 @@ int kvm_arch_flush_remote_tlb(struct kvm *kvm)
 	return 1;
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	kvm_flush_remote_tlbs(kvm);
-}
-
 long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 {
 	long r;

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index a8281a65cb3d..98bf3719a396 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -406,12 +406,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	kvm_flush_remote_tlbs(kvm);
-}
-
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free)
 {
 }

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 19963ed83484..f2602ee1771f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6524,7 +6524,7 @@ static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
 	 */
 	if (slot_handle_level(kvm, slot, kvm_mmu_zap_collapsible_spte,
 			      PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL - 1, true))
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		kvm_flush_remote_tlbs_memslot(kvm, slot);
 }
 
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
@@ -6543,20 +6543,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	}
 }
 
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot)
-{
-	/*
-	 * All current use cases for flushing the TLBs for a specific memslot
-	 * related to dirty logging, and many do the TLB flush out of mmu_lock.
-	 * The interaction between the various operations on memslot must be
-	 * serialized by slots_locks to ensure the TLB flush from one operation
-	 * is observed by any other operation on the same memslot.
-	 */
-	lockdep_assert_held(&kvm->slots_lock);
-	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
-}
-
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   const struct kvm_memory_slot *memslot)
 {

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 517c8ed33542..95ff95da55d5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12574,7 +12574,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 		 * See is_writable_pte() for more details (the case involving
 		 * access-tracked SPTEs is particularly relevant).
 		 */
-		kvm_arch_flush_remote_tlbs_memslot(kvm, new);
+		kvm_flush_remote_tlbs_memslot(kvm, new);
 	}
 }

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d9a7f559d2c5..46ed0ef4fb79 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1366,6 +1366,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
 
 void kvm_flush_remote_tlbs(struct kvm *kvm);
 void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
+				   const struct kvm_memory_slot *memslot);
 
 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
@@ -1394,10 +1396,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 					unsigned long mask);
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
 
-#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
-void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
-					const struct kvm_memory_slot *memslot);
-#else /* !CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */
+#ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log);
 int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
 		      int *is_dirty, struct kvm_memory_slot **memslot);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 662ca280c0cf..39c2efd15504 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -388,6 +388,19 @@ void __weak kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
 	kvm_flush_remote_tlbs(kvm);
 }
 
+void kvm_flush_remote_tlbs_memslot(struct kvm *kvm, const struct kvm_memory_slot *memslot)
+{
+	/*
+	 * All current use cases for flushing the TLBs for a specific memslot
+	 * related to dirty logging, and many do the TLB flush out of mmu_lock.
+	 * The interaction between the various operations on memslot must be
+	 * serialized by slots_locks to ensure the TLB flush from one operation
+	 * is observed by any other operation on the same memslot.
+	 */
+	lockdep_assert_held(&kvm->slots_lock);
+	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
+}
+
 static void kvm_flush_shadow_all(struct kvm *kvm)
 {
 	kvm_arch_flush_shadow_all(kvm);
@@ -2197,7 +2210,7 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 	}
 
 	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+		kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
 	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
 		return -EFAULT;
@@ -2314,7 +2327,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 	KVM_MMU_UNLOCK(kvm);
 
 	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
+		kvm_flush_remote_tlbs_memslot(kvm, memslot);
 
 	return 0;
 }
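
With the assertion now living in common code, every architecture, not just x86, gets the slots_lock check whenever lockdep is enabled. The caller-side contract looks like this (a minimal sketch, not a callsite from the series):

static void example_flush_memslot(struct kvm *kvm,
				  const struct kvm_memory_slot *memslot)
{
	/* slots_lock must be held across the flush, or lockdep complains. */
	mutex_lock(&kvm->slots_lock);
	kvm_flush_remote_tlbs_memslot(kvm, memslot);
	mutex_unlock(&kvm->slots_lock);
}
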
s=casper.20170209; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=7el2leYxw6DuE2TKpxO/GVa0AQVZhVPYSCMmz19ZHuI=; b=Q52O4ZwjbN6zb1XcjxHSGcDBKL ihOTLHD4dGK6NE8zXcNJK4oXRt8a1vT71SIW3d5w0Faz/ONfVkvGLAU8qGwODBNVBjbN/9dR/rAJC vmwgPJIh2nYcCgX2mPx8oumQgJ7xJONX6WLSM+Tg5YF55UdAjffFH/pxO3oTHFU+0GKu+LfPyOsO8 hWebFfS3W/InlgwtE+nP73Nnql95i1hj6WV9+ls9VXgLz7Qf54OzqZ7t1rXAyNQpo58WIpZgm8F2p lbC0sTUZaT9Tr4qVR4knR09E6dXSnVl5YnmOw6oIdq2ufgV8/LIUigfdANBaCdQnG3u6889h9ANg4 QpJlsz5A==; Received: from mail-pg1-x54a.google.com ([2607:f8b0:4864:20::54a]) by casper.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mkh-007G7U-5T for kvm-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:40:16 +0000 Received: by mail-pg1-x54a.google.com with SMTP id h185-20020a636cc2000000b0046fc6e0065dso1624962pgc.5 for ; Thu, 08 Dec 2022 11:40:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=7el2leYxw6DuE2TKpxO/GVa0AQVZhVPYSCMmz19ZHuI=; b=O++bFFpSURwB8JUZDeWwEc8lxc4/9crS2K4HsyuyBbvilW8kdB4FZPUpBapa95NsxY 0xO+2BUFvsH7Lv9OpM/ZNw6lAX1BZkwmvpchLqFMC1N46mGiLFFJBG/HkNWtdDFKz0HV gaJZcacGlJwuJDiHLoKW0WYD/nKr6azk72Tngx5olXe8pBzakRuhI16DprQMVZTjM2QI klMPqFL6afDQzxyjJFzvNFwT6YpAogs4p8iE9rScX7ZpN21ivdBFghpL37wrQT+9CnOv tdduCPP5Ti74kCPUwjTk76Ln1pEwMrFI/tCITETf7aNuyVpMp2Rq1UKSZarbJfTksvTP 6Wqg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=7el2leYxw6DuE2TKpxO/GVa0AQVZhVPYSCMmz19ZHuI=; b=0mG+U5f/Jvltl+/gOsDLzVb7guZkuafyLnlsL/ENkj8l3g1NtiLJHmam8JmiIz6Ubr C/QFlrkSOIunMIDOgLNu73cORdvSRyZ2JHvL3Mex5xKV1wY2ALAFM+t7XidMSR2WKRv3 jpb1jDqWBUDAjVlpT0Ax38mUiqIOHx60dvAAdJE5WlTCkL68ukwQyI70Fp4RxFU0jV/l PAAWOH8+sGZPri9B5JoTg0ADxNqNL0sFyN8p3TE4RbI1/GKaQri5xnL1QFjMEWpo0X9v PtcEtmSGagvb3raqCHoqBjLxLWXXAoGIx8zWALdLT1q9wX0pbFsFQ16MFFmiRawxTerB 2f/g== X-Gm-Message-State: ANoB5pkfESXx9+TEQ931IpFMF11SvGAFCbYd8GgGC2707CgzNzhO5mDh HuskaU0Vs8VHUhKNepMjNEiHNZjcRnVi3A== X-Google-Smtp-Source: AA0mqf4ImIWXetG7XX1SXxRXGnECaJ0ODlsgksb3SPKCTTh3sr1uwrp/2qso+8Xm6A48WDYB3HG6hZIxDgKp9g== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a05:6a00:2183:b0:574:2104:5657 with SMTP id h3-20020a056a00218300b0057421045657mr5713585pfi.58.1670528404717; Thu, 08 Dec 2022 11:40:04 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:54 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-35-dmatlack@google.com> Subject: [RFC PATCH 34/37] KVM: MMU: Move the TDP iterator to common code From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. 
Move arch/x86/kvm/mmu/tdp_iter.{c,h} into common code so that the TDP
iterator can be used by other architectures in the future.

No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
 MAINTAINERS                                  | 2 +-
 arch/x86/kvm/Makefile                        | 2 +-
 arch/x86/kvm/mmu/tdp_mmu.c                   | 2 +-
 arch/x86/kvm/mmu/tdp_pgtable.c               | 2 +-
 {arch/x86/kvm/mmu => include/kvm}/tdp_iter.h | 9 +++------
 virt/kvm/Makefile.kvm                        | 2 ++
 {arch/x86 => virt}/kvm/mmu/tdp_iter.c        | 4 +---
 7 files changed, 10 insertions(+), 13 deletions(-)
 rename {arch/x86/kvm/mmu => include/kvm}/tdp_iter.h (96%)
 rename {arch/x86 => virt}/kvm/mmu/tdp_iter.c (98%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7e586d7ba78c..3c33eca85480 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11206,7 +11206,7 @@ F:	include/uapi/asm-generic/kvm*
 F:	include/uapi/linux/kvm*
 F:	tools/kvm/
 F:	tools/testing/selftests/kvm/
-F:	virt/kvm/*
+F:	virt/kvm/

 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)
 M:	Marc Zyngier

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index c294ae51caba..cb9ae306892a 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -18,7 +18,7 @@ ifdef CONFIG_HYPERV
 kvm-y += kvm_onhyperv.o
 endif

-kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o mmu/tdp_iter.o mmu/tdp_mmu.o
+kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o mmu/tdp_mmu.o

 kvm-$(CONFIG_KVM_XEN) += xen.o
 kvm-$(CONFIG_KVM_SMM) += smm.o

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 1f1f511cd1a0..c035c051161c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -2,10 +2,10 @@

 #include "mmu.h"
 #include "mmu_internal.h"
-#include "tdp_iter.h"
 #include "tdp_mmu.h"
 #include "spte.h"

+#include <kvm/tdp_iter.h>
 #include <...>
 #include <...>

diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c
index cc7b10f703e1..fb40abdb9234 100644
--- a/arch/x86/kvm/mmu/tdp_pgtable.c
+++ b/arch/x86/kvm/mmu/tdp_pgtable.c
@@ -2,10 +2,10 @@

 #include <...>
 #include <...>
+#include <kvm/tdp_iter.h>

 #include "mmu.h"
 #include "spte.h"
-#include "tdp_iter.h"

 /* Removed SPTEs must not be misconstrued as shadow present PTEs.
  */
 static_assert(!(REMOVED_TDP_PTE & SPTE_MMU_PRESENT_MASK));

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/include/kvm/tdp_iter.h
similarity index 96%
rename from arch/x86/kvm/mmu/tdp_iter.h
rename to include/kvm/tdp_iter.h
index 6e3c38532d1d..0a154fcf2664 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/include/kvm/tdp_iter.h
@@ -1,14 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
-#ifndef __KVM_X86_MMU_TDP_ITER_H
-#define __KVM_X86_MMU_TDP_ITER_H
+#ifndef __KVM_TDP_ITER_H
+#define __KVM_TDP_ITER_H

 #include <...>
 #include <...>

-#include "mmu.h"
-#include "spte.h"
-
 /*
  * TDP MMU SPTEs are RCU protected to allow paging structures (non-leaf SPTEs)
  * to be zapped while holding mmu_lock for read, and to allow TLB flushes to be
@@ -117,4 +114,4 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
 void tdp_iter_next(struct tdp_iter *iter);
 void tdp_iter_restart(struct tdp_iter *iter);

-#endif /* __KVM_X86_MMU_TDP_ITER_H */
+#endif /* __KVM_TDP_ITER_H */

diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index 2c27d5d0c367..58b595ac9b8d 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -12,3 +12,5 @@ kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
 kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
+
+kvm-$(CONFIG_HAVE_TDP_MMU) += $(KVM)/mmu/tdp_iter.o

diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/virt/kvm/mmu/tdp_iter.c
similarity index 98%
rename from arch/x86/kvm/mmu/tdp_iter.c
rename to virt/kvm/mmu/tdp_iter.c
index d5f024b7f6e4..674d93f91979 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/virt/kvm/mmu/tdp_iter.c
@@ -1,8 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0

-#include "mmu_internal.h"
-#include "tdp_iter.h"
-#include "spte.h"
+#include <kvm/tdp_iter.h>

 /*
  * Recalculates the pointer to the SPTE for the current GFN and level and
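With tdp_iter.{c,h} relocated, the iterator is consumed the same way it is on x86: initialize a struct tdp_iter cursor with tdp_iter_start() and advance it with tdp_iter_next(). A minimal sketch of a range walk, assuming the for_each_tdp_pte_min_level() wrapper and the iter.gfn/iter.level/iter.old_spte fields as used by the x86 TDP MMU at the time of this series; handle_spte() and the leaf-level constant are illustrative only:

	#include <kvm/tdp_iter.h>

	/* Illustrative walk over [start, end); handle_spte() is hypothetical. */
	static void walk_range(struct kvm_mmu_page *root, gfn_t start, gfn_t end)
	{
		struct tdp_iter iter;

		/* SPTEs are RCU-protected, per the comment in tdp_iter.h. */
		rcu_read_lock();

		for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end)
			handle_spte(iter.gfn, iter.level, iter.old_spte);

		rcu_read_unlock();
	}

PG_LEVEL_4K here is x86's name for the leaf level; a common consumer would use whatever level constant its architecture defines.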
From patchwork Thu Dec 8 19:38:55 2022
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:55 -0800
Message-ID: <20221208193857.4090582-36-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Subject: [RFC PATCH 35/37] KVM: x86/mmu: Move tdp_mmu_max_gfn_exclusive() to tdp_pgtable.c
Move tdp_mmu_max_gfn_exclusive() to tdp_pgtable.c since it currently
relies on the x86-specific kvm_mmu_max_gfn() function. This can be
improved in the future by implementing a common API for calculating the
max GFN.

No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/include/asm/kvm/tdp_pgtable.h |  3 +++
 arch/x86/kvm/mmu/tdp_mmu.c             | 11 -----------
 arch/x86/kvm/mmu/tdp_pgtable.c         | 11 +++++++++++
 3 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm/tdp_pgtable.h b/arch/x86/include/asm/kvm/tdp_pgtable.h
index ff2691ced38b..c1047fcf1a91 100644
--- a/arch/x86/include/asm/kvm/tdp_pgtable.h
+++ b/arch/x86/include/asm/kvm/tdp_pgtable.h
@@ -67,4 +67,7 @@ u64 tdp_mmu_make_changed_pte_notifier_pte(struct tdp_iter *iter,
 					  struct kvm_gfn_range *range);
 u64 tdp_mmu_make_huge_page_split_pte(struct kvm *kvm, u64 huge_spte,
 				     struct kvm_mmu_page *sp, int index);
+
+gfn_t tdp_mmu_max_gfn_exclusive(void);
+
 #endif /* !__ASM_KVM_TDP_PGTABLE_H */

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index c035c051161c..c950d688afea 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -828,17 +828,6 @@ static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
 	return iter->yielded;
 }

-static inline gfn_t tdp_mmu_max_gfn_exclusive(void)
-{
-	/*
-	 * Bound TDP MMU walks at host.MAXPHYADDR. KVM disallows memslots with
-	 * a gpa range that would exceed the max gfn, and KVM does not create
-	 * MMIO SPTEs for "impossible" gfns, instead sending such accesses down
-	 * the slow emulation path every time.
-	 */
-	return kvm_mmu_max_gfn() + 1;
-}
-
 static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			       bool shared, int zap_level)
 {

diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c
index fb40abdb9234..4e747956d6ee 100644
--- a/arch/x86/kvm/mmu/tdp_pgtable.c
+++ b/arch/x86/kvm/mmu/tdp_pgtable.c
@@ -170,3 +170,14 @@ int tdp_mmu_max_mapping_level(struct kvm *kvm,
 {
 	return kvm_mmu_max_mapping_level(kvm, slot, iter->gfn, PG_LEVEL_NUM);
 }
+
+gfn_t tdp_mmu_max_gfn_exclusive(void)
+{
+	/*
+	 * Bound TDP MMU walks at host.MAXPHYADDR. KVM disallows memslots with
+	 * a gpa range that would exceed the max gfn, and KVM does not create
+	 * MMIO SPTEs for "impossible" gfns, instead sending such accesses down
+	 * the slow emulation path every time.
+	 */
+	return kvm_mmu_max_gfn() + 1;
+}
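The changelog's suggested follow-up, a common API for the maximum GFN, could take a shape like the following. The kvm_arch_tdp_max_gfn_exclusive() name is hypothetical and not part of this series:

	/* Hypothetical arch hook; each architecture would supply its own bound. */
	gfn_t kvm_arch_tdp_max_gfn_exclusive(struct kvm *kvm);

	/* The common TDP MMU could then bound its walks arch-neutrally. */
	static gfn_t tdp_mmu_max_gfn_exclusive(struct kvm *kvm)
	{
		return kvm_arch_tdp_max_gfn_exclusive(kvm);
	}

Until then, keeping the function in tdp_pgtable.c confines the kvm_mmu_max_gfn() dependency to x86-only code.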
From patchwork Thu Dec 8 19:38:56 2022
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:56 -0800
Message-ID: <20221208193857.4090582-37-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Subject: [RFC PATCH 36/37] KVM: x86/mmu: Move is_tdp_mmu_page() to mmu_internal.h
Move is_tdp_mmu_page(), which is x86-specific, into mmu_internal.h. This
prepares for moving tdp_mmu.h into common code.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/mmu_internal.h | 9 +++++++++
 arch/x86/kvm/mmu/tdp_mmu.h      | 9 ---------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index df815cb84bd2..51aef9624521 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -147,4 +147,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);

+#ifdef CONFIG_X86_64
+static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp)
+{
+	return !sp->arch.shadow_mmu_page;
+}
+#else
+static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
+#endif
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */

diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 897608be7f75..607c1417abd1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -71,13 +71,4 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 					u64 *spte);

-#ifdef CONFIG_X86_64
-static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp)
-{
-	return !sp->arch.shadow_mmu_page;
-}
-#else
-static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
-#endif
-
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
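The !CONFIG_X86_64 stub matters because it is constant-false: callers can branch on is_tdp_mmu_page() unconditionally and the compiler discards the TDP MMU path entirely on 32-bit builds. A sketch of that pattern; both helpers named in the branches are hypothetical:

	static void account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
	{
		/*
		 * With the constant-false stub, the first branch is dead code
		 * on 32-bit builds and is eliminated at compile time.
		 */
		if (is_tdp_mmu_page(sp))
			account_tdp_mmu_page(kvm, sp);		/* hypothetical */
		else
			account_shadow_mmu_page(kvm, sp);	/* hypothetical */
	}

Keeping the predicate in mmu_internal.h (rather than tdp_mmu.h) lets tdp_mmu.h move to common code without dragging the x86-only sp->arch.shadow_mmu_page field along.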
From patchwork Thu Dec 8 19:38:57 2022
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Date: Thu, 8 Dec 2022 11:38:57 -0800
Message-ID: <20221208193857.4090582-38-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Subject: [RFC PATCH 37/37] KVM: MMU: Move the TDP MMU to common code
Move tdp_mmu.[ch] from arch/x86 into the common code directories. This
will allow other architectures to use the TDP MMU in the future.

No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/Makefile                       | 2 +-
 arch/x86/kvm/mmu/mmu.c                      | 3 ++-
 {arch/x86/kvm/mmu => include/kvm}/tdp_mmu.h | 6 +++++-
 virt/kvm/Makefile.kvm                       | 1 +
 {arch/x86 => virt}/kvm/mmu/tdp_mmu.c        | 8 +++-----
 5 files changed, 12 insertions(+), 8 deletions(-)
 rename {arch/x86/kvm/mmu => include/kvm}/tdp_mmu.h (94%)
 rename {arch/x86 => virt}/kvm/mmu/tdp_mmu.c (99%)

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index cb9ae306892a..06b61fdea539 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -18,7 +18,7 @@ ifdef CONFIG_HYPERV
 kvm-y += kvm_onhyperv.o
 endif

-kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o mmu/tdp_mmu.o
+kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o

 kvm-$(CONFIG_KVM_XEN) += xen.o
 kvm-$(CONFIG_KVM_SMM) += smm.o

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f2602ee1771f..8653776bca6f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -19,7 +19,6 @@
 #include "ioapic.h"
 #include "mmu.h"
 #include "mmu_internal.h"
-#include "tdp_mmu.h"
 #include "x86.h"
 #include "kvm_cache_regs.h"
 #include "smm.h"
@@ -27,6 +26,8 @@
 #include "cpuid.h"
 #include "spte.h"

+#include <kvm/tdp_mmu.h>
+
 #include <...>
 #include <...>
 #include <...>

diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/include/kvm/tdp_mmu.h
similarity index 94%
rename from arch/x86/kvm/mmu/tdp_mmu.h
rename to include/kvm/tdp_mmu.h
index 607c1417abd1..538c848149c9 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/include/kvm/tdp_mmu.h
@@ -5,7 +5,11 @@

 #include <...>

-#include "spte.h"
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+#include <...>

 int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);

diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index 58b595ac9b8d..942681308140 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -14,3 +14,4 @@ kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
 kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o

 kvm-$(CONFIG_HAVE_TDP_MMU) += $(KVM)/mmu/tdp_iter.o
+kvm-$(CONFIG_HAVE_TDP_MMU) += $(KVM)/mmu/tdp_mmu.o

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/virt/kvm/mmu/tdp_mmu.c
similarity index 99%
rename from arch/x86/kvm/mmu/tdp_mmu.c
rename to virt/kvm/mmu/tdp_mmu.c
index c950d688afea..5ca8892ebef5 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/virt/kvm/mmu/tdp_mmu.c
@@ -1,11 +1,9 @@
 // SPDX-License-Identifier: GPL-2.0
-#include "mmu.h"
-#include "mmu_internal.h"
-#include "tdp_mmu.h"
-#include "spte.h"
-
+#include <...>
+#include <...>
 #include <kvm/tdp_iter.h>
+#include <...>
 #include <...>
 #include <...>
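With both objects now built out of virt/kvm/mmu/ behind CONFIG_HAVE_TDP_MMU, a second architecture would opt in roughly as sketched below. The series itself only shows the Makefile.kvm rules; where the HAVE_TDP_MMU symbol is declared is not visible in this excerpt, so the Kconfig wiring here is an illustration, not part of the series:

	# arch/<arch>/kvm/Kconfig (illustrative)
	config KVM
		tristate "Kernel-based Virtual Machine (KVM) support"
		select HAVE_TDP_MMU

	# The generic rules added by this series then pick up the objects:
	#   kvm-$(CONFIG_HAVE_TDP_MMU) += $(KVM)/mmu/tdp_iter.o
	#   kvm-$(CONFIG_HAVE_TDP_MMU) += $(KVM)/mmu/tdp_mmu.o

Beyond the build wiring, the architecture would also need to provide the arch hooks the common TDP MMU calls (the tdp_pgtable-style helpers factored out earlier in this series).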