From patchwork Fri Mar 11 00:25:16 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604195
Date: Fri, 11 Mar 2022 00:25:16 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-15-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 14/26] KVM: x86/mmu: Decouple rmap_add() and link_shadow_page() from kvm_vcpu
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner, David Matlack
Allow adding new entries to the rmap and linking shadow pages without a
struct kvm_vcpu pointer by moving the implementation of rmap_add() and
link_shadow_page() into inner helper functions.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 43 +++++++++++++++++++++++++++---------------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d7ad71be6c52..c57070ed157d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -725,9 +725,9 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
 
-static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_vcpu *vcpu)
+static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_mmu_memory_cache *cache)
 {
-	return kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_pte_list_desc_cache);
+	return kvm_mmu_memory_cache_alloc(cache);
 }
 
 static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
@@ -874,7 +874,7 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
 /*
  * Returns the number of pointers in the rmap chain, not counting the new one.
  */
-static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
+static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 			struct kvm_rmap_head *rmap_head)
 {
 	struct pte_list_desc *desc;
@@ -885,7 +885,7 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
 		rmap_head->val = (unsigned long)spte;
 	} else if (!(rmap_head->val & 1)) {
 		rmap_printk("%p %llx 1->many\n", spte, *spte);
-		desc = mmu_alloc_pte_list_desc(vcpu);
+		desc = mmu_alloc_pte_list_desc(cache);
 		desc->sptes[0] = (u64 *)rmap_head->val;
 		desc->sptes[1] = spte;
 		desc->spte_count = 2;
@@ -897,7 +897,7 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
 	while (desc->spte_count == PTE_LIST_EXT) {
 		count += PTE_LIST_EXT;
 		if (!desc->more) {
-			desc->more = mmu_alloc_pte_list_desc(vcpu);
+			desc->more = mmu_alloc_pte_list_desc(cache);
 			desc = desc->more;
 			desc->spte_count = 0;
 			break;
@@ -1596,8 +1596,10 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 #define RMAP_RECYCLE_THRESHOLD 1000
 
-static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn)
+static void __rmap_add(struct kvm *kvm,
+		       struct kvm_mmu_memory_cache *cache,
+		       const struct kvm_memory_slot *slot,
+		       u64 *spte, gfn_t gfn)
 {
 	struct kvm_mmu_page *sp;
 	struct kvm_rmap_head *rmap_head;
@@ -1606,15 +1608,21 @@ static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
 	sp = sptep_to_sp(spte);
 	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-	rmap_count = pte_list_add(vcpu, spte, rmap_head);
+	rmap_count = pte_list_add(cache, spte, rmap_head);
 
 	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
-		kvm_unmap_rmapp(vcpu->kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
+		kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
 		kvm_flush_remote_tlbs_with_address(
-				vcpu->kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+				kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
 	}
 }
 
+static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
+		     u64 *spte, gfn_t gfn)
+{
+	__rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn);
+}
+
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
@@ -1682,13 +1690,13 @@ static unsigned kvm_page_table_hashfn(gfn_t gfn)
 	return hash_64(gfn, KVM_MMU_HASH_SHIFT);
 }
 
-static void mmu_page_add_parent_pte(struct kvm_vcpu *vcpu,
+static void mmu_page_add_parent_pte(struct kvm_mmu_memory_cache *cache,
 				    struct kvm_mmu_page *sp, u64 *parent_pte)
 {
 	if (!parent_pte)
 		return;
 
-	pte_list_add(vcpu, parent_pte, &sp->parent_ptes);
+	pte_list_add(cache, parent_pte, &sp->parent_ptes);
 }
 
 static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp,
@@ -2307,8 +2315,8 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator)
 	__shadow_walk_next(iterator, *iterator->sptep);
 }
 
-static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
-			     struct kvm_mmu_page *sp)
+static void __link_shadow_page(struct kvm_mmu_memory_cache *cache, u64 *sptep,
+			       struct kvm_mmu_page *sp)
 {
 	u64 spte;
 
@@ -2318,12 +2326,17 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
 
 	mmu_spte_set(sptep, spte);
 
-	mmu_page_add_parent_pte(vcpu, sp, sptep);
+	mmu_page_add_parent_pte(cache, sp, sptep);
 
 	if (sp->unsync_children || sp->unsync)
 		mark_unsync(sptep);
 }
 
+static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, struct kvm_mmu_page *sp)
+{
+	__link_shadow_page(&vcpu->arch.mmu_pte_list_desc_cache, sptep, sp);
+}
+
 static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 				 unsigned direct_access)
 {
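
For illustration only (not part of the patch): a minimal sketch of what the
inner helpers enable, namely a VM-scoped path that has no vCPU topping up and
passing its own pte_list_desc cache. The function name hypothetical_vm_path
and the on-stack cache are assumptions invented for this example; __rmap_add(),
kvm_mmu_topup_memory_cache(), kvm_mmu_free_memory_cache(), pte_list_desc_cache,
and PTE_LIST_EXT all exist in the code touched above.

/*
 * Illustrative only: a hypothetical VM-ioctl path that adds an rmap
 * entry without a vCPU, using a private, separately topped-up cache.
 */
static int hypothetical_vm_path(struct kvm *kvm,
				const struct kvm_memory_slot *slot,
				u64 *spte, gfn_t gfn)
{
	struct kvm_mmu_memory_cache cache = {
		.gfp_zero = __GFP_ZERO,
		.kmem_cache = pte_list_desc_cache,
	};
	int r;

	/* Fill the cache up front, before taking the MMU lock. */
	r = kvm_mmu_topup_memory_cache(&cache, PTE_LIST_EXT);
	if (r)
		return r;

	write_lock(&kvm->mmu_lock);
	/* Only kvm + cache are needed; no vCPU anywhere in sight. */
	__rmap_add(kvm, &cache, slot, spte, gfn);
	write_unlock(&kvm->mmu_lock);

	kvm_mmu_free_memory_cache(&cache);
	return 0;
}

This is the shape any future vCPU-less caller could take: the cache argument
decides where pte_list_desc objects come from, and the struct kvm argument
replaces vcpu->kvm for the unmap and remote TLB flush.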