From patchwork Fri Mar 11 00:25:03 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604181
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:03 +0000
Subject: [PATCH v2 01/26] KVM: x86/mmu: Optimize MMU page cache lookup for all direct SPs
Message-Id: <20220311002528.2230172-2-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner, David Matlack
Commit fb58a9c345f6 ("KVM: x86/mmu: Optimize MMU page cache lookup for
fully direct MMUs") skipped the unsync checks and write flood clearing
for fully direct MMUs. We can extend this further and skip the checks
for all direct shadow pages. Direct shadow pages are never marked
unsync and never have a non-zero write-flooding count.

Checking sp->role.direct also generates better code than checking
direct_map because, due to register pressure, direct_map has to get
shoved onto the stack and then pulled back off.

No functional change intended.

Reviewed-by: Sean Christopherson
Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

base-commit: ce41d078aaa9cf15cbbb4a42878cc6160d76525e

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3b8da8b0745e..3ad67f70e51c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2034,7 +2034,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 					     int direct,
 					     unsigned int access)
 {
-	bool direct_mmu = vcpu->arch.mmu->direct_map;
 	union kvm_mmu_page_role role;
 	struct hlist_head *sp_list;
 	unsigned quadrant;
@@ -2075,7 +2074,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			continue;
 		}
 
-		if (direct_mmu)
+		/* unsync and write-flooding only apply to indirect SPs. */
+		if (sp->role.direct)
 			goto trace_get_page;
 
 		if (sp->unsync) {
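[Editor's note: the following is an illustrative sketch, not part of the
patch. It models the fast path above as standalone userspace C; the names
shadow_page, role_direct, and skip_unsync_checks() are invented stand-ins
for struct kvm_mmu_page, sp->role.direct, and the "goto trace_get_page"
skip in kvm_mmu_get_page().]

#include <stdbool.h>
#include <stdio.h>

struct shadow_page {
	unsigned long gfn;
	bool role_direct;	/* models sp->role.direct */
	bool unsync;		/* models sp->unsync */
};

/*
 * Keying the skip off the per-SP role (rather than a per-vCPU direct_map
 * flag) lets every direct SP take the fast path, not only SPs belonging
 * to fully direct MMUs.
 */
static bool skip_unsync_checks(const struct shadow_page *sp)
{
	return sp->role_direct;
}

int main(void)
{
	struct shadow_page direct_sp   = { .gfn = 0x100, .role_direct = true };
	struct shadow_page indirect_sp = { .gfn = 0x200, .unsync = true };

	printf("direct SP skips checks: %d\n", skip_unsync_checks(&direct_sp));
	printf("indirect SP skips checks: %d\n", skip_unsync_checks(&indirect_sp));
	return 0;
}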
From patchwork Fri Mar 11 00:25:04 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604183
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:04 +0000
Subject: [PATCH v2 02/26] KVM: x86/mmu: Use a bool for direct
Message-Id: <20220311002528.2230172-3-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner, David Matlack
The parameter "direct" can either be true or false, and all of the
callers pass in a bool variable or true/false literal, so just use the
type bool.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3ad67f70e51c..146df73a982e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1706,7 +1706,7 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
 	mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, bool direct)
 {
 	struct kvm_mmu_page *sp;
 
@@ -2031,7 +2031,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 					     gfn_t gfn,
 					     gva_t gaddr,
 					     unsigned level,
-					     int direct,
+					     bool direct,
 					     unsigned int access)
 {
 	union kvm_mmu_page_role role;
From patchwork Fri Mar 11 00:25:05 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604184
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:05 +0000
Subject: [PATCH v2 03/26] KVM: x86/mmu: Derive shadow MMU page role from parent
Message-Id: <20220311002528.2230172-4-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner, David Matlack

Instead of computing the shadow page role from scratch for every new
page, we can derive most of the information from the parent shadow
page. This avoids redundant calculations and reduces the number of
parameters to kvm_mmu_get_page().

Preemptively split out the role calculation to a separate function for
use in a following commit.

No functional change intended.
Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c         | 91 ++++++++++++++++++++++++----------
 arch/x86/kvm/mmu/paging_tmpl.h |  9 ++--
 2 files changed, 71 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 146df73a982e..23c2004c6435 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2027,30 +2027,14 @@ static void clear_sp_write_flooding_count(u64 *spte)
 	__clear_sp_write_flooding_count(sptep_to_sp(spte));
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
-					     gfn_t gfn,
-					     gva_t gaddr,
-					     unsigned level,
-					     bool direct,
-					     unsigned int access)
+static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+					     union kvm_mmu_page_role role)
 {
-	union kvm_mmu_page_role role;
 	struct hlist_head *sp_list;
-	unsigned quadrant;
 	struct kvm_mmu_page *sp;
 	int collisions = 0;
 	LIST_HEAD(invalid_list);
 
-	role = vcpu->arch.mmu->mmu_role.base;
-	role.level = level;
-	role.direct = direct;
-	role.access = access;
-	if (role.has_4_byte_gpte) {
-		quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
-		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
-		role.quadrant = quadrant;
-	}
-
 	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	for_each_valid_sp(vcpu->kvm, sp, sp_list) {
 		if (sp->gfn != gfn) {
@@ -2068,7 +2052,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			 * Unsync pages must not be left as is, because the new
 			 * upper-level page will be write-protected.
 			 */
-			if (level > PG_LEVEL_4K && sp->unsync)
+			if (role.level > PG_LEVEL_4K && sp->unsync)
 				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
 							 &invalid_list);
 			continue;
@@ -2107,14 +2091,14 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 	++vcpu->kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_page(vcpu, direct);
+	sp = kvm_mmu_alloc_page(vcpu, role.direct);
 
 	sp->gfn = gfn;
 	sp->role = role;
 	hlist_add_head(&sp->hash_link, sp_list);
-	if (!direct) {
+	if (!role.direct) {
 		account_shadowed(vcpu->kvm, sp);
-		if (level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
+		if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
 			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
 	}
 	trace_kvm_mmu_get_page(sp, true);
@@ -2126,6 +2110,51 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
+static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, u32 access)
+{
+	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
+	union kvm_mmu_page_role role;
+
+	role = parent_sp->role;
+	role.level--;
+	role.access = access;
+	role.direct = direct;
+
+	/*
+	 * If the guest has 4-byte PTEs then that means it's using 32-bit,
+	 * 2-level, non-PAE paging. KVM shadows such guests using 4 PAE page
+	 * directories, each mapping 1/4 of the guest's linear address space
+	 * (1GiB). The shadow pages for those 4 page directories are
+	 * pre-allocated and assigned a separate quadrant in their role.
+	 *
+	 * Since we are allocating a child shadow page and there are only 2
+	 * levels, this must be a PG_LEVEL_4K shadow page. Here the quadrant
+	 * will either be 0 or 1 because it maps 1/2 of the address space mapped
+	 * by the guest's PG_LEVEL_4K page table (or 4MiB huge page) that it
+	 * is shadowing. In this case, the quadrant can be derived by the index
+	 * of the SPTE that points to the new child shadow page in the page
+	 * directory (parent_sp). Specifically, every 2 SPTEs in parent_sp
+	 * shadow one half of a guest's page table (or 4MiB huge page) so the
+	 * quadrant is just the parity of the index of the SPTE.
+	 */
+	if (role.has_4_byte_gpte) {
+		BUG_ON(role.level != PG_LEVEL_4K);
+		role.quadrant = (sptep - parent_sp->spt) % 2;
+	}
+
+	return role;
+}
+
+static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
+						 u64 *sptep, gfn_t gfn,
+						 bool direct, u32 access)
+{
+	union kvm_mmu_page_role role;
+
+	role = kvm_mmu_child_role(sptep, direct, access);
+	return kvm_mmu_get_page(vcpu, gfn, role);
+}
+
 static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
 					struct kvm_vcpu *vcpu, hpa_t root,
 					u64 addr)
@@ -2930,8 +2959,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		if (is_shadow_present_pte(*it.sptep))
 			continue;
 
-		sp = kvm_mmu_get_page(vcpu, base_gfn, it.addr,
-				      it.level - 1, true, ACC_ALL);
+		sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);
 
 		link_shadow_page(vcpu, it.sptep, sp);
 		if (fault->is_tdp && fault->huge_page_disallowed &&
@@ -3316,9 +3344,22 @@ static int mmu_check_root(struct kvm_vcpu *vcpu, gfn_t root_gfn)
 static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
 			    u8 level, bool direct)
 {
+	union kvm_mmu_page_role role;
 	struct kvm_mmu_page *sp;
+	unsigned int quadrant;
+
+	role = vcpu->arch.mmu->mmu_role.base;
+	role.level = level;
+	role.direct = direct;
+	role.access = ACC_ALL;
+
+	if (role.has_4_byte_gpte) {
+		quadrant = gva >> (PAGE_SHIFT + (PT64_PT_BITS * level));
+		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
+		role.quadrant = quadrant;
+	}
 
-	sp = kvm_mmu_get_page(vcpu, gfn, gva, level, direct, ACC_ALL);
+	sp = kvm_mmu_get_page(vcpu, gfn, role);
 	++sp->root_count;
 
 	return __pa(sp->spt);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 252c77805eb9..c3909a07e938 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -683,8 +683,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		if (!is_shadow_present_pte(*it.sptep)) {
 			table_gfn = gw->table_gfn[it.level - 2];
 			access = gw->pt_access[it.level - 2];
-			sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
-					      it.level-1, false, access);
+			sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
+						  false, access);
+
 			/*
 			 * We must synchronize the pagetable before linking it
 			 * because the guest doesn't need to flush tlb when
@@ -740,8 +741,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		drop_large_spte(vcpu, it.sptep);
 
 		if (!is_shadow_present_pte(*it.sptep)) {
-			sp = kvm_mmu_get_page(vcpu, base_gfn, fault->addr,
-					      it.level - 1, true, direct_access);
+			sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn,
+						  true, direct_access);
 			link_shadow_page(vcpu, it.sptep, sp);
 			if (fault->huge_page_disallowed &&
 			    fault->req_level >= it.level)
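[Editor's note: the following is an illustrative sketch, not part of the
patch. It reproduces just the quadrant derivation from kvm_mmu_child_role()
above as standalone userspace C; SPTES_PER_SP and the local spt[] array are
invented stand-ins for the parent SP's 512-entry shadow page table.]

#include <stdint.h>
#include <stdio.h>

#define SPTES_PER_SP 512

int main(void)
{
	uint64_t spt[SPTES_PER_SP];	/* stands in for parent_sp->spt */
	uint64_t *sptep;

	/*
	 * With 4-byte guest PTEs, each pair of SPTEs in the parent shadows
	 * one half of a guest page table, so the child's quadrant is the
	 * parity of the SPTE index, mirroring (sptep - parent_sp->spt) % 2.
	 */
	for (sptep = &spt[0]; sptep < &spt[4]; sptep++)
		printf("SPTE index %td -> quadrant %td\n",
		       sptep - spt, (sptep - spt) % 2);
	return 0;
}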
From patchwork Fri Mar 11 00:25:06 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604185
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:06 +0000
Subject: [PATCH v2 04/26] KVM: x86/mmu: Decompose kvm_mmu_get_page() into separate functions
Message-Id: <20220311002528.2230172-5-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner, David Matlack

Decompose kvm_mmu_get_page() into separate helper functions to increase
readability and prepare for allocating shadow pages without a vcpu
pointer.

Specifically, pull the guts of kvm_mmu_get_page() into 3 helper
functions:

__kvm_mmu_find_shadow_page() -
  Walks the page hash checking for any existing mmu pages that match
  the given gfn and role. Does not attempt to synchronize the page if
  it is unsync.

kvm_mmu_find_shadow_page() -
  Wraps __kvm_mmu_find_shadow_page() and handles syncing if necessary.

kvm_mmu_new_shadow_page() -
  Allocates and initializes an entirely new kvm_mmu_page.
  This currently requires a vcpu pointer for allocation and looking up
  the memslot, but that will be removed in a future commit.

Note, kvm_mmu_new_shadow_page() is temporary and will be removed in a
subsequent commit. The name uses "new" rather than the more typical
"alloc" to avoid clashing with the existing kvm_mmu_alloc_page().

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c         | 132 ++++++++++++++++++++++++---------
 arch/x86/kvm/mmu/paging_tmpl.h |   5 +-
 arch/x86/kvm/mmu/spte.c        |   5 +-
 3 files changed, 101 insertions(+), 41 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 23c2004c6435..80dbfe07c87b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2027,16 +2027,25 @@ static void clear_sp_write_flooding_count(u64 *spte)
 	__clear_sp_write_flooding_count(sptep_to_sp(spte));
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
-					     union kvm_mmu_page_role role)
+/*
+ * Searches for an existing SP for the given gfn and role. Makes no attempt to
+ * sync the SP if it is marked unsync.
+ *
+ * If creating an upper-level page table, zaps unsynced pages for the same
+ * gfn and adds them to the invalid_list. It's the caller's responsibility
+ * to call kvm_mmu_commit_zap_page() on invalid_list.
+ */
+static struct kvm_mmu_page *__kvm_mmu_find_shadow_page(struct kvm *kvm,
+						       gfn_t gfn,
+						       union kvm_mmu_page_role role,
+						       struct list_head *invalid_list)
 {
 	struct hlist_head *sp_list;
 	struct kvm_mmu_page *sp;
 	int collisions = 0;
-	LIST_HEAD(invalid_list);
 
-	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
-	for_each_valid_sp(vcpu->kvm, sp, sp_list) {
+	sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
+	for_each_valid_sp(kvm, sp, sp_list) {
 		if (sp->gfn != gfn) {
 			collisions++;
 			continue;
@@ -2053,60 +2062,109 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 			 * Unsync pages must not be left as is, because the new
 			 * upper-level page will be write-protected.
 			 */
 			if (role.level > PG_LEVEL_4K && sp->unsync)
-				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
-							 &invalid_list);
+				kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
+
 			continue;
 		}
 
-		/* unsync and write-flooding only apply to indirect SPs. */
-		if (sp->role.direct)
-			goto trace_get_page;
+		/* Write-flooding is only tracked for indirect SPs. */
+		if (!sp->role.direct)
+			__clear_sp_write_flooding_count(sp);
 
-		if (sp->unsync) {
-			/*
-			 * The page is good, but is stale. kvm_sync_page does
-			 * get the latest guest state, but (unlike mmu_unsync_children)
-			 * it doesn't write-protect the page or mark it synchronized!
-			 * This way the validity of the mapping is ensured, but the
-			 * overhead of write protection is not incurred until the
-			 * guest invalidates the TLB mapping. This allows multiple
-			 * SPs for a single gfn to be unsync.
-			 *
-			 * If the sync fails, the page is zapped. If so, break
-			 * in order to rebuild it.
-			 */
-			if (!kvm_sync_page(vcpu, sp, &invalid_list))
-				break;
+		goto out;
+	}
 
-			WARN_ON(!list_empty(&invalid_list));
-			kvm_flush_remote_tlbs(vcpu->kvm);
-		}
+	sp = NULL;
 
-		__clear_sp_write_flooding_count(sp);
+out:
+	if (collisions > kvm->stat.max_mmu_page_hash_collisions)
+		kvm->stat.max_mmu_page_hash_collisions = collisions;
+
+	return sp;
+}
 
-trace_get_page:
-		trace_kvm_mmu_get_page(sp, false);
+/*
+ * Looks up an existing SP for the given gfn and role if one exists. The
+ * returned SP is guaranteed to be synced.
+ */
+static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
+						     gfn_t gfn,
+						     union kvm_mmu_page_role role)
+{
+	struct kvm_mmu_page *sp;
+	LIST_HEAD(invalid_list);
+
+	sp = __kvm_mmu_find_shadow_page(vcpu->kvm, gfn, role, &invalid_list);
+	if (!sp)
 		goto out;
+
+	if (sp->unsync) {
+		/*
+		 * The page is good, but is stale. kvm_sync_page does
+		 * get the latest guest state, but (unlike mmu_unsync_children)
+		 * it doesn't write-protect the page or mark it synchronized!
+		 * This way the validity of the mapping is ensured, but the
+		 * overhead of write protection is not incurred until the
+		 * guest invalidates the TLB mapping. This allows multiple
+		 * SPs for a single gfn to be unsync.
+		 *
+		 * If the sync fails, the page is zapped and added to the
+		 * invalid_list.
+		 */
+		if (!kvm_sync_page(vcpu, sp, &invalid_list)) {
+			sp = NULL;
+			goto out;
+		}
+
+		WARN_ON(!list_empty(&invalid_list));
+		kvm_flush_remote_tlbs(vcpu->kvm);
 	}
 
+out:
+	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+	return sp;
+}
+
+static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
+						    gfn_t gfn,
+						    union kvm_mmu_page_role role)
+{
+	struct kvm_mmu_page *sp;
+	struct hlist_head *sp_list;
+
 	++vcpu->kvm->stat.mmu_cache_miss;
 
 	sp = kvm_mmu_alloc_page(vcpu, role.direct);
-
 	sp->gfn = gfn;
 	sp->role = role;
+
+	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	hlist_add_head(&sp->hash_link, sp_list);
+
 	if (!role.direct) {
 		account_shadowed(vcpu->kvm, sp);
 		if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
 			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
 	}
-	trace_kvm_mmu_get_page(sp, true);
-out:
-	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 
-	if (collisions > vcpu->kvm->stat.max_mmu_page_hash_collisions)
-		vcpu->kvm->stat.max_mmu_page_hash_collisions = collisions;
+	return sp;
+}
+
+static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+					     union kvm_mmu_page_role role)
+{
+	struct kvm_mmu_page *sp;
+	bool created = false;
+
+	sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
+	if (sp)
+		goto out;
+
+	created = true;
+	sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
+
+out:
+	trace_kvm_mmu_get_page(sp, created);
 	return sp;
 }
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index c3909a07e938..55cac59b9c9b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -692,8 +692,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		 * the gpte is changed from non-present to present.
 		 * Otherwise, the guest may use the wrong mapping.
 		 *
-		 * For PG_LEVEL_4K, kvm_mmu_get_page() has already
-		 * synchronized it transiently via kvm_sync_page().
+		 * For PG_LEVEL_4K, kvm_mmu_get_existing_sp() has
+		 * already synchronized it transiently via
+		 * kvm_sync_page().
 		 *
 		 * For higher level pagetable, we synchronize it via
 		 * the slower mmu_sync_children(). If it needs to
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..d10189d9c877 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -150,8 +150,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		/*
 		 * Optimization: for pte sync, if spte was writable the hash
 		 * lookup is unnecessary (and expensive). Write protection
-		 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
-		 * Same reasoning can be applied to dirty page accounting.
+		 * is responsibility of kvm_mmu_create_sp() and
+		 * kvm_mmu_sync_roots(). Same reasoning can be applied to dirty
+		 * page accounting.
 		 */
 		if (is_writable_pte(old_spte))
 			goto out;
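[Editor's note: the following is an illustrative sketch, not part of the
patch. It models the find-else-create split above as standalone userspace
C; find_sp(), new_sp(), and get_sp() are invented stand-ins for
kvm_mmu_find_shadow_page(), kvm_mmu_new_shadow_page(), and the recomposed
kvm_mmu_get_page(), with syncing, zapping, and locking elided.]

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct sp {
	unsigned long gfn;
	struct sp *next;		/* models the hash bucket chain */
};

static struct sp *bucket;

static struct sp *find_sp(unsigned long gfn)
{
	struct sp *sp;

	for (sp = bucket; sp; sp = sp->next)
		if (sp->gfn == gfn)
			return sp;	/* cache hit; sync step elided */
	return NULL;
}

static struct sp *new_sp(unsigned long gfn)
{
	struct sp *sp = calloc(1, sizeof(*sp));

	if (!sp)
		abort();
	sp->gfn = gfn;
	sp->next = bucket;
	bucket = sp;
	return sp;
}

/* The recomposition: look up first, allocate on miss, trace exactly once. */
static struct sp *get_sp(unsigned long gfn)
{
	struct sp *sp = find_sp(gfn);
	bool created = false;

	if (!sp) {
		created = true;
		sp = new_sp(gfn);
	}
	printf("get gfn=%#lx created=%d\n", gfn, created);	/* trace point */
	return sp;
}

int main(void)
{
	get_sp(0x100);	/* miss -> created=1 */
	get_sp(0x100);	/* hit  -> created=0 */
	return 0;
}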
From patchwork Fri Mar 11 00:25:07 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604186
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:07 +0000
Subject: [PATCH v2 05/26] KVM: x86/mmu: Rename shadow MMU functions that deal with shadow pages
Message-Id: <20220311002528.2230172-6-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner, David Matlack
Rename 3 functions:

  kvm_mmu_get_page()   -> kvm_mmu_get_shadow_page()
  kvm_mmu_alloc_page() -> kvm_mmu_alloc_shadow_page()
  kvm_mmu_free_page()  -> kvm_mmu_free_shadow_page()

This change makes it clear that these functions deal with shadow pages
rather than struct pages. Prefer "shadow_page" over the shorter "sp"
since these are core routines.

Signed-off-by: David Matlack
Acked-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 80dbfe07c87b..b6fb50e32291 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1668,7 +1668,7 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }
 
-static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
+static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
 	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
 	hlist_del(&sp->hash_link);
@@ -1706,7 +1706,8 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
 	mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, bool direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
+						      bool direct)
 {
 	struct kvm_mmu_page *sp;
 
@@ -2134,7 +2135,7 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
 
 	++vcpu->kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_page(vcpu, role.direct);
+	sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
 
 	sp->gfn = gfn;
 	sp->role = role;
@@ -2150,8 +2151,9 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
-					     union kvm_mmu_page_role role)
+static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
+						    gfn_t gfn,
+						    union kvm_mmu_page_role role)
 {
 	struct kvm_mmu_page *sp;
 	bool created = false;
@@ -2210,7 +2212,7 @@ static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
 	union kvm_mmu_page_role role;
 
 	role = kvm_mmu_child_role(sptep, direct, access);
-	return kvm_mmu_get_page(vcpu, gfn, role);
+	return kvm_mmu_get_shadow_page(vcpu, gfn, role);
 }
 
 static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
@@ -2486,7 +2488,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
-		kvm_mmu_free_page(sp);
+		kvm_mmu_free_shadow_page(sp);
 	}
 }
 
@@ -3417,7 +3419,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
 		role.quadrant = quadrant;
 	}
 
-	sp = kvm_mmu_get_page(vcpu, gfn, role);
+	sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
 	++sp->root_count;
 
 	return __pa(sp->spt);
A54OuPtVrIT/1g7o3yy9wrm+WFzZFhuIaFAK0CDO/gJBFzmHnhMWGPNd5MLO3oeSADwf qhOg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=E1f7dwVMCTIQxzMxcNwcOjBa8usGSsNkBUA4TV7/K5Q=; b=J85QmH6RmNZ/FPZgF0344zeu8DegTV2UdOSmO55xQc17v16rJuzOVLvPj9RwSZhn+O FyqdCB6gr7UU75zuU/wh4LwrfNga1yhJg2LYgRjaQAzu372HNHeerwyF4epcZpcCJ9Xl R52y3QrRimh9wZSCsp5GcDlgP+RwT0lUEZkbfASXP50jmvoc1E2YE40rp3qg/44ppPfd aqbjXVL7N2uCGmmDlSCXTUvxrdW1dN3cI/eTJQaF4OxQgIpzGFv+LE/Xdv/i/COmBqsE bgKkZEf1BZ9xM1HImGDcMmGswP34kD4wqc0a1oz5Y1WU3XnBEgrESbg0LFx3/bRx/t/i JTWQ== X-Gm-Message-State: AOAM532VrtMtFYjVc3GaERP30+97CE/nkqVRQPnQ5OkTZr50mnqc1AZX LvBb7rd/qk3yqbZAAyov63dZstzIfTMIPA== X-Google-Smtp-Source: ABdhPJwCJvv1gmlTzqVHHEXadtDnxMhsFf7CFfV0r1s95pboZSQVSEZ+RKGDk2zGMU2B9w5mZno63N0XkT+ONQ== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:a3c3:b0:151:ec83:d88 with SMTP id q3-20020a170902a3c300b00151ec830d88mr8062530plb.9.1646958341309; Thu, 10 Mar 2022 16:25:41 -0800 (PST) Date: Fri, 11 Mar 2022 00:25:08 +0000 In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com> Message-Id: <20220311002528.2230172-7-dmatlack@google.com> Mime-Version: 1.0 References: <20220311002528.2230172-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.723.g4982287a31-goog Subject: [PATCH v2 06/26] KVM: x86/mmu: Pass memslot to kvm_mmu_new_shadow_page() From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220310_162543_944283_123A4B20 X-CRM114-Status: GOOD ( 13.90 ) X-Spam-Score: -7.7 (-------) X-Spam-Report: Spam detection software, running on the system "bombadil.infradead.org", has NOT identified this incoming email as spam. The original message has been attached to this so you can view it or label similar future email. If you have any questions, see the administrator of that system for details. Content preview: Passing the memslot to kvm_mmu_new_shadow_page() avoids the need for the vCPU pointer when write-protecting indirect 4k shadow pages. This moves us closer to being able to create new shadow pages duri [...] 
Passing the memslot to kvm_mmu_new_shadow_page() avoids the need for the
vCPU pointer when write-protecting indirect 4k shadow pages. This moves
us closer to being able to create new shadow pages during VM ioctls for
eager page splitting, where there is no vCPU pointer.

This change does not negatively impact "Populate memory time" for ept=Y
or ept=N configurations, since kvm_vcpu_gfn_to_memslot() caches the last
used slot. So even though we now look up the slot more often, it is a
very cheap check.

Opportunistically move the code that write-protects GFNs shadowed by
PG_LEVEL_4K shadow pages into account_shadowed() to reduce indentation
and consolidate the code. This also eliminates a memslot lookup.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b6fb50e32291..519910938478 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -793,16 +793,14 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn)
 	update_gfn_disallow_lpage_count(slot, gfn, -1);
 }
 
-static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
+static void account_shadowed(struct kvm *kvm,
+			     struct kvm_memory_slot *slot,
+			     struct kvm_mmu_page *sp)
 {
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *slot;
 	gfn_t gfn;
 
 	kvm->arch.indirect_shadow_pages++;
 	gfn = sp->gfn;
-	slots = kvm_memslots_for_spte_role(kvm, sp->role);
-	slot = __gfn_to_memslot(slots, gfn);
 
 	/* the non-leaf shadow pages are keeping readonly. */
 	if (sp->role.level > PG_LEVEL_4K)
@@ -810,6 +808,9 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 				    KVM_PAGE_TRACK_WRITE);
 
 	kvm_mmu_gfn_disallow_lpage(slot, gfn);
+
+	if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
+		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
 }
 
 void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -2127,6 +2128,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 }
 
 static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
+						    struct kvm_memory_slot *slot,
 						    gfn_t gfn,
 						    union kvm_mmu_page_role role)
 {
@@ -2142,11 +2144,8 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
 	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	hlist_add_head(&sp->hash_link, sp_list);
 
-	if (!role.direct) {
-		account_shadowed(vcpu->kvm, sp);
-		if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
-			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
-	}
+	if (!role.direct)
+		account_shadowed(vcpu->kvm, slot, sp);
 
 	return sp;
 }
@@ -2155,6 +2154,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 						    gfn_t gfn,
 						    union kvm_mmu_page_role role)
 {
+	struct kvm_memory_slot *slot;
 	struct kvm_mmu_page *sp;
 	bool created = false;
 
@@ -2163,7 +2163,8 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 		goto out;
 
 	created = true;
-	sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
+	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+	sp = kvm_mmu_new_shadow_page(vcpu, slot, gfn, role);
 
 out:
 	trace_kvm_mmu_get_page(sp, created);
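[Aside: the "very cheap check" above rests on kvm_vcpu_gfn_to_memslot()
remembering which slot it returned last time, so repeated lookups for
the same gfn range hit a fast path. The userspace C sketch below shows
the last-used-slot caching pattern only; it is not the kernel code, and
every name in it (slot_cache, gfn_to_slot, ...) is invented for the
illustration.]

#include <stdbool.h>
#include <stddef.h>

struct memslot {
        unsigned long base_gfn;
        unsigned long npages;
};

struct slot_cache {
        struct memslot *slots;
        size_t nr_slots;
        size_t last;    /* index of the most recently returned slot */
};

static bool slot_contains(const struct memslot *s, unsigned long gfn)
{
        return gfn >= s->base_gfn && gfn < s->base_gfn + s->npages;
}

static struct memslot *gfn_to_slot(struct slot_cache *c, unsigned long gfn)
{
        size_t i;

        /* Fast path: repeated lookups usually hit the same slot. */
        if (c->last < c->nr_slots && slot_contains(&c->slots[c->last], gfn))
                return &c->slots[c->last];

        /* Slow path: scan and remember the hit (the kernel searches smarter). */
        for (i = 0; i < c->nr_slots; i++) {
                if (slot_contains(&c->slots[i], gfn)) {
                        c->last = i;
                        return &c->slots[i];
                }
        }
        return NULL;
}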
From patchwork Fri Mar 11 00:25:09 2022
X-Patchwork-Id: 1604188
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:09 +0000
Message-Id: <20220311002528.2230172-8-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 07/26] KVM: x86/mmu: Separate shadow MMU sp allocation from initialization
Separate the code that allocates a new shadow page from the vCPU caches
from the code that initializes it. This is in preparation for creating
new shadow pages from VM ioctls for eager page splitting, where we do
not have access to the vCPU caches.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 38 ++++++++++++++++++--------------------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 519910938478..e866e05c4ba5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1716,16 +1716,9 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
 	if (!direct)
 		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
+
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
-	/*
-	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
-	 * depends on valid pages being added to the head of the list. See
-	 * comments in kvm_zap_obsolete_pages().
-	 */
-	sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
-	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-	kvm_mod_used_mmu_pages(vcpu->kvm, +1);
 	return sp;
 }
 
@@ -2127,27 +2120,31 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
-static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
-						    struct kvm_memory_slot *slot,
-						    gfn_t gfn,
-						    union kvm_mmu_page_role role)
+static void init_shadow_page(struct kvm *kvm, struct kvm_mmu_page *sp,
+			     struct kvm_memory_slot *slot, gfn_t gfn,
+			     union kvm_mmu_page_role role)
 {
-	struct kvm_mmu_page *sp;
 	struct hlist_head *sp_list;
 
-	++vcpu->kvm->stat.mmu_cache_miss;
+	++kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
 	sp->gfn = gfn;
 	sp->role = role;
+	sp->mmu_valid_gen = kvm->arch.mmu_valid_gen;
 
-	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
+	/*
+	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
+	 * depends on valid pages being added to the head of the list. See
+	 * comments in kvm_zap_obsolete_pages().
+	 */
+	list_add(&sp->link, &kvm->arch.active_mmu_pages);
+	kvm_mod_used_mmu_pages(kvm, 1);
+
+	sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	hlist_add_head(&sp->hash_link, sp_list);
 
 	if (!role.direct)
-		account_shadowed(vcpu->kvm, slot, sp);
-
-	return sp;
+		account_shadowed(kvm, slot, sp);
 }
 
 static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
@@ -2164,7 +2161,8 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 
 	created = true;
 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-	sp = kvm_mmu_new_shadow_page(vcpu, slot, gfn, role);
+	sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
+	init_shadow_page(vcpu->kvm, sp, slot, gfn, role);
 
 out:
 	trace_kvm_mmu_get_page(sp, created);
From patchwork Fri Mar 11 00:25:10 2022
X-Patchwork-Id: 1604189
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:10 +0000
Message-Id: <20220311002528.2230172-9-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 08/26] KVM: x86/mmu: Link spt to sp during allocation
Link the shadow page table to the sp (via set_page_private()) during
allocation rather than initialization. This is a more logical place to
do it because allocation time is also where we do the reverse link
(setting sp->spt).

This creates one extra call to set_page_private(), but multiple calls
are unavoidable either way: doing it during allocation requires one
call per allocation function, and doing it during initialization
requires one call per initialization function.

No functional change intended.

Suggested-by: Ben Gardon
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/tdp_mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index af60922906ef..eecb0215e636 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -274,6 +274,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
 	return sp;
 }
@@ -281,8 +282,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
 			    gfn_t gfn, union kvm_mmu_page_role role)
 {
-	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
-
 	sp->role = role;
 	sp->gfn = gfn;
 	sp->ptep = sptep;
@@ -1410,6 +1409,8 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
 		return NULL;
 	}
 
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
+
 	return sp;
 }
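[Aside: what the two links buy is that, given either the metadata or
the table page, you can reach the other. A toy sketch of the pairing
with invented types; in the kernel the reverse link lives in struct
page::private, set via set_page_private() as in the diff above:]

struct fake_page {
        unsigned long private;  /* plays the role of page->private */
};

struct shadow_page {
        void *spt;              /* forward link: metadata -> table page */
};

static void link_both_ways(struct shadow_page *sp, struct fake_page *pg,
                           void *table)
{
        sp->spt = table;                        /* forward link */
        pg->private = (unsigned long)sp;        /* reverse link */
}

/* Given the page backing a table, recover its metadata. */
static struct shadow_page *page_to_sp(struct fake_page *pg)
{
        return (struct shadow_page *)pg->private;
}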
From patchwork Fri Mar 11 00:25:11 2022
X-Patchwork-Id: 1604190
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:11 +0000
Message-Id: <20220311002528.2230172-10-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 09/26] KVM: x86/mmu: Move huge page split sp allocation code to mmu.c
Move the code that allocates a new shadow page for splitting huge pages
into mmu.c. Currently this code is only used by the TDP MMU but it will
be reused in subsequent commits to also split huge pages mapped by the
shadow MMU.

While here, also shove the GFP complexity down into the allocation
function so that it does not have to be duplicated when the shadow MMU
needs to start allocating SPs for splitting.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c          | 34 ++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/mmu_internal.h |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c      | 34 ++-----------------------------
 3 files changed, 38 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e866e05c4ba5..c12d5016f6dc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1722,6 +1722,40 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
+/*
+ * Allocate a new shadow page, potentially while holding the MMU lock.
+ *
+ * Huge page splitting always uses direct shadow pages since the huge page is
+ * being mapped directly with a lower level page table. Thus there's no need to
+ * allocate the gfns array.
+ */
+struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked)
+{
+	struct kvm_mmu_page *sp;
+	gfp_t gfp;
+
+	/*
+	 * If under the MMU lock, use GFP_NOWAIT to avoid direct reclaim (which
+	 * is slow) and to avoid making any filesystem callbacks (which can end
+	 * up invoking KVM MMU notifiers, resulting in a deadlock).
+	 */
+	gfp = (locked ? GFP_NOWAIT : GFP_KERNEL) | __GFP_ACCOUNT | __GFP_ZERO;
+
+	sp = kmem_cache_alloc(mmu_page_header_cache, gfp);
+	if (!sp)
+		return NULL;
+
+	sp->spt = (void *)__get_free_page(gfp);
+	if (!sp->spt) {
+		kmem_cache_free(mmu_page_header_cache, sp);
+		return NULL;
+	}
+
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
+
+	return sp;
+}
+
 static void mark_unsync(u64 *spte);
 static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1bff453f7cbe..a0648e7ddd33 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -171,4 +171,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked);
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index eecb0215e636..1a43f908d508 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1393,43 +1393,13 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
 	return spte_set;
 }
 
-static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
-{
-	struct kvm_mmu_page *sp;
-
-	gfp |= __GFP_ZERO;
-
-	sp = kmem_cache_alloc(mmu_page_header_cache, gfp);
-	if (!sp)
-		return NULL;
-
-	sp->spt = (void *)__get_free_page(gfp);
-	if (!sp->spt) {
-		kmem_cache_free(mmu_page_header_cache, sp);
-		return NULL;
-	}
-
-	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
-
-	return sp;
-}
-
 static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 						       struct tdp_iter *iter,
 						       bool shared)
 {
 	struct kvm_mmu_page *sp;
 
-	/*
-	 * Since we are allocating while under the MMU lock we have to be
-	 * careful about GFP flags. Use GFP_NOWAIT to avoid blocking on direct
-	 * reclaim and to avoid making any filesystem callbacks (which can end
-	 * up invoking KVM MMU notifiers, resulting in a deadlock).
-	 *
-	 * If this allocation fails we drop the lock and retry with reclaim
-	 * allowed.
-	 */
-	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
+	sp = kvm_mmu_alloc_direct_sp_for_split(true);
 	if (sp)
 		return sp;
 
@@ -1441,7 +1411,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 	write_unlock(&kvm->mmu_lock);
 
 	iter->yielded = true;
-	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
+	sp = kvm_mmu_alloc_direct_sp_for_split(false);
 
 	if (shared)
 		read_lock(&kvm->mmu_lock);
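[Aside: the pattern being consolidated here, in miniature, is "attempt
a no-block allocation while the lock is held; only if that fails, drop
the lock and do a blocking allocation". Below is a userspace analogy
using pthreads; try_alloc_nowait() and alloc_blocking() are stand-ins
for the GFP_NOWAIT and GFP_KERNEL allocations (in the kernel the
difference is purely the GFP flags), and this is not the kernel code:]

#include <pthread.h>
#include <stdlib.h>

/* Stand-ins for the two allocation modes. */
static void *try_alloc_nowait(void) { return malloc(4096); }
static void *alloc_blocking(void)   { return malloc(4096); }

static void *alloc_for_split(pthread_mutex_t *mmu_lock)
{
        void *p = try_alloc_nowait();   /* like GFP_NOWAIT: fail fast */

        if (p)
                return p;

        /* Can't sleep with the lock held, so drop it and retry. */
        pthread_mutex_unlock(mmu_lock);
        p = alloc_blocking();           /* like GFP_KERNEL: may sleep */
        pthread_mutex_lock(mmu_lock);

        return p;       /* caller must recheck any state the lock guards */
}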
From patchwork Fri Mar 11 00:25:12 2022
X-Patchwork-Id: 1604191
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:12 +0000
Message-Id: <20220311002528.2230172-11-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 10/26] KVM: x86/mmu: Use common code to free kvm_mmu_page structs
Use a common function to free kvm_mmu_page structs in the TDP MMU and
the shadow MMU. This reduces the amount of duplicate code and is needed
in subsequent commits that allocate and free kvm_mmu_pages for eager
page splitting.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c          | 8 ++++----
 arch/x86/kvm/mmu/mmu_internal.h | 2 ++
 arch/x86/kvm/mmu/tdp_mmu.c      | 3 +--
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c12d5016f6dc..2dcafbef5ffc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1669,11 +1669,8 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }
 
-static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
+void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
-	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
-	hlist_del(&sp->hash_link);
-	list_del(&sp->link);
 	free_page((unsigned long)sp->spt);
 	if (!sp->role.direct)
 		free_page((unsigned long)sp->gfns);
@@ -2521,6 +2518,9 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
+		MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
+		hlist_del(&sp->hash_link);
+		list_del(&sp->link);
 		kvm_mmu_free_shadow_page(sp);
 	}
 }
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index a0648e7ddd33..5f91e4d07a95 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -173,4 +173,6 @@ void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
 struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked);
 
+void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp);
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 1a43f908d508..184874a82a1b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -64,8 +64,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 
 static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
 {
-	free_page((unsigned long)sp->spt);
-	kmem_cache_free(mmu_page_header_cache, sp);
+	kvm_mmu_free_shadow_page(sp);
 }
 
 /*
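[Aside: the division of labor after this patch, reduced to essentials:
the shared destructor frees only what every shadow page owns, while the
shadow-MMU caller unlinks its extra bookkeeping first (the TDP MMU has
no such bookkeeping, so it calls the destructor directly). A sketch
with invented names, not the kernel code:]

#include <stdlib.h>

struct sp {
        void *spt;
        void *gfns;     /* only allocated for indirect pages */
        int direct;
};

/* Shared by both MMUs: frees memory only, no list manipulation. */
static void sp_free(struct sp *s)
{
        free(s->spt);
        if (!s->direct)
                free(s->gfns);
        free(s);
}

/* Shadow-MMU caller: unlink from its bookkeeping, then free. */
static void sp_commit_zap(struct sp *s)
{
        /* hlist_del()/list_del() equivalents would go here. */
        sp_free(s);
}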
From patchwork Fri Mar 11 00:25:13 2022
X-Patchwork-Id: 1604193
From: David Matlack <dmatlack@google.com>
Date: Fri, 11 Mar 2022 00:25:13 +0000
Message-Id: <20220311002528.2230172-12-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 11/26] KVM: x86/mmu: Use common code to allocate kvm_mmu_page structs from vCPU caches
Now that allocating a kvm_mmu_page struct is isolated to a helper
function, it can be re-used in the TDP MMU.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c          | 3 +--
 arch/x86/kvm/mmu/mmu_internal.h | 1 +
 arch/x86/kvm/mmu/tdp_mmu.c      | 8 +-------
 3 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2dcafbef5ffc..4c8feaeb063d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1704,8 +1704,7 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
 	mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
-						      bool direct)
+struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direct)
 {
 	struct kvm_mmu_page *sp;
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 5f91e4d07a95..d4e2de5f2a6d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -173,6 +173,7 @@ void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
 struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked);
 
+struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direct);
 void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp);
 
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 184874a82a1b..f285fd76717b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -269,13 +269,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 
 static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 {
-	struct kvm_mmu_page *sp;
-
-	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
-	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
-
-	return sp;
+	return kvm_mmu_alloc_shadow_page(vcpu, true);
 }
 
 static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
header.d=lists.infradead.org header.i=@lists.infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=i0d2GDpW; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20210112 header.b=Ji7DjPBU; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org; receiver=) Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by bilbo.ozlabs.org (Postfix) with ESMTPS id 4KF6D55Xjzz9sGK for ; Fri, 11 Mar 2022 11:25:57 +1100 (AEDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=W70WCZw/GsO+bA+G6V9VzNqzXffxGqVkRMsOmm/tFTc=; b=i0d2GDpW7/UWQkoAeR7e1rVbMh E7CI4Oc0ndB4kTXi4dVm7VWXloCM6Cj2njqPN+AiYjCjkJKjOcEnicmnlmoy8kb9wtAfByLdeB1A3 rDesQ9WZNXxiobSUBHuS/CHJaDYjLMyETQcyfERY4N/BRW9KNeRXm1CA1EVFD7MOSGNZLUB19pczj n8VC1XfIJa8I0KJxpj9KWCNDEINkglAXEqg9YyfM72c9AlAJ43hcJCO+JcxClEeTSneNKIH2RR08O AkowrnrLlD7ers1ElwetLZ5rIGjH2ZCAluYF7mGMuFy/TrKooVW0R70n6aM+FK2sOWWhR1FauosTH AcFg0LLA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nST6R-00EPOj-KT; Fri, 11 Mar 2022 00:25:55 +0000 Received: from mail-pg1-x54a.google.com ([2607:f8b0:4864:20::54a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nST6O-00EPLN-Vf for kvm-riscv@lists.infradead.org; Fri, 11 Mar 2022 00:25:54 +0000 Received: by mail-pg1-x54a.google.com with SMTP id 196-20020a6307cd000000b0038027886594so3789771pgh.4 for ; Thu, 10 Mar 2022 16:25:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=FuGhM+ADTv/ZcWx+5nd4XLS/9fWvtmM0zVjBEofzrxU=; b=Ji7DjPBUNinn477C4vHC7v0CuY0YoR6Yspro2QMZ2A3u86aqpm1X0/aNnOB3Onp3w2 HgDtdMQ+gjJdoqlarq8fbzeMvb9spnuW9GS9uYaCSHgwIzu15JEmHu79mAnZ1tLJFwPT oFowkvO9jBF3oyujgESdSYeqoe0gYjAcidMXMJAHHt6OtFUYjbz9/XwZHovpDi/VB3a2 2w1CxktgeA3NrBx+qyKN/4Ey7VJIa/lVAfIOLkSA5mAPxkDNWBz2kAed064YBhLxUxUP f0JOgrZf/UTO6C022foFU+9duN7WQjx0LGEL0QoLxC6tLxyIOe8si9f3ojrRG/e9lbPJ RSag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=FuGhM+ADTv/ZcWx+5nd4XLS/9fWvtmM0zVjBEofzrxU=; b=wqh+8Zenl8+Y41MZTdWHn2F6zeULRz5WrogFQKPIQ3lJt11jeRz1L10dESHa7/MZLq h20vcNc8zYym26xvg0D+RG8LdqhIhp7kvbGH4UrlJdTtgrjhPnXblQ1V0YR9zzkmPGS6 1rKFtoeB9/FSvstgbUSjPAydwmY2R5tX3ZbqXTGj+H+83xJGW01upDW23RissPYv+eNe 3pQNq2fVon6bYEcjQaAkeEAhlSHe0INVyhqq8LMm4Pe2hdEGoEVpGwA3Whg8nBbjuEqa hTLNYWIsQq71U/afPRc2ApDYZsHGEWPyVHGa1/xNmfvDNOMSRpLpy162iB2dVQjnoFUh 465Q== X-Gm-Message-State: AOAM530YBKbDL0UzMqn7n2xeZcPsWLf0+6hSIVFGiIhZLJ/K+DxYYN1l 
rmap_add() only uses the slot to call gfn_to_rmap(), which already takes
a const memslot.

No functional change intended.
Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4c8feaeb063d..23c0a36ac93f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1596,7 +1596,7 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 #define RMAP_RECYCLE_THRESHOLD 1000
 
-static void rmap_add(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
 		     u64 *spte, gfn_t gfn)
 {
 	struct kvm_mmu_page *sp;
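The const-correctness idea generalizes: a function that only reads
through a pointer should take const, so that callers holding only a
const pointer can use it. A minimal sketch with simplified stand-ins
for KVM's memslot and rmap types (not the real structures):

/* Simplified stand-in for struct kvm_memory_slot. */
struct memslot {
        unsigned long base_gfn;
        unsigned long npages;
};

/* Read-only helper, analogous to gfn_to_rmap(): takes const. */
static unsigned long gfn_to_index(const struct memslot *slot,
                                  unsigned long gfn)
{
        return gfn - slot->base_gfn;
}

/*
 * Because this rmap_add()-style function never writes through @slot,
 * it can take const too, and in turn be called from paths that only
 * hold a const memslot pointer.
 */
static void add_mapping(const struct memslot *slot, unsigned long gfn)
{
        unsigned long index = gfn_to_index(slot, gfn);

        (void)index;    /* the real code would link @gfn into the rmap */
}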
From patchwork Fri Mar 11 00:25:15 2022
Date: Fri, 11 Mar 2022 00:25:15 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-14-dmatlack@google.com>
Subject: [PATCH v2 13/26] KVM: x86/mmu: Pass const memslot to init_shadow_page() and descendants
From: David Matlack
To: Paolo Bonzini
Use a const pointer so that init_shadow_page() can be called from
contexts where we have a const pointer.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_page_track.h | 2 +-
 arch/x86/kvm/mmu/mmu.c                | 6 +++---
 arch/x86/kvm/mmu/mmu_internal.h       | 2 +-
 arch/x86/kvm/mmu/page_track.c         | 4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c            | 2 +-
 arch/x86/kvm/mmu/tdp_mmu.h            | 2 +-
 6 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index eb186bc57f6a..3a2dc183ae9a 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -58,7 +58,7 @@ int kvm_page_track_create_memslot(struct kvm *kvm,
 				  unsigned long npages);
 
 void kvm_slot_page_track_add_page(struct kvm *kvm,
-				  struct kvm_memory_slot *slot, gfn_t gfn,
+				  const struct kvm_memory_slot *slot, gfn_t gfn,
 				  enum kvm_page_track_mode mode);
 void kvm_slot_page_track_remove_page(struct kvm *kvm,
 				     struct kvm_memory_slot *slot, gfn_t gfn,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 23c0a36ac93f..d7ad71be6c52 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -794,7 +794,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn)
 }
 
 static void account_shadowed(struct kvm *kvm,
-			     struct kvm_memory_slot *slot,
+			     const struct kvm_memory_slot *slot,
 			     struct kvm_mmu_page *sp)
 {
 	gfn_t gfn;
@@ -1373,7 +1373,7 @@ int kvm_cpu_dirty_log_size(void)
 }
 
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
-				    struct kvm_memory_slot *slot, u64 gfn,
+				    const struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level)
 {
 	struct kvm_rmap_head *rmap_head;
@@ -2151,7 +2151,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 }
 
 static void init_shadow_page(struct kvm *kvm, struct kvm_mmu_page *sp,
-			     struct kvm_memory_slot *slot, gfn_t gfn,
+			     const struct kvm_memory_slot *slot, gfn_t gfn,
 			     union kvm_mmu_page_role role)
 {
 	struct hlist_head *sp_list;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d4e2de5f2a6d..b6e22ba9c654 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -134,7 +134,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
-				    struct kvm_memory_slot *slot, u64 gfn,
+				    const struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
 void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn,
 					u64 pages);
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index 2e09d1b6249f..3e7901294573 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -84,7 +84,7 @@ int kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot)
 	return 0;
 }
 
-static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
+static void update_gfn_track(const struct kvm_memory_slot *slot, gfn_t gfn,
 			     enum kvm_page_track_mode mode, short count)
 {
 	int index, val;
@@ -112,7 +112,7 @@ static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
  * @mode: tracking mode, currently only write track is supported.
  */
 void kvm_slot_page_track_add_page(struct kvm *kvm,
-				  struct kvm_memory_slot *slot, gfn_t gfn,
+				  const struct kvm_memory_slot *slot, gfn_t gfn,
 				  enum kvm_page_track_mode mode)
 {
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index f285fd76717b..85b7bc333302 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1768,7 +1768,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
  * Returns true if an SPTE was set and a TLB flush is needed.
  */
 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
-				   struct kvm_memory_slot *slot, gfn_t gfn,
+				   const struct kvm_memory_slot *slot, gfn_t gfn,
 				   int min_level)
 {
 	struct kvm_mmu_page *root;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 54bc8118c40a..8308bfa4126b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -42,7 +42,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 				       const struct kvm_memory_slot *slot);
 
 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
-				   struct kvm_memory_slot *slot, gfn_t gfn,
+				   const struct kvm_memory_slot *slot, gfn_t gfn,
 				   int min_level);
 
 void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm,
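The motivation is the usual one for constifying a call chain: a caller
that holds only a const pointer cannot call a callee declared with a
non-const parameter without casting away const. A small sketch of that
constraint, using hypothetical names rather than KVM's:

struct memslot {
        unsigned long base_gfn;
};

/* After the patch: the callee promises not to modify the slot. */
static void init_page(const struct memslot *slot)
{
        (void)slot;
}

static void caller_with_const_slot(const struct memslot *slot)
{
        /*
         * With a non-const parameter on init_page() this call would not
         * compile without a cast; constifying the whole chain, as the
         * patch does, avoids that.
         */
        init_page(slot);
}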
From patchwork Fri Mar 11 00:25:16 2022
Date: Fri, 11 Mar 2022 00:25:16 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-15-dmatlack@google.com>
Subject: [PATCH v2 14/26] KVM: x86/mmu: Decouple rmap_add() and link_shadow_page() from kvm_vcpu
From: David Matlack
To: Paolo Bonzini
Allow adding new entries to the rmap and linking shadow pages without a
struct kvm_vcpu pointer by moving the implementation of rmap_add() and
link_shadow_page() into inner helper functions.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 43 +++++++++++++++++++++++++++---------------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d7ad71be6c52..c57070ed157d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -725,9 +725,9 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
 
-static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_vcpu *vcpu)
+static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_mmu_memory_cache *cache)
 {
-	return kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_pte_list_desc_cache);
+	return kvm_mmu_memory_cache_alloc(cache);
 }
 
 static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
@@ -874,7 +874,7 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
 /*
  * Returns the number of pointers in the rmap chain, not counting the new one.
  */
-static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
+static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 			struct kvm_rmap_head *rmap_head)
 {
 	struct pte_list_desc *desc;
@@ -885,7 +885,7 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
 		rmap_head->val = (unsigned long)spte;
 	} else if (!(rmap_head->val & 1)) {
 		rmap_printk("%p %llx 1->many\n", spte, *spte);
-		desc = mmu_alloc_pte_list_desc(vcpu);
+		desc = mmu_alloc_pte_list_desc(cache);
 		desc->sptes[0] = (u64 *)rmap_head->val;
 		desc->sptes[1] = spte;
 		desc->spte_count = 2;
@@ -897,7 +897,7 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
 		while (desc->spte_count == PTE_LIST_EXT) {
 			count += PTE_LIST_EXT;
 			if (!desc->more) {
-				desc->more = mmu_alloc_pte_list_desc(vcpu);
+				desc->more = mmu_alloc_pte_list_desc(cache);
 				desc = desc->more;
 				desc->spte_count = 0;
 				break;
@@ -1596,8 +1596,10 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 #define RMAP_RECYCLE_THRESHOLD 1000
 
-static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn)
+static void __rmap_add(struct kvm *kvm,
+		       struct kvm_mmu_memory_cache *cache,
+		       const struct kvm_memory_slot *slot,
+		       u64 *spte, gfn_t gfn)
 {
 	struct kvm_mmu_page *sp;
 	struct kvm_rmap_head *rmap_head;
@@ -1606,15 +1608,21 @@ static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
 	sp = sptep_to_sp(spte);
 	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-	rmap_count = pte_list_add(vcpu, spte, rmap_head);
+	rmap_count = pte_list_add(cache, spte, rmap_head);
 
 	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
-		kvm_unmap_rmapp(vcpu->kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
+		kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
 		kvm_flush_remote_tlbs_with_address(
-				vcpu->kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+				kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
 	}
 }
 
+static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
+		     u64 *spte, gfn_t gfn)
+{
+	__rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn);
+}
+
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
@@ -1682,13 +1690,13 @@ static unsigned kvm_page_table_hashfn(gfn_t gfn)
 	return hash_64(gfn, KVM_MMU_HASH_SHIFT);
 }
 
-static void mmu_page_add_parent_pte(struct kvm_vcpu *vcpu,
+static void mmu_page_add_parent_pte(struct kvm_mmu_memory_cache *cache,
 				    struct kvm_mmu_page *sp, u64 *parent_pte)
 {
 	if (!parent_pte)
 		return;
 
-	pte_list_add(vcpu, parent_pte, &sp->parent_ptes);
+	pte_list_add(cache, parent_pte, &sp->parent_ptes);
 }
 
 static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp,
@@ -2307,8 +2315,8 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator)
 	__shadow_walk_next(iterator, *iterator->sptep);
 }
 
-static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
-			     struct kvm_mmu_page *sp)
+static void __link_shadow_page(struct kvm_mmu_memory_cache *cache, u64 *sptep,
+			       struct kvm_mmu_page *sp)
 {
 	u64 spte;
 
@@ -2318,12 +2326,17 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
 
 	mmu_spte_set(sptep, spte);
 
-	mmu_page_add_parent_pte(vcpu, sp, sptep);
+	mmu_page_add_parent_pte(cache, sp, sptep);
 
 	if (sp->unsync_children || sp->unsync)
 		mark_unsync(sptep);
 }
 
+static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, struct kvm_mmu_page *sp)
+{
+	__link_shadow_page(&vcpu->arch.mmu_pte_list_desc_cache, sptep, sp);
+}
+
 static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 				 unsigned direct_access)
 {
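This inner-helper-plus-wrapper pattern is worth calling out, since the
rest of the series relies on it to drive the same code from a non-vCPU
context (eager page splitting). A minimal sketch with simplified
stand-in types, not KVM's real structures:

struct cache { int free_objects; };
struct kvm { int id; };
struct vcpu {
        struct kvm *kvm;
        struct cache pte_list_desc_cache;
};

/* Inner helper: all the real work, parameterized on kvm and a cache. */
static void __rmap_add(struct kvm *kvm, struct cache *cache,
                       unsigned long gfn)
{
        (void)kvm;
        (void)cache;
        (void)gfn;
        /* allocate a descriptor from @cache and link @gfn into the rmap */
}

/* Thin wrapper keeps every existing vCPU call site unchanged. */
static void rmap_add(struct vcpu *vcpu, unsigned long gfn)
{
        __rmap_add(vcpu->kvm, &vcpu->pte_list_desc_cache, gfn);
}

/* A future non-vCPU path can supply its own kvm pointer and cache. */
static void split_path(struct kvm *kvm, struct cache *own_cache,
                       unsigned long gfn)
{
        __rmap_add(kvm, own_cache, gfn);
}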
From patchwork Fri Mar 11 00:25:17 2022
Date: Fri, 11 Mar 2022 00:25:17 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-16-dmatlack@google.com>
Subject: [PATCH v2 15/26] KVM: x86/mmu: Update page stats in __rmap_add()
From: David Matlack
To: Paolo Bonzini
Update the page stats in __rmap_add() rather than at the call site. This
will avoid having to manually update page stats when splitting huge
pages in a subsequent commit.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c57070ed157d..73a7077f9991 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1607,6 +1607,8 @@ static void __rmap_add(struct kvm *kvm,
 	sp = sptep_to_sp(spte);
 	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
+	kvm_update_page_stats(kvm, sp->role.level, 1);
+
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
 	rmap_count = pte_list_add(cache, spte, rmap_head);
 
@@ -2847,7 +2849,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 
 	if (!was_rmapped) {
 		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
-		kvm_update_page_stats(vcpu->kvm, level, 1);
 		rmap_add(vcpu, slot, sptep, gfn);
 	}
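The effect of moving the accounting into the shared helper is that every
caller, present and future, updates the counters automatically. A
trivial sketch of the idea (stand-in names, not KVM's stats machinery):

static long nr_mapped_pages;    /* stand-in for kvm->stat counters */

static void __rmap_add(unsigned long gfn)
{
        nr_mapped_pages++;      /* accounting at the single choke point */
        (void)gfn;              /* ... rmap insertion itself ... */
}

/* Both the fault path and a future split path stay accounting-free. */
static void fault_path(unsigned long gfn) { __rmap_add(gfn); }
static void split_path(unsigned long gfn) { __rmap_add(gfn); }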
From patchwork Fri Mar 11 00:25:18 2022
Date: Fri, 11 Mar 2022 00:25:18 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-17-dmatlack@google.com>
Subject: [PATCH v2 16/26] KVM: x86/mmu: Cache the access bits of shadowed translations
From: David Matlack
To: Paolo Bonzini
In order to split a huge page we need to know what access bits to assign
to the role of the new child page table. This can't be easily derived
from the huge page SPTE itself since KVM applies its own access policies
on top, such as for HugePage NX.

We could walk the guest page tables to determine the correct access
bits, but that is difficult to plumb outside of a vCPU fault context.
Instead, we can store the original access bits for each leaf SPTE
alongside the GFN in the gfns array. The access bits take up only 3
bits, which leaves 61 bits for the GFN, more than enough. So this change
does not require any additional memory.

In order to keep the access bit cache in sync with the guest, we have to
extend FNAME(sync_page) to also update the access bits.

Now that the gfns array caches more information than just GFNs, rename
it to shadowed_translation.

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          | 32 +++++++++++++++++++-------------
 arch/x86/kvm/mmu/mmu_internal.h | 15 +++++++++++++--
 arch/x86/kvm/mmu/paging_tmpl.h  |  7 +++++--
 4 files changed, 38 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f72e80178ffc..0f5a36772bdc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -694,7 +694,7 @@ struct kvm_vcpu_arch {
 
 	struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
 	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
-	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
+	struct kvm_mmu_memory_cache mmu_shadowed_translation_cache;
 	struct kvm_mmu_memory_cache mmu_page_header_cache;
 
 	/*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 73a7077f9991..89a7a8d7a632 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -708,7 +708,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 	if (r)
 		return r;
 	if (maybe_indirect) {
-		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache,
+		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadowed_translation_cache,
 					       PT64_ROOT_MAX_LEVEL);
 		if (r)
 			return r;
@@ -721,7 +721,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
+	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_translation_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
 
@@ -738,15 +738,17 @@ static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
 static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 {
 	if (!sp->role.direct)
-		return sp->gfns[index];
+		return sp->shadowed_translation[index].gfn;
 
 	return sp->gfn + (index << ((sp->role.level - 1) * PT64_LEVEL_BITS));
 }
 
-static void kvm_mmu_page_set_gfn(struct kvm_mmu_page *sp, int index, gfn_t gfn)
+static void kvm_mmu_page_set_gfn_access(struct kvm_mmu_page *sp, int index,
+					gfn_t gfn, u32 access)
 {
 	if (!sp->role.direct) {
-		sp->gfns[index] = gfn;
+		sp->shadowed_translation[index].gfn = gfn;
+		sp->shadowed_translation[index].access = access;
 		return;
 	}
 
@@ -1599,14 +1601,14 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 static void __rmap_add(struct kvm *kvm,
 		       struct kvm_mmu_memory_cache *cache,
 		       const struct kvm_memory_slot *slot,
-		       u64 *spte, gfn_t gfn)
+		       u64 *spte, gfn_t gfn, u32 access)
 {
 	struct kvm_mmu_page *sp;
 	struct kvm_rmap_head *rmap_head;
 	int rmap_count;
 
 	sp = sptep_to_sp(spte);
-	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
+	kvm_mmu_page_set_gfn_access(sp, spte - sp->spt, gfn, access);
 	kvm_update_page_stats(kvm, sp->role.level, 1);
 
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
@@ -1620,9 +1622,9 @@ static void __rmap_add(struct kvm *kvm,
 }
 
 static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn)
+		     u64 *spte, gfn_t gfn, u32 access)
 {
-	__rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn);
+	__rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn, access);
 }
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
@@ -1683,7 +1685,7 @@ void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
 	free_page((unsigned long)sp->spt);
 	if (!sp->role.direct)
-		free_page((unsigned long)sp->gfns);
+		free_page((unsigned long)sp->shadowed_translation);
 	kmem_cache_free(mmu_page_header_cache, sp);
 }
 
@@ -1720,8 +1722,12 @@ struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direc
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+
+	BUILD_BUG_ON(sizeof(sp->shadowed_translation[0]) != sizeof(u64));
+
 	if (!direct)
-		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
+		sp->shadowed_translation =
+			kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadowed_translation_cache);
 
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
@@ -1733,7 +1739,7 @@ struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direc
  *
  * Huge page splitting always uses direct shadow pages since the huge page is
  * being mapped directly with a lower level page table. Thus there's no need to
- * allocate the gfns array.
+ * allocate the shadowed_translation array.
  */
 struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked)
 {
@@ -2849,7 +2855,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 
 	if (!was_rmapped) {
 		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
-		rmap_add(vcpu, slot, sptep, gfn);
+		rmap_add(vcpu, slot, sptep, gfn, pte_access);
 	}
 
 	return ret;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index b6e22ba9c654..c5b8ee625df7 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -32,6 +32,11 @@ extern bool dbg;
 
 typedef u64 __rcu *tdp_ptep_t;
 
+struct shadowed_translation_entry {
+	u64 access:3;
+	u64 gfn:56;
+};
+
 struct kvm_mmu_page {
 	/*
 	 * Note, "link" through "spt" fit in a single 64 byte cache line on
@@ -53,8 +58,14 @@ struct kvm_mmu_page {
 	gfn_t gfn;
 
 	u64 *spt;
-	/* hold the gfn of each spte inside spt */
-	gfn_t *gfns;
+
+	/*
+	 * For indirect shadow pages, caches the result of the intermediate
+	 * guest translation being shadowed by each SPTE.
+	 *
+	 * NULL for direct shadow pages.
+	 */
+	struct shadowed_translation_entry *shadowed_translation;
+
 	/* Currently serving as active root */
 	union {
 		int root_count;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 55cac59b9c9b..128eccadf1de 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1014,7 +1014,7 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 }
 
 /*
- * Using the cached information from sp->gfns is safe because:
+ * Using the information in sp->shadowed_translation is safe because:
  * - The spte has a reference to the struct page, so the pfn for a given gfn
  *   can't change unless all sptes pointing to it are nuked first.
  *
@@ -1088,12 +1088,15 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
 			continue;
 
-		if (gfn != sp->gfns[i]) {
+		if (gfn != sp->shadowed_translation[i].gfn) {
 			drop_spte(vcpu->kvm, &sp->spt[i]);
 			flush = true;
 			continue;
 		}
 
+		if (pte_access != sp->shadowed_translation[i].access)
+			sp->shadowed_translation[i].access = pte_access;
+
 		sptep = &sp->spt[i];
 		spte = *sptep;
 		host_writable = spte & shadow_host_writable_mask;
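The zero-extra-memory claim is easy to check in isolation: a 3-bit
access field and the gfn share one 64-bit word, so the new array has the
same footprint as the old gfn-only array. The following userspace
sketch mirrors the field widths from the patch's struct; it is an
illustration, not the kernel code, and bitfield layout on u64 is
compiler-defined in strict C (gcc/clang handle it as expected).

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct shadowed_translation_entry {
        uint64_t access:3;
        uint64_t gfn:56;
};

int main(void)
{
        struct shadowed_translation_entry e;

        /* Same size as a bare u64, i.e. zero extra memory per SPTE. */
        static_assert(sizeof(e) == sizeof(uint64_t), "must stay 8 bytes");

        e.access = 0x7;                 /* e.g. exec | write | user bits */
        e.gfn = 0xabcdef123456ULL;      /* any gfn below 2^56 round-trips */

        printf("access=%llu gfn=%#llx\n",
               (unsigned long long)e.access, (unsigned long long)e.gfn);
        return 0;
}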
From patchwork Fri Mar 11 00:25:19 2022
Date: Fri, 11 Mar 2022 00:25:19 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-18-dmatlack@google.com>
Subject: [PATCH v2 17/26] KVM: x86/mmu: Pass access information to make_huge_page_split_spte()
From: David Matlack
To: Paolo Bonzini
Currently make_huge_page_split_spte() assumes execute permissions can be
granted to any 4K SPTE when splitting huge pages. This is true for the TDP
MMU but is not necessarily true for the shadow MMU. Huge pages mapped by the
shadow MMU may be shadowing huge pages for which the guest has disallowed
execute permissions.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/spte.c    | 5 +++--
 arch/x86/kvm/mmu/spte.h    | 3 ++-
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index d10189d9c877..7294f95464a7 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -216,7 +216,8 @@ static u64 make_spte_executable(u64 spte)
  * This is used during huge page splitting to build the SPTEs that make up the
  * new page table.
  */
-u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index)
+u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index,
+                              unsigned int access)
 {
         u64 child_spte;
         int child_level;
@@ -244,7 +245,7 @@ u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index)
                  * When splitting to a 4K page, mark the page executable as the
                  * NX hugepage mitigation no longer applies.
                  */
-                if (is_nx_huge_page_enabled())
+                if (is_nx_huge_page_enabled() && (access & ACC_EXEC_MASK))
                         child_spte = make_spte_executable(child_spte);
         }

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 73f12615416f..c7ccdd5c440d 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -415,7 +415,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
                u64 old_spte, bool prefetch, bool can_unsync,
                bool host_writable, u64 *new_spte);
-u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index);
+u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index,
+                              unsigned int access);
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
 u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access);
 u64 mark_spte_for_access_track(u64 spte);

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 85b7bc333302..541b145b2df2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1430,7 +1430,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
          * not been linked in yet and thus is not reachable from any other CPU.
          */
         for (i = 0; i < PT64_ENT_PER_PAGE; i++)
-                sp->spt[i] = make_huge_page_split_spte(huge_spte, level, i);
+                sp->spt[i] = make_huge_page_split_spte(huge_spte, level, i, ACC_ALL);

         /*
          * Replace the huge spte with a pointer to the populated lower level
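Aside: a minimal standalone sketch (not KVM code; the mask and NX-bit values
below are invented for illustration) of the policy this patch introduces. A
4K child SPTE is made executable only when the NX-hugepage mitigation is
active and the shadowed access actually grants execute permission:

    #include <stdbool.h>
    #include <stdint.h>

    #define ACC_EXEC_MASK 0x1u          /* illustrative value only */
    #define SPTE_NX       (1ull << 63)  /* illustrative NX bit */

    /* Clear NX on the child SPTE only if the guest allows execution. */
    static uint64_t split_child_spte(uint64_t child_spte, unsigned int access,
                                     bool nx_huge_pages_enabled)
    {
            if (nx_huge_pages_enabled && (access & ACC_EXEC_MASK))
                    child_spte &= ~SPTE_NX;
            return child_spte;
    }

The TDP MMU keeps its existing behavior by passing ACC_ALL, while a
shadow-MMU caller (added later in this series) passes the access bits
recovered from the shadowed translation.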
From patchwork Fri Mar 11 00:25:20 2022
Date: Fri, 11 Mar 2022 00:25:20 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-19-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 18/26] KVM: x86/mmu: Zap collapsible SPTEs at all levels in the shadow MMU
From: David Matlack
To: Paolo Bonzini
Currently KVM only zaps collapsible 4KiB SPTEs in the shadow MMU (i.e. in
the rmap). This is fine for now since KVM never creates intermediate huge
pages during dirty logging, i.e. a 1GiB page is never partially split to a
2MiB page.

However, this will stop being true once the shadow MMU participates in
eager page splitting, which can in fact leave behind partially split huge
pages. In preparation for that change, change the shadow MMU to iterate
over all necessary levels when zapping collapsible SPTEs.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 89a7a8d7a632..2032be3edd71 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6142,18 +6142,30 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
         return need_tlb_flush;
 }

+static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
+                                           const struct kvm_memory_slot *slot)
+{
+        bool flush;
+
+        /*
+         * Note, use KVM_MAX_HUGEPAGE_LEVEL - 1 since there's no need to zap
+         * pages that are already mapped at the maximum possible level.
+         */
+        flush = slot_handle_level(kvm, slot, kvm_mmu_zap_collapsible_spte,
+                                  PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL - 1,
+                                  true);
+
+        if (flush)
+                kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+}
+
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
                                    const struct kvm_memory_slot *slot)
 {
         if (kvm_memslots_have_rmaps(kvm)) {
                 write_lock(&kvm->mmu_lock);
-                /*
-                 * Zap only 4k SPTEs since the legacy MMU only supports dirty
-                 * logging at a 4k granularity and never creates collapsible
-                 * 2m SPTEs during dirty logging.
-                 */
-                if (slot_handle_level_4k(kvm, slot, kvm_mmu_zap_collapsible_spte, true))
-                        kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+                kvm_rmap_zap_collapsible_sptes(kvm, slot);
                 write_unlock(&kvm->mmu_lock);
         }
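Aside: the level-bounded walk above generalizes the old 4K-only pass. A
standalone sketch of the same idea (level constants invented for
illustration; the per-level zap callback is left abstract):

    #include <stdbool.h>

    enum { LEVEL_4K = 1, LEVEL_2M, LEVEL_1G, MAX_HUGE_LEVEL = LEVEL_1G };

    /*
     * Visit every level that might hold a collapsible mapping. The top
     * level is skipped since a mapping already at MAX_HUGE_LEVEL cannot
     * be collapsed into anything larger.
     */
    static bool zap_collapsible(bool (*zap_at)(int level))
    {
            bool flush = false;
            int level;

            for (level = LEVEL_4K; level <= MAX_HUGE_LEVEL - 1; level++)
                    flush |= zap_at(level);

            return flush; /* caller flushes TLBs if anything was zapped */
    }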
From patchwork Fri Mar 11 00:25:21 2022
Date: Fri, 11 Mar 2022 00:25:21 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-20-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 19/26] KVM: x86/mmu: Refactor drop_large_spte()
From: David Matlack
To: Paolo Bonzini
drop_large_spte() drops a large SPTE if it exists and then flushes TLBs.
Its helper function, __drop_large_spte(), does the drop without the flush.

In preparation for eager page splitting, which will need to sometimes flush
when dropping large SPTEs (and sometimes not), push the flushing logic down
into __drop_large_spte() and add a bool parameter to control it.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2032be3edd71..926ddfaa9e1a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1150,28 +1150,29 @@ static void drop_spte(struct kvm *kvm, u64 *sptep)
                 rmap_remove(kvm, sptep);
 }

-
-static bool __drop_large_spte(struct kvm *kvm, u64 *sptep)
+static void __drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush)
 {
-        if (is_large_pte(*sptep)) {
-                WARN_ON(sptep_to_sp(sptep)->role.level == PG_LEVEL_4K);
-                drop_spte(kvm, sptep);
-                return true;
-        }
+        struct kvm_mmu_page *sp;

-        return false;
-}
+        if (!is_large_pte(*sptep))
+                return;

-static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
-{
-        if (__drop_large_spte(vcpu->kvm, sptep)) {
-                struct kvm_mmu_page *sp = sptep_to_sp(sptep);
+        sp = sptep_to_sp(sptep);
+        WARN_ON(sp->role.level == PG_LEVEL_4K);

-                kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
+        drop_spte(kvm, sptep);
+
+        if (flush) {
+                kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
                         KVM_PAGES_PER_HPAGE(sp->role.level));
         }
 }

+static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
+{
+        return __drop_large_spte(vcpu->kvm, sptep, true);
+}
+
 /*
  * Write-protect on the specified @sptep, @pt_protect indicates whether
  * spte write-protection is caused by protecting shadow page table.
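Aside: the shape of this refactor is a common one, folding an optional side
effect into the helper behind a bool so callers can batch it. A toy
standalone illustration (not KVM code):

    #include <stdbool.h>
    #include <stdio.h>

    static void flush_tlbs(void)
    {
            puts("flush");
    }

    /* The helper owns the drop and, optionally, the flush. */
    static void __drop_entry(bool *present, bool flush)
    {
            if (!*present)
                    return;
            *present = false;
            if (flush)
                    flush_tlbs();
    }

    int main(void)
    {
            bool a = true, b = true, c = true;

            __drop_entry(&a, true);   /* legacy caller: flush per drop    */

            __drop_entry(&b, false);  /* batching caller: drop several... */
            __drop_entry(&c, false);
            flush_tlbs();             /* ...then flush once at the end    */
            return 0;
    }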
From patchwork Fri Mar 11 00:25:22 2022
Date: Fri, 11 Mar 2022 00:25:22 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-21-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 20/26] KVM: x86/mmu: Extend Eager Page Splitting to the shadow MMU
From: David Matlack
To: Paolo Bonzini
Extend KVM's eager page splitting to also split huge pages that are mapped
by the shadow MMU. Specifically, walk through the rmap, splitting all 1GiB
pages to 2MiB pages, and splitting all 2MiB pages to 4KiB pages.

Splitting huge pages mapped by the shadow MMU requires dealing with some
extra complexity beyond that of the TDP MMU:

(1) The shadow MMU has a limit on the number of shadow pages that are
    allowed to be allocated. So, as a policy, Eager Page Splitting refuses
    to split if there are KVM_MIN_FREE_MMU_PAGES or fewer pages available.

(2) Huge pages may be mapped by indirect shadow pages which have the
    possibility of being unsync. As a policy we opt not to split such
    pages as their translation may no longer be valid.

(3) Splitting a huge page may end up re-using an existing lower level
    shadow page table. This is unlike the TDP MMU which always allocates
    new shadow page tables when splitting. This commit does *not* handle
    such aliasing and opts not to split such huge pages.

(4) When installing the lower level SPTEs, they must be added to the rmap
    which may require allocating additional pte_list_desc structs. This
    commit does *not* handle such cases and instead opts to leave such
    lower-level SPTEs non-present. In this situation TLBs must be flushed
    before dropping the MMU lock as a portion of the huge page region is
    being unmapped.

Suggested-by: Peter Feiner
[ This commit is based off of the original implementation of Eager Page
  Splitting from Peter in Google's kernel from 2016. ]
Signed-off-by: David Matlack
---
 .../admin-guide/kernel-parameters.txt |   3 -
 arch/x86/kvm/mmu/mmu.c                | 307 ++++++++++++++++++
 2 files changed, 307 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 05161afd7642..495f6ac53801 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2360,9 +2360,6 @@
                         the KVM_CLEAR_DIRTY ioctl, and only for the pages being
                         cleared.

-                        Eager page splitting currently only supports splitting
-                        huge pages mapped by the TDP MMU.
-
                         Default is Y (on).

         kvm.enable_vmware_backdoor=[KVM] Support VMware backdoor PV interface.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 926ddfaa9e1a..dd56b5b9624f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -727,6 +727,11 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_mmu_memory_cache *cache)
 {
+        static const gfp_t gfp_nocache = GFP_ATOMIC | __GFP_ACCOUNT | __GFP_ZERO;
+
+        if (WARN_ON_ONCE(!cache))
+                return kmem_cache_alloc(pte_list_desc_cache, gfp_nocache);
+
         return kvm_mmu_memory_cache_alloc(cache);
 }

@@ -743,6 +748,28 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
         return sp->gfn + (index << ((sp->role.level - 1) * PT64_LEVEL_BITS));
 }

+static gfn_t sptep_to_gfn(u64 *sptep)
+{
+        struct kvm_mmu_page *sp = sptep_to_sp(sptep);
+
+        return kvm_mmu_page_get_gfn(sp, sptep - sp->spt);
+}
+
+static unsigned int kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
+{
+        if (!sp->role.direct)
+                return sp->shadowed_translation[index].access;
+
+        return sp->role.access;
+}
+
+static unsigned int sptep_to_access(u64 *sptep)
+{
+        struct kvm_mmu_page *sp = sptep_to_sp(sptep);
+
+        return kvm_mmu_page_get_access(sp, sptep - sp->spt);
+}
+
 static void kvm_mmu_page_set_gfn_access(struct kvm_mmu_page *sp, int index,
                                         gfn_t gfn, u32 access)
 {
@@ -912,6 +939,9 @@ static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
         return count;
 }

+static struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
+                                         const struct kvm_memory_slot *slot);
+
 static void pte_list_desc_remove_entry(struct kvm_rmap_head *rmap_head,
                                        struct pte_list_desc *desc,
                                        int i,
@@ -2125,6 +2155,23 @@ static struct kvm_mmu_page *__kvm_mmu_find_shadow_page(struct kvm *kvm,
         return sp;
 }

+static struct kvm_mmu_page *kvm_mmu_find_direct_sp(struct kvm *kvm, gfn_t gfn,
+                                                   union kvm_mmu_page_role role)
+{
+        struct kvm_mmu_page *sp;
+        LIST_HEAD(invalid_list);
+
+        BUG_ON(!role.direct);
+
+        sp = __kvm_mmu_find_shadow_page(kvm, gfn, role, &invalid_list);
+
+        /* Direct SPs are never unsync. */
+        WARN_ON_ONCE(sp && sp->unsync);
+
+        kvm_mmu_commit_zap_page(kvm, &invalid_list);
+        return sp;
+}
+
 /*
  * Looks up an existing SP for the given gfn and role if one exists. The
  * returned SP is guaranteed to be synced.
@@ -6063,12 +6110,266 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
                 kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
 }

+static int prepare_to_split_huge_page(struct kvm *kvm,
+                                      const struct kvm_memory_slot *slot,
+                                      u64 *huge_sptep,
+                                      struct kvm_mmu_page **spp,
+                                      bool *flush,
+                                      bool *dropped_lock)
+{
+        int r = 0;
+
+        *dropped_lock = false;
+
+        if (kvm_mmu_available_pages(kvm) <= KVM_MIN_FREE_MMU_PAGES)
+                return -ENOSPC;
+
+        if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
+                goto drop_lock;
+
+        *spp = kvm_mmu_alloc_direct_sp_for_split(true);
+        if (!*spp)
+                goto drop_lock;
+
+        return 0;
+
+drop_lock:
+        if (*flush)
+                kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+
+        *flush = false;
+        *dropped_lock = true;
+
+        write_unlock(&kvm->mmu_lock);
+        cond_resched();
+        *spp = kvm_mmu_alloc_direct_sp_for_split(false);
+        if (!*spp)
+                r = -ENOMEM;
+        write_lock(&kvm->mmu_lock);
+
+        return r;
+}
+
+static struct kvm_mmu_page *kvm_mmu_get_sp_for_split(struct kvm *kvm,
+                                                     const struct kvm_memory_slot *slot,
+                                                     u64 *huge_sptep,
+                                                     struct kvm_mmu_page **spp)
+{
+        struct kvm_mmu_page *split_sp;
+        union kvm_mmu_page_role role;
+        unsigned int access;
+        gfn_t gfn;
+
+        gfn = sptep_to_gfn(huge_sptep);
+        access = sptep_to_access(huge_sptep);
+
+        /*
+         * Huge page splitting always uses direct shadow pages since we are
+         * directly mapping the huge page GFN region with smaller pages.
+         */
+        role = kvm_mmu_child_role(huge_sptep, true, access);
+        split_sp = kvm_mmu_find_direct_sp(kvm, gfn, role);
+
+        /*
+         * Opt not to split if the lower-level SP already exists. This requires
+         * more complex handling as the SP may be already partially filled in
+         * and may need extra pte_list_desc structs to update parent_ptes.
+         */
+        if (split_sp)
+                return NULL;
+
+        swap(split_sp, *spp);
+        init_shadow_page(kvm, split_sp, slot, gfn, role);
+        trace_kvm_mmu_get_page(split_sp, true);
+
+        return split_sp;
+}
+
+static int kvm_mmu_split_huge_page(struct kvm *kvm,
+                                   const struct kvm_memory_slot *slot,
+                                   u64 *huge_sptep, struct kvm_mmu_page **spp,
+                                   bool *flush)
+
+{
+        struct kvm_mmu_page *split_sp;
+        u64 huge_spte, split_spte;
+        int split_level, index;
+        unsigned int access;
+        u64 *split_sptep;
+        gfn_t split_gfn;
+
+        split_sp = kvm_mmu_get_sp_for_split(kvm, slot, huge_sptep, spp);
+        if (!split_sp)
+                return -EOPNOTSUPP;
+
+        /*
+         * Since we did not allocate pte_list_desc structs for the split, we
+         * cannot add a new parent SPTE to parent_ptes. This should never happen
+         * in practice though since this is a fresh SP.
+         *
+         * Note, this makes it safe to pass NULL to __link_shadow_page() below.
+         */
+        if (WARN_ON_ONCE(split_sp->parent_ptes.val))
+                return -EINVAL;
+
+        huge_spte = READ_ONCE(*huge_sptep);
+
+        split_level = split_sp->role.level;
+        access = split_sp->role.access;
+
+        for (index = 0; index < PT64_ENT_PER_PAGE; index++) {
+                split_sptep = &split_sp->spt[index];
+                split_gfn = kvm_mmu_page_get_gfn(split_sp, index);
+
+                BUG_ON(is_shadow_present_pte(*split_sptep));
+
+                /*
+                 * Since we did not allocate pte_list_desc structs for the
+                 * split, we can't add a new SPTE that maps this GFN.
+                 * Skipping this SPTE means we're only partially mapping the
+                 * huge page, which means we'll need to flush TLBs before
+                 * dropping the MMU lock.
+                 *
+                 * Note, this makes it safe to pass NULL to __rmap_add() below.
+                 */
+                if (gfn_to_rmap(split_gfn, split_level, slot)->val) {
+                        *flush = true;
+                        continue;
+                }
+
+                split_spte = make_huge_page_split_spte(
+                                huge_spte, split_level + 1, index, access);
+
+                mmu_spte_set(split_sptep, split_spte);
+                __rmap_add(kvm, NULL, slot, split_sptep, split_gfn, access);
+        }
+
+        /*
+         * Replace the huge spte with a pointer to the populated lower level
+         * page table. Since we are making this change without a TLB flush vCPUs
+         * will see a mix of the split mappings and the original huge mapping,
+         * depending on what's currently in their TLB. This is fine from a
+         * correctness standpoint since the translation will either be identical
+         * or non-present. To account for non-present mappings, the TLB will be
+         * flushed prior to dropping the MMU lock.
+         */
+        __drop_large_spte(kvm, huge_sptep, false);
+        __link_shadow_page(NULL, huge_sptep, split_sp);
+
+        return 0;
+}
+
+static bool should_split_huge_page(u64 *huge_sptep)
+{
+        struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
+
+        if (WARN_ON_ONCE(!is_large_pte(*huge_sptep)))
+                return false;
+
+        if (huge_sp->role.invalid)
+                return false;
+
+        /*
+         * As a policy, do not split huge pages if the SP on which they reside
+         * is unsync. Unsync means the guest is modifying the page table being
+         * shadowed by huge_sp, so splitting may be a waste of cycles and
+         * memory.
+         */
+        if (huge_sp->unsync)
+                return false;
+
+        return true;
+}
+
+static bool rmap_try_split_huge_pages(struct kvm *kvm,
+                                      struct kvm_rmap_head *rmap_head,
+                                      const struct kvm_memory_slot *slot)
+{
+        struct kvm_mmu_page *sp = NULL;
+        struct rmap_iterator iter;
+        u64 *huge_sptep, spte;
+        bool flush = false;
+        bool dropped_lock;
+        int level;
+        gfn_t gfn;
+        int r;
+
+restart:
+        for_each_rmap_spte(rmap_head, &iter, huge_sptep) {
+                if (!should_split_huge_page(huge_sptep))
+                        continue;
+
+                spte = *huge_sptep;
+                level = sptep_to_sp(huge_sptep)->role.level;
+                gfn = sptep_to_gfn(huge_sptep);
+
+                r = prepare_to_split_huge_page(kvm, slot, huge_sptep, &sp,
+                                               &flush, &dropped_lock);
+                if (r) {
+                        trace_kvm_mmu_split_huge_page(gfn, spte, level, r);
+                        break;
+                }
+
+                if (dropped_lock)
+                        goto restart;
+
+                r = kvm_mmu_split_huge_page(kvm, slot, huge_sptep, &sp, &flush);
+
+                trace_kvm_mmu_split_huge_page(gfn, spte, level, r);
+
+                /*
+                 * If splitting is successful we must restart the iterator
+                 * because huge_sptep has just been removed from it.
+                 */
+                if (!r)
+                        goto restart;
+        }
+
+        if (sp)
+                kvm_mmu_free_shadow_page(sp);
+
+        return flush;
+}
+
+static void kvm_rmap_try_split_huge_pages(struct kvm *kvm,
+                                          const struct kvm_memory_slot *slot,
+                                          gfn_t start, gfn_t end,
+                                          int target_level)
+{
+        bool flush = false;
+        int level;
+
+        /*
+         * Split huge pages starting with KVM_MAX_HUGEPAGE_LEVEL and working
+         * down to the target level. This ensures pages are recursively split
+         * all the way to the target level. There's no need to split pages
+         * already at the target level.
+         *
+         * Note that TLB flushes must be done before dropping the MMU lock since
+         * rmap_try_split_huge_pages() may partially split any given huge page,
+         * i.e. it may effectively unmap (make non-present) a portion of the
+         * huge page.
+         */
+        for (level = KVM_MAX_HUGEPAGE_LEVEL; level > target_level; level--) {
+                flush = slot_handle_level_range(kvm, slot,
+                                                rmap_try_split_huge_pages,
+                                                level, level, start, end - 1,
+                                                true, flush);
+        }
+
+        if (flush)
+                kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+}
+
 /* Must be called with the mmu_lock held in write-mode.
  */
 void kvm_mmu_try_split_huge_pages(struct kvm *kvm,
                                   const struct kvm_memory_slot *memslot,
                                   u64 start, u64 end,
                                   int target_level)
 {
+        if (kvm_memslots_have_rmaps(kvm))
+                kvm_rmap_try_split_huge_pages(kvm, memslot, start, end,
+                                              target_level);
+
         if (is_tdp_mmu_enabled(kvm))
                 kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end,
                                                  target_level, false);
@@ -6086,6 +6387,12 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm,
         u64 start = memslot->base_gfn;
         u64 end = start + memslot->npages;

+        if (kvm_memslots_have_rmaps(kvm)) {
+                write_lock(&kvm->mmu_lock);
+                kvm_rmap_try_split_huge_pages(kvm, memslot, start, end, target_level);
+                write_unlock(&kvm->mmu_lock);
+        }
+
         if (is_tdp_mmu_enabled(kvm)) {
                 read_lock(&kvm->mmu_lock);
                 kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level, true);
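Aside: to make the top-down ordering in kvm_rmap_try_split_huge_pages()
concrete, here is a toy sketch (not KVM code) of why splitting walks from
the largest level down to target_level. Each pass converts mappings at one
level into mappings one level smaller, so the next pass can split those
results again until everything sits at the target:

    /*
     * Toy model: level 3 = 1GiB, 2 = 2MiB, 1 = 4KiB. split_at(level)
     * splits every mapping found at exactly that level into mappings at
     * (level - 1). Walking top-down guarantees a 1GiB mapping ends up as
     * 4KiB mappings when target_level == 1, via an intermediate 2MiB pass.
     */
    static void split_to_target(int max_level, int target_level,
                                void (*split_at)(int level))
    {
            int level;

            for (level = max_level; level > target_level; level--)
                    split_at(level);
    }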
From patchwork Fri Mar 11 00:25:23 2022
Date: Fri, 11 Mar 2022 00:25:23 +0000
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Message-Id: <20220311002528.2230172-22-dmatlack@google.com>
References: <20220311002528.2230172-1-dmatlack@google.com>
Subject: [PATCH v2 21/26] KVM: Allow for different capacities in kvm_mmu_memory_cache structs
From: David Matlack
To: Paolo Bonzini
Allow the capacity of the kvm_mmu_memory_cache struct to be chosen at
declaration time rather than being fixed for all declarations. This will be
used in a follow-up commit to declare a cache in x86 with a capacity of
512+ objects without having to increase the capacity of all caches in KVM.

This change requires each cache to now specify its capacity at runtime,
since the cache struct itself no longer has a fixed capacity known at
compile time. To protect against someone accidentally defining a
kvm_mmu_memory_cache struct directly (without the extra storage), this
commit includes a WARN_ON() in kvm_mmu_topup_memory_cache().

This change, unfortunately, adds some grottiness to kvm_phys_addr_ioremap()
in arm64, which uses a function-local (i.e. stack-allocated)
kvm_mmu_memory_cache struct. Since C does not allow anonymous structs in
functions, the new wrapper struct that contains kvm_mmu_memory_cache and
the objects pointer array must be named, which means dealing with an outer
and inner struct. The outer struct can't be dropped since then there would
be no guarantee the kvm_mmu_memory_cache struct and objects array would be
laid out consecutively on the stack.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/arm64/include/asm/kvm_host.h |  2 +-
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/mmu.c              | 13 +++++++++----
 arch/mips/include/asm/kvm_host.h  |  2 +-
 arch/mips/kvm/mips.c              |  2 ++
 arch/riscv/include/asm/kvm_host.h |  2 +-
 arch/riscv/kvm/vcpu.c             |  1 +
 arch/x86/include/asm/kvm_host.h   |  8 ++++----
 arch/x86/kvm/mmu/mmu.c            |  9 +++++++++
 include/linux/kvm_types.h         | 19 +++++++++++++++++--
 virt/kvm/kvm_main.c               | 10 +++++++++-
 11 files changed, 55 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5bc01e62c08a..1369415290dd 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -357,7 +357,7 @@ struct kvm_vcpu_arch {
         bool pause;

         /* Cache some mmu pages needed inside spinlock regions */
-        struct kvm_mmu_memory_cache mmu_page_cache;
+        DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache);

         /* Target CPU and feature flags */
         int target;

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ecc5958e27fe..5e38385be0ef 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -319,6 +319,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
         vcpu->arch.target = -1;
         bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);

+        vcpu->arch.mmu_page_cache.capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
         vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;

         /* Set up the timer */

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index bc2aba953299..940089ba65ad 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -765,7 +765,12 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 {
         phys_addr_t addr;
         int ret = 0;
-        struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, NULL, };
+        DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = {
+                .cache = {
+                        .gfp_zero = __GFP_ZERO,
+                        .capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE,
+                },
+        };
         struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
         enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
                                      KVM_PGTABLE_PROT_R |
@@ -778,14 +783,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
         guest_ipa &= PAGE_MASK;

         for (addr = guest_ipa; addr < guest_ipa + size; addr += PAGE_SIZE) {
-                ret = kvm_mmu_topup_memory_cache(&cache,
+                ret = kvm_mmu_topup_memory_cache(&page_cache.cache,
                                                  kvm_mmu_cache_min_pages(kvm));
                 if (ret)
                         break;

                 spin_lock(&kvm->mmu_lock);
                 ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
-                                             &cache);
+                                             &page_cache.cache);
                 spin_unlock(&kvm->mmu_lock);
                 if (ret)
                         break;
@@ -793,7 +798,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
                 pa += PAGE_SIZE;
         }

-        kvm_mmu_free_memory_cache(&cache);
+        kvm_mmu_free_memory_cache(&page_cache.cache);
         return ret;
 }

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 717716cc51c5..935511d7fc3a 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -347,7 +347,7 @@ struct kvm_vcpu_arch {
         unsigned long pending_exceptions_clr;

         /* Cache some mmu pages needed inside spinlock regions */
-        struct kvm_mmu_memory_cache mmu_page_cache;
+        DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache);

         /* vcpu's vzguestid is different on each host cpu in an smp system */
         u32 vzguestid[NR_CPUS];

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index a25e0b73ee70..45c7179144dc 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -387,6 +387,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
         if (err)
                 goto out_free_gebase;

+        vcpu->arch.mmu_page_cache.capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
+
         return 0;

 out_free_gebase:

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 99ef6a120617..5bd4902ebda3 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -186,7 +186,7 @@ struct kvm_vcpu_arch {
         struct kvm_sbi_context sbi_context;

         /* Cache pages needed to program page tables with spinlock held */
-        struct kvm_mmu_memory_cache mmu_page_cache;
+        DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache);

         /* VCPU power-off state */
         bool power_off;

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 624166004e36..6a5f5aa45bac 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -94,6 +94,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
         /* Mark this VCPU never ran */
         vcpu->arch.ran_atleast_once = false;
+        vcpu->arch.mmu_page_cache.capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
         vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;

         /* Setup ISA features available to VCPU */

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0f5a36772bdc..544dde11963b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -692,10 +692,10 @@ struct kvm_vcpu_arch {
          */
         struct kvm_mmu *walk_mmu;

-        struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
-        struct kvm_mmu_memory_cache mmu_shadow_page_cache;
-        struct kvm_mmu_memory_cache mmu_shadowed_translation_cache;
-        struct kvm_mmu_memory_cache mmu_page_header_cache;
+        DEFINE_KVM_MMU_MEMORY_CACHE(mmu_pte_list_desc_cache);
+        DEFINE_KVM_MMU_MEMORY_CACHE(mmu_shadow_page_cache);
+        DEFINE_KVM_MMU_MEMORY_CACHE(mmu_shadowed_translation_cache);
+        DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_header_cache);

         /*
          * QEMU userspace and the guest each have their own FPU state.

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dd56b5b9624f..24e7e053e05b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5817,12 +5817,21 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 {
         int ret;

+        vcpu->arch.mmu_pte_list_desc_cache.capacity =
+                KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
         vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
         vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO;

+        vcpu->arch.mmu_page_header_cache.capacity =
+                KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
         vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
         vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;

+        vcpu->arch.mmu_shadowed_translation_cache.capacity =
+                KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
+
+        vcpu->arch.mmu_shadow_page_cache.capacity =
+                KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
         vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;

         vcpu->arch.mmu = &vcpu->arch.root_mmu;

diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index ac1ebb37a0ff..579cf39986ec 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -83,14 +83,29 @@ struct gfn_to_pfn_cache {
  * MMU flows is problematic, as is triggering reclaim, I/O, etc... while
  * holding MMU locks. Note, these caches act more like prefetch buffers than
  * classical caches, i.e. objects are not returned to the cache on being freed.
+ *
+ * The storage for the cache object pointers is laid out after the struct, to
+ * allow different declarations to choose different capacities. The capacity
+ * field defines the number of object pointers available after the struct.
  */
 struct kvm_mmu_memory_cache {
         int nobjs;
+        int capacity;
         gfp_t gfp_zero;
         struct kmem_cache *kmem_cache;
-        void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE];
+        void *objects[];
 };
-#endif
+
+#define __DEFINE_KVM_MMU_MEMORY_CACHE(_name, _capacity)         \
+        struct {                                                \
+                struct kvm_mmu_memory_cache _name;              \
+                void *_name##_objects[_capacity];               \
+        }
+
+#define DEFINE_KVM_MMU_MEMORY_CACHE(_name)                      \
+        __DEFINE_KVM_MMU_MEMORY_CACHE(_name, KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE)
+
+#endif /* KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE */

 #define HALT_POLL_HIST_COUNT 32

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 9581a24c3d17..1d849ba9529f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -371,9 +371,17 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
 {
         void *obj;

+        /*
+         * The capacity field must be initialized since the storage for the
+         * objects pointer array is laid out after the kvm_mmu_memory_cache
+         * struct and not known at compile time.
+         */
+        if (WARN_ON(mc->capacity == 0))
+                return -EINVAL;
+
         if (mc->nobjs >= min)
                 return 0;
-        while (mc->nobjs < ARRAY_SIZE(mc->objects)) {
+        while (mc->nobjs < mc->capacity) {
                 obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
                 if (!obj)
                         return mc->nobjs >= min ? 0 : -ENOMEM;
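Aside: the flexible-array-member-plus-wrapper trick used above can be
exercised in a self-contained program. This is only a sketch of the layout
guarantee the patch relies on (all names invented for illustration): the
wrapper places a real array immediately after the struct, so the struct's
trailing void *objects[] aliases that storage:

    #include <stdio.h>

    struct cache {
            int nobjs;
            int capacity;
            void *objects[]; /* storage is provided by the wrapper */
    };

    #define DEFINE_CACHE(_name, _capacity)                  \
            struct {                                        \
                    struct cache _name;                     \
                    void *_name##_objects[_capacity];       \
            }

    int main(void)
    {
            DEFINE_CACHE(c, 8) wrapper = { .c = { .capacity = 8 } };

            /* wrapper.c.objects[i] lands in wrapper.c_objects[i] because
             * the array is laid out immediately after the inner struct. */
            wrapper.c.objects[0] = &wrapper;
            printf("capacity = %d\n", wrapper.c.capacity);
            return 0;
    }

As the commit message notes, nesting a struct with a flexible array member
inside another struct leans on compiler behavior (a GNU C extension) rather
than strict ISO C, which is why the wrapper must guarantee the two members
are consecutive.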
From patchwork Fri Mar 11 00:25:24 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604203
Subject: [PATCH v2 22/26] KVM: Allow GFP flags to be passed when topping up MMU caches
From: David Matlack
To: Paolo Bonzini
Date: Fri, 11 Mar 2022 00:25:24 +0000
Message-Id: <20220311002528.2230172-23-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
This will be used in a subsequent commit to top up MMU caches under the
MMU lock with GFP_NOWAIT as part of eager page splitting.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
---
 include/linux/kvm_host.h | 1 +
 virt/kvm/kvm_main.c      | 9 +++++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 252ee4a61b58..7d3a1f28beb2 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1335,6 +1335,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm);

 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
 int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
+int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min, gfp_t gfp);
 int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc);
 void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
 void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1d849ba9529f..7861874af1c8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -367,7 +367,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
	return (void *)__get_free_page(gfp_flags);
 }

-int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
+int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min, gfp_t gfp)
 {
	void *obj;

@@ -382,7 +382,7 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
	if (mc->nobjs >= min)
		return 0;
	while (mc->nobjs < mc->capacity) {
-		obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
+		obj = mmu_memory_cache_alloc_obj(mc, gfp);
		if (!obj)
			return mc->nobjs >= min ? 0 : -ENOMEM;
		mc->objects[mc->nobjs++] = obj;
@@ -390,6 +390,11 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
	return 0;
 }

+int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
+{
+	return __kvm_mmu_topup_memory_cache(mc, min, GFP_KERNEL_ACCOUNT);
+}
+
 int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
 {
	return mc->nobjs;
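The split into a default entry point and a GFP-taking inner function gives
callers two modes: the existing blocking top-up outside the MMU lock, and a
non-sleeping top-up under it. A hypothetical caller sketch (not from this
series; topup_example and its locked parameter are illustrative only):

    /*
     * Sketch: choose the top-up variant based on lock context. The
     * GFP_NOWAIT | __GFP_ACCOUNT combination matches what the later
     * eager-page-splitting patches pass under the MMU lock.
     */
    int topup_example(struct kvm_mmu_memory_cache *mc, int min, bool locked)
    {
            if (!locked)
                    /* May sleep and reclaim; only safe outside the MMU lock. */
                    return kvm_mmu_topup_memory_cache(mc, min);

            /*
             * Never sleeps; may fail, in which case the caller must drop
             * the lock and retry with the blocking variant.
             */
            return __kvm_mmu_topup_memory_cache(mc, min, GFP_NOWAIT | __GFP_ACCOUNT);
    }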
From patchwork Fri Mar 11 00:25:25 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604204
Subject: [PATCH v2 23/26] KVM: x86/mmu: Fully split huge pages that require extra pte_list_desc structs
From: David Matlack
To: Paolo Bonzini
Date: Fri, 11 Mar 2022 00:25:25 +0000
Message-Id: <20220311002528.2230172-24-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
When splitting a huge page we need to add all of the lower level SPTEs
to the memslot rmap. The current implementation of eager page splitting
bails if adding an SPTE would require allocating an extra pte_list_desc
struct. Fix this limitation by allocating enough pte_list_desc structs
before splitting the huge page.

This eliminates the need for TLB flushing under the MMU lock because
the huge page is always entirely split (no subregion of the huge page
is unmapped).

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h |  10 +++
 arch/x86/kvm/mmu/mmu.c          | 131 ++++++++++++++++++--------------
 2 files changed, 85 insertions(+), 56 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 544dde11963b..00a5c0bcc2eb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1240,6 +1240,16 @@ struct kvm_arch {
	hpa_t	hv_root_tdp;
	spinlock_t hv_root_tdp_lock;
 #endif
+
+	/*
+	 * Memory cache used to allocate pte_list_desc structs while splitting
+	 * huge pages. In the worst case, to split one huge page we need 512
+	 * pte_list_desc structs to add each new lower level leaf sptep to the
+	 * memslot rmap.
+	 */
+#define HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY 512
+	__DEFINE_KVM_MMU_MEMORY_CACHE(huge_page_split_desc_cache,
+				      HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY);
 };

 struct kvm_vm_stat {

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 24e7e053e05b..95b8e2ef562f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1765,6 +1765,16 @@ struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direc
	return sp;
 }

+static inline gfp_t gfp_flags_for_split(bool locked)
+{
+	/*
+	 * If under the MMU lock, use GFP_NOWAIT to avoid direct reclaim (which
+	 * is slow) and to avoid making any filesystem callbacks (which can end
+	 * up invoking KVM MMU notifiers, resulting in a deadlock).
+	 */
+	return (locked ? GFP_NOWAIT : GFP_KERNEL) | __GFP_ACCOUNT;
+}
+
 /*
  * Allocate a new shadow page, potentially while holding the MMU lock.
  *
@@ -1772,17 +1782,11 @@ struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direc
  * being mapped directly with a lower level page table. Thus there's no need to
  * allocate the shadowed_translation array.
  */
-struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked)
+static struct kvm_mmu_page *__kvm_mmu_alloc_direct_sp_for_split(gfp_t gfp)
 {
	struct kvm_mmu_page *sp;
-	gfp_t gfp;

-	/*
-	 * If under the MMU lock, use GFP_NOWAIT to avoid direct reclaim (which
-	 * is slow) and to avoid making any filesystem callbacks (which can end
-	 * up invoking KVM MMU notifiers, resulting in a deadlock).
-	 */
-	gfp = (locked ? GFP_NOWAIT : GFP_KERNEL) | __GFP_ACCOUNT | __GFP_ZERO;
+	gfp |= __GFP_ZERO;

	sp = kmem_cache_alloc(mmu_page_header_cache, gfp);
	if (!sp)
@@ -1799,6 +1803,13 @@ struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked)
	return sp;
 }

+struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked)
+{
+	gfp_t gfp = gfp_flags_for_split(locked);
+
+	return __kvm_mmu_alloc_direct_sp_for_split(gfp);
+}
+
 static void mark_unsync(u64 *spte);
 static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 {
@@ -5989,6 +6000,11 @@ void kvm_mmu_init_vm(struct kvm *kvm)
	node->track_write = kvm_mmu_pte_write;
	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
	kvm_page_track_register_notifier(kvm, node);
+
+	kvm->arch.huge_page_split_desc_cache.capacity =
+		HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY;
+	kvm->arch.huge_page_split_desc_cache.kmem_cache = pte_list_desc_cache;
+	kvm->arch.huge_page_split_desc_cache.gfp_zero = __GFP_ZERO;
 }

 void kvm_mmu_uninit_vm(struct kvm *kvm)
@@ -6119,11 +6135,43 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
 }

+static int topup_huge_page_split_desc_cache(struct kvm *kvm, gfp_t gfp)
+{
+	/*
+	 * We may need up to HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY descriptors
+	 * to split any given huge page. We could more accurately calculate how
+	 * many we actually need by inspecting all the rmaps and checking which
+	 * will need new descriptors, but that's not worth the extra cost or
+	 * code complexity.
+	 */
+	return __kvm_mmu_topup_memory_cache(
+			&kvm->arch.huge_page_split_desc_cache,
+			HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY,
+			gfp);
+}
+
+static int alloc_memory_for_split(struct kvm *kvm, struct kvm_mmu_page **spp,
+				  bool locked)
+{
+	gfp_t gfp = gfp_flags_for_split(locked);
+	int r;
+
+	r = topup_huge_page_split_desc_cache(kvm, gfp);
+	if (r)
+		return r;
+
+	if (!*spp) {
+		*spp = __kvm_mmu_alloc_direct_sp_for_split(gfp);
+		r = *spp ? 0 : -ENOMEM;
+	}
+
+	return r;
+}
+
 static int prepare_to_split_huge_page(struct kvm *kvm,
				      const struct kvm_memory_slot *slot,
				      u64 *huge_sptep,
				      struct kvm_mmu_page **spp,
-				      bool *flush,
				      bool *dropped_lock)
 {
	int r = 0;
@@ -6136,24 +6184,18 @@ static int prepare_to_split_huge_page(struct kvm *kvm,
	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
		goto drop_lock;

-	*spp = kvm_mmu_alloc_direct_sp_for_split(true);
+	r = alloc_memory_for_split(kvm, spp, true);
	if (r)
		goto drop_lock;

	return 0;

drop_lock:
-	if (*flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
-
-	*flush = false;
	*dropped_lock = true;

	write_unlock(&kvm->mmu_lock);
	cond_resched();
-	*spp = kvm_mmu_alloc_direct_sp_for_split(false);
-	if (!*spp)
-		r = -ENOMEM;
+	r = alloc_memory_for_split(kvm, spp, false);
	write_lock(&kvm->mmu_lock);

	return r;
@@ -6196,10 +6238,10 @@ static struct kvm_mmu_page *kvm_mmu_get_sp_for_split(struct kvm *kvm,

 static int kvm_mmu_split_huge_page(struct kvm *kvm,
				   const struct kvm_memory_slot *slot,
-				   u64 *huge_sptep, struct kvm_mmu_page **spp,
-				   bool *flush)
+				   u64 *huge_sptep, struct kvm_mmu_page **spp)

 {
+	struct kvm_mmu_memory_cache *cache;
	struct kvm_mmu_page *split_sp;
	u64 huge_spte, split_spte;
	int split_level, index;
@@ -6212,9 +6254,9 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,
		return -EOPNOTSUPP;

	/*
-	 * Since we did not allocate pte_list_desc_structs for the split, we
-	 * cannot add a new parent SPTE to parent_ptes. This should never happen
-	 * in practice though since this is a fresh SP.
+	 * We did not allocate an extra pte_list_desc struct to add huge_sptep
+	 * to split_sp->parent_ptes. An extra pte_list_desc struct should never
+	 * be necessary in practice though since split_sp is brand new.
	 *
	 * Note, this makes it safe to pass NULL to __link_shadow_page() below.
	 */
@@ -6225,6 +6267,7 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,
	split_level = split_sp->role.level;
	access = split_sp->role.access;

+	cache = &kvm->arch.huge_page_split_desc_cache;

	for (index = 0; index < PT64_ENT_PER_PAGE; index++) {
		split_sptep = &split_sp->spt[index];
@@ -6232,25 +6275,11 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,

		BUG_ON(is_shadow_present_pte(*split_sptep));

-		/*
-		 * Since we did not allocate pte_list_desc structs for the
-		 * split, we can't add a new SPTE that maps this GFN.
-		 * Skipping this SPTE means we're only partially mapping the
-		 * huge page, which means we'll need to flush TLBs before
-		 * dropping the MMU lock.
-		 *
-		 * Note, this make it safe to pass NULL to __rmap_add() below.
-		 */
-		if (gfn_to_rmap(split_gfn, split_level, slot)->val) {
-			*flush = true;
-			continue;
-		}
-
		split_spte = make_huge_page_split_spte(
				huge_spte, split_level + 1, index, access);

		mmu_spte_set(split_sptep, split_spte);
-		__rmap_add(kvm, NULL, slot, split_sptep, split_gfn, access);
+		__rmap_add(kvm, cache, slot, split_sptep, split_gfn, access);
	}

	/*
@@ -6258,9 +6287,7 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,
	 * page table. Since we are making this change without a TLB flush vCPUs
	 * will see a mix of the split mappings and the original huge mapping,
	 * depending on what's currently in their TLB. This is fine from a
-	 * correctness standpoint since the translation will either be identical
-	 * or non-present. To account for non-present mappings, the TLB will be
-	 * flushed prior to dropping the MMU lock.
+	 * correctness standpoint since the translation will be identical.
	 */
	__drop_large_spte(kvm, huge_sptep, false);
	__link_shadow_page(NULL, huge_sptep, split_sp);
@@ -6297,7 +6324,6 @@ static bool rmap_try_split_huge_pages(struct kvm *kvm,
	struct kvm_mmu_page *sp = NULL;
	struct rmap_iterator iter;
	u64 *huge_sptep, spte;
-	bool flush = false;
	bool dropped_lock;
	int level;
	gfn_t gfn;
@@ -6312,7 +6338,7 @@ static bool rmap_try_split_huge_pages(struct kvm *kvm,
		level = sptep_to_sp(huge_sptep)->role.level;
		gfn = sptep_to_gfn(huge_sptep);

-		r = prepare_to_split_huge_page(kvm, slot, huge_sptep, &sp, &flush, &dropped_lock);
+		r = prepare_to_split_huge_page(kvm, slot, huge_sptep, &sp, &dropped_lock);
		if (r) {
			trace_kvm_mmu_split_huge_page(gfn, spte, level, r);
			break;
@@ -6321,7 +6347,7 @@ static bool rmap_try_split_huge_pages(struct kvm *kvm,
		if (dropped_lock)
			goto restart;

-		r = kvm_mmu_split_huge_page(kvm, slot, huge_sptep, &sp, &flush);
+		r = kvm_mmu_split_huge_page(kvm, slot, huge_sptep, &sp);

		trace_kvm_mmu_split_huge_page(gfn, spte, level, r);

@@ -6336,7 +6362,7 @@ static bool rmap_try_split_huge_pages(struct kvm *kvm,
	if (sp)
		kvm_mmu_free_shadow_page(sp);

-	return flush;
+	return false;
 }

 static void kvm_rmap_try_split_huge_pages(struct kvm *kvm,
@@ -6344,7 +6370,6 @@ static void kvm_rmap_try_split_huge_pages(struct kvm *kvm,
					  gfn_t start, gfn_t end,
					  int target_level)
 {
-	bool flush;
	int level;

	/*
@@ -6352,21 +6377,15 @@ static void kvm_rmap_try_split_huge_pages(struct kvm *kvm,
	 * down to the target level. This ensures pages are recursively split
	 * all the way to the target level. There's no need to split pages
	 * already at the target level.
-	 *
-	 * Note that TLB flushes must be done before dropping the MMU lock since
-	 * rmap_try_split_huge_pages() may partially split any given huge page,
-	 * i.e. it may effectively unmap (make non-present) a portion of the
-	 * huge page.
	 */
	for (level = KVM_MAX_HUGEPAGE_LEVEL; level > target_level; level--) {
-		flush = slot_handle_level_range(kvm, slot,
-				rmap_try_split_huge_pages,
-				level, level, start, end - 1,
-				true, flush);
+		slot_handle_level_range(kvm, slot,
+				rmap_try_split_huge_pages,
+				level, level, start, end - 1,
+				true, false);
	}

-	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+	kvm_mmu_free_memory_cache(&kvm->arch.huge_page_split_desc_cache);
 }

 /* Must be called with the mmu_lock held in write-mode. */
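The control flow in prepare_to_split_huge_page() boils down to a common
kernel pattern: attempt a non-sleeping allocation while holding the lock,
and on failure drop the lock, allocate with blocking semantics, and make
the caller restart. A distilled sketch of just that pattern (struct state
and alloc_resources() are hypothetical stand-ins; this is not the exact
KVM code):

    /*
     * Sketch: allocate under a rwlock, falling back to a blocking
     * allocation with the lock dropped. On return with *dropped_lock
     * set, the caller must revalidate and restart its iteration,
     * since the world may have changed while the lock was released.
     */
    static int alloc_with_lock_dropping(struct kvm *kvm, struct state *st,
                                        bool *dropped_lock)
    {
            int r;

            *dropped_lock = false;

            /* First try without sleeping so the MMU lock can stay held. */
            r = alloc_resources(st, GFP_NOWAIT | __GFP_ACCOUNT);
            if (!r)
                    return 0;

            /* Fall back to a blocking allocation outside the lock. */
            *dropped_lock = true;
            write_unlock(&kvm->mmu_lock);
            cond_resched();
            r = alloc_resources(st, GFP_KERNEL | __GFP_ACCOUNT);
            write_lock(&kvm->mmu_lock);

            return r;
    }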
From patchwork Fri Mar 11 00:25:26 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604205
Subject: [PATCH v2 24/26] KVM: x86/mmu: Split huge pages aliased by multiple SPTEs
From: David Matlack
To: Paolo Bonzini
Date: Fri, 11 Mar 2022 00:25:26 +0000
Message-Id: <20220311002528.2230172-25-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
The existing huge page splitting code bails if it encounters a huge
page that is aliased by another SPTE that has already been split
(either due to NX huge pages or eager page splitting). Extend the huge
page splitting code to also handle such aliases.

The thing we have to be careful about is dealing with what's already in
the lower level page table. If eager page splitting were the only
operation that split huge pages, this would be fine. However huge pages
can also be split by NX huge pages. This means the lower level page
table may only be partially filled in and may point to even lower level
page tables that are partially filled in. We can fill in the rest of
the page table, but dealing with the lower level page tables would be
too complex.

To handle this we flush TLBs after dropping the huge SPTE whenever we
are about to install a lower level page table that was partially filled
in (*). We can skip the TLB flush if the lower level page table was
empty (no aliasing) or identical to what we were already going to
populate it with (aliased huge page that was just eagerly split).

(*) This TLB flush could probably be delayed until we're about to drop
the MMU lock, which would also let us batch flushes for multiple
splits. However such scenarios should be rare in practice (a huge page
must be aliased in multiple SPTEs and have been split for NX Huge Pages
in only some of them). Flushing immediately is simpler to plumb and
also reduces the chances of tripping over a CPU bug (e.g. see iTLB
multi-hit).

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h |  5 ++-
 arch/x86/kvm/mmu/mmu.c          | 73 +++++++++++++++------------------
 2 files changed, 36 insertions(+), 42 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 00a5c0bcc2eb..275d00528805 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1245,9 +1245,10 @@ struct kvm_arch {
	 * Memory cache used to allocate pte_list_desc structs while splitting
	 * huge pages. In the worst case, to split one huge page we need 512
	 * pte_list_desc structs to add each new lower level leaf sptep to the
-	 * memslot rmap.
+	 * memslot rmap plus 1 to extend the parent_ptes rmap of the new lower
+	 * level page table.
	 */
-#define HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY 512
+#define HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY 513
	__DEFINE_KVM_MMU_MEMORY_CACHE(huge_page_split_desc_cache,
				      HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY);
 };

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 95b8e2ef562f..68785b422a08 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6208,6 +6208,7 @@ static struct kvm_mmu_page *kvm_mmu_get_sp_for_split(struct kvm *kvm,
 {
	struct kvm_mmu_page *split_sp;
	union kvm_mmu_page_role role;
+	bool created = false;
	unsigned int access;
	gfn_t gfn;

@@ -6220,25 +6221,21 @@ static struct kvm_mmu_page *kvm_mmu_get_sp_for_split(struct kvm *kvm,
	 */
	role = kvm_mmu_child_role(huge_sptep, true, access);
	split_sp = kvm_mmu_find_direct_sp(kvm, gfn, role);
-
-	/*
-	 * Opt not to split if the lower-level SP already exists. This requires
-	 * more complex handling as the SP may be already partially filled in
-	 * and may need extra pte_list_desc structs to update parent_ptes.
-	 */
	if (split_sp)
-		return NULL;
+		goto out;

+	created = true;
	swap(split_sp, *spp);
	init_shadow_page(kvm, split_sp, slot, gfn, role);

-	trace_kvm_mmu_get_page(split_sp, true);
-
+out:
+	trace_kvm_mmu_get_page(split_sp, created);
	return split_sp;
 }

-static int kvm_mmu_split_huge_page(struct kvm *kvm,
-				   const struct kvm_memory_slot *slot,
-				   u64 *huge_sptep, struct kvm_mmu_page **spp)
+static void kvm_mmu_split_huge_page(struct kvm *kvm,
+				    const struct kvm_memory_slot *slot,
+				    u64 *huge_sptep, struct kvm_mmu_page **spp)

 {
	struct kvm_mmu_memory_cache *cache;
@@ -6246,22 +6243,11 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,
	u64 huge_spte, split_spte;
	int split_level, index;
	unsigned int access;
+	bool flush = false;
	u64 *split_sptep;
	gfn_t split_gfn;

	split_sp = kvm_mmu_get_sp_for_split(kvm, slot, huge_sptep, spp);
-	if (!split_sp)
-		return -EOPNOTSUPP;
-
-	/*
-	 * We did not allocate an extra pte_list_desc struct to add huge_sptep
-	 * to split_sp->parent_ptes. An extra pte_list_desc struct should never
-	 * be necessary in practice though since split_sp is brand new.
-	 *
-	 * Note, this makes it safe to pass NULL to __link_shadow_page() below.
-	 */
-	if (WARN_ON_ONCE(split_sp->parent_ptes.val))
-		return -EINVAL;

	huge_spte = READ_ONCE(*huge_sptep);

@@ -6273,7 +6259,20 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,
		split_sptep = &split_sp->spt[index];
		split_gfn = kvm_mmu_page_get_gfn(split_sp, index);

-		BUG_ON(is_shadow_present_pte(*split_sptep));
+		/*
+		 * split_sp may have populated page table entries if this huge
+		 * page is aliased in multiple shadow page table entries. We
+		 * know the existing SP will be mapping the same GFN->PFN
+		 * translation since this is a direct SP. However, the SPTE may
+		 * point to an even lower level page table that may only be
+		 * partially filled in (e.g. for NX huge pages). In other words,
+		 * we may be unmapping a portion of the huge page, which
+		 * requires a TLB flush.
+		 */
+		if (is_shadow_present_pte(*split_sptep)) {
+			flush |= !is_last_spte(*split_sptep, split_level);
+			continue;
+		}

		split_spte = make_huge_page_split_spte(
				huge_spte, split_level + 1, index, access);
@@ -6284,15 +6283,12 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,

	/*
	 * Replace the huge spte with a pointer to the populated lower level
-	 * page table. Since we are making this change without a TLB flush vCPUs
-	 * will see a mix of the split mappings and the original huge mapping,
-	 * depending on what's currently in their TLB. This is fine from a
-	 * correctness standpoint since the translation will be identical.
+	 * page table. If the lower-level page table identically maps the huge
+	 * page, there's no need for a TLB flush. Otherwise, flush TLBs after
+	 * dropping the huge page and before installing the shadow page table.
	 */
-	__drop_large_spte(kvm, huge_sptep, false);
-	__link_shadow_page(NULL, huge_sptep, split_sp);
-
-	return 0;
+	__drop_large_spte(kvm, huge_sptep, flush);
+	__link_shadow_page(cache, huge_sptep, split_sp);
 }

 static bool should_split_huge_page(u64 *huge_sptep)
@@ -6347,16 +6343,13 @@ static bool rmap_try_split_huge_pages(struct kvm *kvm,
		if (dropped_lock)
			goto restart;

-		r = kvm_mmu_split_huge_page(kvm, slot, huge_sptep, &sp);
-
-		trace_kvm_mmu_split_huge_page(gfn, spte, level, r);
-
		/*
-		 * If splitting is successful we must restart the iterator
-		 * because huge_sptep has just been removed from it.
+		 * After splitting we must restart the iterator because
+		 * huge_sptep has just been removed from it.
		 */
-		if (!r)
-			goto restart;
+		kvm_mmu_split_huge_page(kvm, slot, huge_sptep, &sp);
+		trace_kvm_mmu_split_huge_page(gfn, spte, level, 0);
+		goto restart;
	}

	if (sp)
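The per-entry handling above reduces to a small predicate: an existing
last-level SPTE in the target SP already maps the huge page identically and
can be left alone, while an existing pointer to an even lower-level table may
be only partially filled in, so installing it can effectively unmap part of
the huge page and forces a flush. A condensed sketch of that decision
(entry_needs_flush is a hypothetical helper; in the patch this logic lives
inline in kvm_mmu_split_huge_page()):

    /*
     * Sketch: returns true if keeping this pre-existing entry while
     * replacing the huge SPTE will require a TLB flush.
     */
    static bool entry_needs_flush(u64 spte, int split_level)
    {
            /* Empty entry: we fill it in ourselves, no flush needed. */
            if (!is_shadow_present_pte(spte))
                    return false;

            /*
             * A present leaf SPTE in a direct SP maps the same GFN->PFN
             * translation as the huge page, so it can stay as-is.
             */
            if (is_last_spte(spte, split_level))
                    return false;

            /*
             * A pointer to a lower-level table may be partially filled in
             * (e.g. by NX huge page splitting), which can unmap part of
             * the huge page: flush.
             */
            return true;
    }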
From patchwork Fri Mar 11 00:25:27 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604206
Subject: [PATCH v2 25/26] KVM: x86/mmu: Drop NULL pte_list_desc_cache fallback
From: David Matlack
To: Paolo Bonzini
Date: Fri, 11 Mar 2022 00:25:27 +0000
Message-Id: <20220311002528.2230172-26-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Now that the eager page splitting code no longer passes in NULL cache
pointers we can get rid of the debug WARN_ON() and allocation fallback.

While here, also drop the helper function mmu_alloc_pte_list_desc() as
it no longer serves any purpose.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 68785b422a08..d2ffebb659e0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -725,16 +725,6 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }

-static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_mmu_memory_cache *cache)
-{
-	static const gfp_t gfp_nocache = GFP_ATOMIC | __GFP_ACCOUNT | __GFP_ZERO;
-
-	if (WARN_ON_ONCE(!cache))
-		return kmem_cache_alloc(pte_list_desc_cache, gfp_nocache);
-
-	return kvm_mmu_memory_cache_alloc(cache);
-}
-
 static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
 {
	kmem_cache_free(pte_list_desc_cache, pte_list_desc);
@@ -914,7 +904,7 @@ static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
		rmap_head->val = (unsigned long)spte;
	} else if (!(rmap_head->val & 1)) {
		rmap_printk("%p %llx 1->many\n", spte, *spte);
-		desc = mmu_alloc_pte_list_desc(cache);
+		desc = kvm_mmu_memory_cache_alloc(cache);
		desc->sptes[0] = (u64 *)rmap_head->val;
		desc->sptes[1] = spte;
		desc->spte_count = 2;
@@ -926,7 +916,7 @@ static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
		while (desc->spte_count == PTE_LIST_EXT) {
			count += PTE_LIST_EXT;
			if (!desc->more) {
-				desc->more = mmu_alloc_pte_list_desc(cache);
+				desc->more = kvm_mmu_memory_cache_alloc(cache);
				desc = desc->more;
				desc->spte_count = 0;
				break;
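With the GFP_ATOMIC fallback gone, pte_list_add() depends on the invariant
that its cache was topped up before the MMU lock was taken. A hypothetical
caller sketch of that invariant (map_example and the top-up amount are
illustrative only; the real call sites and minimums differ):

    /*
     * Sketch: top up the pte_list_desc cache outside the MMU lock so
     * that allocations inside pte_list_add() can never fail or sleep.
     */
    static int map_example(struct kvm_vcpu *vcpu)
    {
            struct kvm_mmu_memory_cache *cache =
                    &vcpu->arch.mmu_pte_list_desc_cache;
            int r;

            /* Outside the MMU lock: may sleep and reclaim. */
            r = kvm_mmu_topup_memory_cache(cache, PT64_ROOT_MAX_LEVEL);
            if (r)
                    return r;

            write_lock(&vcpu->kvm->mmu_lock);
            /* ... pte_list_add(cache, spte, rmap_head) draws from the cache ... */
            write_unlock(&vcpu->kvm->mmu_lock);
            return 0;
    }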
From patchwork Fri Mar 11 00:25:28 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1604207
Subject: [PATCH v2 26/26] KVM: selftests: Map x86_64 guest virtual memory with huge pages
From: David Matlack
To: Paolo Bonzini
Date: Fri, 11 Mar 2022 00:25:28 +0000
Message-Id: <20220311002528.2230172-27-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>

Override virt_map() in x86_64 selftests to use the largest page size
possible when mapping guest virtual memory. This enables testing eager
page splitting with shadow paging (e.g. kvm_intel.ept=N), as it allows
KVM to shadow guest memory with huge pages.
Signed-off-by: David Matlack
---
 .../selftests/kvm/include/x86_64/processor.h |  6 ++++
 tools/testing/selftests/kvm/lib/kvm_util.c   |  4 +--
 .../selftests/kvm/lib/x86_64/processor.c     | 31 +++++++++++++++++++
 3 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 37db341d4cc5..efb228d2fbf7 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -470,6 +470,12 @@ enum x86_page_size {
	X86_PAGE_SIZE_2M,
	X86_PAGE_SIZE_1G,
 };
+
+static inline size_t page_size_bytes(enum x86_page_size page_size)
+{
+	return 1UL << (page_size * 9 + 12);
+}
+
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
		   enum x86_page_size page_size);

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1665a220abcb..60198587236d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1432,8 +1432,8 @@ vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm)
  * Within the VM given by @vm, creates a virtual translation for
  * @npages starting at @vaddr to the page range starting at @paddr.
  */
-void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
-	      unsigned int npages)
+void __weak virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		     unsigned int npages)
 {
	size_t page_size = vm->page_size;
	size_t size = npages * page_size;

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..7df84292d5de 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -282,6 +282,37 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
	__virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K);
 }

+void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, unsigned int npages)
+{
+	size_t size = (size_t) npages * vm->page_size;
+	size_t vend = vaddr + size;
+	enum x86_page_size page_size;
+	size_t stride;
+
+	TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow");
+	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
+
+	/*
+	 * Map the region with all 1G pages if possible, falling back to all
+	 * 2M pages, and finally all 4K pages. This could be improved to use
+	 * a mix of page sizes so that more of the region is mapped with large
+	 * pages.
+	 */
+	for (page_size = X86_PAGE_SIZE_1G; page_size >= X86_PAGE_SIZE_4K; page_size--) {
+		stride = page_size_bytes(page_size);
+
+		if (!(vaddr % stride) && !(paddr % stride) && !(size % stride))
+			break;
+	}
+
+	TEST_ASSERT(page_size >= X86_PAGE_SIZE_4K,
+		    "Cannot map unaligned region: vaddr 0x%lx paddr 0x%lx npages 0x%x\n",
+		    vaddr, paddr, npages);
+
+	for (; vaddr < vend; vaddr += stride, paddr += stride)
+		__virt_pg_map(vm, vaddr, paddr, page_size);
+}
+
 static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
						       uint64_t vaddr)
 {
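The page_size_bytes() helper works because each x86 paging level covers 9
more address bits than the level below it: with the enum values 0 (4K),
1 (2M), and 2 (1G), 1UL << (page_size * 9 + 12) yields 4 KiB, 2 MiB, and
1 GiB respectively. A small standalone check of the helper and of the
alignment fallback in virt_map() (a hypothetical userspace test, not part
of the patch; the example address is arbitrary):

    #include <assert.h>
    #include <stddef.h>

    enum x86_page_size { X86_PAGE_SIZE_4K, X86_PAGE_SIZE_2M, X86_PAGE_SIZE_1G };

    static size_t page_size_bytes(enum x86_page_size page_size)
    {
            return 1UL << (page_size * 9 + 12);
    }

    int main(void)
    {
            /* 0*9+12 = 12 -> 4 KiB; 1*9+12 = 21 -> 2 MiB; 2*9+12 = 30 -> 1 GiB. */
            assert(page_size_bytes(X86_PAGE_SIZE_4K) == 4096UL);
            assert(page_size_bytes(X86_PAGE_SIZE_2M) == 2UL * 1024 * 1024);
            assert(page_size_bytes(X86_PAGE_SIZE_1G) == 1UL * 1024 * 1024 * 1024);

            /*
             * A 4 MiB region at 0x40200000 (2 MiB-aligned, not 1 GiB-aligned)
             * fails the 1G stride check but passes the 2M one, so virt_map()
             * would fall back from 1G to 2M pages.
             */
            size_t vaddr = 0x40200000UL, size = 4UL * 1024 * 1024;
            assert(vaddr % page_size_bytes(X86_PAGE_SIZE_1G) != 0);
            assert(vaddr % page_size_bytes(X86_PAGE_SIZE_2M) == 0 &&
                   size % page_size_bytes(X86_PAGE_SIZE_2M) == 0);
            return 0;
    }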