From patchwork Fri Apr 1 17:55:32 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612330
Date: Fri, 1 Apr 2022 17:55:32 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-2-dmatlack@google.com>
References: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 01/23] KVM: x86/mmu: Optimize MMU page cache lookup for all direct SPs
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
    Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
    "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
    "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
    "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
    Peter Feiner, David Matlack

Commit fb58a9c345f6 ("KVM: x86/mmu: Optimize MMU page cache lookup for
fully direct MMUs") skipped the unsync checks and write flood clearing
for fully direct MMUs. We can extend this further to skip the checks
for all direct shadow pages. Direct shadow pages in indirect MMUs
(i.e. shadow paging) are used when shadowing a guest huge page with
smaller pages. Such direct shadow pages, like their counterparts in
fully direct MMUs, are never marked unsync and never have a non-zero
write-flooding count.

Checking sp->role.direct also generates better code than checking
direct_map because, due to register pressure, direct_map has to get
shoved onto the stack and then pulled back off.

No functional change intended.

Reviewed-by: Sean Christopherson
Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1361eb4599b4..dbfda133adbe 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2034,7 +2034,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
                                              int direct,
                                              unsigned int access)
 {
-        bool direct_mmu = vcpu->arch.mmu->direct_map;
         union kvm_mmu_page_role role;
         struct hlist_head *sp_list;
         unsigned quadrant;
@@ -2075,7 +2074,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
                         continue;
                 }

-                if (direct_mmu)
+                /* unsync and write-flooding only apply to indirect SPs. */
+                if (sp->role.direct)
                         goto trace_get_page;

                 if (sp->unsync) {

From patchwork Fri Apr 1 17:55:33 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612331
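Aside (illustrative only, not part of the patch): a minimal standalone
sketch of why the sp->role.direct check is cheap. The types below are
hypothetical simplifications of kvm_mmu_page_role; the point is that
"direct" is a bit in the role word the lookup loop already compares, so
testing it needs no extra load, unlike a separate local bool that may
spill to the stack under register pressure.

  /* Hypothetical, simplified stand-ins for the kernel types. */
  union page_role {
          unsigned int word;
          struct {
                  unsigned int level  : 4;
                  unsigned int direct : 1;
          };
  };

  struct shadow_page {
          union page_role role;
          int unsync;
  };

  /* The role word is already in a register for the lookup comparison,
   * so this compiles down to a single bit test on that register. */
  static int skip_unsync_checks(const struct shadow_page *sp)
  {
          return sp->role.direct;
  }
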
Date: Fri, 1 Apr 2022 17:55:33 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-3-dmatlack@google.com>
References: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 02/23] KVM: x86/mmu: Use a bool for direct
From: David Matlack
To: Paolo Bonzini

The parameter "direct" can either be true or false, and all of the
callers pass in a bool variable or true/false literal, so just use the
type bool.

No functional change intended.

Signed-off-by: David Matlack
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dbfda133adbe..1c8d157c097b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1706,7 +1706,7 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
         mmu_spte_clear_no_track(parent_pte);
 }

-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, bool direct)
 {
         struct kvm_mmu_page *sp;

@@ -2031,7 +2031,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
                                              gfn_t gfn,
                                              gva_t gaddr,
                                              unsigned level,
-                                             int direct,
+                                             bool direct,
                                              unsigned int access)
 {
         union kvm_mmu_page_role role;

From patchwork Fri Apr 1 17:55:34 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612332
Date: Fri, 1 Apr 2022 17:55:34 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-4-dmatlack@google.com>
References: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 03/23] KVM: x86/mmu: Derive shadow MMU page role from parent
From: David Matlack
To: Paolo Bonzini

Instead of computing the shadow page role from scratch for every new
page, we can derive most of the information from the parent shadow
page. This avoids redundant calculations and reduces the number of
parameters to kvm_mmu_get_page().

Preemptively split out the role calculation into a separate function
for use in a following commit.

No functional change intended.

Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c         | 96 +++++++++++++++++++++++-----------
 arch/x86/kvm/mmu/paging_tmpl.h |  9 ++--
 2 files changed, 71 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1c8d157c097b..8253d68cc30b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2027,30 +2027,14 @@ static void clear_sp_write_flooding_count(u64 *spte)
         __clear_sp_write_flooding_count(sptep_to_sp(spte));
 }

-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
-                                             gfn_t gfn,
-                                             gva_t gaddr,
-                                             unsigned level,
-                                             bool direct,
-                                             unsigned int access)
+static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+                                             union kvm_mmu_page_role role)
 {
-        union kvm_mmu_page_role role;
         struct hlist_head *sp_list;
-        unsigned quadrant;
         struct kvm_mmu_page *sp;
         int collisions = 0;
         LIST_HEAD(invalid_list);

-        role = vcpu->arch.mmu->mmu_role.base;
-        role.level = level;
-        role.direct = direct;
-        role.access = access;
-        if (role.has_4_byte_gpte) {
-                quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
-                quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
-                role.quadrant = quadrant;
-        }
-
         sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
         for_each_valid_sp(vcpu->kvm, sp, sp_list) {
                 if (sp->gfn != gfn) {
@@ -2068,7 +2052,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
                  * Unsync pages must not be left as is, because the new
                  * upper-level page will be write-protected.
                  */
-                if (level > PG_LEVEL_4K && sp->unsync)
+                if (role.level > PG_LEVEL_4K && sp->unsync)
                         kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
                                                  &invalid_list);
                 continue;
@@ -2107,14 +2091,14 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,

         ++vcpu->kvm->stat.mmu_cache_miss;

-        sp = kvm_mmu_alloc_page(vcpu, direct);
+        sp = kvm_mmu_alloc_page(vcpu, role.direct);

         sp->gfn = gfn;
         sp->role = role;
         hlist_add_head(&sp->hash_link, sp_list);
-        if (!direct) {
+        if (!role.direct) {
                 account_shadowed(vcpu->kvm, sp);
-                if (level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
+                if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
                         kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
         }
         trace_kvm_mmu_get_page(sp, true);
@@ -2126,6 +2110,51 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
         return sp;
 }

+static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, u32 access)
+{
+        struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
+        union kvm_mmu_page_role role;
+
+        role = parent_sp->role;
+        role.level--;
+        role.access = access;
+        role.direct = direct;
+
+        /*
+         * If the guest has 4-byte PTEs then that means it's using 32-bit,
+         * 2-level, non-PAE paging. KVM shadows such guests using 4 PAE page
+         * directories, each mapping 1/4 of the guest's linear address space
+         * (1GiB). The shadow pages for those 4 page directories are
+         * pre-allocated and assigned a separate quadrant in their role.
+         *
+         * Since we are allocating a child shadow page and there are only 2
+         * levels, this must be a PG_LEVEL_4K shadow page. Here the quadrant
+         * will either be 0 or 1 because it maps 1/2 of the address space mapped
+         * by the guest's PG_LEVEL_4K page table (or 4MiB huge page) that it
+         * is shadowing. In this case, the quadrant can be derived by the index
+         * of the SPTE that points to the new child shadow page in the page
+         * directory (parent_sp). Specifically, every 2 SPTEs in parent_sp
+         * shadow one half of a guest's page table (or 4MiB huge page) so the
+         * quadrant is just the parity of the index of the SPTE.
+         */
+        if (role.has_4_byte_gpte) {
+                WARN_ON_ONCE(role.level != PG_LEVEL_4K);
+                role.quadrant = (sptep - parent_sp->spt) % 2;
+        }
+
+        return role;
+}
+
+static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
+                                                 u64 *sptep, gfn_t gfn,
+                                                 bool direct, u32 access)
+{
+        union kvm_mmu_page_role role;
+
+        role = kvm_mmu_child_role(sptep, direct, access);
+        return kvm_mmu_get_page(vcpu, gfn, role);
+}
+
 static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
                                         struct kvm_vcpu *vcpu, hpa_t root,
                                         u64 addr)
@@ -2930,8 +2959,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
                 if (is_shadow_present_pte(*it.sptep))
                         continue;

-                sp = kvm_mmu_get_page(vcpu, base_gfn, it.addr,
-                                      it.level - 1, true, ACC_ALL);
+                sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);

                 link_shadow_page(vcpu, it.sptep, sp);
                 if (fault->is_tdp && fault->huge_page_disallowed &&
@@ -3313,12 +3341,21 @@ static int mmu_check_root(struct kvm_vcpu *vcpu, gfn_t root_gfn)
         return ret;
 }

-static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
+static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
                             u8 level, bool direct)
 {
+        union kvm_mmu_page_role role;
         struct kvm_mmu_page *sp;

-        sp = kvm_mmu_get_page(vcpu, gfn, gva, level, direct, ACC_ALL);
+        role = vcpu->arch.mmu->mmu_role.base;
+        role.level = level;
+        role.direct = direct;
+        role.access = ACC_ALL;
+
+        if (role.has_4_byte_gpte)
+                role.quadrant = quadrant;
+
+        sp = kvm_mmu_get_page(vcpu, gfn, role);
         ++sp->root_count;

         return __pa(sp->spt);
@@ -3352,8 +3389,8 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
                 for (i = 0; i < 4; ++i) {
                         WARN_ON_ONCE(IS_VALID_PAE_ROOT(mmu->pae_root[i]));

-                        root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT),
-                                              i << 30, PT32_ROOT_LEVEL, true);
+                        root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT), i,
+                                              PT32_ROOT_LEVEL, true);
                         mmu->pae_root[i] = root | PT_PRESENT_MASK |
                                            shadow_me_mask;
                 }
@@ -3522,8 +3559,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
                         root_gfn = pdptrs[i] >> PAGE_SHIFT;
                 }

-                root = mmu_alloc_root(vcpu, root_gfn, i << 30,
-                                      PT32_ROOT_LEVEL, false);
+                root = mmu_alloc_root(vcpu, root_gfn, i, PT32_ROOT_LEVEL, false);
                 mmu->pae_root[i] = root | pm_mask;
         }

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 8621188b46df..729394de2658 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -683,8 +683,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
                 if (!is_shadow_present_pte(*it.sptep)) {
                         table_gfn = gw->table_gfn[it.level - 2];
                         access = gw->pt_access[it.level - 2];
-                        sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
-                                              it.level-1, false, access);
+                        sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
+                                                  false, access);
+
                         /*
                          * We must synchronize the pagetable before linking it
                          * because the guest doesn't need to flush tlb when
@@ -740,8 +741,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
                 drop_large_spte(vcpu, it.sptep);

                 if (!is_shadow_present_pte(*it.sptep)) {
-                        sp = kvm_mmu_get_page(vcpu, base_gfn, fault->addr,
-                                              it.level - 1, true, direct_access);
+                        sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn,
+                                                  true, direct_access);
                         link_shadow_page(vcpu, it.sptep, sp);
                         if (fault->huge_page_disallowed &&
                             fault->req_level >= it.level)
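
Aside (illustrative only, not part of the patch): a small standalone
program, with invented names, acting out the quadrant-parity rule from
the kvm_mmu_child_role() comment. A PAE shadow page directory holds 512
8-byte SPTEs and each child shadow page maps 2MiB, while the guest's
4-byte-GPTE page table (or 4MiB huge page) covers 4MiB, so adjacent
even/odd parent SPTEs shadow its two halves.

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint64_t spt[512];   /* hypothetical parent PAE page directory */
          uint64_t *sptep;
          int i;

          /* The quadrant is the parity of the SPTE's index in the
           * parent, mirroring (sptep - parent_sp->spt) % 2 above. */
          for (i = 0; i < 6; i++) {
                  sptep = &spt[i];
                  printf("SPTE index %d -> quadrant %d\n",
                         i, (int)((sptep - spt) % 2));
          }
          return 0;
  }
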

From patchwork Fri Apr 1 17:55:35 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612333
Date: Fri, 1 Apr 2022 17:55:35 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-5-dmatlack@google.com>
References: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 04/23] KVM: x86/mmu: Decompose kvm_mmu_get_page() into separate functions
From: David Matlack
To: Paolo Bonzini

Decompose kvm_mmu_get_page() into separate helper functions to increase
readability and prepare for allocating shadow pages without a vcpu
pointer.

Specifically, pull the guts of kvm_mmu_get_page() into 3 helper
functions:

__kvm_mmu_find_shadow_page() -
  Walks the page hash checking for any existing mmu pages that match
  the given gfn and role. Does not attempt to synchronize the page if
  it is unsync.

kvm_mmu_find_shadow_page() -
  Wraps __kvm_mmu_find_shadow_page() and handles syncing if necessary.

kvm_mmu_new_shadow_page() -
  Allocates and initializes an entirely new kvm_mmu_page. This
  currently requires a vcpu pointer for allocation and looking up the
  memslot, but that will be removed in a future commit.

Note, kvm_mmu_new_shadow_page() is temporary and will be removed in a
subsequent commit. The name uses "new" rather than the more typical
"alloc" to avoid clashing with the existing kvm_mmu_alloc_page().

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c         | 124 +++++++++++++++++++++++----------
 arch/x86/kvm/mmu/paging_tmpl.h |   5 +-
 arch/x86/kvm/mmu/spte.c        |   5 +-
 3 files changed, 94 insertions(+), 40 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8253d68cc30b..8fdddd25029d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2027,16 +2027,25 @@ static void clear_sp_write_flooding_count(u64 *spte)
         __clear_sp_write_flooding_count(sptep_to_sp(spte));
 }

-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
-                                             union kvm_mmu_page_role role)
+/*
+ * Searches for an existing SP for the given gfn and role. Makes no attempt to
+ * sync the SP if it is marked unsync.
+ *
+ * If creating an upper-level page table, zaps unsynced pages for the same
+ * gfn and adds them to the invalid_list. It's the callers responsibility
+ * to call kvm_mmu_commit_zap_page() on invalid_list.
+ */
+static struct kvm_mmu_page *__kvm_mmu_find_shadow_page(struct kvm *kvm,
+                                                       gfn_t gfn,
+                                                       union kvm_mmu_page_role role,
+                                                       struct list_head *invalid_list)
 {
         struct hlist_head *sp_list;
         struct kvm_mmu_page *sp;
         int collisions = 0;
-        LIST_HEAD(invalid_list);

-        sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
-        for_each_valid_sp(vcpu->kvm, sp, sp_list) {
+        sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
+        for_each_valid_sp(kvm, sp, sp_list) {
                 if (sp->gfn != gfn) {
                         collisions++;
                         continue;
@@ -2053,60 +2062,103 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
                  * upper-level page will be write-protected.
                  */
                 if (role.level > PG_LEVEL_4K && sp->unsync)
-                        kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
-                                                 &invalid_list);
+                        kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
+
                 continue;
                 }

-                /* unsync and write-flooding only apply to indirect SPs. */
-                if (sp->role.direct)
-                        goto trace_get_page;
+                /* Write-flooding is only tracked for indirect SPs. */
+                if (!sp->role.direct)
+                        __clear_sp_write_flooding_count(sp);

-                if (sp->unsync) {
-                        /*
-                         * The page is good, but is stale. kvm_sync_page does
-                         * get the latest guest state, but (unlike mmu_unsync_children)
-                         * it doesn't write-protect the page or mark it synchronized!
-                         * This way the validity of the mapping is ensured, but the
-                         * overhead of write protection is not incurred until the
-                         * guest invalidates the TLB mapping. This allows multiple
-                         * SPs for a single gfn to be unsync.
-                         *
-                         * If the sync fails, the page is zapped. If so, break
-                         * in order to rebuild it.
-                         */
-                        if (!kvm_sync_page(vcpu, sp, &invalid_list))
-                                break;
+                goto out;
+        }
+        sp = NULL;
+
+out:
+        if (collisions > kvm->stat.max_mmu_page_hash_collisions)
+                kvm->stat.max_mmu_page_hash_collisions = collisions;
+
+        return sp;
+}
+
+/*
+ * Looks up an existing SP for the given gfn and role if one exists. The
+ * return SP is guaranteed to be synced.
+ */
+static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
+                                                     gfn_t gfn,
+                                                     union kvm_mmu_page_role role)
+{
+        struct kvm_mmu_page *sp;
+        LIST_HEAD(invalid_list);
+
+        sp = __kvm_mmu_find_shadow_page(vcpu->kvm, gfn, role, &invalid_list);
+
+        if (sp && sp->unsync) {
+                /*
+                 * The page is good, but is stale. kvm_sync_page does
+                 * get the latest guest state, but (unlike mmu_unsync_children)
+                 * it doesn't write-protect the page or mark it synchronized!
+                 * This way the validity of the mapping is ensured, but the
+                 * overhead of write protection is not incurred until the
+                 * guest invalidates the TLB mapping. This allows multiple
+                 * SPs for a single gfn to be unsync.
+                 *
+                 * If the sync fails, the page is zapped and added to the
+                 * invalid_list.
+                 */
+                if (kvm_sync_page(vcpu, sp, &invalid_list)) {
                         WARN_ON(!list_empty(&invalid_list));
                         kvm_flush_remote_tlbs(vcpu->kvm);
+                } else {
+                        sp = NULL;
                 }
+        }

-                __clear_sp_write_flooding_count(sp);
+        kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+        return sp;
+}

-trace_get_page:
-                trace_kvm_mmu_get_page(sp, false);
-                goto out;
-        }
+static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
+                                                    gfn_t gfn,
+                                                    union kvm_mmu_page_role role)
+{
+        struct kvm_mmu_page *sp;
+        struct hlist_head *sp_list;

         ++vcpu->kvm->stat.mmu_cache_miss;

         sp = kvm_mmu_alloc_page(vcpu, role.direct);
-
         sp->gfn = gfn;
         sp->role = role;
+
+        sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
         hlist_add_head(&sp->hash_link, sp_list);
+
         if (!role.direct) {
                 account_shadowed(vcpu->kvm, sp);
                 if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
                         kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
         }
-        trace_kvm_mmu_get_page(sp, true);

-out:
-        kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
-        if (collisions > vcpu->kvm->stat.max_mmu_page_hash_collisions)
-                vcpu->kvm->stat.max_mmu_page_hash_collisions = collisions;
+        return sp;
+}
+
+static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+                                             union kvm_mmu_page_role role)
+{
+        struct kvm_mmu_page *sp;
+        bool created = false;
+
+        sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
+        if (!sp) {
+                created = true;
+                sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
+        }
+
+        trace_kvm_mmu_get_page(sp, created);

         return sp;
 }

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 729394de2658..db63b5377465 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -692,8 +692,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
                  * the gpte is changed from non-present to present.
                  * Otherwise, the guest may use the wrong mapping.
                  *
-                 * For PG_LEVEL_4K, kvm_mmu_get_page() has already
-                 * synchronized it transiently via kvm_sync_page().
+                 * For PG_LEVEL_4K, kvm_mmu_get_existing_sp() has
+                 * already synchronized it transiently via
+                 * kvm_sync_page().
                  *
                  * For higher level pagetable, we synchronize it via
                  * the slower mmu_sync_children(). If it needs to
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..d10189d9c877 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -150,8 +150,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                 /*
                  * Optimization: for pte sync, if spte was writable the hash
                  * lookup is unnecessary (and expensive). Write protection
-                 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
-                 * Same reasoning can be applied to dirty page accounting.
+                 * is responsibility of kvm_mmu_create_sp() and
+                 * kvm_mmu_sync_roots(). Same reasoning can be applied to dirty
+                 * page accounting.
                  */
                 if (is_writable_pte(old_spte))
                         goto out;
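
Aside (illustrative only, not part of the patch): a sketch of the
calling contract stated in the new __kvm_mmu_find_shadow_page() comment,
mirroring what kvm_mmu_find_shadow_page() does in the diff above. The
caller owns the invalid_list and must commit any pages queued on it.

  LIST_HEAD(invalid_list);
  struct kvm_mmu_page *sp;

  sp = __kvm_mmu_find_shadow_page(kvm, gfn, role, &invalid_list);
  /* ... sync sp, or allocate a new shadow page, as needed ... */
  kvm_mmu_commit_zap_page(kvm, &invalid_list);
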

From patchwork Fri Apr 1 17:55:36 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612334
Date: Fri, 1 Apr 2022 17:55:36 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-6-dmatlack@google.com>
References: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 05/23] KVM: x86/mmu: Rename shadow MMU functions that deal with shadow pages
From: David Matlack
To: Paolo Bonzini

Rename 3 functions:

  kvm_mmu_get_page()   -> kvm_mmu_get_shadow_page()
  kvm_mmu_alloc_page() -> kvm_mmu_alloc_shadow_page()
  kvm_mmu_free_page()  -> kvm_mmu_free_shadow_page()

This change makes it clear that these functions deal with shadow pages
rather than struct pages. Prefer "shadow_page" over the shorter "sp"
since these are core routines.

Acked-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8fdddd25029d..dc1825de0752 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1668,7 +1668,7 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
         percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }

-static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
+static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
         MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
         hlist_del(&sp->hash_link);
@@ -1706,7 +1706,8 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
         mmu_spte_clear_no_track(parent_pte);
 }

-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, bool direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
+                                                      bool direct)
 {
         struct kvm_mmu_page *sp;

@@ -2130,7 +2131,7 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,

         ++vcpu->kvm->stat.mmu_cache_miss;

-        sp = kvm_mmu_alloc_page(vcpu, role.direct);
+        sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
         sp->gfn = gfn;
         sp->role = role;
@@ -2146,8 +2147,9 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
         return sp;
 }

-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
-                                             union kvm_mmu_page_role role)
+static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
+                                                    gfn_t gfn,
+                                                    union kvm_mmu_page_role role)
 {
         struct kvm_mmu_page *sp;
         bool created = false;
@@ -2204,7 +2206,7 @@ static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
         union kvm_mmu_page_role role;

         role = kvm_mmu_child_role(sptep, direct, access);
-        return kvm_mmu_get_page(vcpu, gfn, role);
+        return kvm_mmu_get_shadow_page(vcpu, gfn, role);
 }

 static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
@@ -2480,7 +2482,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,

         list_for_each_entry_safe(sp, nsp, invalid_list, link) {
                 WARN_ON(!sp->role.invalid || sp->root_count);
-                kvm_mmu_free_page(sp);
+                kvm_mmu_free_shadow_page(sp);
         }
 }

@@ -3407,7 +3409,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
         if (role.has_4_byte_gpte)
                 role.quadrant = quadrant;

-        sp = kvm_mmu_get_page(vcpu, gfn, role);
+        sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
         ++sp->root_count;

         return __pa(sp->spt);

From patchwork Fri Apr 1 17:55:37 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612335
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Date: Fri, 1 Apr 2022 17:55:37 +0000
Message-Id: <20220401175554.1931568-7-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 06/23] KVM: x86/mmu: Pass memslot to kvm_mmu_new_shadow_page()

Passing the memslot to kvm_mmu_new_shadow_page() avoids the need for the
vCPU pointer when write-protecting indirect 4K shadow pages. This moves
us closer to being able to create new shadow pages during VM ioctls for
eager page splitting, where there is no vCPU pointer.

This change does not negatively impact "Populate memory time" for ept=Y
or ept=N configurations, since kvm_vcpu_gfn_to_memslot() caches the last
used slot. So even though we now look up the slot more often, it is a
very cheap check.

Opportunistically move the code to write-protect GFNs shadowed by
PG_LEVEL_4K shadow pages into account_shadowed() to reduce indentation
and consolidate the code. This also eliminates a memslot lookup.

No functional change intended.

Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dc1825de0752..abfb3e5d1372 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -793,16 +793,14 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn)
 	update_gfn_disallow_lpage_count(slot, gfn, -1);
 }
 
-static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
+static void account_shadowed(struct kvm *kvm,
+			     struct kvm_memory_slot *slot,
+			     struct kvm_mmu_page *sp)
 {
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *slot;
 	gfn_t gfn;
 
 	kvm->arch.indirect_shadow_pages++;
 	gfn = sp->gfn;
-	slots = kvm_memslots_for_spte_role(kvm, sp->role);
-	slot = __gfn_to_memslot(slots, gfn);
 
 	/* the non-leaf shadow pages are keeping readonly. */
 	if (sp->role.level > PG_LEVEL_4K)
@@ -810,6 +808,9 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 						    KVM_PAGE_TRACK_WRITE);
 
 	kvm_mmu_gfn_disallow_lpage(slot, gfn);
+
+	if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
+		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
 }
 
 void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -2123,6 +2124,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 }
 
 static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
+						    struct kvm_memory_slot *slot,
 						    gfn_t gfn,
 						    union kvm_mmu_page_role role)
 {
@@ -2138,11 +2140,8 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
 	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	hlist_add_head(&sp->hash_link, sp_list);
 
-	if (!role.direct) {
-		account_shadowed(vcpu->kvm, sp);
-		if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
-			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
-	}
+	if (!role.direct)
+		account_shadowed(vcpu->kvm, slot, sp);
 
 	return sp;
 }
@@ -2151,13 +2150,15 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 						    gfn_t gfn,
 						    union kvm_mmu_page_role role)
 {
+	struct kvm_memory_slot *slot;
 	struct kvm_mmu_page *sp;
 	bool created = false;
 
 	sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
 	if (!sp) {
 		created = true;
-		sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
+		slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+		sp = kvm_mmu_new_shadow_page(vcpu, slot, gfn, role);
 	}
 
 	trace_kvm_mmu_get_page(sp, created);
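
A note on the performance claim above: a last-used-slot cache is what
makes the repeated gfn-to-memslot lookups nearly free. Below is a
minimal sketch of that caching pattern, with illustrative types and
names rather than the actual KVM implementation:

#include <stddef.h>

typedef unsigned long long u64;

struct memslot {
	u64 base_gfn;
	u64 npages;
};

struct slot_table {
	struct memslot *slots;
	int nr_slots;
	struct memslot *last_used;	/* hit on repeated lookups of nearby gfns */
};

static struct memslot *gfn_to_memslot_cached(struct slot_table *t, u64 gfn)
{
	struct memslot *slot = t->last_used;
	int i;

	/* Fast path: the slot that matched last time usually matches again. */
	if (slot && gfn - slot->base_gfn < slot->npages)
		return slot;

	/* Slow path: full search (a linear scan here; KVM searches smarter). */
	for (i = 0; i < t->nr_slots; i++) {
		slot = &t->slots[i];
		if (gfn - slot->base_gfn < slot->npages) {
			t->last_used = slot;
			return slot;
		}
	}
	return NULL;
}

The range check `gfn - slot->base_gfn < slot->npages` relies on unsigned
wraparound to also reject gfn below base_gfn in a single comparison.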
From patchwork Fri Apr 1 17:55:38 2022
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Date: Fri, 1 Apr 2022 17:55:38 +0000
Message-Id: <20220401175554.1931568-8-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 07/23] KVM: x86/mmu: Separate shadow MMU sp allocation from initialization

Separate the code that allocates a new shadow page from the vCPU caches
from the code that initializes it. This is in preparation for creating
new shadow pages from VM ioctls for eager page splitting, where we do
not have access to the vCPU caches.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 38 ++++++++++++++++++--------------------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index abfb3e5d1372..421fcbc97f9e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1716,16 +1716,9 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
 	if (!direct)
 		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
-	/*
-	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
-	 * depends on valid pages being added to the head of the list. See
-	 * comments in kvm_zap_obsolete_pages().
-	 */
-	sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
-	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-	kvm_mod_used_mmu_pages(vcpu->kvm, +1);
 
 	return sp;
 }
 
@@ -2123,27 +2116,31 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
-static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
-						    struct kvm_memory_slot *slot,
-						    gfn_t gfn,
-						    union kvm_mmu_page_role role)
+static void init_shadow_page(struct kvm *kvm, struct kvm_mmu_page *sp,
+			     struct kvm_memory_slot *slot, gfn_t gfn,
+			     union kvm_mmu_page_role role)
 {
-	struct kvm_mmu_page *sp;
 	struct hlist_head *sp_list;
 
-	++vcpu->kvm->stat.mmu_cache_miss;
+	++kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
 	sp->gfn = gfn;
 	sp->role = role;
+	sp->mmu_valid_gen = kvm->arch.mmu_valid_gen;
 
-	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
+	/*
+	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
+	 * depends on valid pages being added to the head of the list. See
+	 * comments in kvm_zap_obsolete_pages().
+	 */
+	list_add(&sp->link, &kvm->arch.active_mmu_pages);
+	kvm_mod_used_mmu_pages(kvm, 1);
+
+	sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	hlist_add_head(&sp->hash_link, sp_list);
 
 	if (!role.direct)
-		account_shadowed(vcpu->kvm, slot, sp);
-
-	return sp;
+		account_shadowed(kvm, slot, sp);
 }
 
 static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
@@ -2158,7 +2155,8 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 	if (!sp) {
 		created = true;
 		slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-		sp = kvm_mmu_new_shadow_page(vcpu, slot, gfn, role);
+		sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
+		init_shadow_page(vcpu->kvm, sp, slot, gfn, role);
 	}
 
 	trace_kvm_mmu_get_page(sp, created);
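
The refactor follows the familiar two-phase create pattern: allocation
depends on the caller's context, initialization does not, so only the
first phase has to vary per context. A minimal, self-contained sketch of
the shape (illustrative names, unrelated to the kernel code):

#include <stdlib.h>

struct object {
	int key;
	struct object *next;
};

struct registry {
	struct object *head;
};

/* Phase 1: obtain memory. Each context (vCPU fault path with its
 * pre-filled caches, VM ioctl with plain allocation, ...) supplies its
 * own version of this step. */
static struct object *obj_alloc(void)
{
	return calloc(1, sizeof(struct object));
}

/* Phase 2: publish the object. This step is context-free and can be
 * shared by every allocation path. */
static void obj_init(struct registry *r, struct object *obj, int key)
{
	obj->key = key;
	obj->next = r->head;
	r->head = obj;
}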
From patchwork Fri Apr 1 17:55:39 2022
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Date: Fri, 1 Apr 2022 17:55:39 +0000
Message-Id: <20220401175554.1931568-9-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 08/23] KVM: x86/mmu: Link spt to sp during allocation

Link the shadow page table to the sp (via set_page_private()) during
allocation rather than initialization. This is a more logical place to
do it because allocation time is also where we do the reverse link
(setting sp->spt).

This creates one extra call to set_page_private(), but having multiple
calls to set_page_private() is unavoidable anyway: we either do
set_page_private() during allocation, which requires one call per
allocation function, or during initialization, which requires one call
per initialization function.

No functional change intended.

Suggested-by: Ben Gardon
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/tdp_mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b3b6426725d4..17354e55735f 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -274,6 +274,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
 	return sp;
 }
@@ -281,8 +282,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
 			    gfn_t gfn, union kvm_mmu_page_role role)
 {
-	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
-
 	sp->role = role;
 	sp->gfn = gfn;
 	sp->ptep = sptep;
@@ -1435,6 +1434,8 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
 		return NULL;
 	}
 
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
+
 	return sp;
 }
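
What set_page_private() buys here is a two-way link: sp->spt points from
the metadata to the page table, and the page's private field points from
the page table back to the metadata, so any PTE pointer can be mapped
back to its owning sp. A runnable toy model of the idea, with a fake
per-page array standing in for struct page and its private field:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE   4096UL
#define ARENA_PAGES 16

struct shadow_page {
	uint64_t *spt;			/* forward link: sp -> page table */
};

/* Toy stand-in for the kernel's struct page array: one private slot per
 * page of a small page-aligned arena. */
static char *arena;
static void *page_private[ARENA_PAGES];

static size_t page_index(const void *addr)
{
	return (size_t)((const char *)addr - arena) / PAGE_SIZE;
}

int main(void)
{
	struct shadow_page sp;
	uint64_t *sptep;

	arena = aligned_alloc(PAGE_SIZE, ARENA_PAGES * PAGE_SIZE);
	if (!arena)
		return 1;

	/* Allocation time: set both links, as tdp_mmu_alloc_sp() now does. */
	sp.spt = (uint64_t *)arena;			/* "page table" in page 0 */
	page_private[page_index(sp.spt)] = &sp;		/* set_page_private() */

	/* Later, given only a pointer to some entry in the table... */
	sptep = &sp.spt[137];

	/* ...round down to the containing page and follow the back link. */
	printf("recovered sp? %d\n",
	       page_private[page_index(sptep)] == (void *)&sp);

	free(arena);
	return 0;
}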
From patchwork Fri Apr 1 17:55:40 2022
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Date: Fri, 1 Apr 2022 17:55:40 +0000
Message-Id: <20220401175554.1931568-10-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 09/23] KVM: x86/mmu: Move huge page split sp allocation code to mmu.c

Move the code that allocates a new shadow page for splitting huge pages
into mmu.c. Currently this code is only used by the TDP MMU, but it will
be reused in subsequent commits to also split huge pages mapped by the
shadow MMU.

Move the GFP flags calculation down into the allocation code so that it
does not have to be duplicated when the shadow MMU needs to start
allocating SPs for splitting.

Preemptively split out the gfp flags calculation to a separate helper
for use in a subsequent commit that adds support for eager page
splitting to the shadow MMU.

No functional change intended.

Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 37 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/mmu_internal.h |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c      | 34 ++----------------------------
 3 files changed, 41 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 421fcbc97f9e..657c2a906c12 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1722,6 +1722,43 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
+static inline gfp_t gfp_flags_for_split(bool locked)
+{
+	/*
+	 * If under the MMU lock, use GFP_NOWAIT to avoid direct reclaim (which
+	 * is slow) and to avoid making any filesystem callbacks (which can end
+	 * up invoking KVM MMU notifiers, resulting in a deadlock).
+	 */
+	return (locked ? GFP_NOWAIT : GFP_KERNEL) | __GFP_ACCOUNT;
+}
+
+/*
+ * Allocate a new shadow page, potentially while holding the MMU lock.
+ *
+ * Huge page splitting always uses direct shadow pages since the huge page is
+ * being mapped directly with a lower level page table. Thus there's no need to
+ * allocate the gfns array.
+ */
+struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked)
+{
+	gfp_t gfp = gfp_flags_for_split(locked) | __GFP_ZERO;
+	struct kvm_mmu_page *sp;
+
+	sp = kmem_cache_alloc(mmu_page_header_cache, gfp);
+	if (!sp)
+		return NULL;
+
+	sp->spt = (void *)__get_free_page(gfp);
+	if (!sp->spt) {
+		kmem_cache_free(mmu_page_header_cache, sp);
+		return NULL;
+	}
+
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
+
+	return sp;
+}
+
 static void mark_unsync(u64 *spte);
 static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1bff453f7cbe..a0648e7ddd33 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -171,4 +171,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked);
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 17354e55735f..34e581bcaaf6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1418,43 +1418,13 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
 	return spte_set;
 }
 
-static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
-{
-	struct kvm_mmu_page *sp;
-
-	gfp |= __GFP_ZERO;
-
-	sp = kmem_cache_alloc(mmu_page_header_cache, gfp);
-	if (!sp)
-		return NULL;
-
-	sp->spt = (void *)__get_free_page(gfp);
-	if (!sp->spt) {
-		kmem_cache_free(mmu_page_header_cache, sp);
-		return NULL;
-	}
-
-	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
-
-	return sp;
-}
-
 static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 						       struct tdp_iter *iter,
 						       bool shared)
 {
 	struct kvm_mmu_page *sp;
 
-	/*
-	 * Since we are allocating while under the MMU lock we have to be
-	 * careful about GFP flags. Use GFP_NOWAIT to avoid blocking on direct
-	 * reclaim and to avoid making any filesystem callbacks (which can end
-	 * up invoking KVM MMU notifiers, resulting in a deadlock).
-	 *
-	 * If this allocation fails we drop the lock and retry with reclaim
-	 * allowed.
-	 */
-	sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
+	sp = kvm_mmu_alloc_direct_sp_for_split(true);
 	if (sp)
 		return sp;
 
@@ -1466,7 +1436,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 	write_unlock(&kvm->mmu_lock);
 
 	iter->yielded = true;
-	sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
+	sp = kvm_mmu_alloc_direct_sp_for_split(false);
 
 	if (shared)
 		read_lock(&kvm->mmu_lock);
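
The calling convention the new helper encodes is worth seeing in
isolation: first try an allocation that cannot sleep while the lock is
held, and only on failure drop the lock, allocate with reclaim allowed,
and tell the caller to restart its walk. A condensed, self-contained
sketch using a pthread mutex in place of the MMU lock (illustrative
names, not the KVM code):

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

static void *alloc_object(bool nowait)
{
	/* malloc() stands in for both modes here; the flag only models the
	 * kernel's GFP_NOWAIT (may fail, never sleeps) vs GFP_KERNEL (may
	 * sleep to reclaim memory). */
	(void)nowait;
	return malloc(128);
}

/* Called with big_lock held. */
static void *alloc_for_split(bool *lock_dropped)
{
	void *obj;

	*lock_dropped = false;

	/* Fast path: allocate without sleeping while the lock is held. */
	obj = alloc_object(true);
	if (obj)
		return obj;

	/*
	 * Slow path: we must not sleep with the lock held, so drop it, do a
	 * blocking allocation, retake the lock, and tell the caller to
	 * restart since the world may have changed in the meantime.
	 */
	pthread_mutex_unlock(&big_lock);
	obj = alloc_object(false);
	pthread_mutex_lock(&big_lock);
	*lock_dropped = true;

	return obj;
}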
From patchwork Fri Apr 1 17:55:41 2022
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Date: Fri, 1 Apr 2022 17:55:41 +0000
Message-Id: <20220401175554.1931568-11-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 10/23] KVM: x86/mmu: Use common code to free kvm_mmu_page structs

Use a common function to free kvm_mmu_page structs in the TDP MMU and
the shadow MMU. This reduces the amount of duplicate code and is needed
in subsequent commits that allocate and free kvm_mmu_pages for eager
page splitting.

Keep tdp_mmu_free_sp() as a wrapper to mirror tdp_mmu_alloc_sp().

No functional change intended.

Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 8 ++++----
 arch/x86/kvm/mmu/mmu_internal.h | 2 ++
 arch/x86/kvm/mmu/tdp_mmu.c      | 3 +--
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 657c2a906c12..27996fdb0e7e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1669,11 +1669,8 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }
 
-static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
+void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
-	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
-	hlist_del(&sp->hash_link);
-	list_del(&sp->link);
 	free_page((unsigned long)sp->spt);
 	if (!sp->role.direct)
 		free_page((unsigned long)sp->gfns);
@@ -2518,6 +2515,9 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
+		MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
+		hlist_del(&sp->hash_link);
+		list_del(&sp->link);
 		kvm_mmu_free_shadow_page(sp);
 	}
 }
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index a0648e7ddd33..5f91e4d07a95 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -173,4 +173,6 @@ void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
 struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked);
 
+void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp);
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 34e581bcaaf6..8b00c868405b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -64,8 +64,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 
 static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
 {
-	free_page((unsigned long)sp->spt);
-	kmem_cache_free(mmu_page_header_cache, sp);
+	kvm_mmu_free_shadow_page(sp);
 }
 
 /*
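
A note on the shape of the refactor: the hash and list unlinking moves
out of the free routine and into the shadow-MMU caller, so the shared
helper only undoes what every allocation path did. A small sketch of
that split (illustrative names, not the kernel code):

#include <stdbool.h>
#include <stdlib.h>

struct mmu_page {
	void *spt;
	void *gfns;
	bool direct;
};

/* Shared free path: releases only what both MMUs allocate. */
static void mmu_page_free(struct mmu_page *sp)
{
	free(sp->spt);
	if (!sp->direct)
		free(sp->gfns);
	free(sp);
}

/* Shadow-MMU caller: undo its own bookkeeping (hash/list links) first,
 * then hand off to the shared helper; the TDP MMU keeps a thin wrapper
 * around the same helper purely for naming symmetry. */
static void shadow_mmu_commit_zap(struct mmu_page *sp)
{
	/* hlist_del()/list_del() equivalents would go here */
	mmu_page_free(sp);
}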
From patchwork Fri Apr 1 17:55:42 2022
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Date: Fri, 1 Apr 2022 17:55:42 +0000
Message-Id: <20220401175554.1931568-12-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 11/23] KVM: x86/mmu: Use common code to allocate shadow pages from vCPU caches

Now that allocating shadow pages is isolated to a helper function, use
it in the TDP MMU as well.

Keep tdp_mmu_alloc_sp() to avoid hard-coding direct=true in multiple
places.

No functional change intended.

Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 3 +--
 arch/x86/kvm/mmu/mmu_internal.h | 1 +
 arch/x86/kvm/mmu/tdp_mmu.c      | 8 +-------
 3 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 27996fdb0e7e..37385835c399 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1704,8 +1704,7 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
 	mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
-						      bool direct)
+struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direct)
 {
 	struct kvm_mmu_page *sp;
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 5f91e4d07a95..d4e2de5f2a6d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -173,6 +173,7 @@ void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
 struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked);
 
+struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direct);
 void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp);
 
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 8b00c868405b..f6201b89045b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -269,13 +269,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 
 static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 {
-	struct kvm_mmu_page *sp;
-
-	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
-	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
-
-	return sp;
+	return kvm_mmu_alloc_shadow_page(vcpu, true);
 }
 
 static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
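
For readers new to the vCPU caches being shared here: KVM pre-fills
per-vCPU object caches in a sleepable context so that allocations taken
with the MMU lock held reduce to popping a pointer. A minimal sketch of
that idea, simplified rather than the real kvm_mmu_memory_cache API:

#include <stdlib.h>

#define CACHE_CAPACITY 40

struct obj_cache {
	int nobjs;
	void *objs[CACHE_CAPACITY];
};

/* Called in a sleepable context, before taking the MMU lock; min must
 * not exceed CACHE_CAPACITY in this toy version. */
static int cache_topup(struct obj_cache *mc, size_t size, int min)
{
	while (mc->nobjs < min) {
		void *obj = calloc(1, size);

		if (!obj)
			return -1;
		mc->objs[mc->nobjs++] = obj;
	}
	return 0;
}

/* Called with the lock held: just pops, never sleeps, and cannot fail
 * as long as the earlier top-up succeeded. */
static void *cache_alloc(struct obj_cache *mc)
{
	return mc->nobjs ? mc->objs[--mc->nobjs] : NULL;
}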
From patchwork Fri Apr 1 17:55:43 2022
Date: Fri, 1 Apr 2022 17:55:43 +0000
Message-Id: <20220401175554.1931568-13-dmatlack@google.com>
Subject: [PATCH v3 12/23] KVM: x86/mmu: Pass const memslot to rmap_add()
From: David Matlack
To: Paolo Bonzini
(KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220401_105624_994783_3A0D49AB X-CRM114-Status: UNSURE ( 9.84 ) X-CRM114-Notice: Please train this message. X-Spam-Score: -7.7 (-------) X-Spam-Report: Spam detection software, running on the system "bombadil.infradead.org", has NOT identified this incoming email as spam. The original message has been attached to this so you can view it or label similar future email. If you have any questions, see the administrator of that system for details. Content preview: rmap_add() only uses the slot to call gfn_to_rmap() which takes a const memslot. No functional change intended. Reviewed-by: Ben Gardon Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) Content analysis details: (-7.7 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at https://www.dnswl.org/, no trust [2607:f8b0:4864:20:0:0:0:649 listed in] [list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -7.5 USER_IN_DEF_DKIM_WL From: address is in the default DKIM white-list -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID_EF Message has a valid DKIM or DK signature from envelope-from domain -0.0 DKIMWL_WL_MED DKIMwl.org - Medium trust sender X-BeenThere: kvm-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "kvm-riscv" Errors-To: kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org rmap_add() only uses the slot to call gfn_to_rmap() which takes a const memslot. No functional change intended. 
Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 37385835c399..1efe161f9c02 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1596,7 +1596,7 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 #define RMAP_RECYCLE_THRESHOLD 1000
 
-static void rmap_add(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
 		     u64 *spte, gfn_t gfn)
 {
 	struct kvm_mmu_page *sp;
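
The constification pattern in this patch and the next, reduced to a minimal
sketch (names here are illustrative, not KVM's): once the leaf helper takes a
const pointer, every caller that never writes through the slot can be
tightened in turn.

struct memslot {
	unsigned long base_gfn;
};

/* Leaf helper already takes const (like gfn_to_rmap())... */
static unsigned long gfn_index(const struct memslot *slot, unsigned long gfn)
{
	return gfn - slot->base_gfn;
}

/* ...so a read-only caller (like rmap_add()) can take const too, making it
 * reachable from const contexts and letting the compiler reject stray writes. */
static unsigned long slot_lookup(const struct memslot *slot, unsigned long gfn)
{
	return gfn_index(slot, gfn);
}
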
From patchwork Fri Apr 1 17:55:44 2022
Date: Fri, 1 Apr 2022 17:55:44 +0000
Message-Id: <20220401175554.1931568-14-dmatlack@google.com>
Subject: [PATCH v3 13/23] KVM: x86/mmu: Pass const memslot to init_shadow_page() and descendants
From: David Matlack
To: Paolo Bonzini

Use a const pointer so that init_shadow_page() can be called from
contexts where we have a const pointer. No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_page_track.h | 2 +-
 arch/x86/kvm/mmu/mmu.c                | 6 +++---
 arch/x86/kvm/mmu/mmu_internal.h       | 2 +-
 arch/x86/kvm/mmu/page_track.c         | 4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c            | 2 +-
 arch/x86/kvm/mmu/tdp_mmu.h            | 2 +-
 6 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index eb186bc57f6a..3a2dc183ae9a 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -58,7 +58,7 @@ int kvm_page_track_create_memslot(struct kvm *kvm,
 				  unsigned long npages);
 
 void kvm_slot_page_track_add_page(struct kvm *kvm,
-				  struct kvm_memory_slot *slot, gfn_t gfn,
+				  const struct kvm_memory_slot *slot, gfn_t gfn,
 				  enum kvm_page_track_mode mode);
 void kvm_slot_page_track_remove_page(struct kvm *kvm,
 				     struct kvm_memory_slot *slot, gfn_t gfn,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1efe161f9c02..39d9cccbdc7e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -794,7 +794,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn)
 }
 
 static void account_shadowed(struct kvm *kvm,
-			     struct kvm_memory_slot *slot,
+			     const struct kvm_memory_slot *slot,
 			     struct kvm_mmu_page *sp)
 {
 	gfn_t gfn;
@@ -1373,7 +1373,7 @@ int kvm_cpu_dirty_log_size(void)
 }
 
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
-				    struct kvm_memory_slot *slot, u64 gfn,
+				    const struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level)
 {
 	struct kvm_rmap_head *rmap_head;
@@ -2150,7 +2150,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 }
 
 static void init_shadow_page(struct kvm *kvm, struct kvm_mmu_page *sp,
-			     struct kvm_memory_slot *slot, gfn_t gfn,
+			     const struct kvm_memory_slot *slot, gfn_t gfn,
			     union kvm_mmu_page_role role)
 {
 	struct hlist_head *sp_list;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d4e2de5f2a6d..b6e22ba9c654 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -134,7 +134,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
-				    struct kvm_memory_slot *slot, u64 gfn,
+				    const struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
 void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn,
 					u64 pages);
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index 2e09d1b6249f..3e7901294573 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -84,7 +84,7 @@ int kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot)
 	return 0;
 }
 
-static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
+static void update_gfn_track(const struct kvm_memory_slot *slot, gfn_t gfn,
 			     enum kvm_page_track_mode mode, short count)
 {
 	int index, val;
@@ -112,7 +112,7 @@ static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
  * @mode: tracking mode, currently only write track is supported.
  */
 void kvm_slot_page_track_add_page(struct kvm *kvm,
-				  struct kvm_memory_slot *slot, gfn_t gfn,
+				  const struct kvm_memory_slot *slot, gfn_t gfn,
 				  enum kvm_page_track_mode mode)
 {
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index f6201b89045b..a04262bc34e2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1793,7 +1793,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
  * Returns true if an SPTE was set and a TLB flush is needed.
  */
 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
-				   struct kvm_memory_slot *slot, gfn_t gfn,
+				   const struct kvm_memory_slot *slot, gfn_t gfn,
 				   int min_level)
 {
 	struct kvm_mmu_page *root;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 5e5ef2576c81..c139635d4209 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -48,7 +48,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 				       const struct kvm_memory_slot *slot);
 
 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
-				   struct kvm_memory_slot *slot, gfn_t gfn,
+				   const struct kvm_memory_slot *slot, gfn_t gfn,
 				   int min_level);
 
 void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm,

From patchwork Fri Apr 1 17:55:45 2022
Date: Fri, 1 Apr 2022 17:55:45 +0000
Message-Id: <20220401175554.1931568-15-dmatlack@google.com>
Subject: [PATCH v3 14/23] KVM: x86/mmu: Decouple rmap_add() and link_shadow_page() from kvm_vcpu
From: David Matlack
To: Paolo Bonzini

Allow adding new entries to the rmap and linking shadow pages without a
struct kvm_vcpu pointer by moving the implementation of rmap_add() and
link_shadow_page() into inner helper functions. No functional change
intended.
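
The decoupling pattern here, as a self-contained sketch (the names below are
stand-ins, not KVM's API): the inner helper takes only the object it actually
uses, a memory cache, while a thin wrapper preserves the vCPU-based
convention for existing callers.

#include <stddef.h>

struct mem_cache {		/* stand-in for struct kvm_mmu_memory_cache */
	void *objs[64];
	int nobjs;
};

struct vcpu {			/* the "rich" context that owns a cache */
	struct mem_cache pte_desc_cache;
};

/* Inner helper: needs only the cache, so non-vCPU contexts (e.g. a huge
 * page splitter topping up its own cache) can call it. */
static void *__desc_alloc(struct mem_cache *cache)
{
	return cache->nobjs ? cache->objs[--cache->nobjs] : NULL;
}

/* Outer wrapper: existing vCPU callers are unchanged. */
static void *desc_alloc(struct vcpu *vcpu)
{
	return __desc_alloc(&vcpu->pte_desc_cache);
}
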
Reviewed-by: Ben Gardon
Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 44 +++++++++++++++++++++++++-----------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 39d9cccbdc7e..7305a8c625c0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -725,11 +725,6 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
 
-static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_vcpu *vcpu)
-{
-	return kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_pte_list_desc_cache);
-}
-
 static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
 {
 	kmem_cache_free(pte_list_desc_cache, pte_list_desc);
@@ -874,7 +869,7 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
 /*
  * Returns the number of pointers in the rmap chain, not counting the new one.
  */
-static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
+static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte,
 			struct kvm_rmap_head *rmap_head)
 {
 	struct pte_list_desc *desc;
@@ -885,7 +880,7 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
 		rmap_head->val = (unsigned long)spte;
 	} else if (!(rmap_head->val & 1)) {
 		rmap_printk("%p %llx 1->many\n", spte, *spte);
-		desc = mmu_alloc_pte_list_desc(vcpu);
+		desc = kvm_mmu_memory_cache_alloc(cache);
 		desc->sptes[0] = (u64 *)rmap_head->val;
 		desc->sptes[1] = spte;
 		desc->spte_count = 2;
@@ -897,7 +892,7 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
 		while (desc->spte_count == PTE_LIST_EXT) {
 			count += PTE_LIST_EXT;
 			if (!desc->more) {
-				desc->more = mmu_alloc_pte_list_desc(vcpu);
+				desc->more = kvm_mmu_memory_cache_alloc(cache);
 				desc = desc->more;
 				desc->spte_count = 0;
 				break;
@@ -1596,8 +1591,10 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 #define RMAP_RECYCLE_THRESHOLD 1000
 
-static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn)
+static void __rmap_add(struct kvm *kvm,
+		       struct kvm_mmu_memory_cache *cache,
+		       const struct kvm_memory_slot *slot,
+		       u64 *spte, gfn_t gfn)
 {
 	struct kvm_mmu_page *sp;
 	struct kvm_rmap_head *rmap_head;
@@ -1606,15 +1603,21 @@ static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
 	sp = sptep_to_sp(spte);
 	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-	rmap_count = pte_list_add(vcpu, spte, rmap_head);
+	rmap_count = pte_list_add(cache, spte, rmap_head);
 
 	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
-		kvm_unmap_rmapp(vcpu->kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
+		kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
 		kvm_flush_remote_tlbs_with_address(
-			vcpu->kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+			kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
 	}
 }
 
+static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
+		     u64 *spte, gfn_t gfn)
+{
+	__rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn);
+}
+
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
@@ -1682,13 +1685,13 @@ static unsigned kvm_page_table_hashfn(gfn_t gfn)
 	return hash_64(gfn, KVM_MMU_HASH_SHIFT);
 }
 
-static void mmu_page_add_parent_pte(struct kvm_vcpu *vcpu,
+static void mmu_page_add_parent_pte(struct kvm_mmu_memory_cache *cache,
 				    struct kvm_mmu_page *sp, u64 *parent_pte)
 {
 	if (!parent_pte)
 		return;
 
-	pte_list_add(vcpu, parent_pte, &sp->parent_ptes);
+	pte_list_add(cache, parent_pte, &sp->parent_ptes);
 }
 
 static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp,
@@ -2304,8 +2307,8 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator)
 	__shadow_walk_next(iterator, *iterator->sptep);
 }
 
-static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
-			     struct kvm_mmu_page *sp)
+static void __link_shadow_page(struct kvm_mmu_memory_cache *cache, u64 *sptep,
+			       struct kvm_mmu_page *sp)
 {
 	u64 spte;
@@ -2315,12 +2318,17 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
 
 	mmu_spte_set(sptep, spte);
 
-	mmu_page_add_parent_pte(vcpu, sp, sptep);
+	mmu_page_add_parent_pte(cache, sp, sptep);
 
 	if (sp->unsync_children || sp->unsync)
 		mark_unsync(sptep);
 }
 
+static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, struct kvm_mmu_page *sp)
+{
+	__link_shadow_page(&vcpu->arch.mmu_pte_list_desc_cache, sptep, sp);
+}
+
 static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 				 unsigned direct_access)
 {

From patchwork Fri Apr 1 17:55:46 2022
Date: Fri, 1 Apr 2022 17:55:46 +0000
Message-Id: <20220401175554.1931568-16-dmatlack@google.com>
Subject: [PATCH v3 15/23] KVM: x86/mmu: Update page stats in __rmap_add()
From: David Matlack
To: Paolo Bonzini

Update the page stats in __rmap_add() rather than at the call site. This
will avoid having to manually update page stats when splitting huge
pages in a subsequent commit. No functional change intended.

Reviewed-by: Ben Gardon
Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7305a8c625c0..5e1002d57689 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1602,6 +1602,8 @@ static void __rmap_add(struct kvm *kvm,
 
 	sp = sptep_to_sp(spte);
 	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
+	kvm_update_page_stats(kvm, sp->role.level, 1);
+
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
 	rmap_count = pte_list_add(cache, spte, rmap_head);
 
@@ -2839,7 +2841,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 
 	if (!was_rmapped) {
 		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
-		kvm_update_page_stats(vcpu->kvm, level, 1);
 		rmap_add(vcpu, slot, sptep, gfn);
 	}

From patchwork Fri Apr 1 17:55:47 2022
Date: Fri, 1 Apr 2022 17:55:47 +0000
Message-Id: <20220401175554.1931568-17-dmatlack@google.com>
Subject: [PATCH v3 16/23] KVM: x86/mmu: Cache the access bits of shadowed translations
From: David Matlack
To: Paolo Bonzini

In order to split a huge page we need to know what access bits to assign
to the role of the new child page table. This can't be easily derived
from the huge page SPTE itself since KVM applies its own access policies
on top, such as for HugePage NX.

We could walk the guest page tables to determine the correct access
bits, but that is difficult to plumb outside of a vCPU fault context.
Instead, we can store the original access bits for each leaf SPTE
alongside the GFN in the gfns array. The access bits only take up 3
bits, which leaves 61 bits left over for gfns, which is more than
enough. So this change does not require any additional memory.

In order to keep the access bit cache in sync with the guest, we have to
extend FNAME(sync_page) to also update the access bits.

Now that the gfns array caches more information than just GFNs, rename
it to shadowed_translation.
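
The space claim is easy to verify mechanically. Below is a standalone sketch
of the packing, mirroring the bitfield the patch introduces (the assertion
and print are illustrative, not part of the patch):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* With 4 KiB pages a GFN needs at most 64 - PAGE_SHIFT = 52 bits, so a
 * 52-bit gfn plus 3 access bits still packs into one u64: the renamed
 * array costs no more memory than the old gfn-only array. */
struct translation_entry {
	uint64_t gfn:52;
	uint64_t access:3;
};

int main(void)
{
	static_assert(sizeof(struct translation_entry) == sizeof(uint64_t),
		      "entry must stay the size of a bare gfn");

	struct translation_entry e = { .gfn = (1ULL << 52) - 1, .access = 0x7 };

	/* Both fields round-trip independently. */
	printf("gfn=0x%llx access=0x%llx\n",
	       (unsigned long long)e.gfn, (unsigned long long)e.access);
	return 0;
}
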
Signed-off-by: David Matlack
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          | 71 ++++++++++++++++++++++++++++-----
 arch/x86/kvm/mmu/mmu_internal.h | 20 +++++++++-
 arch/x86/kvm/mmu/paging_tmpl.h  |  8 +++-
 4 files changed, 85 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9694dd5e6ccc..be4349c9ffea 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -696,7 +696,7 @@ struct kvm_vcpu_arch {
 
 	struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
 	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
-	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
+	struct kvm_mmu_memory_cache mmu_shadowed_info_cache;
 	struct kvm_mmu_memory_cache mmu_page_header_cache;
 
 	/*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5e1002d57689..3a425ed80e23 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -708,7 +708,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 	if (r)
 		return r;
 	if (maybe_indirect) {
-		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache,
+		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadowed_info_cache,
 					       PT64_ROOT_MAX_LEVEL);
 		if (r)
 			return r;
@@ -721,7 +721,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
+	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
 
@@ -733,7 +733,7 @@ static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc)
 static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 {
 	if (!sp->role.direct)
-		return sp->gfns[index];
+		return sp->shadowed_translation[index].gfn;
 
 	return sp->gfn + (index << ((sp->role.level - 1) * PT64_LEVEL_BITS));
 }
@@ -741,7 +741,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 static void kvm_mmu_page_set_gfn(struct kvm_mmu_page *sp, int index, gfn_t gfn)
 {
 	if (!sp->role.direct) {
-		sp->gfns[index] = gfn;
+		sp->shadowed_translation[index].gfn = gfn;
 		return;
 	}
 
@@ -752,6 +752,47 @@ static void kvm_mmu_page_set_gfn(struct kvm_mmu_page *sp, int index, gfn_t gfn)
 		  kvm_mmu_page_get_gfn(sp, index), gfn);
 }
 
+static void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index, u32 access)
+{
+	if (!sp->role.direct) {
+		sp->shadowed_translation[index].access = access;
+		return;
+	}
+
+	if (WARN_ON(access != sp->role.access))
+		pr_err_ratelimited("access mismatch under direct page %llx "
+				   "(expected %llx, got %llx)\n",
+				   kvm_mmu_page_get_gfn(sp, index),
+				   sp->role.access, access);
+}
+
+/*
+ * For leaf SPTEs, fetch the *guest* access permissions being shadowed. Note
+ * that the SPTE itself may have more constrained access permissions than
+ * what the guest enforces. For example, a guest may create an executable
+ * huge PTE but KVM may disallow execution to mitigate iTLB multihit.
+ */
+static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
+{
+	if (!sp->role.direct)
+		return sp->shadowed_translation[index].access;
+
+	/*
+	 * For direct MMUs (e.g. TDP or non-paging guests) there are no *guest*
+	 * access permissions being shadowed. So we can just return ACC_ALL
+	 * here.
+	 *
+	 * For indirect MMUs (shadow paging), direct shadow pages exist when KVM
+	 * is shadowing a guest huge page with smaller pages, since the guest
+	 * huge page is being directly mapped. In this case the guest access
+	 * permissions being shadowed are the access permissions of the huge
+	 * page.
+	 *
+	 * In both cases, sp->role.access contains exactly what we want.
+	 */
+	return sp->role.access;
+}
+
 /*
  * Return the pointer to the large page information for a given gfn,
  * handling slots that are not large page aligned.
@@ -1594,7 +1635,7 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 static void __rmap_add(struct kvm *kvm,
 		       struct kvm_mmu_memory_cache *cache,
 		       const struct kvm_memory_slot *slot,
-		       u64 *spte, gfn_t gfn)
+		       u64 *spte, gfn_t gfn, u32 access)
 {
 	struct kvm_mmu_page *sp;
 	struct kvm_rmap_head *rmap_head;
@@ -1602,6 +1643,7 @@ static void __rmap_add(struct kvm *kvm,
 
 	sp = sptep_to_sp(spte);
 	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
+	kvm_mmu_page_set_access(sp, spte - sp->spt, access);
 	kvm_update_page_stats(kvm, sp->role.level, 1);
 
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
@@ -1615,9 +1657,9 @@ static void __rmap_add(struct kvm *kvm,
 }
 
 static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn)
+		     u64 *spte, gfn_t gfn, u32 access)
 {
-	__rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn);
+	__rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn, access);
 }
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
@@ -1678,7 +1720,7 @@ void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
 	free_page((unsigned long)sp->spt);
 	if (!sp->role.direct)
-		free_page((unsigned long)sp->gfns);
+		free_page((unsigned long)sp->shadowed_translation);
 	kmem_cache_free(mmu_page_header_cache, sp);
 }
 
@@ -1715,8 +1757,12 @@ struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direc
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+
+	BUILD_BUG_ON(sizeof(sp->shadowed_translation[0]) != sizeof(u64));
+
 	if (!direct)
-		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
+		sp->shadowed_translation =
+			kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadowed_info_cache);
 
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
@@ -1738,7 +1784,7 @@ static inline gfp_t gfp_flags_for_split(bool locked)
  *
  * Huge page splitting always uses direct shadow pages since the huge page is
  * being mapped directly with a lower level page table. Thus there's no need to
- * allocate the gfns array.
+ * allocate the shadowed_translation array.
  */
 struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked)
 {
@@ -2841,7 +2887,10 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 
 	if (!was_rmapped) {
 		WARN_ON_ONCE(ret == RET_PF_SPURIOUS);
-		rmap_add(vcpu, slot, sptep, gfn);
+		rmap_add(vcpu, slot, sptep, gfn, pte_access);
+	} else {
+		/* Already rmapped but the pte_access bits may have changed. */
+		kvm_mmu_page_set_access(sp, sptep - sp->spt, pte_access);
 	}
 
 	return ret;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index b6e22ba9c654..3f76f4c1ae59 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -32,6 +32,18 @@ extern bool dbg;
 
 typedef u64 __rcu *tdp_ptep_t;
 
+/*
+ * Stores the result of the guest translation being shadowed by an SPTE. KVM
+ * shadows two types of guest translations: nGPA -> GPA (shadow EPT/NPT) and
+ * GVA -> GPA (traditional shadow paging). In both cases the result of the
+ * translation is a GPA and a set of access constraints.
+ */
+struct shadowed_translation_entry {
+	/* Note, GFNs can have at most 64 - PAGE_SHIFT = 52 bits. */
+	u64 gfn:52;
+	u64 access:3;
+};
+
 struct kvm_mmu_page {
 	/*
 	 * Note, "link" through "spt" fit in a single 64 byte cache line on
@@ -53,8 +65,12 @@ struct kvm_mmu_page {
 	gfn_t gfn;
 
 	u64 *spt;
-	/* hold the gfn of each spte inside spt */
-	gfn_t *gfns;
+	/*
+	 * Caches the result of the intermediate guest translation being
+	 * shadowed by each SPTE. NULL for direct shadow pages.
+	 */
+	struct shadowed_translation_entry *shadowed_translation;
+
 	/* Currently serving as active root */
 	union {
 		int root_count;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index db63b5377465..91c2088464ce 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1014,7 +1014,8 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 }
 
 /*
- * Using the cached information from sp->gfns is safe because:
+ * Using the information in sp->shadowed_translation (kvm_mmu_page_get_gfn()
+ * and kvm_mmu_page_get_access()) is safe because:
  * - The spte has a reference to the struct page, so the pfn for a given gfn
  *   can't change unless all sptes pointing to it are nuked first.
  *
@@ -1088,12 +1089,15 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
 			continue;
 
-		if (gfn != sp->gfns[i]) {
+		if (gfn != kvm_mmu_page_get_gfn(sp, i)) {
 			drop_spte(vcpu->kvm, &sp->spt[i]);
 			flush = true;
 			continue;
 		}
 
+		if (pte_access != kvm_mmu_page_get_access(sp, i))
+			kvm_mmu_page_set_access(sp, i, pte_access);
+
 		sptep = &sp->spt[i];
 		spte = *sptep;
 		host_writable = spte & shadow_host_writable_mask;

From patchwork Fri Apr 1 17:55:48 2022
Date: Fri, 1 Apr 2022 17:55:48 +0000
Message-Id: <20220401175554.1931568-18-dmatlack@google.com>
Subject: [PATCH v3 17/23] KVM: x86/mmu: Extend make_huge_page_split_spte() for the shadow MMU
From: David Matlack
To: Paolo Bonzini
(KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220401_105629_442184_0AF0F450 X-CRM114-Status: GOOD ( 18.38 ) X-Spam-Score: -7.7 (-------) X-Spam-Report: Spam detection software, running on the system "bombadil.infradead.org", has NOT identified this incoming email as spam. The original message has been attached to this so you can view it or label similar future email. If you have any questions, see the administrator of that system for details. Content preview: Currently make_huge_page_split_spte() assumes execute permissions can be granted to any 4K SPTE when splitting huge pages. This is true for the TDP MMU but is not necessarily true for the shadow MMU, [...] Content analysis details: (-7.7 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at https://www.dnswl.org/, no trust [2607:f8b0:4864:20:0:0:0:449 listed in] [list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record 0.0 SPF_HELO_NONE SPF: HELO does not publish an SPF Record -7.5 USER_IN_DEF_DKIM_WL From: address is in the default DKIM white-list -0.1 DKIM_VALID_AU Message has a valid DKIM or DK signature from author's domain -0.1 DKIM_VALID Message has at least one valid DKIM or DK signature 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid -0.1 DKIM_VALID_EF Message has a valid DKIM or DK signature from envelope-from domain -0.0 DKIMWL_WL_MED DKIMwl.org - Medium trust sender X-BeenThere: kvm-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "kvm-riscv" Errors-To: kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org Currently make_huge_page_split_spte() assumes execute permissions can be granted to any 4K SPTE when splitting huge pages. This is true for the TDP MMU but is not necessarily true for the shadow MMU, since we may be splitting a huge page that shadows a non-executable guest huge page. To fix this, pass in the child shadow page where the huge page will be split and derive the execution permission from the shadow page's role. This is correct because huge pages are always split with direct shadow page and thus the shadow page role contains the correct access permissions. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/spte.c | 13 +++++++------ arch/x86/kvm/mmu/spte.h | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- 3 files changed, 9 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index d10189d9c877..ef6537c6f5ef 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -216,10 +216,11 @@ static u64 make_spte_executable(u64 spte) * This is used during huge page splitting to build the SPTEs that make up the * new page table. 
*/ -u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index) +u64 make_huge_page_split_spte(u64 huge_spte, struct kvm_mmu_page *sp, int index) { + bool exec_allowed = sp->role.access & ACC_EXEC_MASK; + int child_level = sp->role.level; u64 child_spte; - int child_level; if (WARN_ON_ONCE(!is_shadow_present_pte(huge_spte))) return 0; @@ -228,7 +229,6 @@ u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index) return 0; child_spte = huge_spte; - child_level = huge_level - 1; /* * The child_spte already has the base address of the huge page being @@ -241,10 +241,11 @@ u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index) child_spte &= ~PT_PAGE_SIZE_MASK; /* - * When splitting to a 4K page, mark the page executable as the - * NX hugepage mitigation no longer applies. + * When splitting to a 4K page where execution is allowed, mark + * the page executable as the NX hugepage mitigation no longer + * applies. */ - if (is_nx_huge_page_enabled()) + if (exec_allowed && is_nx_huge_page_enabled()) child_spte = make_spte_executable(child_spte); } diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 73f12615416f..921ea77f1b5e 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -415,7 +415,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, u64 *new_spte); -u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index); +u64 make_huge_page_split_spte(u64 huge_spte, struct kvm_mmu_page *sp, int index); u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled); u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access); u64 mark_spte_for_access_track(u64 spte); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index a04262bc34e2..36d241405ecc 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1455,7 +1455,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter, * not been linked in yet and thus is not reachable from any other CPU. 
*/ for (i = 0; i < PT64_ENT_PER_PAGE; i++) - sp->spt[i] = make_huge_page_split_spte(huge_spte, level, i); + sp->spt[i] = make_huge_page_split_spte(huge_spte, sp, i); /* * Replace the huge spte with a pointer to the populated lower level
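The key behavioral point of this patch is that execute permission for a split 4K entry is now derived from the child shadow page's role rather than granted unconditionally. A standalone sketch of that rule, with the constants invented purely for illustration (they are not the kernel's real bit positions):

	#include <stdbool.h>
	#include <stdint.h>

	#define ACC_EXEC_BIT	0x4u		/* made-up access-mask bit */
	#define NX_BIT		(1ull << 63)	/* made-up no-execute bit */

	static uint64_t split_child_entry(uint64_t huge_entry,
					  unsigned int role_access,
					  bool nx_huge_pages_enabled)
	{
		uint64_t child = huge_entry;
		bool exec_allowed = role_access & ACC_EXEC_BIT;

		/*
		 * Only re-grant execute on the 4K entry when the role allows
		 * it; a huge page shadowing a non-executable guest huge page
		 * stays non-executable after the split.
		 */
		if (exec_allowed && nx_huge_pages_enabled)
			child &= ~NX_BIT;

		return child;
	}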
From patchwork Fri Apr 1 17:55:49 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612347
Date: Fri, 1 Apr 2022 17:55:49 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-19-dmatlack@google.com>
Subject: [PATCH v3 18/23] KVM: x86/mmu: Zap collapsible SPTEs at all levels in the shadow MMU
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com, Peter Feiner, and the KVM/arm64, KVM/mips, and KVM/riscv lists
Currently KVM only zaps collapsible 4KiB SPTEs in the shadow MMU (i.e. in the rmap). This is fine for now since KVM never creates intermediate huge pages during dirty logging, i.e. a 1GiB page is never partially split to a 2MiB page. However, this will stop being true once the shadow MMU participates in eager page splitting, which can in fact leave behind partially split huge pages. In preparation for that change, change the shadow MMU to iterate over all necessary levels when zapping collapsible SPTEs.

No functional change intended.

Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3a425ed80e23..6390b23d286a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -6172,18 +6172,25 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, return need_tlb_flush; } +static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm, + const struct kvm_memory_slot *slot) +{ + /* + * Note, use KVM_MAX_HUGEPAGE_LEVEL - 1 since there's no need to zap + * pages that are already mapped at the maximum possible level. + */ + if (slot_handle_level(kvm, slot, kvm_mmu_zap_collapsible_spte, + PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL - 1, + true)) + kvm_arch_flush_remote_tlbs_memslot(kvm, slot); +} + void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, const struct kvm_memory_slot *slot) { if (kvm_memslots_have_rmaps(kvm)) { write_lock(&kvm->mmu_lock); - /* - * Zap only 4k SPTEs since the legacy MMU only supports dirty - * logging at a 4k granularity and never creates collapsible - * 2m SPTEs during dirty logging.
- */ - if (slot_handle_level_4k(kvm, slot, kvm_mmu_zap_collapsible_spte, true)) - kvm_arch_flush_remote_tlbs_memslot(kvm, slot); + kvm_rmap_zap_collapsible_sptes(kvm, slot); write_unlock(&kvm->mmu_lock); }
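The structural change in this patch is small: widen the rmap walk from a single level to every level that can still be collapsed. A compilable toy model of the loop bounds, assuming the level numbering KVM uses, with the per-level walk stubbed out:

	#include <stdio.h>

	enum { PG_LEVEL_4K = 1, PG_LEVEL_2M, PG_LEVEL_1G };
	#define KVM_MAX_HUGEPAGE_LEVEL PG_LEVEL_1G

	/* Stand-in for the per-level walk slot_handle_level() performs. */
	static void zap_collapsible_at_level(int level)
	{
		printf("walking rmaps at level %d\n", level);
	}

	int main(void)
	{
		/*
		 * The top level is excluded: a page already mapped at the
		 * maximum possible level cannot be collapsed any further.
		 */
		for (int level = PG_LEVEL_4K; level <= KVM_MAX_HUGEPAGE_LEVEL - 1; level++)
			zap_collapsible_at_level(level);
		return 0;
	}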
From patchwork Fri Apr 1 17:55:50 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612348
Date: Fri, 1 Apr 2022 17:55:50 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-20-dmatlack@google.com>
Subject: [PATCH v3 19/23] KVM: x86/mmu: Refactor drop_large_spte()
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com, Peter Feiner, and the KVM/arm64, KVM/mips, and KVM/riscv lists
drop_large_spte() drops a large SPTE if it exists and then flushes TLBs. Its helper function, __drop_large_spte(), does the drop without the flush.

In preparation for eager page splitting, which will need to sometimes flush when dropping large SPTEs (and sometimes not), push the flushing logic down into __drop_large_spte() and add a bool parameter to control it.

No functional change intended.

Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 29 +++++++++++++++-------------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 6390b23d286a..f058f28909ea 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1184,28 +1184,29 @@ static void drop_spte(struct kvm *kvm, u64 *sptep) rmap_remove(kvm, sptep); } - -static bool __drop_large_spte(struct kvm *kvm, u64 *sptep) +static void __drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush) { - if (is_large_pte(*sptep)) { - WARN_ON(sptep_to_sp(sptep)->role.level == PG_LEVEL_4K); - drop_spte(kvm, sptep); - return true; - } + struct kvm_mmu_page *sp; - return false; -} + if (!is_large_pte(*sptep)) + return; -static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep) -{ - if (__drop_large_spte(vcpu->kvm, sptep)) { - struct kvm_mmu_page *sp = sptep_to_sp(sptep); + sp = sptep_to_sp(sptep); + WARN_ON(sp->role.level == PG_LEVEL_4K); - kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn, + drop_spte(kvm, sptep); + + if (flush) { + kvm_flush_remote_tlbs_with_address(kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level)); } } +static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep) +{ + return __drop_large_spte(vcpu->kvm, sptep, true); +} + /* * Write-protect on the specified @sptep, @pt_protect indicates whether * spte write-protection is caused by protecting shadow page table.
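The refactor follows a common kernel idiom: push the side effect (here, the TLB flush) into the double-underscore helper behind a bool, and keep the old entry point as a thin wrapper that preserves the always-flush behavior. A generic, self-contained sketch of the pattern (all names hypothetical):

	#include <stdbool.h>
	#include <stdio.h>

	static void flush_tlbs(void)
	{
		puts("flush");
	}

	/* Inner helper: the caller decides whether the flush happens. */
	static void __drop_entry(unsigned long *entry, bool flush)
	{
		*entry = 0;
		if (flush)
			flush_tlbs();
	}

	/* Existing entry point keeps its original semantics. */
	static void drop_entry(unsigned long *entry)
	{
		__drop_entry(entry, true);
	}

	int main(void)
	{
		unsigned long e = 42;

		drop_entry(&e);			/* existing callers: unchanged */
		__drop_entry(&e, false);	/* new callers may defer the flush */
		return 0;
	}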
From patchwork Fri Apr 1 17:55:51 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612349
Date: Fri, 1 Apr 2022 17:55:51 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-21-dmatlack@google.com>
Subject: [PATCH v3 20/23] KVM: Allow for different capacities in kvm_mmu_memory_cache structs
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com, Peter Feiner, and the KVM/arm64, KVM/mips, and KVM/riscv lists
Allow the capacity of the kvm_mmu_memory_cache struct to be chosen at declaration time rather than being fixed for all declarations. This will be used in a follow-up commit to declare a cache in x86 with a capacity of 512+ objects without having to increase the capacity of all caches in KVM.

This change requires each cache to now specify its capacity at runtime, since the cache struct itself no longer has a fixed capacity known at compile time.

To protect against someone accidentally defining a kvm_mmu_memory_cache struct directly (without the extra storage), this commit includes a WARN_ON() in kvm_mmu_topup_memory_cache().

This change, unfortunately, adds some grottiness to kvm_phys_addr_ioremap() in arm64, which uses a function-local (i.e. stack-allocated) kvm_mmu_memory_cache struct. Since C does not allow anonymous structs in functions, the new wrapper struct that contains kvm_mmu_memory_cache and the objects pointer array must be named, which means dealing with an outer and inner struct. The outer struct can't be dropped since then there would be no guarantee the kvm_mmu_memory_cache struct and objects array would be laid out consecutively on the stack.

No functional change intended.
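A compilable userspace sketch of the layout trick the diff below implements: the flexible objects[] array gets its storage from a named wrapper struct laid out immediately after the inner struct, so capacity becomes a runtime field. (The macro here only mirrors the kernel's in spirit; embedding a struct with a flexible array member relies on the same GNU C allowance the kernel uses.)

	#include <stdio.h>

	struct memory_cache {
		int nobjs;
		int capacity;		/* must be set at runtime */
		void *objects[];	/* storage supplied by the wrapper */
	};

	/* Equivalent in spirit to __DEFINE_KVM_MMU_MEMORY_CACHE(name, cap). */
	#define DEFINE_MEMORY_CACHE(name, cap)		\
		struct {				\
			struct memory_cache name;	\
			void *name##_objects[cap];	\
		}

	int main(void)
	{
		/*
		 * The wrapper must be a named struct so the array is
		 * guaranteed to sit directly after the inner struct; this is
		 * exactly why the arm64 kvm_phys_addr_ioremap() change ends
		 * up with an outer and inner struct.
		 */
		DEFINE_MEMORY_CACHE(cache, 40) page_cache = {
			.cache = { .capacity = 40 },
		};

		printf("capacity = %d\n", page_cache.cache.capacity);
		return 0;
	}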
Signed-off-by: David Matlack Acked-by: Anup Patel --- arch/arm64/include/asm/kvm_host.h | 2 +- arch/arm64/kvm/arm.c | 1 + arch/arm64/kvm/mmu.c | 13 +++++++++---- arch/mips/include/asm/kvm_host.h | 2 +- arch/mips/kvm/mips.c | 2 ++ arch/riscv/include/asm/kvm_host.h | 2 +- arch/riscv/kvm/mmu.c | 17 ++++++++++------- arch/riscv/kvm/vcpu.c | 1 + arch/x86/include/asm/kvm_host.h | 8 ++++---- arch/x86/kvm/mmu/mmu.c | 9 +++++++++ include/linux/kvm_types.h | 19 +++++++++++++++++-- virt/kvm/kvm_main.c | 10 +++++++++- 12 files changed, 65 insertions(+), 21 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 0e96087885fe..4670491899de 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -362,7 +362,7 @@ struct kvm_vcpu_arch { bool pause; /* Cache some mmu pages needed inside spinlock regions */ - struct kvm_mmu_memory_cache mmu_page_cache; + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache); /* Target CPU and feature flags */ int target; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index ba9165e84396..af4d8a490af5 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -320,6 +320,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.target = -1; bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES); + vcpu->arch.mmu_page_cache.capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; /* Set up the timer */ diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 0d19259454d8..01e15bcb7be2 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -764,7 +764,12 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, { phys_addr_t addr; int ret = 0; - struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, NULL, }; + DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = { + .cache = { + .gfp_zero = __GFP_ZERO, + .capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE, + }, + }; struct kvm_pgtable *pgt = kvm->arch.mmu.pgt; enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE | KVM_PGTABLE_PROT_R | @@ -777,14 +782,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, guest_ipa &= PAGE_MASK; for (addr = guest_ipa; addr < guest_ipa + size; addr += PAGE_SIZE) { - ret = kvm_mmu_topup_memory_cache(&cache, + ret = kvm_mmu_topup_memory_cache(&page_cache.cache, kvm_mmu_cache_min_pages(kvm)); if (ret) break; write_lock(&kvm->mmu_lock); ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot, - &cache); + &page_cache.cache); write_unlock(&kvm->mmu_lock); if (ret) break; @@ -792,7 +797,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, pa += PAGE_SIZE; } - kvm_mmu_free_memory_cache(&cache); + kvm_mmu_free_memory_cache(&page_cache.cache); return ret; } diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h index 717716cc51c5..935511d7fc3a 100644 --- a/arch/mips/include/asm/kvm_host.h +++ b/arch/mips/include/asm/kvm_host.h @@ -347,7 +347,7 @@ struct kvm_vcpu_arch { unsigned long pending_exceptions_clr; /* Cache some mmu pages needed inside spinlock regions */ - struct kvm_mmu_memory_cache mmu_page_cache; + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache); /* vcpu's vzguestid is different on each host cpu in an smp system */ u32 vzguestid[NR_CPUS]; diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c index a25e0b73ee70..45c7179144dc 100644 --- a/arch/mips/kvm/mips.c +++ b/arch/mips/kvm/mips.c @@ -387,6 +387,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) if (err) goto out_free_gebase; + 
vcpu->arch.mmu_page_cache.capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; + return 0; out_free_gebase: diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 78da839657e5..4ec0b7a3d515 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -186,7 +186,7 @@ struct kvm_vcpu_arch { struct kvm_sbi_context sbi_context; /* Cache pages needed to program page tables with spinlock held */ - struct kvm_mmu_memory_cache mmu_page_cache; + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache); /* VCPU power-off state */ bool power_off; diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index f80a34fbf102..5ffd164a5aeb 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -347,10 +347,12 @@ static int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, int ret = 0; unsigned long pfn; phys_addr_t addr, end; - struct kvm_mmu_memory_cache pcache; - - memset(&pcache, 0, sizeof(pcache)); - pcache.gfp_zero = __GFP_ZERO; + DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = { + .cache = { + .gfp_zero = __GFP_ZERO, + .capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE, + }, + }; end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK; pfn = __phys_to_pfn(hpa); @@ -361,12 +363,13 @@ static int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, if (!writable) pte = pte_wrprotect(pte); - ret = kvm_mmu_topup_memory_cache(&pcache, stage2_pgd_levels); + ret = kvm_mmu_topup_memory_cache(&page_cache.cache, + stage2_pgd_levels); if (ret) goto out; spin_lock(&kvm->mmu_lock); - ret = stage2_set_pte(kvm, 0, &pcache, addr, &pte); + ret = stage2_set_pte(kvm, 0, &page_cache.cache, addr, &pte); spin_unlock(&kvm->mmu_lock); if (ret) goto out; @@ -375,7 +378,7 @@ static int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, } out: - kvm_mmu_free_memory_cache(&pcache); + kvm_mmu_free_memory_cache(&page_cache.cache); return ret; } diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 624166004e36..6a5f5aa45bac 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -94,6 +94,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) /* Mark this VCPU never ran */ vcpu->arch.ran_atleast_once = false; + vcpu->arch.mmu_page_cache.capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; /* Setup ISA features available to VCPU */ diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index be4349c9ffea..ffb2b99f3a60 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -694,10 +694,10 @@ struct kvm_vcpu_arch { */ struct kvm_mmu *walk_mmu; - struct kvm_mmu_memory_cache mmu_pte_list_desc_cache; - struct kvm_mmu_memory_cache mmu_shadow_page_cache; - struct kvm_mmu_memory_cache mmu_shadowed_info_cache; - struct kvm_mmu_memory_cache mmu_page_header_cache; + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_pte_list_desc_cache); + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_shadow_page_cache); + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_shadowed_info_cache); + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_header_cache); /* * QEMU userspace and the guest each have their own FPU state. 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f058f28909ea..a8200b3f8782 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5800,12 +5800,21 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu) { int ret; + vcpu->arch.mmu_pte_list_desc_cache.capacity = + KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache; vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO; + vcpu->arch.mmu_page_header_cache.capacity = + KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache; vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO; + vcpu->arch.mmu_shadowed_info_cache.capacity = + KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; + + vcpu->arch.mmu_shadow_page_cache.capacity = + KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO; vcpu->arch.mmu = &vcpu->arch.root_mmu; diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h index ac1ebb37a0ff..579cf39986ec 100644 --- a/include/linux/kvm_types.h +++ b/include/linux/kvm_types.h @@ -83,14 +83,29 @@ struct gfn_to_pfn_cache { * MMU flows is problematic, as is triggering reclaim, I/O, etc... while * holding MMU locks. Note, these caches act more like prefetch buffers than * classical caches, i.e. objects are not returned to the cache on being freed. + * + * The storage for the cache object pointers is laid out after the struct, to + * allow different declarations to choose different capacities. The capacity + * field defines the number of object pointers available after the struct. */ struct kvm_mmu_memory_cache { int nobjs; + int capacity; gfp_t gfp_zero; struct kmem_cache *kmem_cache; - void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE]; + void *objects[]; }; -#endif + +#define __DEFINE_KVM_MMU_MEMORY_CACHE(_name, _capacity) \ struct { \ struct kvm_mmu_memory_cache _name; \ void *_name##_objects[_capacity]; \ } + +#define DEFINE_KVM_MMU_MEMORY_CACHE(_name) \ __DEFINE_KVM_MMU_MEMORY_CACHE(_name, KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE) + +#endif /* KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE */ #define HALT_POLL_HIST_COUNT 32 diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 70e05af5ebea..c4cac4195f4a 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -373,9 +373,17 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) { void *obj; + /* + * The capacity field must be initialized since the storage for the + * objects pointer array is laid out after the kvm_mmu_memory_cache + * struct and not known at compile time. + */ + if (WARN_ON(mc->capacity == 0)) + return -EINVAL; + if (mc->nobjs >= min) return 0; - while (mc->nobjs < ARRAY_SIZE(mc->objects)) { + while (mc->nobjs < mc->capacity) { obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT); if (!obj) return mc->nobjs >= min ?
0 : -ENOMEM; From patchwork Fri Apr 1 17:55:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 1612350 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: bilbo.ozlabs.org; dkim=pass (2048-bit key; secure) header.d=lists.infradead.org header.i=@lists.infradead.org header.a=rsa-sha256 header.s=bombadil.20210309 header.b=htmadObD; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20210112 header.b=s72V80y7; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=none (no SPF record) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org; receiver=) Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by bilbo.ozlabs.org (Postfix) with ESMTPS id 4KVSXm3j6Pz9sGF for ; Sat, 2 Apr 2022 04:56:40 +1100 (AEDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=jgoxbMBdAbGDDRh19ZWekSm+L6whHgnQ5dfCQ2atgRs=; b=htmadObDPXMshRJkYeg0xKOWi9 9Nynkusp8vR4pXsF8v4s9XND91cS8EGPnZ3/ZC1LKmoBmBHpN7w2RpOtCLrbW/GZ0E+US3QHCK7oE u7P60StUXW8aSS0YhvbZgwjehMS9nzsU02lw7TnekWEy0rpqJKkkXuGhoaVi3jCPAAHZ+1ilaI0XN 3mhbT5UX3j0eSZPgigZCtJ/oh5Xe4vLkZ4hiQjSzQB1ia6wJBolQL2rRpi2JJMmbGRHi2iMs9UMai ibVPN8pqHbuQzoyIamxfvVIUGZibgcj4TktcpnC27YctUMbLV77RU4HQiUCYTrllD8OhYD0QRPieP z6A6HZfQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1naLVm-006m43-BL; Fri, 01 Apr 2022 17:56:38 +0000 Received: from mail-pj1-x104a.google.com ([2607:f8b0:4864:20::104a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1naLVk-006m0s-5R for kvm-riscv@lists.infradead.org; Fri, 01 Apr 2022 17:56:37 +0000 Received: by mail-pj1-x104a.google.com with SMTP id bo16-20020a17090b091000b001c6c96491aeso2406964pjb.1 for ; Fri, 01 Apr 2022 10:56:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=rk47OwLDTUZoPp3sSFPqcpIfMojh6XvvX1htIIdRD8M=; b=s72V80y7tDOh1F0V57dD8TPMVDGZRFJLVeRypa2j6NOIeyK6OdhN/vMWnCocaWrr7k CTCwsrxCAl4hj3azp9KbFVRgymRC6cGX41YRf3bKbcz4hip3E1y1CUleOkW/mRUjOv72 4q0juwIAEhigFEFYWcTUd2u5Yv+XnqXVD6a3pn/Ui+K8aArSkJYYBVjZFCZK1SOZsjC5 7Yesg542/Fq0ocOM9rn7tyOyRxQKZ3AxCieAlZZBYn/IE9/hjt2QS80Tz+PLFcorlflQ sdxYZkSU8ub5I6xHkzMHujo97FILxUXUQ+MHykloWi+evJGsQ9HO/j+9n0zxGqN3fxiy QTYQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=rk47OwLDTUZoPp3sSFPqcpIfMojh6XvvX1htIIdRD8M=; 
From patchwork Fri Apr 1 17:55:52 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612350
Date: Fri, 1 Apr 2022 17:55:52 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-22-dmatlack@google.com>
Subject: [PATCH v3 21/23] KVM: Allow GFP flags to be passed when topping up MMU caches
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com, Peter Feiner, and the KVM/arm64, KVM/mips, and KVM/riscv lists
This will be used in a subsequent commit to top up MMU caches under the MMU lock with GFP_NOWAIT as part of eager page splitting.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: David Matlack
---
 include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 9 +++++++-- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 252ee4a61b58..7d3a1f28beb2 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1335,6 +1335,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm); #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min); +int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min, gfp_t gfp); int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc); void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc); void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index c4cac4195f4a..554148ea0c30 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -369,7 +369,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc, return (void *)__get_free_page(gfp_flags); } -int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) +int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min, gfp_t gfp) { void *obj; @@ -384,7 +384,7 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) if (mc->nobjs >= min) return 0; while (mc->nobjs < mc->capacity) { - obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT); + obj = mmu_memory_cache_alloc_obj(mc, gfp); if (!obj) return mc->nobjs >= min ?
0 : -ENOMEM; mc->objects[mc->nobjs++] = obj; @@ -392,6 +392,11 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) return 0; } +int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) +{ + return __kvm_mmu_topup_memory_cache(mc, min, GFP_KERNEL_ACCOUNT); +} + int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc) { return mc->nobjs;
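The shape of the change is the usual default-preserving API split: a new double-underscore function takes the GFP flags explicitly, and the existing name becomes a wrapper that passes the old default. A minimal sketch of the pattern (flag values are placeholders, not the kernel's):

	#include <stdio.h>

	typedef unsigned int gfp_t;
	#define GFP_KERNEL_ACCOUNT	0x1u	/* placeholder value */
	#define GFP_NOWAIT		0x2u	/* placeholder value */

	static int __topup_cache(int min, gfp_t gfp)
	{
		printf("top up to >= %d objects, gfp=%#x\n", min, gfp);
		return 0;
	}

	/* Existing callers are untouched: the wrapper supplies the default. */
	static int topup_cache(int min)
	{
		return __topup_cache(min, GFP_KERNEL_ACCOUNT);
	}

	int main(void)
	{
		topup_cache(5);			/* may sleep, default behavior */
		__topup_cache(5, GFP_NOWAIT);	/* under a lock: must not sleep */
		return 0;
	}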
From patchwork Fri Apr 1 17:55:53 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1612352
Date: Fri, 1 Apr 2022 17:55:53 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-23-dmatlack@google.com>
Subject: [PATCH v3 22/23] KVM: x86/mmu: Support Eager Page Splitting in the shadow MMU
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com, Peter Feiner, and the KVM/arm64, KVM/mips, and KVM/riscv lists
Add support for Eager Page Splitting of pages that are mapped by the shadow MMU. Walk through the rmap, first splitting all 1GiB pages to 2MiB pages, and then splitting all 2MiB pages to 4KiB pages.

Splitting huge pages mapped by the shadow MMU requires dealing with some extra complexity beyond that of the TDP MMU:

(1) The shadow MMU has a limit on the number of shadow pages that are allowed to be allocated. So, as a policy, Eager Page Splitting refuses to split if there are KVM_MIN_FREE_MMU_PAGES or fewer pages available.

(2) Huge pages may be mapped by indirect shadow pages which have the possibility of being unsync. As a policy we opt not to split such pages as their translation may no longer be valid.

(3) Splitting a huge page may end up re-using an existing lower level shadow page table. This is unlike the TDP MMU which always allocates new shadow page tables when splitting.

(4) When installing the lower level SPTEs, they must be added to the rmap which may require allocating additional pte_list_desc structs.

Note, for case (3) we have to be careful about dealing with what's already in the lower level page table. Specifically the lower level page table may only be partially filled in and may point to even lower level page tables that are partially filled in. We can fill in non-present entries, but recursing into the lower level page tables would be too complex. This means that Eager Page Splitting may partially unmap a huge page. To handle this we flush TLBs after dropping the huge SPTE whenever we are about to install a lower level page table that was partially filled in (*). We can skip the TLB flush if the lower level page table was empty (no aliasing) or identical to what we were already going to populate it with (aliased huge page that was just eagerly split).

(*) This TLB flush could probably be delayed until we're about to drop the MMU lock, which would also let us batch flushes for multiple splits. However such scenarios should be rare in practice (a huge page must be aliased in multiple SPTEs and have been split for NX Huge Pages in only some of them). Flushing immediately is simpler to plumb and also reduces the chances of tripping over a CPU bug (e.g. see iTLB multi-hit).
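A condensed model of the fill-and-flush decision described for case (3) above: only non-present entries of the (possibly reused) child table are populated, and a TLB flush is required exactly when a present entry points to a deeper, possibly partially filled, page table. All helpers below are stand-ins, not KVM functions:

	#include <stdbool.h>
	#include <stdint.h>

	#define ENTRIES_PER_TABLE 512

	/* Stand-ins for is_shadow_present_pte() / is_last_spte(). */
	static bool entry_present(uint64_t e) { return e & 0x1; }
	static bool entry_is_leaf(uint64_t e) { return e & 0x2; }

	static uint64_t make_split_entry(uint64_t huge_entry, int index)
	{
		return (huge_entry & ~0xfffull) | 0x3 | ((uint64_t)index << 12);
	}

	/*
	 * Returns true if TLBs must be flushed before the child table is
	 * installed: a present non-leaf entry may map only part of its
	 * region, so replacing the huge mapping could unmap memory.
	 */
	static bool fill_child_table(uint64_t child[ENTRIES_PER_TABLE],
				     uint64_t huge_entry)
	{
		bool flush = false;

		for (int i = 0; i < ENTRIES_PER_TABLE; i++) {
			if (entry_present(child[i])) {
				flush |= !entry_is_leaf(child[i]);
				continue;	/* never overwrite a live entry */
			}
			child[i] = make_split_entry(huge_entry, i);
		}
		return flush;
	}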
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 05161afd7642..495f6ac53801 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2360,9 +2360,6 @@
 			the KVM_CLEAR_DIRTY ioctl, and only for the pages being
 			cleared.
 
-			Eager page splitting currently only supports splitting
-			huge pages mapped by the TDP MMU.
-
 			Default is Y (on).
 
 	kvm.enable_vmware_backdoor=[KVM]
 			Support VMware backdoor PV interface.
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ffb2b99f3a60..053a32afd18b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1246,6 +1246,16 @@ struct kvm_arch {
 	hpa_t	hv_root_tdp;
 	spinlock_t hv_root_tdp_lock;
 #endif
+
+	/*
+	 * Memory cache used to allocate pte_list_desc structs while splitting
+	 * huge pages. In the worst case, to split one huge page we need 512
+	 * pte_list_desc structs to add each lower level leaf sptep to the rmap
+	 * plus 1 to extend the parent_ptes rmap of the lower level page table.
+	 */
+#define HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY 513
+	__DEFINE_KVM_MMU_MEMORY_CACHE(huge_page_split_desc_cache,
+				      HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY);
 };
 
 struct kvm_vm_stat {
@@ -1621,6 +1631,8 @@ void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
 
+void free_huge_page_split_desc_cache(struct kvm *kvm);
+
 int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
 
 int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a8200b3f8782..9adafed43048 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5972,6 +5972,11 @@ void kvm_mmu_init_vm(struct kvm *kvm)
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
+
+	kvm->arch.huge_page_split_desc_cache.capacity =
+		HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY;
+	kvm->arch.huge_page_split_desc_cache.kmem_cache = pte_list_desc_cache;
+	kvm->arch.huge_page_split_desc_cache.gfp_zero = __GFP_ZERO;
 }
 
 void kvm_mmu_uninit_vm(struct kvm *kvm)
@@ -6102,12 +6107,267 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
+static int topup_huge_page_split_desc_cache(struct kvm *kvm, bool locked)
+{
+	gfp_t gfp = gfp_flags_for_split(locked);
+
+	/*
+	 * We may need up to HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY descriptors
+	 * to split any given huge page. We could more accurately calculate how
+	 * many we actually need by inspecting all the rmaps and checking which
+	 * will need new descriptors, but that's not worth the extra cost or
+	 * code complexity.
+	 */
+	return __kvm_mmu_topup_memory_cache(
+			&kvm->arch.huge_page_split_desc_cache,
+			HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY, gfp);
+}
+
+void free_huge_page_split_desc_cache(struct kvm *kvm)
+{
+	kvm_mmu_free_memory_cache(&kvm->arch.huge_page_split_desc_cache);
+}
+
+static int alloc_memory_for_split(struct kvm *kvm, struct kvm_mmu_page **spp,
+				  bool locked)
+{
+	int r;
+
+	r = topup_huge_page_split_desc_cache(kvm, locked);
+	if (r)
+		return r;
+
+	if (!*spp) {
+		*spp = kvm_mmu_alloc_direct_sp_for_split(locked);
+		r = *spp ? 0 : -ENOMEM;
+	}
+
+	return r;
+}
+
+static struct kvm_mmu_page *kvm_mmu_get_sp_for_split(struct kvm *kvm,
+		const struct kvm_memory_slot *slot,
+		u64 *huge_sptep, struct kvm_mmu_page **spp)
+{
+	struct kvm_mmu_page *sp, *huge_sp = sptep_to_sp(huge_sptep);
+	union kvm_mmu_page_role role;
+	LIST_HEAD(invalid_list);
+	unsigned int access;
+	gfn_t gfn;
+
+	gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt);
+	access = kvm_mmu_page_get_access(huge_sp, huge_sptep - huge_sp->spt);
+
+	/*
+	 * Huge page splitting always uses direct shadow pages since we are
+	 * directly mapping the huge page GFN region with smaller pages.
+	 */
+	role = kvm_mmu_child_role(huge_sptep, true, access);
+
+	sp = __kvm_mmu_find_shadow_page(kvm, gfn, role, &invalid_list);
+	if (sp) {
+		/* Direct SPs should never be unsync. */
+		WARN_ON_ONCE(sp->unsync);
+		trace_kvm_mmu_get_page(sp, false);
+	} else {
+		swap(sp, *spp);
+		init_shadow_page(kvm, sp, slot, gfn, role);
+		trace_kvm_mmu_get_page(sp, true);
+	}
+
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
+
+	return sp;
+}
+
+static void kvm_mmu_split_huge_page(struct kvm *kvm,
+				    const struct kvm_memory_slot *slot,
+				    u64 *huge_sptep, struct kvm_mmu_page **spp)
+{
+	struct kvm_mmu_memory_cache *cache = &kvm->arch.huge_page_split_desc_cache;
+	u64 huge_spte = READ_ONCE(*huge_sptep);
+	struct kvm_mmu_page *sp;
+	bool flush = false;
+	u64 *sptep, spte;
+	gfn_t gfn;
+	int index;
+
+	sp = kvm_mmu_get_sp_for_split(kvm, slot, huge_sptep, spp);
+
+	for (index = 0; index < PT64_ENT_PER_PAGE; index++) {
+		sptep = &sp->spt[index];
+		gfn = kvm_mmu_page_get_gfn(sp, index);
+
+		/*
+		 * sp may have populated page table entries, e.g. if this huge
+		 * page is aliased by multiple sptes with the same access
+		 * permissions. We know the sptes will be mapping the same
+		 * gfn-to-pfn translation since sp is direct. However, a given
+		 * spte may point to an even lower level page table. We don't
+		 * know if that lower level page table is completely filled in,
+		 * i.e. we may be effectively unmapping a region of memory, so
+		 * we must flush the TLB.
+		 */
+		if (is_shadow_present_pte(*sptep)) {
+			flush |= !is_last_spte(*sptep, sp->role.level);
+			continue;
+		}
+
+		spte = make_huge_page_split_spte(huge_spte, sp, index);
+		mmu_spte_set(sptep, spte);
+		__rmap_add(kvm, cache, slot, sptep, gfn, sp->role.access);
+	}
+
+	/*
+	 * Replace the huge spte with a pointer to the populated lower level
+	 * page table. If the lower-level page table identically maps the huge
+	 * page, there's no need for a TLB flush. Otherwise, flush TLBs after
+	 * dropping the huge page and before installing the shadow page table.
+	 */
+	__drop_large_spte(kvm, huge_sptep, flush);
+	__link_shadow_page(cache, huge_sptep, sp);
+}
+
+static int __try_split_huge_page(struct kvm *kvm,
+				 const struct kvm_memory_slot *slot,
+				 u64 *huge_sptep, struct kvm_mmu_page **spp)
+{
+	int r = 0;
+
+	if (kvm_mmu_available_pages(kvm) <= KVM_MIN_FREE_MMU_PAGES)
+		return -ENOSPC;
+
+	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
+		goto drop_lock;
+
+	r = alloc_memory_for_split(kvm, spp, true);
+	if (r)
+		goto drop_lock;
+
+	kvm_mmu_split_huge_page(kvm, slot, huge_sptep, spp);
+
+	return 0;
+
+drop_lock:
+	write_unlock(&kvm->mmu_lock);
+	cond_resched();
+	r = alloc_memory_for_split(kvm, spp, false);
+	write_lock(&kvm->mmu_lock);
+
+	/*
+	 * Ask the caller to try again if the allocation succeeded. We dropped
+	 * the MMU lock, so huge_sptep may no longer be valid.
+	 */
+	return r ?: -EAGAIN;
+}
+
+static int try_split_huge_page(struct kvm *kvm,
+			       const struct kvm_memory_slot *slot,
+			       u64 *huge_sptep, struct kvm_mmu_page **spp)
+{
+	struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
+	int level, r;
+	gfn_t gfn;
+	u64 spte;
+
+	/*
+	 * Record information about the huge page being split to use in the
+	 * tracepoint below. Do this now because __try_split_huge_page() may
+	 * drop the MMU lock, after which huge_sptep may no longer be a valid
+	 * pointer.
+	 */
+	gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt);
+	level = huge_sp->role.level;
+	spte = *huge_sptep;
+
+	r = __try_split_huge_page(kvm, slot, huge_sptep, spp);
+
+	trace_kvm_mmu_split_huge_page(gfn, spte, level, r);
+
+	return r;
+}
+
+static bool skip_split_huge_page(u64 *huge_sptep)
+{
+	struct kvm_mmu_page *sp = sptep_to_sp(huge_sptep);
+
+	if (WARN_ON_ONCE(!is_large_pte(*huge_sptep)))
+		return true;
+
+	/*
+	 * As a policy, do not split huge pages if the sp on which they reside
+	 * is unsync. Unsync means the guest is modifying the page table being
+	 * shadowed, so splitting may be a waste of cycles and memory.
+	 */
+	return sp->role.invalid || sp->unsync;
+}
+
+static bool rmap_try_split_huge_pages(struct kvm *kvm,
+				      struct kvm_rmap_head *rmap_head,
+				      const struct kvm_memory_slot *slot)
+{
+	struct kvm_mmu_page *sp = NULL;
+	struct rmap_iterator iter;
+	u64 *huge_sptep;
+	int r;
+
+restart:
+	for_each_rmap_spte(rmap_head, &iter, huge_sptep) {
+		if (skip_split_huge_page(huge_sptep))
+			continue;
+
+		r = try_split_huge_page(kvm, slot, huge_sptep, &sp);
+		if (r < 0 && r != -EAGAIN)
+			break;
+
+		/*
+		 * Splitting succeeded (and removed huge_sptep from the
+		 * iterator) or we had to drop the MMU lock. Either way, restart
+		 * the iterator to get it back into a consistent state.
+		 */
+		goto restart;
+	}
+
+	if (sp)
+		kvm_mmu_free_shadow_page(sp);
+
+	return false;
+}
+
+static void kvm_rmap_try_split_huge_pages(struct kvm *kvm,
+					  const struct kvm_memory_slot *slot,
+					  gfn_t start, gfn_t end,
+					  int target_level)
+{
+	int level;
+
+	/*
+	 * Split huge pages starting with KVM_MAX_HUGEPAGE_LEVEL and working
+	 * down to the target level. This ensures pages are recursively split
+	 * all the way to the target level. There's no need to split pages
+	 * already at the target level.
+	 */
+	for (level = KVM_MAX_HUGEPAGE_LEVEL; level > target_level; level--) {
+		slot_handle_level_range(kvm, slot,
+					rmap_try_split_huge_pages,
+					level, level, start, end - 1,
+					true, false);
+	}
+}
+
 /* Must be called with the mmu_lock held in write-mode. */
 void kvm_mmu_try_split_huge_pages(struct kvm *kvm,
				   const struct kvm_memory_slot *memslot,
				   u64 start, u64 end,
				   int target_level)
 {
+	if (kvm_memslots_have_rmaps(kvm))
+		kvm_rmap_try_split_huge_pages(kvm, memslot, start, end,
+					      target_level);
+
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end,
 						 target_level, false);
@@ -6125,6 +6385,14 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm,
 	u64 start = memslot->base_gfn;
 	u64 end = start + memslot->npages;
 
+	if (kvm_memslots_have_rmaps(kvm)) {
+		topup_huge_page_split_desc_cache(kvm, false);
+		write_lock(&kvm->mmu_lock);
+		kvm_rmap_try_split_huge_pages(kvm, memslot, start, end,
+					      target_level);
+		write_unlock(&kvm->mmu_lock);
+		free_huge_page_split_desc_cache(kvm);
+	}
+
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
 		kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end,
 						 target_level, true);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d3a9ce07a565..02728c3f088e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12106,6 +12106,12 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 		 * page faults will create the large-page sptes.
 		 */
 		kvm_mmu_zap_collapsible_sptes(kvm, new);
+
+		/*
+		 * Free any memory left behind by eager page splitting. Ignore
+		 * the module parameter since userspace might have changed it.
+		 */
+		free_huge_page_split_desc_cache(kvm);
 	} else {
 		/*
 		 * Initially-all-set does not require write protecting any page,
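As a rough standalone model (not kernel code; the names below are made
up for illustration), the TLB flush rule implemented by
kvm_mmu_split_huge_page() above boils down to: flush if and only if the
re-used lower level page table already contained a present non-leaf
entry, since such an entry may point to a partially filled page table
and thus partially unmap the huge page:

	#include <stdbool.h>
	#include <stdio.h>

	enum spte { NOT_PRESENT, LEAF, NON_LEAF };

	/* Mirrors the "flush |= !is_last_spte(...)" accumulation above. */
	static bool need_tlb_flush(const enum spte table[512])
	{
		bool flush = false;

		for (int i = 0; i < 512; i++)
			if (table[i] == NON_LEAF)
				flush = true;
		return flush;
	}

	int main(void)
	{
		enum spte table[512] = { NOT_PRESENT };

		/* Empty table (no aliasing): no flush needed. */
		printf("empty: %d\n", need_tlb_flush(table));

		for (int i = 0; i < 512; i++)
			table[i] = LEAF;
		/* Identical leaf mapping (aliased huge page): no flush. */
		printf("aliased: %d\n", need_tlb_flush(table));

		table[7] = NON_LEAF;
		/* Points to a possibly partial page table: must flush. */
		printf("partial: %d\n", need_tlb_flush(table));
		return 0;
	}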
From patchwork Fri Apr 1 17:55:54 2022

Date: Fri, 1 Apr 2022 17:55:54 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-24-dmatlack@google.com>
References: <20220401175554.1931568-1-dmatlack@google.com>
Subject: [PATCH v3 23/23] KVM: selftests: Map x86_64 guest virtual memory with huge pages
From: David Matlack
To: Paolo Bonzini

Override virt_map() in x86_64 selftests to use the largest page size
possible when mapping guest virtual memory. This enables testing eager
page splitting with shadow paging (e.g. kvm_intel.ept=N), as it allows
KVM to shadow guest memory with huge pages.

Signed-off-by: David Matlack
---
 .../selftests/kvm/include/x86_64/processor.h |  6 ++++
 tools/testing/selftests/kvm/lib/kvm_util.c   |  4 +--
 .../selftests/kvm/lib/x86_64/processor.c     | 31 +++++++++++++++++
 3 files changed, 39 insertions(+), 2 deletions(-)
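As a quick sanity check (a standalone sketch, not part of the patch) of
the page_size_bytes() helper added below: each step up in x86 page size
covers 9 more address bits on top of the 12-bit 4KiB page offset, so the
formula 1UL << (page_size * 9 + 12) yields 4KiB, 2MiB, and 1GiB for the
three enum values:

	#include <assert.h>
	#include <stddef.h>

	/* Same formula as the selftests helper below. */
	static size_t page_size_bytes(unsigned int page_size)
	{
		return 1UL << (page_size * 9 + 12);
	}

	int main(void)
	{
		assert(page_size_bytes(0) == 4096UL);		/* X86_PAGE_SIZE_4K */
		assert(page_size_bytes(1) == 2097152UL);	/* X86_PAGE_SIZE_2M */
		assert(page_size_bytes(2) == 1073741824UL);	/* X86_PAGE_SIZE_1G */
		return 0;
	}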
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 37db341d4cc5..efb228d2fbf7 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -470,6 +470,12 @@ enum x86_page_size {
 	X86_PAGE_SIZE_2M,
 	X86_PAGE_SIZE_1G,
 };
+
+static inline size_t page_size_bytes(enum x86_page_size page_size)
+{
+	return 1UL << (page_size * 9 + 12);
+}
+
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		   enum x86_page_size page_size);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1665a220abcb..60198587236d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1432,8 +1432,8 @@ vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm)
  * Within the VM given by @vm, creates a virtual translation for
  * @npages starting at @vaddr to the page range starting at @paddr.
  */
-void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
-	      unsigned int npages)
+void __weak virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		     unsigned int npages)
 {
 	size_t page_size = vm->page_size;
 	size_t size = npages * page_size;
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..7df84292d5de 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -282,6 +282,37 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K);
 }
 
+void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, unsigned int npages)
+{
+	size_t size = (size_t) npages * vm->page_size;
+	size_t vend = vaddr + size;
+	enum x86_page_size page_size;
+	size_t stride;
+
+	TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow");
+	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
+
+	/*
+	 * Map the region with all 1G pages if possible, falling back to all
+	 * 2M pages, and finally all 4K pages. This could be improved to use
+	 * a mix of page sizes so that more of the region is mapped with large
+	 * pages.
+	 */
+	for (page_size = X86_PAGE_SIZE_1G; page_size >= X86_PAGE_SIZE_4K; page_size--) {
+		stride = page_size_bytes(page_size);
+
+		if (!(vaddr % stride) && !(paddr % stride) && !(size % stride))
+			break;
+	}
+
+	TEST_ASSERT(page_size >= X86_PAGE_SIZE_4K,
+		    "Cannot map unaligned region: vaddr 0x%lx paddr 0x%lx npages 0x%x\n",
+		    vaddr, paddr, npages);
+
+	for (; vaddr < vend; vaddr += stride, paddr += stride)
+		__virt_pg_map(vm, vaddr, paddr, page_size);
+}
+
 static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
						       uint64_t vaddr)
 {