{"id":2222476,"url":"http://patchwork.ozlabs.org/api/patches/2222476/?format=json","web_url":"http://patchwork.ozlabs.org/project/kvm-riscv/patch/20260412023822.83341-1-tjytimi@163.com/","project":{"id":70,"url":"http://patchwork.ozlabs.org/api/projects/70/?format=json","name":"Linux KVM RISC-V","link_name":"kvm-riscv","list_id":"kvm-riscv.lists.infradead.org","list_email":"kvm-riscv@lists.infradead.org","web_url":"","scm_url":"","webscm_url":"","list_archive_url":"http://lists.infradead.org/pipermail/kvm-riscv/","list_archive_url_format":"","commit_url_format":""},"msgid":"<20260412023822.83341-1-tjytimi@163.com>","list_archive_url":null,"date":"2026-04-12T02:38:22","name":"[v4] RISC-V: KVM: Batch stage-2 TLB flushes","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"25346233f6300f0afb935b0d9c4a439085bb4c83","submitter":{"id":91419,"url":"http://patchwork.ozlabs.org/api/people/91419/?format=json","name":"Jinyu Tang","email":"tjytimi@163.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/kvm-riscv/patch/20260412023822.83341-1-tjytimi@163.com/mbox/","series":[{"id":499585,"url":"http://patchwork.ozlabs.org/api/series/499585/?format=json","web_url":"http://patchwork.ozlabs.org/project/kvm-riscv/list/?series=499585","date":"2026-04-12T02:38:22","name":"[v4] RISC-V: KVM: Batch stage-2 TLB flushes","version":4,"mbox":"http://patchwork.ozlabs.org/series/499585/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2222476/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2222476/checks/","tags":{},"related":[],"headers":{"From":"Jinyu Tang <tjytimi@163.com>","To":"Anup Patel <apatel@ventanamicro.com>, Anup Patel <anup@brainfault.org>,\n Atish Patra <atish.patra@linux.dev>, Paul Walmsley <pjw@kernel.org>,\n Paul Walmsley <paul.walmsley@sifive.com>,\n Palmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>,\n Alexandre Ghiti <alex@ghiti.fr>,\n =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= <radim.krcmar@oss.qualcomm.com>,\n Andrew Jones <andrew.jones@oss.qualcomm.com>,\n Conor Dooley <conor.dooley@microchip.com>,\n Yong-Xuan Wang <yongxuan.wang@sifive.com>, Nutty Liu <nutty.liu@hotmail.com>","Cc":"kvm@vger.kernel.org,\n\tkvm-riscv@lists.infradead.org,\n\tlinux-riscv@lists.infradead.org,\n\tlinux-kernel@vger.kernel.org,\n\tJinyu Tang <tjytimi@163.com>","Subject":"[PATCH v4] RISC-V: KVM: Batch stage-2 TLB flushes","Date":"Sun, 12 Apr 2026 10:38:22 +0800","Message-ID":"<20260412023822.83341-1-tjytimi@163.com>","X-Mailer":"git-send-email 2.43.0","MIME-Version":"1.0","X-BeenThere":"kvm-riscv@lists.infradead.org","X-Mailman-Version":"2.1.34","Precedence":"list","List-Id":"<kvm-riscv.lists.infradead.org>","List-Unsubscribe":"<http://lists.infradead.org/mailman/options/kvm-riscv>,\n <mailto:kvm-riscv-request@lists.infradead.org?subject=unsubscribe>","List-Archive":"<http://lists.infradead.org/pipermail/kvm-riscv/>","List-Post":"<mailto:kvm-riscv@lists.infradead.org>","List-Help":"<mailto:kvm-riscv-request@lists.infradead.org?subject=help>","List-Subscribe":"<http://lists.infradead.org/mailman/listinfo/kvm-riscv>,\n <mailto:kvm-riscv-request@lists.infradead.org?subject=subscribe>","Content-Type":"text/plain; charset=\"us-ascii\"","Content-Transfer-Encoding":"7bit","Sender":"\"kvm-riscv\" <kvm-riscv-bounces@lists.infradead.org>","Errors-To":"kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org"},"content":"Currently, KVM RISC-V triggers a TLB flush for every single stage-2 PTE\nmodification (unmap or write-protect). Although KVM coalesces the\nhardware IPIs, the software overhead of executing the flush work\nfor every page is large, especially during dirty page tracking.\n\nFollowing the approach used in x86 and arm64, this patch optimizes\nthe MMU logic by making the PTE manipulation functions return a boolean\nindicating whether a leaf PTE was actually changed. 
The outer MMU functions\nbubble up this flag to batch the remote TLB flushes.\n\nConsequently, the flush operation is executed only once per batch.\nMoving it outside the `mmu_lock` also reduces lock contention.\n\nTested with tools/testing/selftests/kvm on a 4-vCPU guest (Host\nenvironment: QEMU 10.2.1 RISC-V):\n1. demand_paging_test (1GB memory)\n  time ./demand_paging_test -b 1G -v 4\n- Total execution time reduced from ~2m39s to ~2m31s\n2. dirty_log_perf_test (1GB memory)\n  ./dirty_log_perf_test -b 1G -v 4\n- \"Clear dirty log time\" per iteration dropped significantly from\n   ~3.40s to ~0.18s\n\nReviewed-by: Nutty Liu <nutty.liu@hotmail.com>\nSigned-off-by: Jinyu Tang <tjytimi@163.com>\n---\nv3 -> v4:\n- Rebased to branch linux-riscv_kvm_next and retested performance\n\nv2 -> v3:\nAddressed review comments from Anup Patel:\n- Removed gstage_tlb_flush() for non-leaf PTEs; only set the flush flag\n- Removed KVM_GSTAGE_FLAGS_LOCAL check\n- Used kvm_flush_remote_tlbs_range() instead of full flushes in\n  kvm_arch_flush_shadow_memslot() and kvm_unmap_gfn_range() to avoid\n  unnecessary global TLB flushes.\n\nv1 -> v2:\n- Fixed alignment issues in multi-line function calls, as suggested by\n  Nutty Liu.\n\n arch/riscv/include/asm/kvm_gstage.h |  6 ++--\n arch/riscv/kvm/gstage.c             | 35 ++++++++++++++---------\n arch/riscv/kvm/mmu.c                | 44 +++++++++++++++++++++++------\n 3 files changed, 60 insertions(+), 25 deletions(-)","diff":"diff --git a/arch/riscv/include/asm/kvm_gstage.h b/arch/riscv/include/asm/kvm_gstage.h\nindex 9c908432b..f820c6783 100644\n--- a/arch/riscv/include/asm/kvm_gstage.h\n+++ b/arch/riscv/include/asm/kvm_gstage.h\n@@ -70,13 +70,13 @@ enum kvm_riscv_gstage_op {\n \tGSTAGE_OP_WP,\t\t/* Write-protect */\n };\n \n-void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr,\n+bool kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr,\n \t\t\t     pte_t *ptep, u32 ptep_level, enum kvm_riscv_gstage_op op);\n \n-void 
kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,\n+bool kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,\n \t\t\t\t  gpa_t start, gpa_t size, bool may_block);\n \n-void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end);\n+bool kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end);\n \n void kvm_riscv_gstage_mode_detect(void);\n \ndiff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c\nindex d9fe8be2a..e020b334a 100644\n--- a/arch/riscv/kvm/gstage.c\n+++ b/arch/riscv/kvm/gstage.c\n@@ -337,35 +337,36 @@ int kvm_riscv_gstage_split_huge(struct kvm_gstage *gstage,\n \treturn 0;\n }\n \n-void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr,\n+bool kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr,\n \t\t\t     pte_t *ptep, u32 ptep_level, enum kvm_riscv_gstage_op op)\n {\n \tint i, ret;\n \tpte_t old_pte, *next_ptep;\n \tu32 next_ptep_level;\n \tunsigned long next_page_size, page_size;\n+\tbool flush = false;\n \n \tret = gstage_level_to_page_size(gstage, ptep_level, &page_size);\n \tif (ret)\n-\t\treturn;\n+\t\treturn false;\n \n \tWARN_ON(addr & (page_size - 1));\n \n \tif (!pte_val(ptep_get(ptep)))\n-\t\treturn;\n+\t\treturn false;\n \n \tif (ptep_level && !gstage_pte_leaf(ptep)) {\n \t\tnext_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));\n \t\tnext_ptep_level = ptep_level - 1;\n \t\tret = gstage_level_to_page_size(gstage, next_ptep_level, &next_page_size);\n \t\tif (ret)\n-\t\t\treturn;\n+\t\t\treturn false;\n \n \t\tif (op == GSTAGE_OP_CLEAR)\n \t\t\tset_pte(ptep, __pte(0));\n \t\tfor (i = 0; i < PTRS_PER_PTE; i++)\n-\t\t\tkvm_riscv_gstage_op_pte(gstage, addr + i * next_page_size,\n-\t\t\t\t\t\t&next_ptep[i], next_ptep_level, op);\n+\t\t\tflush |= kvm_riscv_gstage_op_pte(gstage, addr + i * next_page_size,\n+\t\t\t\t\t\t\t &next_ptep[i], next_ptep_level, op);\n \t\tif (op == GSTAGE_OP_CLEAR)\n \t\t\tput_page(virt_to_page(next_ptep));\n \t} else {\n@@ -375,11 
+376,13 @@ void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr,\n \t\telse if (op == GSTAGE_OP_WP)\n \t\t\tset_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE));\n \t\tif (pte_val(*ptep) != pte_val(old_pte))\n-\t\t\tgstage_tlb_flush(gstage, ptep_level, addr);\n+\t\t\tflush = true;\n \t}\n+\n+\treturn flush;\n }\n \n-void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,\n+bool kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,\n \t\t\t\t  gpa_t start, gpa_t size, bool may_block)\n {\n \tint ret;\n@@ -388,6 +391,7 @@ void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,\n \tbool found_leaf;\n \tunsigned long page_size;\n \tgpa_t addr = start, end = start + size;\n+\tbool flush = false;\n \n \twhile (addr < end) {\n \t\tfound_leaf = kvm_riscv_gstage_get_leaf(gstage, addr, &ptep, &ptep_level);\n@@ -399,8 +403,8 @@ void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,\n \t\t\tgoto next;\n \n \t\tif (!(addr & (page_size - 1)) && ((end - addr) >= page_size))\n-\t\t\tkvm_riscv_gstage_op_pte(gstage, addr, ptep,\n-\t\t\t\t\t\tptep_level, GSTAGE_OP_CLEAR);\n+\t\t\tflush |= kvm_riscv_gstage_op_pte(gstage, addr, ptep,\n+\t\t\t\t\t\t\t ptep_level, GSTAGE_OP_CLEAR);\n \n next:\n \t\taddr += page_size;\n@@ -412,9 +416,11 @@ void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,\n \t\tif (!(gstage->flags & KVM_GSTAGE_FLAGS_LOCAL) && may_block && addr < end)\n \t\t\tcond_resched_lock(&gstage->kvm->mmu_lock);\n \t}\n+\n+\treturn flush;\n }\n \n-void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end)\n+bool kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end)\n {\n \tint ret;\n \tpte_t *ptep;\n@@ -422,6 +428,7 @@ void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end\n \tbool found_leaf;\n \tgpa_t addr = start;\n \tunsigned long page_size;\n+\tbool flush = false;\n \n \twhile (addr < end) {\n \t\tfound_leaf = kvm_riscv_gstage_get_leaf(gstage, addr, &ptep, 
&ptep_level);\n@@ -433,11 +440,13 @@ void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end\n \t\t\tgoto next;\n \n \t\taddr = ALIGN_DOWN(addr, page_size);\n-\t\tkvm_riscv_gstage_op_pte(gstage, addr, ptep,\n-\t\t\t\t\tptep_level, GSTAGE_OP_WP);\n+\t\tflush |= kvm_riscv_gstage_op_pte(gstage, addr, ptep,\n+\t\t\t\t\t\t ptep_level, GSTAGE_OP_WP);\n next:\n \t\taddr += page_size;\n \t}\n+\n+\treturn flush;\n }\n \n void __init kvm_riscv_gstage_mode_detect(void)\ndiff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c\nindex 2d3def024..8469ed932 100644\n--- a/arch/riscv/kvm/mmu.c\n+++ b/arch/riscv/kvm/mmu.c\n@@ -23,13 +23,15 @@ static void mmu_wp_memory_region(struct kvm *kvm, int slot)\n \tphys_addr_t start = memslot->base_gfn << PAGE_SHIFT;\n \tphys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;\n \tstruct kvm_gstage gstage;\n+\tbool flush;\n \n \tkvm_riscv_gstage_init(&gstage, kvm);\n \n \tspin_lock(&kvm->mmu_lock);\n-\tkvm_riscv_gstage_wp_range(&gstage, start, end);\n+\tflush = kvm_riscv_gstage_wp_range(&gstage, start, end);\n \tspin_unlock(&kvm->mmu_lock);\n-\tkvm_flush_remote_tlbs_memslot(kvm, memslot);\n+\tif (flush)\n+\t\tkvm_flush_remote_tlbs_memslot(kvm, memslot);\n }\n \n int kvm_riscv_mmu_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,\n@@ -82,12 +84,17 @@ int kvm_riscv_mmu_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,\n void kvm_riscv_mmu_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size)\n {\n \tstruct kvm_gstage gstage;\n+\tbool flush;\n \n \tkvm_riscv_gstage_init(&gstage, kvm);\n \n \tspin_lock(&kvm->mmu_lock);\n-\tkvm_riscv_gstage_unmap_range(&gstage, gpa, size, false);\n+\tflush = kvm_riscv_gstage_unmap_range(&gstage, gpa, size, false);\n \tspin_unlock(&kvm->mmu_lock);\n+\n+\tif (flush)\n+\t\tkvm_flush_remote_tlbs_range(kvm, gpa >> PAGE_SHIFT,\n+\t\t\t\t\t    size >> PAGE_SHIFT);\n }\n \n void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,\n@@ -99,10 +106,14 @@ void 
kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,\n \tphys_addr_t start = (base_gfn +  __ffs(mask)) << PAGE_SHIFT;\n \tphys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;\n \tstruct kvm_gstage gstage;\n+\tbool flush;\n \n \tkvm_riscv_gstage_init(&gstage, kvm);\n \n-\tkvm_riscv_gstage_wp_range(&gstage, start, end);\n+\tflush = kvm_riscv_gstage_wp_range(&gstage, start, end);\n+\tif (flush)\n+\t\tkvm_flush_remote_tlbs_range(kvm, start >> PAGE_SHIFT,\n+\t\t\t\t\t    (end - start) >> PAGE_SHIFT);\n }\n \n void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)\n@@ -128,12 +139,16 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,\n \tgpa_t gpa = slot->base_gfn << PAGE_SHIFT;\n \tphys_addr_t size = slot->npages << PAGE_SHIFT;\n \tstruct kvm_gstage gstage;\n+\tbool flush;\n \n \tkvm_riscv_gstage_init(&gstage, kvm);\n \n \tspin_lock(&kvm->mmu_lock);\n-\tkvm_riscv_gstage_unmap_range(&gstage, gpa, size, false);\n+\tflush = kvm_riscv_gstage_unmap_range(&gstage, gpa, size, false);\n \tspin_unlock(&kvm->mmu_lock);\n+\tif (flush)\n+\t\tkvm_flush_remote_tlbs_range(kvm, gpa >> PAGE_SHIFT,\n+\t\t\t\t\t    size >> PAGE_SHIFT);\n }\n \n void kvm_arch_commit_memory_region(struct kvm *kvm,\n@@ -231,17 +246,24 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)\n {\n \tstruct kvm_gstage gstage;\n \tbool mmu_locked;\n+\tbool flush;\n \n \tif (!kvm->arch.pgd)\n \t\treturn false;\n \n \tkvm_riscv_gstage_init(&gstage, kvm);\n \tmmu_locked = spin_trylock(&kvm->mmu_lock);\n-\tkvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT,\n-\t\t\t\t     (range->end - range->start) << PAGE_SHIFT,\n-\t\t\t\t     range->may_block);\n+\n+\tflush = kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT,\n+\t\t\t\t\t     (range->end - range->start) << PAGE_SHIFT,\n+\t\t\t\t\t     range->may_block);\n+\n \tif (mmu_locked)\n \t\tspin_unlock(&kvm->mmu_lock);\n+\n+\tif (flush)\n+\t\tkvm_flush_remote_tlbs_range(kvm, 
range->start,\n+\t\t\t\t\t    range->end - range->start);\n \treturn false;\n }\n \n@@ -557,11 +579,12 @@ void kvm_riscv_mmu_free_pgd(struct kvm *kvm)\n {\n \tstruct kvm_gstage gstage;\n \tvoid *pgd = NULL;\n+\tbool flush = false;\n \n \tspin_lock(&kvm->mmu_lock);\n \tif (kvm->arch.pgd) {\n \t\tkvm_riscv_gstage_init(&gstage, kvm);\n-\t\tkvm_riscv_gstage_unmap_range(&gstage, 0UL,\n+\t\tflush = kvm_riscv_gstage_unmap_range(&gstage, 0UL,\n \t\t\tkvm_riscv_gstage_gpa_size(kvm->arch.pgd_levels), false);\n \t\tpgd = READ_ONCE(kvm->arch.pgd);\n \t\tkvm->arch.pgd = NULL;\n@@ -570,6 +593,9 @@ void kvm_riscv_mmu_free_pgd(struct kvm *kvm)\n \t}\n \tspin_unlock(&kvm->mmu_lock);\n \n+\tif (flush)\n+\t\tkvm_flush_remote_tlbs(kvm);\n+\n \tif (pgd)\n \t\tfree_pages((unsigned long)pgd, get_order(kvm_riscv_gstage_pgd_size));\n }\n","prefixes":["v4"]}