{"id":2225345,"url":"http://patchwork.ozlabs.org/api/patches/2225345/?format=json","web_url":"http://patchwork.ozlabs.org/project/kvm-riscv/patch/20260420212004.3938325-12-seanjc@google.com/","project":{"id":70,"url":"http://patchwork.ozlabs.org/api/projects/70/?format=json","name":"Linux KVM RISC-V","link_name":"kvm-riscv","list_id":"kvm-riscv.lists.infradead.org","list_email":"kvm-riscv@lists.infradead.org","web_url":"","scm_url":"","webscm_url":"","list_archive_url":"http://lists.infradead.org/pipermail/kvm-riscv/","list_archive_url_format":"","commit_url_format":""},"msgid":"<20260420212004.3938325-12-seanjc@google.com>","list_archive_url":null,"date":"2026-04-20T21:19:56","name":"[v3,11/19] KVM: selftests: Drop \"vaddr_\" from APIs that allocate memory for a given VM","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"83415beb27d65fd58007575cbd1a9ec24d0f7f1d","submitter":{"id":81022,"url":"http://patchwork.ozlabs.org/api/people/81022/?format=json","name":"Sean Christopherson","email":"seanjc@google.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/kvm-riscv/patch/20260420212004.3938325-12-seanjc@google.com/mbox/","series":[{"id":500685,"url":"http://patchwork.ozlabs.org/api/series/500685/?format=json","web_url":"http://patchwork.ozlabs.org/project/kvm-riscv/list/?series=500685","date":"2026-04-20T21:19:45","name":"KVM: selftests: Use kernel-style integer and g[vp]a_t types","version":3,"mbox":"http://patchwork.ozlabs.org/series/500685/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2225345/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2225345/checks/","tags":{},"related":[],"headers":{"Return-Path":"\n <kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n secure) 
header.d=lists.infradead.org header.i=@lists.infradead.org\n header.a=rsa-sha256 header.s=bombadil.20210309 header.b=tN2Mw9kq;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256\n header.s=20251104 header.b=bMHTakY6;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=none (no SPF record) smtp.mailfrom=lists.infradead.org\n (client-ip=2607:7c80:54:3::133; helo=bombadil.infradead.org;\n envelope-from=kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org;\n receiver=patchwork.ozlabs.org)"],"Received":["from bombadil.infradead.org (bombadil.infradead.org\n [IPv6:2607:7c80:54:3::133])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fzz1N3gbFz1yCv\n\tfor <incoming@patchwork.ozlabs.org>; Tue, 21 Apr 2026 07:20:56 +1000 (AEST)","from localhost ([::1] helo=bombadil.infradead.org)\n\tby bombadil.infradead.org with esmtp (Exim 4.98.2 #2 (Red Hat Linux))\n\tid 1wEw31-00000007gDX-230H;\n\tMon, 20 Apr 2026 21:20:51 +0000","from mail-pj1-x1049.google.com ([2607:f8b0:4864:20::1049])\n\tby bombadil.infradead.org with esmtps (Exim 4.98.2 #2 (Red Hat Linux))\n\tid 1wEw2f-00000007flz-1Jn9\n\tfor kvm-riscv@lists.infradead.org;\n\tMon, 20 Apr 2026 21:20:35 +0000","by mail-pj1-x1049.google.com with SMTP id\n 98e67ed59e1d1-3568090851aso8470145a91.1\n        for <kvm-riscv@lists.infradead.org>;\n Mon, 20 Apr 2026 14:20:28 -0700 (PDT)"],"DKIM-Signature":["v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;\n\td=lists.infradead.org; s=bombadil.20210309; 
h=Sender:\n\tContent-Transfer-Encoding:Content-Type:Reply-To:List-Subscribe:List-Help:\n\tList-Post:List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID\n\t:References:Mime-Version:In-Reply-To:Date:Content-ID:Content-Description:\n\tResent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:\n\tList-Owner; bh=sfgXd8YSNfLTKCA8uLFC3JyAeS8eEa9VvLHfoiA54iU=; b=tN2Mw9kqQqJO/K\n\teFcYgMA+CAwX2XTXbsgQ7necX981L8NAoQKkuzzM1j5rtBzmT90tt1AP+78NJXs7jtd9fBTX0Xnmd\n\t7eANp4takBeCuktAtnlktbS9smt09NsoRFub4P2oRXb27OOwyR70lor91djWn2btxEinVv6ByZQW0\n\tT1T8lA1ZZ9D9kgnkyV2MkpB4YciiasXN7vyiFugomfvjF/MCGieQpILZCBVwdHoxnsMQzuEqoxG+U\n\tLroKOBWeQzyA1A/nA8I/zVBFWjFdl1xiO5LEY7n8qj7aykOd5PPt5xl66g6L3r5+gQxurjvGjPeNB\n\txH2elHuUPQQTntSrBLwA==;","v=1; a=rsa-sha256; c=relaxed/relaxed;\n        d=google.com; s=20251104; t=1776720027; x=1777324827;\n darn=lists.infradead.org;\n        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to\n         :date:reply-to:from:to:cc:subject:date:message-id:reply-to;\n        bh=TaylwcgVf48kOhELpsI9sN/K9I6TR/YTRK7E4rfY3Gs=;\n        b=bMHTakY6fp8s5jmYubkQcn4buguFkTCSZZy81lrRgmO8kRB8tdCCPSwHkhnKIfZC2t\n         DJ3i04G85KSrRp7p6wk9/oPDUDEtwGPvnSYiscOB/RpjkzUH7kjTtT30j1FgGndzjMa1\n         DrF146FPffAUAM7ZY3R6jsH26iU4UTxf+iyXc0yGfiaqiF9JMS7St/4b/cbiRgIxjmeJ\n         4mH3KaUPgS8nsWDHTk3qt6JgwBan9gKinsju3OyyEhlE6FqLZM+kBN/6vEwRQIPiCWuP\n         0iK6uhfNxoS09aLqKyBRpdpWxuSLCFO95aWwwyJj3iKL/DzS7zpcygmr4ov+z8RAbrE+\n         /QVg=="],"X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n        d=1e100.net; s=20251104; t=1776720027; x=1777324827;\n        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to\n         :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id\n         :reply-to;\n        bh=TaylwcgVf48kOhELpsI9sN/K9I6TR/YTRK7E4rfY3Gs=;\n        b=ZPvT5n9LU5bcA0yskGDrnXLHvn1m2pDLE0meoDLu95qw++jNOsa2QMCVsicW99cwLC\n         
kZrsEuycenpkHqI5rcMrGnvj/8NGuQxKBDt7SXmXD9EQKks3NPhuKl3coyNvwQhqyxMr\n         wMNRZ0llttZo0+FShVvKWNvGF5BR60awltOkXn9oWOw1zzsFXJHkZs58nU5CmnPt6ZBJ\n         cNE3lMpQA/wHFb874zIVc27A6AqWzwn0IeiERwur3HCz0m1OUi7UU+KJlZebtg9xCdDu\n         NJpFJwO689xGPoSxAJPgqcnlaKZkwAW/N54u5CQxWcUH8Nhakq3cDv+wIoLfbCnr0Sgs\n         QdsQ==","X-Forwarded-Encrypted":"i=1;\n AFNElJ9qjFIA/d0n9Q/6OeRYHFSJh5fVkFOiuG+21VMg+fxP/3Xoi4OTg6P1XjKLvBXS98cJ5iIW40SM0Bg=@lists.infradead.org","X-Gm-Message-State":"AOJu0YwZSkzUGGLXQpcIdXK/vxrp4O2YhM62VyfEiUyW/25eeSNogAde\n\tZ+iqsYnt8HGKBYXO2eqj5RIAQv3UKS32NmHzeZIzTrWiuHt8Aysr3RFhCRDeAawYafjxFcef7gT\n\t3nWOOxA==","X-Received":"from pldt14.prod.google.com ([2002:a17:903:40ce:b0:2b4:5ded:6ecb])\n (user=seanjc job=prod-delivery.src-stubby-dispatcher) by\n 2002:a17:90b:2fc8:b0:35d:a542:2dc4\n with SMTP id 98e67ed59e1d1-3614047a7b8mr16878211a91.21.1776720027270; Mon, 20\n Apr 2026 14:20:27 -0700 (PDT)","Date":"Mon, 20 Apr 2026 14:19:56 -0700","In-Reply-To":"<20260420212004.3938325-1-seanjc@google.com>","Mime-Version":"1.0","References":"<20260420212004.3938325-1-seanjc@google.com>","X-Mailer":"git-send-email 2.54.0.rc1.555.g9c883467ad-goog","Message-ID":"<20260420212004.3938325-12-seanjc@google.com>","Subject":"[PATCH v3 11/19] KVM: selftests: Drop \"vaddr_\" from APIs that\n allocate memory for a given VM","From":"Sean Christopherson <seanjc@google.com>","To":"Paolo Bonzini <pbonzini@redhat.com>, Marc Zyngier <maz@kernel.org>,\n\tOliver Upton <oupton@kernel.org>, Tianrui Zhao <zhaotianrui@loongson.cn>,\n\tBibo Mao <maobibo@loongson.cn>, Huacai Chen <chenhuacai@kernel.org>,\n\tAnup Patel <anup@brainfault.org>, Paul Walmsley <pjw@kernel.org>,\n\tPalmer Dabbelt <palmer@dabbelt.com>, Albert Ou <aou@eecs.berkeley.edu>,\n\tChristian Borntraeger <borntraeger@linux.ibm.com>,\n Janosch Frank <frankja@linux.ibm.com>,\n\tClaudio Imbrenda <imbrenda@linux.ibm.com>,\n Sean Christopherson <seanjc@google.com>","Cc":"kvm@vger.kernel.org, 
linux-arm-kernel@lists.infradead.org,\n\tkvmarm@lists.linux.dev, loongarch@lists.linux.dev,\n\tkvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,\n\tlinux-kernel@vger.kernel.org, David Matlack <dmatlack@google.com>","X-BeenThere":"kvm-riscv@lists.infradead.org","X-Mailman-Version":"2.1.34","Precedence":"list","List-Id":"<kvm-riscv.lists.infradead.org>","List-Unsubscribe":"<http://lists.infradead.org/mailman/options/kvm-riscv>,\n <mailto:kvm-riscv-request@lists.infradead.org?subject=unsubscribe>","List-Archive":"<http://lists.infradead.org/pipermail/kvm-riscv/>","List-Post":"<mailto:kvm-riscv@lists.infradead.org>","List-Help":"<mailto:kvm-riscv-request@lists.infradead.org?subject=help>","List-Subscribe":"<http://lists.infradead.org/mailman/listinfo/kvm-riscv>,\n <mailto:kvm-riscv-request@lists.infradead.org?subject=subscribe>","Reply-To":"Sean Christopherson <seanjc@google.com>","Content-Type":"text/plain; charset=\"us-ascii\"","Content-Transfer-Encoding":"7bit","Sender":"\"kvm-riscv\" <kvm-riscv-bounces@lists.infradead.org>","Errors-To":"kvm-riscv-bounces+incoming=patchwork.ozlabs.org@lists.infradead.org"},"content":"Now that KVM selftests use gva_t instead of vm_vaddr_t, drop \"vaddr_\" from\nthe core memory allocation APIs as the information is extraneous and does\nmore harm than good.  E.g. the APIs don't _just_ allocate virtual memory,\nthey allocate backing physical memory and install mappings in the guest\npage tables.  
And as proven by kmalloc() and malloc(), developers generally\nexpect that allocations come with a working virtual address.\n\nOpportunistically clean up the function comment for vm_alloc(), and drop\nthe misleading and superfluous comments for its wrappers.\n\nNo functional change intended.\n\nSigned-off-by: Sean Christopherson <seanjc@google.com>\n---\n tools/testing/selftests/kvm/arm64/vgic_irq.c  |  4 +-\n .../testing/selftests/kvm/include/kvm_util.h  | 16 ++--\n .../selftests/kvm/lib/arm64/processor.c       | 10 +--\n tools/testing/selftests/kvm/lib/elf.c         |  3 +-\n tools/testing/selftests/kvm/lib/kvm_util.c    | 84 +++++--------------\n .../selftests/kvm/lib/loongarch/processor.c   | 13 +--\n .../selftests/kvm/lib/riscv/processor.c       |  6 +-\n .../selftests/kvm/lib/s390/processor.c        |  2 +-\n .../testing/selftests/kvm/lib/ucall_common.c  |  4 +-\n tools/testing/selftests/kvm/lib/x86/hyperv.c  |  8 +-\n .../testing/selftests/kvm/lib/x86/processor.c | 12 +--\n tools/testing/selftests/kvm/lib/x86/svm.c     |  8 +-\n tools/testing/selftests/kvm/lib/x86/vmx.c     | 16 ++--\n tools/testing/selftests/kvm/s390/memop.c      | 12 +--\n tools/testing/selftests/kvm/s390/tprot.c      |  4 +-\n tools/testing/selftests/kvm/x86/amx_test.c    |  6 +-\n tools/testing/selftests/kvm/x86/cpuid_test.c  |  2 +-\n .../testing/selftests/kvm/x86/hyperv_clock.c  |  2 +-\n .../testing/selftests/kvm/x86/hyperv_evmcs.c  |  2 +-\n .../kvm/x86/hyperv_extended_hypercalls.c      |  4 +-\n .../selftests/kvm/x86/hyperv_features.c       |  6 +-\n tools/testing/selftests/kvm/x86/hyperv_ipi.c  |  2 +-\n .../selftests/kvm/x86/hyperv_svm_test.c       |  2 +-\n .../selftests/kvm/x86/hyperv_tlb_flush.c      |  6 +-\n .../selftests/kvm/x86/kvm_clock_test.c        |  2 +-\n .../selftests/kvm/x86/sev_smoke_test.c        |  4 +-\n .../kvm/x86/svm_nested_soft_inject_test.c     |  2 +-\n .../selftests/kvm/x86/xapic_ipi_test.c        |  2 +-\n 28 files changed, 102 insertions(+), 142 
deletions(-)","diff":"diff --git a/tools/testing/selftests/kvm/arm64/vgic_irq.c b/tools/testing/selftests/kvm/arm64/vgic_irq.c\nindex 8a9dd79123d4..5e231998617e 100644\n--- a/tools/testing/selftests/kvm/arm64/vgic_irq.c\n+++ b/tools/testing/selftests/kvm/arm64/vgic_irq.c\n@@ -771,7 +771,7 @@ static void test_vgic(u32 nr_irqs, bool level_sensitive, bool eoi_split)\n \tvcpu_init_descriptor_tables(vcpu);\n \n \t/* Setup the guest args page (so it gets the args). */\n-\targs_gva = vm_vaddr_alloc_page(vm);\n+\targs_gva = vm_alloc_page(vm);\n \tmemcpy(addr_gva2hva(vm, args_gva), &args, sizeof(args));\n \tvcpu_args_set(vcpu, 1, args_gva);\n \n@@ -997,7 +997,7 @@ static void test_vgic_two_cpus(void *gcode)\n \tvcpu_init_descriptor_tables(vcpus[1]);\n \n \t/* Setup the guest args page (so it gets the args). */\n-\targs_gva = vm_vaddr_alloc_page(vm);\n+\targs_gva = vm_alloc_page(vm);\n \tmemcpy(addr_gva2hva(vm, args_gva), &args, sizeof(args));\n \tvcpu_args_set(vcpus[0], 2, args_gva, 0);\n \tvcpu_args_set(vcpus[1], 2, args_gva, 1);\ndiff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h\nindex 676e3ccb1462..8f7afc34ea8d 100644\n--- a/tools/testing/selftests/kvm/include/kvm_util.h\n+++ b/tools/testing/selftests/kvm/include/kvm_util.h\n@@ -716,14 +716,14 @@ void vm_mem_region_delete(struct kvm_vm *vm, u32 slot);\n struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id);\n void vm_populate_vaddr_bitmap(struct kvm_vm *vm);\n gva_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);\n-gva_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);\n-gva_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n-\t\t       enum kvm_mem_region_type type);\n-gva_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n-\t\t\t    enum kvm_mem_region_type type);\n-gva_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);\n-gva_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum 
kvm_mem_region_type type);\n-gva_t vm_vaddr_alloc_page(struct kvm_vm *vm);\n+gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);\n+gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n+\t\t enum kvm_mem_region_type type);\n+gva_t vm_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n+\t\t      enum kvm_mem_region_type type);\n+gva_t vm_alloc_pages(struct kvm_vm *vm, int nr_pages);\n+gva_t __vm_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type);\n+gva_t vm_alloc_page(struct kvm_vm *vm);\n \n void virt_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,\n \t      unsigned int npages);\ndiff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c\nindex 7ba3a48911e3..c4f0e37f2907 100644\n--- a/tools/testing/selftests/kvm/lib/arm64/processor.c\n+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c\n@@ -422,9 +422,9 @@ static struct kvm_vcpu *__aarch64_vcpu_add(struct kvm_vm *vm, u32 vcpu_id,\n \n \tstack_size = vm->page_size == 4096 ? 
DEFAULT_STACK_PGS * vm->page_size :\n \t\t\t\t\t     vm->page_size;\n-\tstack_vaddr = __vm_vaddr_alloc(vm, stack_size,\n-\t\t\t\t       DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,\n-\t\t\t\t       MEM_REGION_DATA);\n+\tstack_vaddr = __vm_alloc(vm, stack_size,\n+\t\t\t\t DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,\n+\t\t\t\t MEM_REGION_DATA);\n \n \taarch64_vcpu_setup(vcpu, init);\n \n@@ -536,8 +536,8 @@ void route_exception(struct ex_regs *regs, int vector)\n \n void vm_init_descriptor_tables(struct kvm_vm *vm)\n {\n-\tvm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),\n-\t\t\t\t\tvm->page_size, MEM_REGION_DATA);\n+\tvm->handlers = __vm_alloc(vm, sizeof(struct handlers), vm->page_size,\n+\t\t\t\t  MEM_REGION_DATA);\n \n \t*(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;\n }\ndiff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c\nindex 0f2710cda9d8..2288480f4e1e 100644\n--- a/tools/testing/selftests/kvm/lib/elf.c\n+++ b/tools/testing/selftests/kvm/lib/elf.c\n@@ -162,8 +162,7 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)\n \t\tseg_vend |= vm->page_size - 1;\n \t\tsize_t seg_size = seg_vend - seg_vstart + 1;\n \n-\t\tgva_t vaddr = __vm_vaddr_alloc(vm, seg_size, seg_vstart,\n-\t\t\t\t\t\t    MEM_REGION_CODE);\n+\t\tgva_t vaddr = __vm_alloc(vm, seg_size, seg_vstart, MEM_REGION_CODE);\n \t\tTEST_ASSERT(vaddr == seg_vstart, \"Unable to allocate \"\n \t\t\t\"virtual memory for segment at requested min addr,\\n\"\n \t\t\t\"  segment idx: %u\\n\"\ndiff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c\nindex 050ae9c92681..b304c0e54837 100644\n--- a/tools/testing/selftests/kvm/lib/kvm_util.c\n+++ b/tools/testing/selftests/kvm/lib/kvm_util.c\n@@ -1450,8 +1450,8 @@ gva_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)\n \treturn pgidx_start * vm->page_size;\n }\n \n-static gva_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t 
vaddr_min,\n-\t\t\t\tenum kvm_mem_region_type type, bool protected)\n+static gva_t ____vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n+\t\t\t  enum kvm_mem_region_type type, bool protected)\n {\n \tu64 pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);\n \n@@ -1476,84 +1476,44 @@ static gva_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n \treturn vaddr_start;\n }\n \n-gva_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n-\t\t       enum kvm_mem_region_type type)\n+gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n+\t\t enum kvm_mem_region_type type)\n {\n-\treturn ____vm_vaddr_alloc(vm, sz, vaddr_min, type,\n-\t\t\t\t  vm_arch_has_protected_memory(vm));\n+\treturn ____vm_alloc(vm, sz, vaddr_min, type,\n+\t\t\t    vm_arch_has_protected_memory(vm));\n }\n \n-gva_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n-\t\t\t    enum kvm_mem_region_type type)\n+gva_t vm_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,\n+\t\t      enum kvm_mem_region_type type)\n {\n-\treturn ____vm_vaddr_alloc(vm, sz, vaddr_min, type, false);\n+\treturn ____vm_alloc(vm, sz, vaddr_min, type, false);\n }\n \n /*\n- * VM Virtual Address Allocate\n- *\n- * Input Args:\n- *   vm - Virtual Machine\n- *   sz - Size in bytes\n- *   vaddr_min - Minimum starting virtual address\n- *\n- * Output Args: None\n- *\n- * Return:\n- *   Starting guest virtual address\n- *\n- * Allocates at least sz bytes within the virtual address space of the vm\n- * given by vm.  The allocated bytes are mapped to a virtual address >=\n- * the address given by vaddr_min.  Note that each allocation uses a\n- * a unique set of pages, with the minimum real allocation being at least\n- * a page. The allocated physical space comes from the TEST_DATA memory region.\n+ * Allocates at least sz bytes within the virtual address space of the VM\n+ * given by @vm.  
The allocated bytes are mapped to a virtual address >= the\n+ * address given by @vaddr_min.  Note that each allocation uses a a unique set\n+ * of pages, with the minimum real allocation being at least a page. The\n+ * allocated physical space comes from the TEST_DATA memory region.\n  */\n-gva_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)\n+gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)\n {\n-\treturn __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);\n+\treturn __vm_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);\n }\n \n-/*\n- * VM Virtual Address Allocate Pages\n- *\n- * Input Args:\n- *   vm - Virtual Machine\n- *\n- * Output Args: None\n- *\n- * Return:\n- *   Starting guest virtual address\n- *\n- * Allocates at least N system pages worth of bytes within the virtual address\n- * space of the vm.\n- */\n-gva_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages)\n+gva_t vm_alloc_pages(struct kvm_vm *vm, int nr_pages)\n {\n-\treturn vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR);\n+\treturn vm_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR);\n }\n \n-gva_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type)\n+gva_t __vm_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type)\n {\n-\treturn __vm_vaddr_alloc(vm, getpagesize(), KVM_UTIL_MIN_VADDR, type);\n+\treturn __vm_alloc(vm, getpagesize(), KVM_UTIL_MIN_VADDR, type);\n }\n \n-/*\n- * VM Virtual Address Allocate Page\n- *\n- * Input Args:\n- *   vm - Virtual Machine\n- *\n- * Output Args: None\n- *\n- * Return:\n- *   Starting guest virtual address\n- *\n- * Allocates at least one system page worth of bytes within the virtual address\n- * space of the vm.\n- */\n-gva_t vm_vaddr_alloc_page(struct kvm_vm *vm)\n+gva_t vm_alloc_page(struct kvm_vm *vm)\n {\n-\treturn vm_vaddr_alloc_pages(vm, 1);\n+\treturn vm_alloc_pages(vm, 1);\n }\n \n /*\ndiff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c 
b/tools/testing/selftests/kvm/lib/loongarch/processor.c\nindex 2982196db3b2..318520f1f1b9 100644\n--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c\n+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c\n@@ -206,8 +206,9 @@ void vm_init_descriptor_tables(struct kvm_vm *vm)\n {\n \tvoid *addr;\n \n-\tvm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),\n-\t\t\tLOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);\n+\tvm->handlers = __vm_alloc(vm, sizeof(struct handlers),\n+\t\t\t\t  LOONGARCH_GUEST_STACK_VADDR_MIN,\n+\t\t\t\t  MEM_REGION_DATA);\n \n \taddr = addr_gva2hva(vm, vm->handlers);\n \tmemset(addr, 0, vm->page_size);\n@@ -354,8 +355,8 @@ void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)\n \tloongarch_set_csr(vcpu, LOONGARCH_CSR_STLBPGSIZE, PS_DEFAULT_SIZE);\n \n \t/* LOONGARCH_CSR_KS1 is used for exception stack */\n-\tval = __vm_vaddr_alloc(vm, vm->page_size,\n-\t\t\tLOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);\n+\tval = __vm_alloc(vm, vm->page_size, LOONGARCH_GUEST_STACK_VADDR_MIN,\n+\t\t\t MEM_REGION_DATA);\n \tTEST_ASSERT(val != 0,  \"No memory for exception stack\");\n \tval = val + vm->page_size;\n \tloongarch_set_csr(vcpu, LOONGARCH_CSR_KS1, val);\n@@ -378,8 +379,8 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)\n \n \tvcpu = __vm_vcpu_add(vm, vcpu_id);\n \tstack_size = vm->page_size;\n-\tstack_vaddr = __vm_vaddr_alloc(vm, stack_size,\n-\t\t\tLOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);\n+\tstack_vaddr = __vm_alloc(vm, stack_size,\n+\t\t\t\t LOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);\n \tTEST_ASSERT(stack_vaddr != 0,  \"No memory for vm stack\");\n \n \tloongarch_vcpu_setup(vcpu);\ndiff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c\nindex 7336d5a20419..38eb8302922a 100644\n--- a/tools/testing/selftests/kvm/lib/riscv/processor.c\n+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c\n@@ -322,7 +322,7 @@ struct kvm_vcpu 
*vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)\n \n \tstack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size :\n \t\t\t\t\t     vm->page_size;\n-\tstack_vaddr = __vm_vaddr_alloc(vm, stack_size,\n+\tstack_vaddr = __vm_alloc(vm, stack_size,\n \t\t\t\t       DEFAULT_RISCV_GUEST_STACK_VADDR_MIN,\n \t\t\t\t       MEM_REGION_DATA);\n \n@@ -449,8 +449,8 @@ void vcpu_init_vector_tables(struct kvm_vcpu *vcpu)\n \n void vm_init_vector_tables(struct kvm_vm *vm)\n {\n-\tvm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),\n-\t\t\t\t   vm->page_size, MEM_REGION_DATA);\n+\tvm->handlers = __vm_alloc(vm, sizeof(struct handlers), vm->page_size,\n+\t\t\t\t  MEM_REGION_DATA);\n \n \t*(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;\n }\ndiff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c\nindex d35f23a4db12..4ae0a39f426f 100644\n--- a/tools/testing/selftests/kvm/lib/s390/processor.c\n+++ b/tools/testing/selftests/kvm/lib/s390/processor.c\n@@ -171,7 +171,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)\n \tTEST_ASSERT(vm->page_size == PAGE_SIZE, \"Unsupported page size: 0x%x\",\n \t\t    vm->page_size);\n \n-\tstack_vaddr = __vm_vaddr_alloc(vm, stack_size,\n+\tstack_vaddr = __vm_alloc(vm, stack_size,\n \t\t\t\t       DEFAULT_GUEST_STACK_VADDR_MIN,\n \t\t\t\t       MEM_REGION_DATA);\n \ndiff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c\nindex b16b5c5b3a1e..4a8a5bc40a45 100644\n--- a/tools/testing/selftests/kvm/lib/ucall_common.c\n+++ b/tools/testing/selftests/kvm/lib/ucall_common.c\n@@ -32,8 +32,8 @@ void ucall_init(struct kvm_vm *vm, gpa_t mmio_gpa)\n \tgva_t vaddr;\n \tint i;\n \n-\tvaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,\n-\t\t\t\t      MEM_REGION_DATA);\n+\tvaddr = vm_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,\n+\t\t\t\tMEM_REGION_DATA);\n \thdr = (struct 
ucall_header *)addr_gva2hva(vm, vaddr);\n \tmemset(hdr, 0, sizeof(*hdr));\n \ndiff --git a/tools/testing/selftests/kvm/lib/x86/hyperv.c b/tools/testing/selftests/kvm/lib/x86/hyperv.c\nindex c2806bed43c9..d200c5c26e2e 100644\n--- a/tools/testing/selftests/kvm/lib/x86/hyperv.c\n+++ b/tools/testing/selftests/kvm/lib/x86/hyperv.c\n@@ -78,21 +78,21 @@ bool kvm_hv_cpu_has(struct kvm_x86_cpu_feature feature)\n struct hyperv_test_pages *vcpu_alloc_hyperv_test_pages(struct kvm_vm *vm,\n \t\t\t\t\t\t       gva_t *p_hv_pages_gva)\n {\n-\tgva_t hv_pages_gva = vm_vaddr_alloc_page(vm);\n+\tgva_t hv_pages_gva = vm_alloc_page(vm);\n \tstruct hyperv_test_pages *hv = addr_gva2hva(vm, hv_pages_gva);\n \n \t/* Setup of a region of guest memory for the VP Assist page. */\n-\thv->vp_assist = (void *)vm_vaddr_alloc_page(vm);\n+\thv->vp_assist = (void *)vm_alloc_page(vm);\n \thv->vp_assist_hva = addr_gva2hva(vm, (uintptr_t)hv->vp_assist);\n \thv->vp_assist_gpa = addr_gva2gpa(vm, (uintptr_t)hv->vp_assist);\n \n \t/* Setup of a region of guest memory for the partition assist page. */\n-\thv->partition_assist = (void *)vm_vaddr_alloc_page(vm);\n+\thv->partition_assist = (void *)vm_alloc_page(vm);\n \thv->partition_assist_hva = addr_gva2hva(vm, (uintptr_t)hv->partition_assist);\n \thv->partition_assist_gpa = addr_gva2gpa(vm, (uintptr_t)hv->partition_assist);\n \n \t/* Setup of a region of guest memory for the enlightened VMCS. 
*/\n-\thv->enlightened_vmcs = (void *)vm_vaddr_alloc_page(vm);\n+\thv->enlightened_vmcs = (void *)vm_alloc_page(vm);\n \thv->enlightened_vmcs_hva = addr_gva2hva(vm, (uintptr_t)hv->enlightened_vmcs);\n \thv->enlightened_vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)hv->enlightened_vmcs);\n \ndiff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c\nindex 723a5200c4bb..50848112932c 100644\n--- a/tools/testing/selftests/kvm/lib/x86/processor.c\n+++ b/tools/testing/selftests/kvm/lib/x86/processor.c\n@@ -746,10 +746,10 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)\n \tstruct kvm_segment seg;\n \tint i;\n \n-\tvm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);\n-\tvm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);\n-\tvm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);\n-\tvm->arch.tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);\n+\tvm->arch.gdt = __vm_alloc_page(vm, MEM_REGION_DATA);\n+\tvm->arch.idt = __vm_alloc_page(vm, MEM_REGION_DATA);\n+\tvm->handlers = __vm_alloc_page(vm, MEM_REGION_DATA);\n+\tvm->arch.tss = __vm_alloc_page(vm, MEM_REGION_DATA);\n \n \t/* Handlers have the same address in both address spaces.*/\n \tfor (i = 0; i < NUM_INTERRUPTS; i++)\n@@ -828,7 +828,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)\n \tgva_t stack_vaddr;\n \tstruct kvm_vcpu *vcpu;\n \n-\tstack_vaddr = __vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),\n+\tstack_vaddr = __vm_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),\n \t\t\t\t       DEFAULT_GUEST_STACK_VADDR_MIN,\n \t\t\t\t       MEM_REGION_DATA);\n \n@@ -844,7 +844,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)\n \t * may need to subtract 4 bytes instead of 8 bytes.\n \t */\n \tTEST_ASSERT(IS_ALIGNED(stack_vaddr, PAGE_SIZE),\n-\t\t    \"__vm_vaddr_alloc() did not provide a page-aligned address\");\n+\t\t    \"__vm_alloc() did not provide a page-aligned address\");\n \tstack_vaddr -= 8;\n \n 
\tvcpu = __vm_vcpu_add(vm, vcpu_id);\ndiff --git a/tools/testing/selftests/kvm/lib/x86/svm.c b/tools/testing/selftests/kvm/lib/x86/svm.c\nindex 620bdc5d3cc2..3b01605ab016 100644\n--- a/tools/testing/selftests/kvm/lib/x86/svm.c\n+++ b/tools/testing/selftests/kvm/lib/x86/svm.c\n@@ -30,18 +30,18 @@ u64 rflags;\n struct svm_test_data *\n vcpu_alloc_svm(struct kvm_vm *vm, gva_t *p_svm_gva)\n {\n-\tgva_t svm_gva = vm_vaddr_alloc_page(vm);\n+\tgva_t svm_gva = vm_alloc_page(vm);\n \tstruct svm_test_data *svm = addr_gva2hva(vm, svm_gva);\n \n-\tsvm->vmcb = (void *)vm_vaddr_alloc_page(vm);\n+\tsvm->vmcb = (void *)vm_alloc_page(vm);\n \tsvm->vmcb_hva = addr_gva2hva(vm, (uintptr_t)svm->vmcb);\n \tsvm->vmcb_gpa = addr_gva2gpa(vm, (uintptr_t)svm->vmcb);\n \n-\tsvm->save_area = (void *)vm_vaddr_alloc_page(vm);\n+\tsvm->save_area = (void *)vm_alloc_page(vm);\n \tsvm->save_area_hva = addr_gva2hva(vm, (uintptr_t)svm->save_area);\n \tsvm->save_area_gpa = addr_gva2gpa(vm, (uintptr_t)svm->save_area);\n \n-\tsvm->msr = (void *)vm_vaddr_alloc_page(vm);\n+\tsvm->msr = (void *)vm_alloc_page(vm);\n \tsvm->msr_hva = addr_gva2hva(vm, (uintptr_t)svm->msr);\n \tsvm->msr_gpa = addr_gva2gpa(vm, (uintptr_t)svm->msr);\n \tmemset(svm->msr_hva, 0, getpagesize());\ndiff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c\nindex b2f83c3f7f16..67642759e4a0 100644\n--- a/tools/testing/selftests/kvm/lib/x86/vmx.c\n+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c\n@@ -81,37 +81,37 @@ void vm_enable_ept(struct kvm_vm *vm)\n struct vmx_pages *\n vcpu_alloc_vmx(struct kvm_vm *vm, gva_t *p_vmx_gva)\n {\n-\tgva_t vmx_gva = vm_vaddr_alloc_page(vm);\n+\tgva_t vmx_gva = vm_alloc_page(vm);\n \tstruct vmx_pages *vmx = addr_gva2hva(vm, vmx_gva);\n \n \t/* Setup of a region of guest memory for the vmxon region. 
*/\n-\tvmx->vmxon = (void *)vm_vaddr_alloc_page(vm);\n+\tvmx->vmxon = (void *)vm_alloc_page(vm);\n \tvmx->vmxon_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmxon);\n \tvmx->vmxon_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmxon);\n \n \t/* Setup of a region of guest memory for a vmcs. */\n-\tvmx->vmcs = (void *)vm_vaddr_alloc_page(vm);\n+\tvmx->vmcs = (void *)vm_alloc_page(vm);\n \tvmx->vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmcs);\n \tvmx->vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmcs);\n \n \t/* Setup of a region of guest memory for the MSR bitmap. */\n-\tvmx->msr = (void *)vm_vaddr_alloc_page(vm);\n+\tvmx->msr = (void *)vm_alloc_page(vm);\n \tvmx->msr_hva = addr_gva2hva(vm, (uintptr_t)vmx->msr);\n \tvmx->msr_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->msr);\n \tmemset(vmx->msr_hva, 0, getpagesize());\n \n \t/* Setup of a region of guest memory for the shadow VMCS. */\n-\tvmx->shadow_vmcs = (void *)vm_vaddr_alloc_page(vm);\n+\tvmx->shadow_vmcs = (void *)vm_alloc_page(vm);\n \tvmx->shadow_vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->shadow_vmcs);\n \tvmx->shadow_vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->shadow_vmcs);\n \n \t/* Setup of a region of guest memory for the VMREAD and VMWRITE bitmaps. 
*/\n-\tvmx->vmread = (void *)vm_vaddr_alloc_page(vm);\n+\tvmx->vmread = (void *)vm_alloc_page(vm);\n \tvmx->vmread_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmread);\n \tvmx->vmread_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmread);\n \tmemset(vmx->vmread_hva, 0, getpagesize());\n \n-\tvmx->vmwrite = (void *)vm_vaddr_alloc_page(vm);\n+\tvmx->vmwrite = (void *)vm_alloc_page(vm);\n \tvmx->vmwrite_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmwrite);\n \tvmx->vmwrite_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmwrite);\n \tmemset(vmx->vmwrite_hva, 0, getpagesize());\n@@ -390,7 +390,7 @@ bool kvm_cpu_has_ept(void)\n \n void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm)\n {\n-\tvmx->apic_access = (void *)vm_vaddr_alloc_page(vm);\n+\tvmx->apic_access = (void *)vm_alloc_page(vm);\n \tvmx->apic_access_hva = addr_gva2hva(vm, (uintptr_t)vmx->apic_access);\n \tvmx->apic_access_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->apic_access);\n }\ndiff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c\nindex 9855b5bfb5ed..0244848621b3 100644\n--- a/tools/testing/selftests/kvm/s390/memop.c\n+++ b/tools/testing/selftests/kvm/s390/memop.c\n@@ -880,8 +880,8 @@ static void test_copy_key_fetch_prot_override(void)\n \tstruct test_default t = test_default_init(guest_copy_key_fetch_prot_override);\n \tgva_t guest_0_page, guest_last_page;\n \n-\tguest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);\n-\tguest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);\n+\tguest_0_page = vm_alloc(t.kvm_vm, PAGE_SIZE, 0);\n+\tguest_last_page = vm_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);\n \tif (guest_0_page != 0 || guest_last_page != last_page_addr) {\n \t\tprint_skip(\"did not allocate guest pages at required positions\");\n \t\tgoto out;\n@@ -919,8 +919,8 @@ static void test_errors_key_fetch_prot_override_not_enabled(void)\n \tstruct test_default t = test_default_init(guest_copy_key_fetch_prot_override);\n \tgva_t guest_0_page, 
guest_last_page;\n \n-\tguest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);\n-\tguest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);\n+\tguest_0_page = vm_alloc(t.kvm_vm, PAGE_SIZE, 0);\n+\tguest_last_page = vm_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);\n \tif (guest_0_page != 0 || guest_last_page != last_page_addr) {\n \t\tprint_skip(\"did not allocate guest pages at required positions\");\n \t\tgoto out;\n@@ -940,8 +940,8 @@ static void test_errors_key_fetch_prot_override_enabled(void)\n \tstruct test_default t = test_default_init(guest_copy_key_fetch_prot_override);\n \tgva_t guest_0_page, guest_last_page;\n \n-\tguest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);\n-\tguest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);\n+\tguest_0_page = vm_alloc(t.kvm_vm, PAGE_SIZE, 0);\n+\tguest_last_page = vm_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);\n \tif (guest_0_page != 0 || guest_last_page != last_page_addr) {\n \t\tprint_skip(\"did not allocate guest pages at required positions\");\n \t\tgoto out;\ndiff --git a/tools/testing/selftests/kvm/s390/tprot.c b/tools/testing/selftests/kvm/s390/tprot.c\nindex e021e198b28e..8054d2b178f0 100644\n--- a/tools/testing/selftests/kvm/s390/tprot.c\n+++ b/tools/testing/selftests/kvm/s390/tprot.c\n@@ -146,7 +146,7 @@ static enum stage perform_next_stage(int *i, bool mapped_0)\n \t\t/*\n \t\t * Some fetch protection override tests require that page 0\n \t\t * be mapped, however, when the hosts tries to map that page via\n-\t\t * vm_vaddr_alloc, it may happen that some other page gets mapped\n+\t\t * vm_alloc, it may happen that some other page gets mapped\n \t\t * instead.\n \t\t * In order to skip these tests we detect this inside the guest\n \t\t */\n@@ -219,7 +219,7 @@ int main(int argc, char *argv[])\n \tmprotect(addr_gva2hva(vm, (gva_t)pages), PAGE_SIZE * 2, PROT_READ);\n \tHOST_SYNC(vcpu, TEST_SIMPLE);\n \n-\tguest_0_page = vm_vaddr_alloc(vm, PAGE_SIZE, 0);\n+\tguest_0_page = vm_alloc(vm, 
PAGE_SIZE, 0);\n \tif (guest_0_page != 0) {\n \t\t/* Use NO_TAP so we don't get a PASS print */\n \t\tHOST_SYNC_NO_TAP(vcpu, STAGE_INIT_FETCH_PROT_OVERRIDE);\ndiff --git a/tools/testing/selftests/kvm/x86/amx_test.c b/tools/testing/selftests/kvm/x86/amx_test.c\nindex 9ecf7515442b..4e63da2b1889 100644\n--- a/tools/testing/selftests/kvm/x86/amx_test.c\n+++ b/tools/testing/selftests/kvm/x86/amx_test.c\n@@ -263,15 +263,15 @@ int main(int argc, char *argv[])\n \tvcpu_regs_get(vcpu, &regs1);\n \n \t/* amx cfg for guest_code */\n-\tamx_cfg = vm_vaddr_alloc_page(vm);\n+\tamx_cfg = vm_alloc_page(vm);\n \tmemset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize());\n \n \t/* amx tiledata for guest_code */\n-\ttiledata = vm_vaddr_alloc_pages(vm, 2);\n+\ttiledata = vm_alloc_pages(vm, 2);\n \tmemset(addr_gva2hva(vm, tiledata), rand() | 1, 2 * getpagesize());\n \n \t/* XSAVE state for guest_code */\n-\txstate = vm_vaddr_alloc_pages(vm, DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));\n+\txstate = vm_alloc_pages(vm, DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));\n \tmemset(addr_gva2hva(vm, xstate), 0, PAGE_SIZE * DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));\n \tvcpu_args_set(vcpu, 3, amx_cfg, tiledata, xstate);\n \ndiff --git a/tools/testing/selftests/kvm/x86/cpuid_test.c b/tools/testing/selftests/kvm/x86/cpuid_test.c\nindex 3c45249a42c4..ef0ddd240887 100644\n--- a/tools/testing/selftests/kvm/x86/cpuid_test.c\n+++ b/tools/testing/selftests/kvm/x86/cpuid_test.c\n@@ -143,7 +143,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, int stage)\n struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, gva_t *p_gva, struct kvm_cpuid2 *cpuid)\n {\n \tint size = sizeof(*cpuid) + cpuid->nent * sizeof(cpuid->entries[0]);\n-\tgva_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);\n+\tgva_t gva = vm_alloc(vm, size, KVM_UTIL_MIN_VADDR);\n \tstruct kvm_cpuid2 *guest_cpuids = addr_gva2hva(vm, gva);\n \n \tmemcpy(guest_cpuids, cpuid, size);\ndiff --git a/tools/testing/selftests/kvm/x86/hyperv_clock.c 
b/tools/testing/selftests/kvm/x86/hyperv_clock.c\nindex 6bb1ca11256f..c083cea546dc 100644\n--- a/tools/testing/selftests/kvm/x86/hyperv_clock.c\n+++ b/tools/testing/selftests/kvm/x86/hyperv_clock.c\n@@ -218,7 +218,7 @@ int main(void)\n \n \tvcpu_set_hv_cpuid(vcpu);\n \n-\ttsc_page_gva = vm_vaddr_alloc_page(vm);\n+\ttsc_page_gva = vm_alloc_page(vm);\n \tmemset(addr_gva2hva(vm, tsc_page_gva), 0x0, getpagesize());\n \tTEST_ASSERT((addr_gva2gpa(vm, tsc_page_gva) & (getpagesize() - 1)) == 0,\n \t\t\"TSC page has to be page aligned\");\ndiff --git a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c\nindex 061d9e1f02c0..c7fa114aee20 100644\n--- a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c\n+++ b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c\n@@ -246,7 +246,7 @@ int main(int argc, char *argv[])\n \n \tvm = vm_create_with_one_vcpu(&vcpu, guest_code);\n \n-\thcall_page = vm_vaddr_alloc_pages(vm, 1);\n+\thcall_page = vm_alloc_pages(vm, 1);\n \tmemset(addr_gva2hva(vm, hcall_page), 0x0,  getpagesize());\n \n \tvcpu_set_hv_cpuid(vcpu);\ndiff --git a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c\nindex be7a2a631789..ae047db7b1be 100644\n--- a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c\n+++ b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c\n@@ -57,11 +57,11 @@ int main(void)\n \tvcpu_set_hv_cpuid(vcpu);\n \n \t/* Hypercall input */\n-\thcall_in_page = vm_vaddr_alloc_pages(vm, 1);\n+\thcall_in_page = vm_alloc_pages(vm, 1);\n \tmemset(addr_gva2hva(vm, hcall_in_page), 0x0, vm->page_size);\n \n \t/* Hypercall output */\n-\thcall_out_page = vm_vaddr_alloc_pages(vm, 1);\n+\thcall_out_page = vm_alloc_pages(vm, 1);\n \tmemset(addr_gva2hva(vm, hcall_out_page), 0x0, vm->page_size);\n \n \tvcpu_args_set(vcpu, 3, addr_gva2gpa(vm, hcall_in_page),\ndiff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c 
b/tools/testing/selftests/kvm/x86/hyperv_features.c\nindex 52dbd52ce606..7347f1fe5157 100644\n--- a/tools/testing/selftests/kvm/x86/hyperv_features.c\n+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c\n@@ -141,7 +141,7 @@ static void guest_test_msrs_access(void)\n \twhile (true) {\n \t\tvm = vm_create_with_one_vcpu(&vcpu, guest_msr);\n \n-\t\tmsr_gva = vm_vaddr_alloc_page(vm);\n+\t\tmsr_gva = vm_alloc_page(vm);\n \t\tmemset(addr_gva2hva(vm, msr_gva), 0x0, getpagesize());\n \t\tmsr = addr_gva2hva(vm, msr_gva);\n \n@@ -530,10 +530,10 @@ static void guest_test_hcalls_access(void)\n \t\tvm = vm_create_with_one_vcpu(&vcpu, guest_hcall);\n \n \t\t/* Hypercall input/output */\n-\t\thcall_page = vm_vaddr_alloc_pages(vm, 2);\n+\t\thcall_page = vm_alloc_pages(vm, 2);\n \t\tmemset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());\n \n-\t\thcall_params = vm_vaddr_alloc_page(vm);\n+\t\thcall_params = vm_alloc_page(vm);\n \t\tmemset(addr_gva2hva(vm, hcall_params), 0x0, getpagesize());\n \t\thcall = addr_gva2hva(vm, hcall_params);\n \ndiff --git a/tools/testing/selftests/kvm/x86/hyperv_ipi.c b/tools/testing/selftests/kvm/x86/hyperv_ipi.c\nindex beafcfa4043a..771535f9aad3 100644\n--- a/tools/testing/selftests/kvm/x86/hyperv_ipi.c\n+++ b/tools/testing/selftests/kvm/x86/hyperv_ipi.c\n@@ -253,7 +253,7 @@ int main(int argc, char *argv[])\n \tvm = vm_create_with_one_vcpu(&vcpu[0], sender_guest_code);\n \n \t/* Hypercall input/output */\n-\thcall_page = vm_vaddr_alloc_pages(vm, 2);\n+\thcall_page = vm_alloc_pages(vm, 2);\n \tmemset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());\n \n \ndiff --git a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c\nindex 77b774b5041c..7a62f6a9d606 100644\n--- a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c\n+++ b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c\n@@ -165,7 +165,7 @@ int main(int argc, char *argv[])\n \tvcpu_alloc_svm(vm, &nested_gva);\n 
\tvcpu_alloc_hyperv_test_pages(vm, &hv_pages_gva);\n \n-\thcall_page = vm_vaddr_alloc_pages(vm, 1);\n+\thcall_page = vm_alloc_pages(vm, 1);\n \tmemset(addr_gva2hva(vm, hcall_page), 0x0,  getpagesize());\n \n \tvcpu_args_set(vcpu, 3, nested_gva, hv_pages_gva, addr_gva2gpa(vm, hcall_page));\ndiff --git a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c\nindex a4fb63112cac..6adf76574921 100644\n--- a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c\n+++ b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c\n@@ -593,11 +593,11 @@ int main(int argc, char *argv[])\n \tvm = vm_create_with_one_vcpu(&vcpu[0], sender_guest_code);\n \n \t/* Test data page */\n-\ttest_data_page = vm_vaddr_alloc_page(vm);\n+\ttest_data_page = vm_alloc_page(vm);\n \tdata = (struct test_data *)addr_gva2hva(vm, test_data_page);\n \n \t/* Hypercall input/output */\n-\tdata->hcall_gva = vm_vaddr_alloc_pages(vm, 2);\n+\tdata->hcall_gva = vm_alloc_pages(vm, 2);\n \tdata->hcall_gpa = addr_gva2gpa(vm, data->hcall_gva);\n \tmemset(addr_gva2hva(vm, data->hcall_gva), 0x0, 2 * PAGE_SIZE);\n \n@@ -606,7 +606,7 @@ int main(int argc, char *argv[])\n \t * and the test will swap their mappings. 
The third page keeps the indication\n \t * about the current state of mappings.\n \t */\n-\tdata->test_pages = vm_vaddr_alloc_pages(vm, NTEST_PAGES + 1);\n+\tdata->test_pages = vm_alloc_pages(vm, NTEST_PAGES + 1);\n \tfor (i = 0; i < NTEST_PAGES; i++)\n \t\tmemset(addr_gva2hva(vm, data->test_pages + PAGE_SIZE * i),\n \t\t       (u8)(i + 1), PAGE_SIZE);\ndiff --git a/tools/testing/selftests/kvm/x86/kvm_clock_test.c b/tools/testing/selftests/kvm/x86/kvm_clock_test.c\nindex 2b8a3feee1f8..5ad4aeb8e373 100644\n--- a/tools/testing/selftests/kvm/x86/kvm_clock_test.c\n+++ b/tools/testing/selftests/kvm/x86/kvm_clock_test.c\n@@ -147,7 +147,7 @@ int main(void)\n \n \tvm = vm_create_with_one_vcpu(&vcpu, guest_main);\n \n-\tpvti_gva = vm_vaddr_alloc(vm, getpagesize(), 0x10000);\n+\tpvti_gva = vm_alloc(vm, getpagesize(), 0x10000);\n \tpvti_gpa = addr_gva2gpa(vm, pvti_gva);\n \tvcpu_args_set(vcpu, 2, pvti_gpa, pvti_gva);\n \ndiff --git a/tools/testing/selftests/kvm/x86/sev_smoke_test.c b/tools/testing/selftests/kvm/x86/sev_smoke_test.c\nindex 4e037795dc33..1a49ee391586 100644\n--- a/tools/testing/selftests/kvm/x86/sev_smoke_test.c\n+++ b/tools/testing/selftests/kvm/x86/sev_smoke_test.c\n@@ -115,8 +115,8 @@ static void test_sync_vmsa(u32 type, u64 policy)\n \tstruct kvm_xsave __attribute__((aligned(64))) xsave = { 0 };\n \n \tvm = vm_sev_create_with_one_vcpu(type, guest_code_xsave, &vcpu);\n-\tgva = vm_vaddr_alloc_shared(vm, PAGE_SIZE, KVM_UTIL_MIN_VADDR,\n-\t\t\t\t    MEM_REGION_TEST_DATA);\n+\tgva = vm_alloc_shared(vm, PAGE_SIZE, KVM_UTIL_MIN_VADDR,\n+\t\t\t      MEM_REGION_TEST_DATA);\n \thva = addr_gva2hva(vm, gva);\n \n \tvcpu_args_set(vcpu, 1, gva);\ndiff --git a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c\nindex 5fefb319d9be..f72f11d4c4f8 100644\n--- a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c\n+++ b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c\n@@ -161,7 
+161,7 @@ static void run_test(bool is_nmi)\n \tif (!is_nmi) {\n \t\tvoid *idt, *idt_alt;\n \n-\t\tidt_alt_vm = vm_vaddr_alloc_page(vm);\n+\t\tidt_alt_vm = vm_alloc_page(vm);\n \t\tidt_alt = addr_gva2hva(vm, idt_alt_vm);\n \t\tidt = addr_gva2hva(vm, vm->arch.idt);\n \t\tmemcpy(idt_alt, idt, getpagesize());\ndiff --git a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c\nindex 3df6df2a1b55..d2e2410f748b 100644\n--- a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c\n+++ b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c\n@@ -414,7 +414,7 @@ int main(int argc, char *argv[])\n \n \tparams[1].vcpu = vm_vcpu_add(vm, 1, sender_guest_code);\n \n-\ttest_data_page_vaddr = vm_vaddr_alloc_page(vm);\n+\ttest_data_page_vaddr = vm_alloc_page(vm);\n \tdata = addr_gva2hva(vm, test_data_page_vaddr);\n \tmemset(data, 0, sizeof(*data));\n \tparams[0].data = data;\n","prefixes":["v3","11/19"]}