From patchwork Thu Dec 23 12:30:10 2021
From: Chao Peng <chao.p.peng@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org
Subject: [PATCH v3 kvm/queue 15/16] KVM: Use kvm_userspace_memory_region_ext
Date: Thu, 23 Dec 2021 20:30:10 +0800
Message-Id: <20211223123011.41044-16-chao.p.peng@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211223123011.41044-1-chao.p.peng@linux.intel.com>
References: <20211223123011.41044-1-chao.p.peng@linux.intel.com>
Cc: Wanpeng Li, jun.nakajima@intel.com, david@redhat.com,
	"J. Bruce Fields", dave.hansen@intel.com, "H. Peter Anvin",
	Chao Peng, ak@linux.intel.com, Jonathan Corbet, Joerg Roedel,
	x86@kernel.org, Hugh Dickins, Ingo Molnar, Borislav Petkov,
	luto@kernel.org, Thomas Gleixner, Vitaly Kuznetsov, Jim Mattson,
	Sean Christopherson, susie.li@intel.com, Jeff Layton,
	john.ji@intel.com, Yu Zhang, Paolo Bonzini, Andrew Morton,
	"Kirill A. Shutemov"

Use the new extended memslot structure kvm_userspace_memory_region_ext,
which adds two fields, fd and ofs, to the current
kvm_userspace_memory_region. The fd/ofs fields are copied from userspace
only when KVM_MEM_PRIVATE is set. Internally, KVM now uses
kvm_userspace_memory_region_ext everywhere it previously used
kvm_userspace_memory_region, since the extended structure covers all the
existing fields of kvm_userspace_memory_region. An illustrative userspace
sketch follows the diff below.

Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 arch/x86/kvm/x86.c       |  2 +-
 include/linux/kvm_host.h |  4 ++--
 virt/kvm/kvm_main.c      | 19 +++++++++++++------
 3 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 42bde45a1bc2..52942195def3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11551,7 +11551,7 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 	}
 
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		struct kvm_userspace_memory_region m;
+		struct kvm_userspace_memory_region_ext m;
 
 		m.slot = id | (i << 16);
 		m.flags = 0;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ad89a0e8bf6b..fabab3b77d57 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -981,9 +981,9 @@ enum kvm_mr_change {
 };
 
 int kvm_set_memory_region(struct kvm *kvm,
-			  const struct kvm_userspace_memory_region *mem);
+			  const struct kvm_userspace_memory_region_ext *mem);
 int __kvm_set_memory_region(struct kvm *kvm,
-			    const struct kvm_userspace_memory_region *mem);
+			    const struct kvm_userspace_memory_region_ext *mem);
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot);
 void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 36dd2adcd7fc..cf8dcb3b8c7f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1514,7 +1514,7 @@ static void kvm_replace_memslot(struct kvm *kvm,
 	}
 }
 
-static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem)
+static int check_memory_region_flags(const struct kvm_userspace_memory_region_ext *mem)
 {
 	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
 
@@ -1907,7 +1907,7 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id,
  * Must be called holding kvm->slots_lock for write.
 */
int __kvm_set_memory_region(struct kvm *kvm,
-			    const struct kvm_userspace_memory_region *mem)
+			    const struct kvm_userspace_memory_region_ext *mem)
 {
 	struct kvm_memory_slot *old, *new;
 	struct kvm_memslots *slots;
@@ -2011,7 +2011,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
 
 int kvm_set_memory_region(struct kvm *kvm,
-			  const struct kvm_userspace_memory_region *mem)
+			  const struct kvm_userspace_memory_region_ext *mem)
 {
 	int r;
 
@@ -2023,7 +2023,7 @@ int kvm_set_memory_region(struct kvm *kvm,
 EXPORT_SYMBOL_GPL(kvm_set_memory_region);
 
 static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
-					   struct kvm_userspace_memory_region *mem)
+					   struct kvm_userspace_memory_region_ext *mem)
 {
 	if ((u16)mem->slot >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
@@ -4569,12 +4569,19 @@ static long kvm_vm_ioctl(struct file *filp,
 		break;
 	}
 	case KVM_SET_USER_MEMORY_REGION: {
-		struct kvm_userspace_memory_region kvm_userspace_mem;
+		struct kvm_userspace_memory_region_ext kvm_userspace_mem;
 
 		r = -EFAULT;
 		if (copy_from_user(&kvm_userspace_mem, argp,
-						sizeof(kvm_userspace_mem)))
+				sizeof(struct kvm_userspace_memory_region)))
 			goto out;
+		if (kvm_userspace_mem.flags & KVM_MEM_PRIVATE) {
+			int offset = offsetof(
+				struct kvm_userspace_memory_region_ext, ofs);
+			if (copy_from_user(&kvm_userspace_mem.ofs, argp + offset,
+					   sizeof(kvm_userspace_mem) - offset))
+				goto out;
+		}
 
 		r = kvm_vm_ioctl_set_memory_region(kvm, &kvm_userspace_mem);
 		break;
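
Illustrative note (not part of the patch): the sketch below shows how
userspace might fill the extended structure when registering a private
memslot, and why the ioctl handler above does a two-stage copy. The exact
layout of struct kvm_userspace_memory_region_ext, the value of
KVM_MEM_PRIVATE, and the helper name set_private_memslot() are assumptions
made only for illustration; the real definitions come from earlier patches
in this series and are not shown here.

#include <linux/kvm.h>
#include <linux/types.h>
#include <sys/ioctl.h>

/* Assumed layout: the base kvm_userspace_memory_region fields plus the
 * trailing ofs/fd members that KVM copies only for KVM_MEM_PRIVATE. */
struct kvm_userspace_memory_region_ext {
	__u32 slot;
	__u32 flags;
	__u64 guest_phys_addr;
	__u64 memory_size;
	__u64 userspace_addr;
	__u64 ofs;	/* offset into the backing fd */
	__u32 fd;	/* fd providing the private memory */
	__u32 pad[5];
};

#ifndef KVM_MEM_PRIVATE
#define KVM_MEM_PRIVATE	(1UL << 2)	/* assumed bit; defined elsewhere in the series */
#endif

/* Hypothetical helper: register a private memslot on an existing VM fd. */
static int set_private_memslot(int vm_fd, __u32 slot, __u64 gpa, __u64 size,
			       void *hva, int backing_fd, __u64 backing_ofs)
{
	struct kvm_userspace_memory_region_ext region = {
		.slot = slot,
		.flags = KVM_MEM_PRIVATE,
		.guest_phys_addr = gpa,
		.memory_size = size,
		.userspace_addr = (__u64)(unsigned long)hva,
		.ofs = backing_ofs,
		.fd = backing_fd,
	};

	/*
	 * KVM first copies sizeof(struct kvm_userspace_memory_region) bytes;
	 * because KVM_MEM_PRIVATE is set, it then copies the trailing
	 * ofs/fd part of the extended structure as well.
	 */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}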