From patchwork Tue Mar 19 04:04:33 2019
X-Patchwork-Submitter: Suraj Jitindar Singh
X-Patchwork-Id: 1058187
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: paulus@samba.org, kvm@vger.kernel.org, Suraj Jitindar Singh
Subject: [PATCH 1/3] KVM: PPC: Implement kvmppc_copy_guest() to perform in place copy of guest memory
Date: Tue, 19 Mar 2019 15:04:33 +1100
Message-Id: <20190319040435.10716-1-sjitindarsingh@gmail.com>
X-Mailer: git-send-email 2.13.6
X-Mailing-List: kvm-ppc@vger.kernel.org

Implement the function kvmppc_copy_guest(), used to perform a memory copy
of variable length from one guest physical address to another. This
provides functionality similar to the kvm_read_guest() and
kvm_write_guest() functions, except that both addresses point to guest
memory. The copy is performed in place using raw_copy_in_user() to avoid
having to buffer the data. The guest memory can reside in different
memslots, and the copy length can span memslots.
Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/kvm/book3s_hv.c | 69 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index ec38576dc685..7179ea783f4f 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -814,6 +814,75 @@ static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
 	}
 }
 
+static int __kvmppc_copy_guest_page(struct kvm_memory_slot *to_memslot,
+				    gfn_t to_gfn, int to_offset,
+				    struct kvm_memory_slot *from_memslot,
+				    gfn_t from_gfn, int from_offset, int len)
+{
+	int r;
+	unsigned long to_addr, from_addr;
+
+	to_addr = gfn_to_hva_memslot(to_memslot, to_gfn);
+	if (kvm_is_error_hva(to_addr))
+		return -EFAULT;
+	from_addr = gfn_to_hva_memslot(from_memslot, from_gfn);
+	if (kvm_is_error_hva(from_addr))
+		return -EFAULT;
+	r = raw_copy_in_user((void __user *)to_addr + to_offset,
+			     (void __user *)from_addr + from_offset, len);
+	if (r)
+		return -EFAULT;
+	return 0;
+}
+
+static int next_segment_many(unsigned long len, int offset1, int offset2)
+{
+	int size = min(PAGE_SIZE - offset1, PAGE_SIZE - offset2);
+
+	if (len > size)
+		return size;
+	else
+		return len;
+}
+
+static int kvmppc_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from,
+			     unsigned long len)
+{
+	struct kvm_memory_slot *to_memslot = NULL;
+	struct kvm_memory_slot *from_memslot = NULL;
+	gfn_t to_gfn = to >> PAGE_SHIFT;
+	gfn_t from_gfn = from >> PAGE_SHIFT;
+	int seg;
+	int to_offset = offset_in_page(to);
+	int from_offset = offset_in_page(from);
+	int ret;
+
+	while ((seg = next_segment_many(len, to_offset, from_offset)) != 0) {
+		if (!to_memslot || (to_gfn >= (to_memslot->base_gfn +
+					       to_memslot->npages)))
+			to_memslot = gfn_to_memslot(kvm, to_gfn);
+		if (!from_memslot || (from_gfn >= (from_memslot->base_gfn +
+						   from_memslot->npages)))
+			from_memslot = gfn_to_memslot(kvm, from_gfn);
+
+		ret = __kvmppc_copy_guest_page(to_memslot, to_gfn, to_offset,
+					       from_memslot, from_gfn,
+					       from_offset, seg);
+		if (ret < 0)
+			return ret;
+		mark_page_dirty(kvm, to_gfn);
+
+		to_offset = (to_offset + seg) & (PAGE_SIZE - 1);
+		from_offset = (from_offset + seg) & (PAGE_SIZE - 1);
+		len -= seg;
+		if (!to_offset)
+			to_gfn += 1;
+		if (!from_offset)
+			from_gfn += 1;
+	}
+	return 0;
+}
+
 static int kvm_arch_vcpu_yield_to(struct kvm_vcpu *target)
 {
 	struct kvmppc_vcore *vcore = target->arch.vcore;