From patchwork Mon Jul  2 08:54:30 2012
From: Takuya Yoshikawa
To: avi@redhat.com, mtosatti@redhat.com
Cc: agraf@suse.de, paulus@samba.org, aarcange@redhat.com, kvm@vger.kernel.org,
    kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org,
    takuya.yoshikawa@gmail.com
Subject: [PATCH 2/8] KVM: Introduce hva_to_gfn_memslot() for kvm_handle_hva()
Date: Mon, 2 Jul 2012 17:54:30 +0900
Message-Id: <20120702175430.9fec42f1.yoshikawa.takuya@oss.ntt.co.jp>
In-Reply-To: <20120702175239.5fec56b3.yoshikawa.takuya@oss.ntt.co.jp>
References: <20120702175239.5fec56b3.yoshikawa.takuya@oss.ntt.co.jp>
X-Patchwork-Id: 168510

This restricts hva handling to the mmu code and makes it easier to
extend kvm_handle_hva() so that it can treat a range of addresses
later in this patch series.
Signed-off-by: Takuya Yoshikawa
Cc: Alexander Graf
Cc: Paul Mackerras
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c |    6 +++---
 arch/x86/kvm/mmu.c                  |    3 +--
 include/linux/kvm_host.h            |    8 ++++++++
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index d03eb6f..3703755 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -772,10 +772,10 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
 
 		end = start + (memslot->npages << PAGE_SHIFT);
 		if (hva >= start && hva < end) {
-			gfn_t gfn_offset = (hva - start) >> PAGE_SHIFT;
+			gfn_t gfn = hva_to_gfn_memslot(hva, memslot);
+			gfn_t gfn_offset = gfn - memslot->base_gfn;
 
-			ret = handler(kvm, &memslot->rmap[gfn_offset],
-				      memslot->base_gfn + gfn_offset);
+			ret = handler(kvm, &memslot->rmap[gfn_offset], gfn);
 			retval |= ret;
 		}
 	}
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d3e7e6a..b898bec 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1202,8 +1202,7 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
 
 		end = start + (memslot->npages << PAGE_SHIFT);
 		if (hva >= start && hva < end) {
-			gfn_t gfn_offset = (hva - start) >> PAGE_SHIFT;
-			gfn_t gfn = memslot->base_gfn + gfn_offset;
+			gfn_t gfn = hva_to_gfn_memslot(hva, memslot);
 
 			ret = 0;
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c7f7787..c0dd2e9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -740,6 +740,14 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 		(base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
 }
 
+static inline gfn_t
+hva_to_gfn_memslot(unsigned long hva, struct kvm_memory_slot *slot)
+{
+	gfn_t gfn_offset = (hva - slot->userspace_addr) >> PAGE_SHIFT;
+
+	return slot->base_gfn + gfn_offset;
+}
+
 static inline unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot,
 					       gfn_t gfn)
 {
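
For reference, the new helper is just the inverse of gfn_to_hva_memslot(): it
takes the page offset of the hva within the slot's user mapping and adds it to
the slot's base gfn. Below is a small stand-alone user-space sketch of that
arithmetic; the struct layout, the PAGE_SHIFT value and the example addresses
are simplified stand-ins chosen for illustration, not the real kernel
definitions.

#include <stdio.h>

#define PAGE_SHIFT 12				/* assume 4 KiB pages */

typedef unsigned long long gfn_t;

/* Simplified stand-in for the kernel's struct kvm_memory_slot. */
struct kvm_memory_slot {
	gfn_t base_gfn;				/* first guest frame of the slot */
	unsigned long npages;			/* slot length in pages */
	unsigned long userspace_addr;		/* hva the slot is mapped at */
};

/* Same arithmetic as the hva_to_gfn_memslot() helper added above. */
static gfn_t hva_to_gfn_memslot(unsigned long hva, struct kvm_memory_slot *slot)
{
	gfn_t gfn_offset = (hva - slot->userspace_addr) >> PAGE_SHIFT;

	return slot->base_gfn + gfn_offset;
}

int main(void)
{
	struct kvm_memory_slot slot = {
		.base_gfn	= 0x100,
		.npages		= 512,
		.userspace_addr	= 0x7f0000000000UL,
	};
	/* An hva five pages into the slot should map to gfn 0x105. */
	unsigned long hva = slot.userspace_addr + (5UL << PAGE_SHIFT);

	printf("gfn = 0x%llx\n", hva_to_gfn_memslot(hva, &slot));
	return 0;
}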