From patchwork Mon Apr 8 05:08:18 2013
X-Patchwork-Submitter: David Gibson
X-Patchwork-Id: 234571
From: David Gibson
To: agraf@suse.de
Date: Mon, 8 Apr 2013 15:08:18 +1000
Message-Id: <1365397702-1515-4-git-send-email-david@gibson.dropbear.id.au>
In-Reply-To: <1365397702-1515-1-git-send-email-david@gibson.dropbear.id.au>
References: <1365397702-1515-1-git-send-email-david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org, David Gibson
Subject: [Qemu-devel] [PATCH 3/7] pseries: Fix incorrect calculation of RMA size in certain configurations
For the pseries machine, we need to advertise to the guest the size of its RMA - that is, the amount of memory it can access with the MMU off. For HV KVM, this is constrained by the hardware limitation on the virtual RMA of one usable hash PTE per PTE group in the hash page table.

We already had code to calculate this, but it assumed the VRMA page size was the same as the (host) backing page size for guest RAM. On a host kernel configured for a 64k base page size but running on hardware (or firmware) which only allows 4k pages, the host will do all its allocations with a 64k page size, but still use 4k hardware pages for the actual mappings. Usually that's transparent to things running under the host, but in the case of the maximum VRMA size it's not.

This patch refines the RMA size calculation to instead use the largest available hardware page size (as reported by the SMMU_INFO call) which is less than or equal to the backing page size. This now gives the correct RMA size in all cases I've tested.
Signed-off-by: David Gibson
---
 target-ppc/kvm.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 597066f..d3a478e 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1370,11 +1370,35 @@ off_t kvmppc_alloc_rma(const char *name, MemoryRegion *sysmem)
 
 uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift)
 {
+    struct kvm_ppc_smmu_info info;
+    long rampagesize, best_page_shift;
+    int i;
+
     if (cap_ppc_rma >= 2) {
         return current_size;
     }
+
+    /* Find the largest hardware supported page size that's less than
+     * or equal to the (logical) backing page size of guest RAM */
+    kvm_get_smmu_info(ppc_env_get_cpu(first_cpu), &info);
+    rampagesize = getrampagesize();
+    best_page_shift = 0;
+
+    for (i = 0; i < KVM_PPC_PAGE_SIZES_MAX_SZ; i++) {
+        struct kvm_ppc_one_seg_page_size *sps = &info.sps[i];
+
+        if (!sps->page_shift) {
+            continue;
+        }
+
+        if ((sps->page_shift > best_page_shift)
+            && ((1UL << sps->page_shift) <= rampagesize)) {
+            best_page_shift = sps->page_shift;
+        }
+    }
+
     return MIN(current_size,
-               getrampagesize() << (hash_shift - 7));
+               1ULL << (best_page_shift + hash_shift - 7));
 }
 
 #endif