From patchwork Thu Jan 17 22:50:41 2013
X-Patchwork-Id: 213390
From: Alexander Graf <agraf@suse.de>
To: kvm-ppc@vger.kernel.org
Cc: kvm@vger.kernel.org
Subject: [PATCH 3/3] KVM: PPC: e500: Implement TLB1-in-TLB0 mapping
Date: Thu, 17 Jan 2013 23:50:41 +0100
Message-Id: <1358463041-25922-4-git-send-email-agraf@suse.de>
In-Reply-To: <1358463041-25922-1-git-send-email-agraf@suse.de>
References: <1358463041-25922-1-git-send-email-agraf@suse.de>

When a host mapping fault happens in a guest TLB1 entry today, we map
the translated guest entry into the host's TLB1. This isn't
particularly clever when the guest entry is backed by normal 4k pages,
since those would be much better placed in TLB0 instead.

This patch adds the required logic to map 4k TLB1 shadow mappings into
the host's TLB0.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/e500.h          |  1 +
 arch/powerpc/kvm/e500_mmu_host.c | 58 +++++++++++++++++++++++++++++--------
 2 files changed, 46 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index 00f96d8..d32e6a8 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -28,6 +28,7 @@
 
 #define E500_TLB_VALID 1
 #define E500_TLB_BITMAP 2
+#define E500_TLB_TLB0 (1 << 2)
 
 struct tlbe_ref {
 	pfn_t pfn;
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 3bb2154..cbb6cf8 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -198,6 +198,11 @@ void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel,
 
 		local_irq_restore(flags);
 		return;
+	} else if (tlbsel == 1 &&
+		   vcpu_e500->gtlb_priv[1][esel].ref.flags & E500_TLB_TLB0) {
+		/* This is a slow path, so just invalidate everything */
+		kvmppc_e500_tlbil_all(vcpu_e500);
+		vcpu_e500->gtlb_priv[1][esel].ref.flags &= ~E500_TLB_TLB0;
 	}
 
 	/* Guest tlbe is backed by at most one host tlbe per shadow pid. */
@@ -453,24 +458,27 @@ static void kvmppc_e500_tlb0_map(struct kvmppc_vcpu_e500 *vcpu_e500, int esel,
 				 gtlbe, 0, stlbe, ref);
 }
 
-/* Caller must ensure that the specified guest TLB entry is safe to insert into
- * the shadow TLB. */
-/* XXX for both one-one and one-to-many , for now use TLB1 */
-static int kvmppc_e500_tlb1_map(struct kvmppc_vcpu_e500 *vcpu_e500,
-		u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe,
-		struct kvm_book3e_206_tlb_entry *stlbe, int esel)
+static int kvmppc_e500_tlb1_map_tlb0(struct kvmppc_vcpu_e500 *vcpu_e500,
+				     struct tlbe_ref *ref,
+				     int esel)
 {
-	struct tlbe_ref *ref;
-	unsigned int victim;
+	/* Indicate that we're backing this TLB1 entry with TLB0 entries */
+	vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_TLB0;
 
-	victim = vcpu_e500->host_tlb1_nv++;
+	/* Indicate that we're not using TLB1 at all */
+	return -1;
+}
+
+static int kvmppc_e500_tlb1_map_tlb1(struct kvmppc_vcpu_e500 *vcpu_e500,
+				     struct tlbe_ref *ref,
+				     int esel)
+{
+	unsigned int victim = vcpu_e500->host_tlb1_nv++;
 
 	if (unlikely(vcpu_e500->host_tlb1_nv >= tlb1_max_shadow_size()))
 		vcpu_e500->host_tlb1_nv = 0;
 
-	ref = &vcpu_e500->tlb_refs[1][victim];
-	kvmppc_e500_shadow_map(vcpu_e500, gvaddr, gfn, gtlbe, 1, stlbe, ref);
-
+	vcpu_e500->tlb_refs[1][victim] = *ref;
 	vcpu_e500->g2h_tlb1_map[esel] |= (u64)1 << victim;
 	vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_BITMAP;
 	if (vcpu_e500->h2g_tlb1_rmap[victim]) {
@@ -482,6 +490,25 @@ static int kvmppc_e500_tlb1_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	return victim;
 }
 
+/* Caller must ensure that the specified guest TLB entry is safe to insert into
+ * the shadow TLB. */
+/* For both one-one and one-to-many */
+static int kvmppc_e500_tlb1_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+		u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe,
+		struct kvm_book3e_206_tlb_entry *stlbe, int esel)
+{
+	struct tlbe_ref ref;
+
+	ref.flags = 0;
+	kvmppc_e500_shadow_map(vcpu_e500, gvaddr, gfn, gtlbe, 1, stlbe, &ref);
+
+	/* Use TLB0 when we can only map a page with 4k */
+	if (get_tlb_tsize(stlbe) == BOOK3E_PAGESZ_4K)
+		return kvmppc_e500_tlb1_map_tlb0(vcpu_e500, &ref, esel);
+	/* Otherwise map into TLB1 */
+	return kvmppc_e500_tlb1_map_tlb1(vcpu_e500, &ref, esel);
+}
+
 /* sesel is for tlb1 only */
 static void write_stlbe(struct kvmppc_vcpu_e500 *vcpu_e500,
 			struct kvm_book3e_206_tlb_entry *gtlbe,
@@ -529,9 +556,14 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
 	case 1: {
 		gfn_t gfn = gpaddr >> PAGE_SHIFT;
 
-		stlbsel = 1;
 		sesel = kvmppc_e500_tlb1_map(vcpu_e500, eaddr, gfn, gtlbe,
 					     &stlbe, esel);
+		if (sesel < 0) {
+			/* TLB0 mapping */
+			sesel = 0;
+			stlbsel = 0;
+		} else
+			stlbsel = 1;
 		break;
 	}
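
For readers skimming the diff, the new contract is: kvmppc_e500_tlb1_map()
returns -1 when the shadow entry came out 4k-sized and the guest TLB1 entry
was flagged as TLB0-backed, and kvmppc_mmu_map() translates that return value
into stlbsel/sesel = 0 so the shadow entry is written into host TLB0. The
stand-alone C sketch below models only that return-value convention; the names
SKETCH_PAGESZ_4K, tlb1_map() and map_guest_tlb1_entry() are invented for
illustration and are not the kernel's code.

#include <stdio.h>

/* Stand-in for the kernel's Book3E page-size encoding; only the 4k case
 * matters for this sketch. */
#define SKETCH_PAGESZ_4K 2

struct selection {
	int stlbsel;	/* which host TLB array to write: 0 or 1 */
	int sesel;	/* entry index within that array */
};

/* Models the return-value convention of kvmppc_e500_tlb1_map() after the
 * patch: -1 means "backed by TLB0", otherwise the victim slot in host TLB1. */
static int tlb1_map(int tsize, int next_victim)
{
	if (tsize == SKETCH_PAGESZ_4K)
		return -1;		/* 4k shadow entry: steer it into host TLB0 */
	return next_victim;		/* larger entry: round-robin slot in host TLB1 */
}

/* Models the caller in kvmppc_mmu_map(): turn the result into (stlbsel, sesel). */
static struct selection map_guest_tlb1_entry(int tsize, int next_victim)
{
	struct selection sel;
	int ret = tlb1_map(tsize, next_victim);

	if (ret < 0) {
		sel.stlbsel = 0;	/* TLB0 mapping */
		sel.sesel = 0;
	} else {
		sel.stlbsel = 1;
		sel.sesel = ret;
	}
	return sel;
}

int main(void)
{
	struct selection small = map_guest_tlb1_entry(SKETCH_PAGESZ_4K, 5);
	struct selection big = map_guest_tlb1_entry(SKETCH_PAGESZ_4K + 2, 5);

	printf("4k-backed entry    -> host TLB%d, sesel %d\n", small.stlbsel, small.sesel);
	printf("large-backed entry -> host TLB%d, sesel %d\n", big.stlbsel, big.sesel);
	return 0;
}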