From patchwork Thu Jan 17 22:50:39 2013
X-Patchwork-Submitter: Alexander Graf
X-Patchwork-Id: 213391
From: Alexander Graf
To: kvm-ppc@vger.kernel.org
Cc: kvm@vger.kernel.org
Subject: [PATCH 1/3] KVM: PPC: e500: Call kvmppc_mmu_map for initial mapping
Date: Thu, 17 Jan 2013 23:50:39 +0100
Message-Id: <1358463041-25922-2-git-send-email-agraf@suse.de>
X-Mailer: git-send-email 1.6.0.2
In-Reply-To: <1358463041-25922-1-git-send-email-agraf@suse.de>
References: <1358463041-25922-1-git-send-email-agraf@suse.de>
X-Mailing-List: kvm-ppc@vger.kernel.org

When emulating tlbwe, we want to automatically create a mapping in our
shadow TLB for the entry that just got written, because chances are quite
high that it is going to be used very soon.

Today this happens explicitly, duplicating all the logic that is already in
kvmppc_mmu_map(). Just call that one instead.

Signed-off-by: Alexander Graf
---
 arch/powerpc/kvm/e500.h     |    4 ++-
 arch/powerpc/kvm/e500_tlb.c |   45 +++++++++++-------------------------------
 2 files changed, 15 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index c70d37e..00f96d8 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -113,8 +113,10 @@ static inline struct kvmppc_vcpu_e500 *to_e500(struct kvm_vcpu *vcpu)
 #define KVM_E500_TLB0_SIZE  (KVM_E500_TLB0_WAY_SIZE * KVM_E500_TLB0_WAY_NUM)
 #define KVM_E500_TLB1_SIZE  16
 
+#define KVM_E500_INDEX_FORCE_MAP 0x80000000
+
 #define index_of(tlbsel, esel) (((tlbsel) << 16) | ((esel) & 0xFFFF))
-#define tlbsel_of(index) ((index) >> 16)
+#define tlbsel_of(index) (((index) >> 16) & 0x3)
 #define esel_of(index) ((index) & 0xFFFF)
 
 #define E500_TLB_USER_PERM_MASK (MAS3_UX|MAS3_UR|MAS3_UW)
diff --git a/arch/powerpc/kvm/e500_tlb.c b/arch/powerpc/kvm/e500_tlb.c
index cf3f180..eda7be1 100644
--- a/arch/powerpc/kvm/e500_tlb.c
+++ b/arch/powerpc/kvm/e500_tlb.c
@@ -853,8 +853,8 @@ static void write_stlbe(struct kvmppc_vcpu_e500 *vcpu_e500,
 int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
 {
         struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
-        struct kvm_book3e_206_tlb_entry *gtlbe, stlbe;
-        int tlbsel, esel, stlbsel, sesel;
+        struct kvm_book3e_206_tlb_entry *gtlbe;
+        int tlbsel, esel;
         int recal = 0;
 
         tlbsel = get_tlb_tlbsel(vcpu);
@@ -892,40 +892,17 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
 
         /* Invalidate shadow mappings for the about-to-be-clobbered TLBE. */
         if (tlbe_is_host_safe(vcpu, gtlbe)) {
-                u64 eaddr;
-                u64 raddr;
+                u64 eaddr = get_tlb_eaddr(gtlbe);
+                u64 raddr = get_tlb_raddr(gtlbe);
 
-                switch (tlbsel) {
-                case 0:
-                        /* TLB0 */
+                if (tlbsel == 0) {
                         gtlbe->mas1 &= ~MAS1_TSIZE(~0);
                         gtlbe->mas1 |= MAS1_TSIZE(BOOK3E_PAGESZ_4K);
-
-                        stlbsel = 0;
-                        kvmppc_e500_tlb0_map(vcpu_e500, esel, &stlbe);
-                        sesel = 0; /* unused */
-
-                        break;
-
-                case 1:
-                        /* TLB1 */
-                        eaddr = get_tlb_eaddr(gtlbe);
-                        raddr = get_tlb_raddr(gtlbe);
-
-                        /* Create a 4KB mapping on the host.
-                         * If the guest wanted a large page,
-                         * only the first 4KB is mapped here and the rest
-                         * are mapped on the fly. */
-                        stlbsel = 1;
-                        sesel = kvmppc_e500_tlb1_map(vcpu_e500, eaddr,
-                                        raddr >> PAGE_SHIFT, gtlbe, &stlbe, esel);
-                        break;
-
-                default:
-                        BUG();
                 }
 
-                write_stlbe(vcpu_e500, gtlbe, &stlbe, stlbsel, sesel);
+                /* Premap the faulting page */
+                kvmppc_mmu_map(vcpu, eaddr, raddr,
+                               index_of(tlbsel, esel) | KVM_E500_INDEX_FORCE_MAP);
         }
 
         kvmppc_set_exit_type(vcpu, EMULATED_TLBWE_EXITS);
@@ -1024,9 +1001,11 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
 {
         struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
         struct tlbe_priv *priv;
-        struct kvm_book3e_206_tlb_entry *gtlbe, stlbe;
+        struct kvm_book3e_206_tlb_entry *gtlbe, stlbe = {};
         int tlbsel = tlbsel_of(index);
         int esel = esel_of(index);
+        /* Needed for initial map, where we can't use the cached value */
+        int force_map = index & KVM_E500_INDEX_FORCE_MAP;
         int stlbsel, sesel;
 
         gtlbe = get_entry(vcpu_e500, tlbsel, esel);
@@ -1038,7 +1017,7 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
                 priv = &vcpu_e500->gtlb_priv[tlbsel][esel];
 
                 /* Only triggers after clear_tlb_refs */
-                if (unlikely(!(priv->ref.flags & E500_TLB_VALID)))
+                if (force_map || unlikely(!(priv->ref.flags & E500_TLB_VALID)))
                         kvmppc_e500_tlb0_map(vcpu_e500, esel, &stlbe);
                 else
                         kvmppc_e500_setup_stlbe(vcpu, gtlbe, BOOK3E_PAGESZ_4K,
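A side note on the index encoding: with the new KVM_E500_INDEX_FORCE_MAP flag
living in bit 31 of the combined index, tlbsel_of() has to mask with 0x3 so the
flag bit does not leak into the decoded TLB selector. The stand-alone sketch
below is not part of the patch; the macros are copied from it, and the main()
harness is purely illustrative:

#include <stdio.h>

#define KVM_E500_INDEX_FORCE_MAP 0x80000000

#define index_of(tlbsel, esel) (((tlbsel) << 16) | ((esel) & 0xFFFF))
#define tlbsel_of(index)       (((index) >> 16) & 0x3)
#define esel_of(index)         ((index) & 0xFFFF)

int main(void)
{
        /* Encode TLB1, entry 5, and request a forced (initial) mapping. */
        unsigned int index = index_of(1, 5) | KVM_E500_INDEX_FORCE_MAP;

        /* The & 0x3 in tlbsel_of() keeps the flag bit out of tlbsel. */
        printf("tlbsel=%u esel=%u force_map=%d\n",
               tlbsel_of(index), esel_of(index),
               !!(index & KVM_E500_INDEX_FORCE_MAP));

        return 0;       /* prints: tlbsel=1 esel=5 force_map=1 */
}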