From patchwork Tue May 14 00:44:40 2013
X-Patchwork-Submitter: Scott Wood
X-Patchwork-Id: 243551
From: Scott Wood
To: Alexander Graf
CC: , , Scott Wood
Subject: [PATCH v3 2/3] kvm/ppc: Call trace_hardirqs_on before entry
Date: Mon, 13 May 2013 19:44:40 -0500
Message-ID: <1368492281-30473-3-git-send-email-scottwood@freescale.com>
In-Reply-To: <1368492281-30473-1-git-send-email-scottwood@freescale.com>
References: <1368492281-30473-1-git-send-email-scottwood@freescale.com>
X-Mailer: git-send-email 1.7.10.4
X-Mailing-List: kvm-ppc@vger.kernel.org

Currently trace_hardirqs_on() is called before guest entry only on 64-bit.
Rather than just move the call out of the 64-bit ifdef, move it into
kvmppc_lazy_ee_enable() so that it stays consistent with the lazy EE state,
and so that we don't track more host code as interrupts-enabled than
necessary.

Rename kvmppc_lazy_ee_enable() to kvmppc_fix_ee_before_entry() to reflect
that this function now has a role on 32-bit as well.
Signed-off-by: Scott Wood
---
 arch/powerpc/include/asm/kvm_ppc.h | 11 ++++++++---
 arch/powerpc/kvm/book3s_pr.c       |  4 ++--
 arch/powerpc/kvm/booke.c           |  4 ++--
 arch/powerpc/kvm/powerpc.c         |  2 --
 4 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index a5287fe..6885846 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -394,10 +394,15 @@ static inline void kvmppc_mmu_flush_icache(pfn_t pfn)
 	}
 }
 
-/* Please call after prepare_to_enter. This function puts the lazy ee state
-   back to normal mode, without actually enabling interrupts. */
-static inline void kvmppc_lazy_ee_enable(void)
+/*
+ * Please call after prepare_to_enter. This function puts the lazy ee and irq
+ * disabled tracking state back to normal mode, without actually enabling
+ * interrupts.
+ */
+static inline void kvmppc_fix_ee_before_entry(void)
 {
+	trace_hardirqs_on();
+
 #ifdef CONFIG_PPC64
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index bdc40b8..0b97ce4 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -890,7 +890,7 @@ program_interrupt:
 			local_irq_enable();
 			r = s;
 		} else {
-			kvmppc_lazy_ee_enable();
+			kvmppc_fix_ee_before_entry();
 		}
 	}
 
@@ -1161,7 +1161,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	if (vcpu->arch.shared->msr & MSR_FP)
 		kvmppc_handle_ext(vcpu, BOOK3S_INTERRUPT_FP_UNAVAIL, MSR_FP);
 
-	kvmppc_lazy_ee_enable();
+	kvmppc_fix_ee_before_entry();
 
 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
 
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 83f84c5..3817c67 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -673,7 +673,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 		ret = s;
 		goto out;
 	}
-	kvmppc_lazy_ee_enable();
+	kvmppc_fix_ee_before_entry();
 
 	kvm_guest_enter();
 
@@ -1160,7 +1160,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			local_irq_enable();
 			r = (s << 2) | RESUME_HOST | (r & RESUME_FLAG_NV);
 		} else {
-			kvmppc_lazy_ee_enable();
+			kvmppc_fix_ee_before_entry();
 		}
 	}
 
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 6316ee3..4e05f8c 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -117,8 +117,6 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
 			kvm_guest_exit();
 			continue;
 		}
-
-		trace_hardirqs_on();
 #endif
 
 		kvm_guest_enter();
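
For readers not steeped in the lazy-EE tracking split, here is a minimal,
self-contained userspace sketch of the ordering this patch establishes; it is
not kernel code, and every identifier below (mock_*, the two flags) is a
made-up stand-in rather than a kernel symbol.  The point being modeled is that
trace_hardirqs_on() only updates the *tracked* interrupt state, while the real
enable is still left to the guest-entry path that runs afterwards.

/*
 * Toy model of "fix up tracking just before entry, enable for real at entry".
 * Not kernel code; all names are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

static bool tracked_irqs_on;	/* stand-in for the tracer's recorded state */
static bool real_irqs_on;	/* stand-in for the actual MSR[EE] setting  */

static void mock_trace_hardirqs_on(void)
{
	tracked_irqs_on = true;		/* tracking only, no hardware change */
}

/* Analogue of kvmppc_fix_ee_before_entry(): fix up tracking, do not enable. */
static void mock_fix_ee_before_entry(void)
{
	mock_trace_hardirqs_on();
}

/* Analogue of the low-level guest entry, which hard-enables interrupts. */
static void mock_guest_entry(void)
{
	real_irqs_on = true;
	printf("tracked=%d real=%d\n", tracked_irqs_on, real_irqs_on);
}

int main(void)
{
	mock_fix_ee_before_entry();	/* called right after prepare_to_enter */
	mock_guest_entry();
	return 0;
}

With this ordering, the benefit described in the changelog follows directly:
the tracking flip happens as late as possible, so less host code runs while
being reported as interrupts-enabled, and 32-bit now gets the same fix-up as
64-bit.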