From patchwork Wed Feb 29 00:09:55 2012
X-Patchwork-Id: 143597
From: Alexander Graf
To: kvm-ppc@vger.kernel.org
Cc: kvm@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Scott Wood
Subject: [PATCH 27/38] KVM: PPC: bookehv: remove negation for CONFIG_64BIT
Date: Wed, 29 Feb 2012 01:09:55 +0100
Message-Id: <1330474206-14794-28-git-send-email-agraf@suse.de>
X-Mailer: git-send-email 1.7.3.4
In-Reply-To: <1330474206-14794-1-git-send-email-agraf@suse.de>
References: <1330474206-14794-1-git-send-email-agraf@suse.de>

Instead of doing

#ifndef CONFIG_64BIT
...
#else
...
#endif

we should rather do

#ifdef CONFIG_64BIT
...
#else
...
#endif

which is a lot easier to read. Change the bookehv implementation to
stick with this rule.

Signed-off-by: Alexander Graf
---
 arch/powerpc/kvm/bookehv_interrupts.S |   24 ++++++++++++------------
 1 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kvm/bookehv_interrupts.S b/arch/powerpc/kvm/bookehv_interrupts.S
index 215381e..c5a0796 100644
--- a/arch/powerpc/kvm/bookehv_interrupts.S
+++ b/arch/powerpc/kvm/bookehv_interrupts.S
@@ -99,10 +99,10 @@
 	.endif

 	oris	r8, r6, MSR_CE@h
-#ifndef CONFIG_64BIT
-	stw	r6, (VCPU_SHARED_MSR + 4)(r11)
-#else
+#ifdef CONFIG_64BIT
 	std	r6, (VCPU_SHARED_MSR)(r11)
+#else
+	stw	r6, (VCPU_SHARED_MSR + 4)(r11)
 #endif
 	ori	r8, r8, MSR_ME | MSR_RI
 	PPC_STL	r5, VCPU_PC(r4)
@@ -344,10 +344,10 @@ _GLOBAL(kvmppc_resume_host)
 	stw	r5, VCPU_SHARED_MAS0(r11)
 	mfspr	r7, SPRN_MAS2
 	stw	r6, VCPU_SHARED_MAS1(r11)
-#ifndef CONFIG_64BIT
-	stw	r7, (VCPU_SHARED_MAS2 + 4)(r11)
-#else
+#ifdef CONFIG_64BIT
 	std	r7, (VCPU_SHARED_MAS2)(r11)
+#else
+	stw	r7, (VCPU_SHARED_MAS2 + 4)(r11)
 #endif
 	mfspr	r5, SPRN_MAS3
 	mfspr	r6, SPRN_MAS4
@@ -530,10 +530,10 @@ lightweight_exit:
 	stw	r3, VCPU_HOST_MAS6(r4)
 	lwz	r3, VCPU_SHARED_MAS0(r11)
 	lwz	r5, VCPU_SHARED_MAS1(r11)
-#ifndef CONFIG_64BIT
-	lwz	r6, (VCPU_SHARED_MAS2 + 4)(r11)
-#else
+#ifdef CONFIG_64BIT
 	ld	r6, (VCPU_SHARED_MAS2)(r11)
+#else
+	lwz	r6, (VCPU_SHARED_MAS2 + 4)(r11)
 #endif
 	lwz	r7, VCPU_SHARED_MAS7_3+4(r11)
 	lwz	r8, VCPU_SHARED_MAS4(r11)
@@ -572,10 +572,10 @@ lightweight_exit:
 	PPC_LL	r6, VCPU_CTR(r4)
 	PPC_LL	r7, VCPU_CR(r4)
 	PPC_LL	r8, VCPU_PC(r4)
-#ifndef CONFIG_64BIT
-	lwz	r9, (VCPU_SHARED_MSR + 4)(r11)
-#else
+#ifdef CONFIG_64BIT
 	ld	r9, (VCPU_SHARED_MSR)(r11)
+#else
+	lwz	r9, (VCPU_SHARED_MSR + 4)(r11)
 #endif
 	PPC_LL	r0, VCPU_GPR(r0)(r4)
 	PPC_LL	r1, VCPU_GPR(r1)(r4)
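
Editor's note: for readers not following the assembly, here is a minimal C
sketch of the pattern the patch standardizes on, with the positive
CONFIG_64BIT test first. The struct and function names are hypothetical,
not kernel API; the real code stores guest registers through the
asm-offsets constant VCPU_SHARED_MSR relative to r11. The "+ 4" store in
the 32-bit path is assumed here to reflect the big-endian layout of the
64-bit shared MSR field, whose low word sits at byte offset 4.

#include <stdint.h>

/* Hypothetical stand-in for the vcpu shared page; only the MSR field is shown. */
struct shared_sketch {
	uint64_t msr;		/* corresponds to the VCPU_SHARED_MSR offset */
};

static void store_guest_msr(struct shared_sketch *shared, unsigned long msr)
{
#ifdef CONFIG_64BIT
	/* 64-bit: store the whole doubleword, like "std r6, (VCPU_SHARED_MSR)(r11)". */
	shared->msr = msr;
#else
	/*
	 * 32-bit: only 32 bits are available in the register. On big-endian
	 * Book E parts the low word of the u64 field sits at byte offset +4,
	 * hence "stw r6, (VCPU_SHARED_MSR + 4)(r11)" in the assembly.
	 */
	*((uint32_t *)&shared->msr + 1) = (uint32_t)msr;
#endif
}

Either ordering generates the same code; the patch only changes which leg
of the conditional the reader sees first.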