From patchwork Wed Sep 11 11:57:43 2019
X-Patchwork-Submitter: Michael Ellerman
X-Patchwork-Id: 1160935
From: Michael Ellerman
To: linuxppc-dev@ozlabs.org
Cc: cai@lca.pw, kvm-ppc@vger.kernel.org
Subject: [PATCH 1/4] powerpc/kvm: Move kvm_tmp into .text, shrink to 64K
Date: Wed, 11 Sep 2019 21:57:43 +1000
Message-Id: <20190911115746.12433-1-mpe@ellerman.id.au>
X-Mailing-List: kvm-ppc@vger.kernel.org

In some configurations of KVM, guests binary patch themselves to
avoid/reduce trapping into the hypervisor.

For some instructions this requires replacing one instruction with a
sequence of instructions. For those cases we need to write the
sequence of instructions somewhere and then patch the location of the
original instruction to branch to the sequence. That requires that the
location of the sequence be within 32MB of the original instruction.

The current solution for this is that we create a 1MB array in BSS,
write sequences into there, and then free the remainder of the array.

This has a few problems:
 - it confuses kmemleak.
 - it confuses lockdep.
 - it requires mapping kvm_tmp executable, which can cause adjacent
   areas to also be mapped executable if we're using 16M pages for the
   linear mapping.
 - the 32MB limit can be exceeded if the kernel is big enough,
   especially with STRICT_KERNEL_RWX enabled, which then prevents the
   patching from working at all.

We can fix all those problems by making kvm_tmp just a region of
regular .text. However currently it's 1MB in size, and we don't want
to waste 1MB of text. In practice however I only see ~30KB of kvm_tmp
being used even for an allyes_config.
So shrink kvm_tmp to 64K, which ought to be enough for everyone, and
move it into .text.

Signed-off-by: Michael Ellerman
---
 arch/powerpc/kernel/kvm.c      | 24 +++++-------------------
 arch/powerpc/kernel/kvm_emul.S |  8 ++++++++
 2 files changed, 13 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index b7b3a5e4e224..e3b5aa583319 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -64,7 +64,8 @@
 #define KVM_INST_MTSRIN		0x7c0001e4
 
 static bool kvm_patching_worked = true;
-char kvm_tmp[1024 * 1024];
+extern char kvm_tmp[];
+extern char kvm_tmp_end[];
 static int kvm_tmp_index;
 
 static inline void kvm_patch_ins(u32 *inst, u32 new_inst)
@@ -132,7 +133,7 @@ static u32 *kvm_alloc(int len)
 {
 	u32 *p;
 
-	if ((kvm_tmp_index + len) > ARRAY_SIZE(kvm_tmp)) {
+	if ((kvm_tmp_index + len) > (kvm_tmp_end - kvm_tmp)) {
 		printk(KERN_ERR "KVM: No more space (%d + %d)\n",
 		       kvm_tmp_index, len);
 		kvm_patching_worked = false;
@@ -699,25 +700,13 @@ static void kvm_use_magic_page(void)
 			 kvm_patching_worked ? "worked" : "failed");
 }
 
-static __init void kvm_free_tmp(void)
-{
-	/*
-	 * Inform kmemleak about the hole in the .bss section since the
-	 * corresponding pages will be unmapped with DEBUG_PAGEALLOC=y.
-	 */
-	kmemleak_free_part(&kvm_tmp[kvm_tmp_index],
-			   ARRAY_SIZE(kvm_tmp) - kvm_tmp_index);
-	free_reserved_area(&kvm_tmp[kvm_tmp_index],
-			   &kvm_tmp[ARRAY_SIZE(kvm_tmp)], -1, NULL);
-}
-
 static int __init kvm_guest_init(void)
 {
 	if (!kvm_para_available())
-		goto free_tmp;
+		return 0;
 
 	if (!epapr_paravirt_enabled)
-		goto free_tmp;
+		return 0;
 
 	if (kvm_para_has_feature(KVM_FEATURE_MAGIC_PAGE))
 		kvm_use_magic_page();
@@ -727,9 +716,6 @@ static int __init kvm_guest_init(void)
 	powersave_nap = 1;
 #endif
 
-free_tmp:
-	kvm_free_tmp();
-
 	return 0;
 }
 
diff --git a/arch/powerpc/kernel/kvm_emul.S b/arch/powerpc/kernel/kvm_emul.S
index eb2568f583ae..9dd17dce10a1 100644
--- a/arch/powerpc/kernel/kvm_emul.S
+++ b/arch/powerpc/kernel/kvm_emul.S
@@ -334,5 +334,13 @@ kvm_emulate_mtsrin_len:
 	.long (kvm_emulate_mtsrin_end - kvm_emulate_mtsrin) / 4
 
+	.balign 4
+	.global kvm_tmp
+kvm_tmp:
+	.space (64 * 1024)
+
+.global kvm_tmp_end
+kvm_tmp_end:
+
 .global kvm_template_end
 kvm_template_end:
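A minimal stand-alone C sketch (not from the patch; the addresses are made
up) of where the 32MB requirement above comes from: the PowerPC I-form
branch used to redirect a patched instruction into kvm_tmp encodes a 24-bit
LI field that is implicitly shifted left by 2, i.e. a signed 26-bit byte
displacement, hence a +/-32MB reach.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* +/-32MB: 24-bit LI field << 2 gives a signed 26-bit displacement. */
  #define BRANCH_REACH ((int64_t)32 * 1024 * 1024)

  static bool branch_can_reach(uint64_t insn, uint64_t target)
  {
          int64_t disp = (int64_t)(target - insn);

          return disp >= -BRANCH_REACH && disp < BRANCH_REACH;
  }

  int main(void)
  {
          /* Made-up addresses, for illustration only. */
          uint64_t patched_insn = 0xc000000002000000ull;
          uint64_t trampoline   = 0xc000000000080000ull;

          printf("trampoline reachable: %s\n",
                 branch_can_reach(patched_insn, trampoline) ? "yes" : "no");
          return 0;
  }

If the kernel grows enough that the trampoline region drifts out of this
window, the check fails and patching is skipped, which is exactly the
failure mode the commit message describes for large STRICT_KERNEL_RWX
kernels.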
From patchwork Wed Sep 11 11:57:44 2019
X-Patchwork-Submitter: Michael Ellerman
X-Patchwork-Id: 1160936
From: Michael Ellerman
To: linuxppc-dev@ozlabs.org
Cc: cai@lca.pw, kvm-ppc@vger.kernel.org
Subject: [PATCH 2/4] powerpc/64s: Remove overlaps_kvm_tmp()
Date: Wed, 11 Sep 2019 21:57:44 +1000
Message-Id: <20190911115746.12433-2-mpe@ellerman.id.au>
In-Reply-To: <20190911115746.12433-1-mpe@ellerman.id.au>
References: <20190911115746.12433-1-mpe@ellerman.id.au>

kvm_tmp is now in .text and so doesn't need a special overlap check.

Signed-off-by: Michael Ellerman
---
 arch/powerpc/include/asm/sections.h   | 11 -----------
 arch/powerpc/mm/book3s64/hash_utils.c |  4 ----
 2 files changed, 15 deletions(-)

diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h
index 4a1664a8658d..5a9b6eb651b6 100644
--- a/arch/powerpc/include/asm/sections.h
+++ b/arch/powerpc/include/asm/sections.h
@@ -61,17 +61,6 @@ static inline int overlaps_kernel_text(unsigned long start, unsigned long end)
 		(unsigned long)_stext < end;
 }
 
-static inline int overlaps_kvm_tmp(unsigned long start, unsigned long end)
-{
-#ifdef CONFIG_KVM_GUEST
-	extern char kvm_tmp[];
-	return start < (unsigned long)kvm_tmp &&
-		(unsigned long)&kvm_tmp[1024 * 1024] < end;
-#else
-	return 0;
-#endif
-}
-
 #ifdef PPC64_ELF_ABI_v1
 #define HAVE_DEREFERENCE_FUNCTION_DESCRIPTOR 1

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index b8ad14bb1170..1be0622a1f38 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -271,10 +271,6 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
 		if (overlaps_kernel_text(vaddr, vaddr + step))
 			tprot &= ~HPTE_R_N;
 
-		/* Make kvm guest trampolines executable */
-		if (overlaps_kvm_tmp(vaddr, vaddr + step))
-			tprot &= ~HPTE_R_N;
-
 		/*
 		 * If relocatable, check if it overlaps interrupt vectors that
 		 * are copied down to real 0. For relocatable kernel
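A short sketch (not from the patch; the numbers and variable names are
stand-ins) of why the dedicated check becomes redundant: overlaps_kernel_text()
in the context above is the usual half-open interval-overlap test against the
kernel text bounds, and once kvm_tmp is emitted into .text, any mapping step
that touches kvm_tmp already overlaps that interval.

  #include <stdbool.h>
  #include <stdio.h>

  /* Half-open interval overlap: [a_start, a_end) vs [b_start, b_end). */
  static bool ranges_overlap(unsigned long a_start, unsigned long a_end,
                             unsigned long b_start, unsigned long b_end)
  {
          return a_start < b_end && b_start < a_end;
  }

  int main(void)
  {
          /* Stand-in numbers: kernel text spans [0x1000, 0x900000) and the
           * 64K kvm_tmp region now sits inside it at [0x500000, 0x510000). */
          unsigned long text_start = 0x1000,   text_end = 0x900000;
          unsigned long tmp_start  = 0x500000, tmp_end  = 0x510000;
          unsigned long va = 0x508000, step = 0x10000;  /* one mapping step */

          /* A step that overlaps kvm_tmp necessarily overlaps the kernel
           * text range too, so a separate kvm_tmp test adds nothing. */
          printf("overlaps kvm_tmp: %d, overlaps kernel text: %d\n",
                 ranges_overlap(va, va + step, tmp_start, tmp_end),
                 ranges_overlap(va, va + step, text_start, text_end));
          return 0;
  }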
From patchwork Wed Sep 11 11:57:45 2019
X-Patchwork-Submitter: Michael Ellerman
X-Patchwork-Id: 1160937
From: Michael Ellerman
To: linuxppc-dev@ozlabs.org
Cc: cai@lca.pw, kvm-ppc@vger.kernel.org
Subject: [PATCH 3/4] powerpc/kvm: Explicitly mark kvm guest code as __init
Date: Wed, 11 Sep 2019 21:57:45 +1000
Message-Id: <20190911115746.12433-3-mpe@ellerman.id.au>
In-Reply-To: <20190911115746.12433-1-mpe@ellerman.id.au>
References: <20190911115746.12433-1-mpe@ellerman.id.au>

All the code in kvm.c can be marked __init. Most of it is already
inlined into the initcall, but not all. So instead of relying on the
inlining, mark it all as __init. This saves ~280 bytes of text for my
configuration.
Signed-off-by: Michael Ellerman
---
 arch/powerpc/kernel/kvm.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
index e3b5aa583319..617eba82531c 100644
--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -68,13 +68,13 @@ extern char kvm_tmp[];
 extern char kvm_tmp_end[];
 static int kvm_tmp_index;
 
-static inline void kvm_patch_ins(u32 *inst, u32 new_inst)
+static void __init kvm_patch_ins(u32 *inst, u32 new_inst)
 {
 	*inst = new_inst;
 	flush_icache_range((ulong)inst, (ulong)inst + 4);
 }
 
-static void kvm_patch_ins_ll(u32 *inst, long addr, u32 rt)
+static void __init kvm_patch_ins_ll(u32 *inst, long addr, u32 rt)
 {
 #ifdef CONFIG_64BIT
 	kvm_patch_ins(inst, KVM_INST_LD | rt | (addr & 0x0000fffc));
@@ -83,7 +83,7 @@ static void kvm_patch_ins_ll(u32 *inst, long addr, u32 rt)
 #endif
 }
 
-static void kvm_patch_ins_ld(u32 *inst, long addr, u32 rt)
+static void __init kvm_patch_ins_ld(u32 *inst, long addr, u32 rt)
 {
 #ifdef CONFIG_64BIT
 	kvm_patch_ins(inst, KVM_INST_LD | rt | (addr & 0x0000fffc));
@@ -92,12 +92,12 @@ static void kvm_patch_ins_ld(u32 *inst, long addr, u32 rt)
 #endif
 }
 
-static void kvm_patch_ins_lwz(u32 *inst, long addr, u32 rt)
+static void __init kvm_patch_ins_lwz(u32 *inst, long addr, u32 rt)
 {
 	kvm_patch_ins(inst, KVM_INST_LWZ | rt | (addr & 0x0000ffff));
 }
 
-static void kvm_patch_ins_std(u32 *inst, long addr, u32 rt)
+static void __init kvm_patch_ins_std(u32 *inst, long addr, u32 rt)
 {
 #ifdef CONFIG_64BIT
 	kvm_patch_ins(inst, KVM_INST_STD | rt | (addr & 0x0000fffc));
@@ -106,17 +106,17 @@ static void kvm_patch_ins_std(u32 *inst, long addr, u32 rt)
 #endif
 }
 
-static void kvm_patch_ins_stw(u32 *inst, long addr, u32 rt)
+static void __init kvm_patch_ins_stw(u32 *inst, long addr, u32 rt)
 {
 	kvm_patch_ins(inst, KVM_INST_STW | rt | (addr & 0x0000fffc));
 }
 
-static void kvm_patch_ins_nop(u32 *inst)
+static void __init kvm_patch_ins_nop(u32 *inst)
 {
 	kvm_patch_ins(inst, KVM_INST_NOP);
 }
 
-static void kvm_patch_ins_b(u32 *inst, int addr)
+static void __init kvm_patch_ins_b(u32 *inst, int addr)
 {
 #if defined(CONFIG_RELOCATABLE) && defined(CONFIG_PPC_BOOK3S)
 	/* On relocatable kernels interrupts handlers and our code
@@ -129,7 +129,7 @@ static void kvm_patch_ins_b(u32 *inst, int addr)
 	kvm_patch_ins(inst, KVM_INST_B | (addr & KVM_INST_B_MASK));
 }
 
-static u32 *kvm_alloc(int len)
+static u32 * __init kvm_alloc(int len)
 {
 	u32 *p;
 
@@ -152,7 +152,7 @@ extern u32 kvm_emulate_mtmsrd_orig_ins_offs;
 extern u32 kvm_emulate_mtmsrd_len;
 extern u32 kvm_emulate_mtmsrd[];
 
-static void kvm_patch_ins_mtmsrd(u32 *inst, u32 rt)
+static void __init kvm_patch_ins_mtmsrd(u32 *inst, u32 rt)
 {
 	u32 *p;
 	int distance_start;
@@ -205,7 +205,7 @@ extern u32 kvm_emulate_mtmsr_orig_ins_offs;
 extern u32 kvm_emulate_mtmsr_len;
 extern u32 kvm_emulate_mtmsr[];
 
-static void kvm_patch_ins_mtmsr(u32 *inst, u32 rt)
+static void __init kvm_patch_ins_mtmsr(u32 *inst, u32 rt)
 {
 	u32 *p;
 	int distance_start;
@@ -266,7 +266,7 @@ extern u32 kvm_emulate_wrtee_orig_ins_offs;
 extern u32 kvm_emulate_wrtee_len;
 extern u32 kvm_emulate_wrtee[];
 
-static void kvm_patch_ins_wrtee(u32 *inst, u32 rt, int imm_one)
+static void __init kvm_patch_ins_wrtee(u32 *inst, u32 rt, int imm_one)
 {
 	u32 *p;
 	int distance_start;
@@ -323,7 +323,7 @@ extern u32 kvm_emulate_wrteei_0_branch_offs;
 extern u32 kvm_emulate_wrteei_0_len;
 extern u32 kvm_emulate_wrteei_0[];
 
-static void kvm_patch_ins_wrteei_0(u32 *inst)
+static void __init kvm_patch_ins_wrteei_0(u32 *inst)
 {
 	u32 *p;
 	int distance_start;
@@ -364,7 +364,7 @@ extern u32 kvm_emulate_mtsrin_orig_ins_offs;
 extern u32 kvm_emulate_mtsrin_len;
 extern u32 kvm_emulate_mtsrin[];
 
-static void kvm_patch_ins_mtsrin(u32 *inst, u32 rt, u32 rb)
+static void __init kvm_patch_ins_mtsrin(u32 *inst, u32 rt, u32 rb)
 {
 	u32 *p;
 	int distance_start;
@@ -400,7 +400,7 @@ static void kvm_patch_ins_mtsrin(u32 *inst, u32 rt, u32 rb)
 
 #endif
 
-static void kvm_map_magic_page(void *data)
+static void __init kvm_map_magic_page(void *data)
 {
 	u32 *features = data;
 
@@ -415,7 +415,7 @@ static void kvm_map_magic_page(void *data)
 	*features = out[0];
 }
 
-static void kvm_check_ins(u32 *inst, u32 features)
+static void __init kvm_check_ins(u32 *inst, u32 features)
 {
 	u32 _inst = *inst;
 	u32 inst_no_rt = _inst & ~KVM_MASK_RT;
@@ -659,7 +659,7 @@ static void kvm_check_ins(u32 *inst, u32 features)
 extern u32 kvm_template_start[];
 extern u32 kvm_template_end[];
 
-static void kvm_use_magic_page(void)
+static void __init kvm_use_magic_page(void)
 {
 	u32 *p;
 	u32 *start, *end;
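A simplified sketch (not the kernel's actual macro definition) of what __init
buys here: it is essentially a section attribute that places the function in
.init.text, which the kernel discards after boot, so every helper marked
above stops occupying text once kvm_guest_init() has run. my_init below is a
stand-in name, not the real annotation.

  /* Stand-in for the kernel's __init: put the function in a dedicated
   * section.  In the kernel, .init.text is freed after boot. */
  #define my_init __attribute__((__section__(".init.text")))

  static my_init void patch_insn(unsigned int *inst, unsigned int new_inst)
  {
          *inst = new_inst;       /* boot-time-only work */
  }

  int main(void)
  {
          unsigned int insn = 0x60000000; /* PowerPC nop (KVM_INST_NOP) */

          patch_insn(&insn, 0x48000000); /* overwrite with an unconditional branch */
          return insn == 0x48000000 ? 0 : 1;
  }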
From patchwork Wed Sep 11 11:57:46 2019
X-Patchwork-Submitter: Michael Ellerman
X-Patchwork-Id: 1160938
From: Michael Ellerman
To: linuxppc-dev@ozlabs.org
Cc: cai@lca.pw, kvm-ppc@vger.kernel.org
Subject: [PATCH 4/4] powerpc/kvm: Add ifdefs around template code
Date: Wed, 11 Sep 2019 21:57:46 +1000
Message-Id: <20190911115746.12433-4-mpe@ellerman.id.au>
In-Reply-To: <20190911115746.12433-1-mpe@ellerman.id.au>
References: <20190911115746.12433-1-mpe@ellerman.id.au>

Some of the templates used for KVM patching are only used on certain
platforms, but currently they are always built-in, fix that.

Signed-off-by: Michael Ellerman
---
 arch/powerpc/kernel/kvm_emul.S | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/powerpc/kernel/kvm_emul.S b/arch/powerpc/kernel/kvm_emul.S
index 9dd17dce10a1..7af6f8b50c5d 100644
--- a/arch/powerpc/kernel/kvm_emul.S
+++ b/arch/powerpc/kernel/kvm_emul.S
@@ -192,6 +192,8 @@ kvm_emulate_mtmsr_len:
 	.long (kvm_emulate_mtmsr_end - kvm_emulate_mtmsr) / 4
 
+#ifdef CONFIG_BOOKE
+
 /* also used for wrteei 1 */
 .global kvm_emulate_wrtee
 kvm_emulate_wrtee:
@@ -285,6 +287,10 @@ kvm_emulate_wrteei_0_len:
 	.long (kvm_emulate_wrteei_0_end - kvm_emulate_wrteei_0) / 4
 
+#endif /* CONFIG_BOOKE */
+
+#ifdef CONFIG_PPC_BOOK3S_32
+
 .global kvm_emulate_mtsrin
 kvm_emulate_mtsrin:
@@ -334,6 +340,8 @@ kvm_emulate_mtsrin_len:
 	.long (kvm_emulate_mtsrin_end - kvm_emulate_mtsrin) / 4
 
+#endif /* CONFIG_PPC_BOOK3S_32 */
+
 	.balign 4
 	.global kvm_tmp
 kvm_tmp:
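Kernel .S files such as kvm_emul.S are run through the C preprocessor, so the
new CONFIG_BOOKE / CONFIG_PPC_BOOK3S_32 guards simply drop the unused
templates from the assembled object. The same pattern expressed in C, as a
sketch with hypothetical names (build with -DCONFIG_BOOKE to take the first
branch):

  #include <stdio.h>

  /* Hypothetical example of the guard pattern: platform-only code is wrapped
   * in its Kconfig symbol so other configurations never carry it. */
  #ifdef CONFIG_BOOKE
  static const char *wrtee_template_status(void)
  {
          return "wrtee/wrteei templates built in";
  }
  #else
  static const char *wrtee_template_status(void)
  {
          return "wrtee/wrteei templates compiled out";
  }
  #endif

  int main(void)
  {
          puts(wrtee_template_status());
          return 0;
  }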