From patchwork Thu Jan 17 12:13:25 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Ellerman
X-Patchwork-Id: 1026588
From: Michael Ellerman
To: linuxppc-dev@ozlabs.org
Subject: [PATCH 1/4] powerpc/64s: Always set mmu_slb_size using slb_set_size()
Date: Thu, 17 Jan 2019 23:13:25 +1100
Message-Id: <20190117121328.13395-1-mpe@ellerman.id.au>
X-Mailer: git-send-email 2.20.1
Cc: npiggin@gmail.com

It's easier to reason about the code if we only set mmu_slb_size in
one place, so convert open-coded assignments to use slb_set_size().

Signed-off-by: Michael Ellerman
Reviewed-by: Aneesh Kumar K.V
---
 arch/powerpc/kernel/prom.c      | 2 +-
 arch/powerpc/mm/pgtable-radix.c | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 4181ec715f88..14693f8ccb80 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -238,7 +238,7 @@ static void __init init_mmu_slb_size(unsigned long node)
 		of_get_flat_dt_prop(node, "ibm,slb-size", NULL);
 
 	if (slb_size_ptr)
-		mmu_slb_size = be32_to_cpup(slb_size_ptr);
+		slb_set_size(be32_to_cpup(slb_size_ptr));
 }
 #else
 #define init_mmu_slb_size(node) do { } while(0)
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 931156069a81..949fbc96b237 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -328,7 +328,8 @@ void __init radix_init_pgtable(void)
 	struct memblock_region *reg;
 
 	/* We don't support slb for radix */
-	mmu_slb_size = 0;
+	slb_set_size(0);
+
 	/*
 	 * Create the linear mapping, using standard page size for now
 	 */

From patchwork Thu Jan 17 12:13:26 2019
X-Patchwork-Submitter: Michael Ellerman
X-Patchwork-Id: 1026590
From: Michael Ellerman
To: linuxppc-dev@ozlabs.org
Cc: npiggin@gmail.com
Subject: [PATCH 2/4] powerpc/64s: Add slb_full_bitmap rather than hard-coding U32_MAX
Date: Thu, 17 Jan 2019 23:13:26 +1100
Message-Id: <20190117121328.13395-2-mpe@ellerman.id.au>
In-Reply-To: <20190117121328.13395-1-mpe@ellerman.id.au>
References: <20190117121328.13395-1-mpe@ellerman.id.au>

The recent rewrite of the SLB code into C included the assumption that
all CPUs we run on have at least 32
SLB entries. This is currently true, but it's a bit fragile: the SLB
size is actually defined by the device tree, and so could theoretically
change at any time.

The assumption is encoded in the fact that we use U32_MAX as the value
for a full SLB bitmap. Instead, calculate what the full bitmap would be
based on the SLB size we're given, and store it. This still requires the
SLB size to be a power of 2.

Fixes: 126b11b294d1 ("powerpc/64s/hash: Add SLB allocation status bitmaps")
Signed-off-by: Michael Ellerman
Reviewed-by: Aneesh Kumar K.V
---
 arch/powerpc/mm/slb.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index bc3914d54e26..61450a9cf30d 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -506,9 +506,16 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
 	asm volatile("isync" : : : "memory");
 }
 
+static u32 slb_full_bitmap;
+
 void slb_set_size(u16 size)
 {
 	mmu_slb_size = size;
+
+	if (size >= 32)
+		slb_full_bitmap = U32_MAX;
+	else
+		slb_full_bitmap = (1ul << size) - 1;
 }
 
 void slb_initialize(void)
@@ -611,7 +618,7 @@ static enum slb_index alloc_slb_index(bool kernel)
 	 * POWER7/8/9 have 32 SLB entries, this could be expanded if a
 	 * future CPU has more.
 	 */
-	if (local_paca->slb_used_bitmap != U32_MAX) {
+	if (local_paca->slb_used_bitmap != slb_full_bitmap) {
 		index = ffz(local_paca->slb_used_bitmap);
 		local_paca->slb_used_bitmap |= 1U << index;
 		if (kernel)

From patchwork Thu Jan 17 12:13:27 2019
X-Patchwork-Submitter: Michael Ellerman
X-Patchwork-Id: 1026595
From: Michael Ellerman
To: linuxppc-dev@ozlabs.org
Subject: [PATCH 3/4] powerpc/64s: Move SLB init into hash_utils_64.c
Date: Thu,
 17 Jan 2019 23:13:27 +1100
Message-Id: <20190117121328.13395-3-mpe@ellerman.id.au>
In-Reply-To: <20190117121328.13395-1-mpe@ellerman.id.au>
References: <20190117121328.13395-1-mpe@ellerman.id.au>
Cc: npiggin@gmail.com

The SLB initialisation code is spread around a bit between prom.c and
hash_utils_64.c. Consolidate it all in hash_utils_64.c.

This slightly changes the timing of when mmu_slb_size is initialised,
but that should have no effect.

Signed-off-by: Michael Ellerman
Reviewed-by: Aneesh Kumar K.V
---
 arch/powerpc/kernel/prom.c      | 16 ----------------
 arch/powerpc/mm/hash_utils_64.c | 15 ++++++++++-----
 2 files changed, 10 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 14693f8ccb80..018ededd1948 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -229,21 +229,6 @@ static void __init check_cpu_pa_features(unsigned long node)
 		      ibm_pa_features, ARRAY_SIZE(ibm_pa_features));
 }
 
-#ifdef CONFIG_PPC_BOOK3S_64
-static void __init init_mmu_slb_size(unsigned long node)
-{
-	const __be32 *slb_size_ptr;
-
-	slb_size_ptr = of_get_flat_dt_prop(node, "slb-size", NULL) ? :
-			of_get_flat_dt_prop(node, "ibm,slb-size", NULL);
-
-	if (slb_size_ptr)
-		slb_set_size(be32_to_cpup(slb_size_ptr));
-}
-#else
-#define init_mmu_slb_size(node) do { } while(0)
-#endif
-
 static struct feature_property {
 	const char *name;
 	u32 min_value;
@@ -379,7 +364,6 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
 	}
 
 	identical_pvr_fixup(node);
-	init_mmu_slb_size(node);
 
 #ifdef CONFIG_PPC64
 	if (nthreads == 1)
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 4aa0797000f7..33ce76be17de 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -344,9 +344,8 @@ static int __init parse_disable_1tb_segments(char *p)
 }
 early_param("disable_1tb_segments", parse_disable_1tb_segments);
 
-static int __init htab_dt_scan_seg_sizes(unsigned long node,
-					 const char *uname, int depth,
-					 void *data)
+static int __init htab_dt_scan_slb(unsigned long node, const char *uname,
+				   int depth, void *data)
 {
 	const char *type = of_get_flat_dt_prop(node, "device_type", NULL);
 	const __be32 *prop;
@@ -356,6 +355,12 @@ static int __init htab_dt_scan_seg_sizes(unsigned long node,
 	if (type == NULL || strcmp(type, "cpu") != 0)
 		return 0;
 
+	prop = of_get_flat_dt_prop(node, "slb-size", NULL);
+	if (!prop)
+		prop = of_get_flat_dt_prop(node, "ibm,slb-size", NULL);
+	if (prop)
+		slb_set_size(be32_to_cpup(prop));
+
 	prop = of_get_flat_dt_prop(node, "ibm,processor-segment-sizes", &size);
 	if (prop == NULL)
 		return 0;
@@ -954,8 +959,8 @@ static void __init htab_initialize(void)
 
 void __init hash__early_init_devtree(void)
 {
-	/* Initialize segment sizes */
-	of_scan_flat_dt(htab_dt_scan_seg_sizes, NULL);
+	/* Initialize SLB size and segment sizes */
+	of_scan_flat_dt(htab_dt_scan_slb, NULL);
 
 	/* Initialize page sizes */
 	htab_scan_page_sizes();

From patchwork Thu Jan 17 12:13:28 2019
X-Patchwork-Submitter: Michael Ellerman
X-Patchwork-Id:
 1026591
From: Michael Ellerman
To: linuxppc-dev@ozlabs.org
Subject: [PATCH 4/4] powerpc/64s: Support shrinking the SLB for debugging
Date: Thu, 17 Jan 2019 23:13:28 +1100
Message-Id: <20190117121328.13395-4-mpe@ellerman.id.au>
In-Reply-To: <20190117121328.13395-1-mpe@ellerman.id.au>
References: <20190117121328.13395-1-mpe@ellerman.id.au>
Cc: npiggin@gmail.com

On machines with 1TB segments and a 32-entry SLB it's quite hard to
cause sufficient SLB pressure to trigger bugs caused by badly timed
SLB faults. We have seen this in the past, and a few years ago added
the disable_1tb_segments command line option to force the use of 256MB
segments. However even this allows some bugs to slip through testing
if the SLB entry in question was recently accessed.

So add a new command line parameter for debugging which shrinks the
SLB to the minimum size we can support. Currently that size is 3:
two bolted SLB entries and one for dynamic use. This creates the
maximal SLB pressure while still allowing the kernel to operate.

Signed-off-by: Michael Ellerman
Reviewed-by: Aneesh Kumar K.V
---
 arch/powerpc/mm/slb.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 61450a9cf30d..0f33e28f97da 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -506,10 +506,24 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
 	asm volatile("isync" : : : "memory");
 }
 
+static bool shrink_slb = false;
+
+static int __init parse_shrink_slb(char *p)
+{
+	shrink_slb = true;
+	slb_set_size(0);
+
+	return 0;
+}
+early_param("shrink_slb", parse_shrink_slb);
+
 static u32 slb_full_bitmap;
 
 void slb_set_size(u16 size)
 {
+	if (shrink_slb)
+		size = SLB_NUM_BOLTED + 1;
+
 	mmu_slb_size = size;
 
 	if (size >= 32)
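Taken together, patches 2 and 4 boil down to a small piece of bitmap
arithmetic. The sketch below is plain userspace C, not the kernel code:
ffz() is approximated with __builtin_ctz() on the complemented word, the
SLB_NUM_BOLTED value of 2 is assumed from the cover text ("two bolted
SLB entries"), and the round-robin fallback the kernel uses when the
bitmap is full is reduced to a -1 return. It shows why sizes of 32 and
up must be special-cased: on a 32-bit mask, `(1ul << 32) - 1` would
shift by the full width of the type.

```c
#include <assert.h>
#include <stdint.h>

#define SLB_NUM_BOLTED 2	/* assumed value, per the cover text */

static uint32_t slb_full_bitmap;
static uint16_t mmu_slb_size;
static int shrink_slb;		/* set by the early param in patch 4 */

/* Mirror of slb_set_size(): derive the "all entries in use" mask. */
static void slb_set_size(uint16_t size)
{
	if (shrink_slb)
		size = SLB_NUM_BOLTED + 1;	/* minimum usable SLB */

	mmu_slb_size = size;

	/*
	 * A shift by 32 on a 32-bit value is undefined, so sizes of
	 * 32 or more saturate to an all-ones mask, as before.
	 */
	if (size >= 32)
		slb_full_bitmap = UINT32_MAX;
	else
		slb_full_bitmap = (1UL << size) - 1;
}

/*
 * ffz-style allocation: find the first zero bit and mark it used.
 * Returns -1 when the SLB is full (the kernel round-robins instead).
 */
static int alloc_slb_index(uint32_t *used)
{
	int index;

	if (*used == slb_full_bitmap)
		return -1;

	index = __builtin_ctz(~*used);	/* first zero bit, like ffz() */
	*used |= 1U << index;
	return index;
}
```

With shrink_slb set, every later slb_set_size() call is clamped to
SLB_NUM_BOLTED + 1 = 3 entries, so the allocator has a single non-bolted
slot to churn through, which is exactly the SLB pressure the patch is
after.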