From patchwork Wed Nov 14 17:03:19 2018
X-Patchwork-Submitter: Laurent Vivier
X-Patchwork-Id: 997835
From: Laurent Vivier <lvivier@redhat.com>
To: Michael Ellerman
Cc: Laurent Vivier, Satheesh Rajendran, linux-kernel@vger.kernel.org,
    Michael Bringmann, Nathan Fontenot, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH] powerpc/numa: fix hot-added CPU on memory-less node
Date: Wed, 14 Nov 2018 18:03:19 +0100
Message-Id: <20181114170319.24828-1-lvivier@redhat.com>

Trying to hotplug a CPU on an empty NUMA node (one with no memory and
no CPUs) crashes the kernel when the CPU is onlined.

During the onlining process, the kernel calls start_secondary(), which
ends by calling
set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu])). That call
chain relies on NODE_DATA(nid)->node_zonelists, and in our case
NODE_DATA(nid) is NULL.

To fix this, add the same check we already have in
find_and_online_cpu_nid(): if NODE_DATA() is NULL, use the first
online node.
Bug: https://github.com/linuxppc/linux/issues/184
Fixes: ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6 ("powerpc/numa: Ensure nodes initialized for hotplug")
Signed-off-by: Laurent Vivier
---
 arch/powerpc/mm/numa.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 3a048e98a132..1b2d25a3c984 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -483,6 +483,15 @@ static int numa_setup_cpu(unsigned long lcpu)
 	if (nid < 0 || !node_possible(nid))
 		nid = first_online_node;
 
+	if (NODE_DATA(nid) == NULL) {
+		/*
+		 * Default to using the nearest node that has memory installed.
+		 * Otherwise, it would be necessary to patch the kernel MM code
+		 * to deal with more memoryless-node error conditions.
+		 */
+		nid = first_online_node;
+	}
+
 	map_cpu_to_node(lcpu, nid);
 	of_node_put(cpu);
 out:
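
For readers less familiar with the MM side, here is a small stand-alone C
sketch of the failure mode and of the fallback the patch adds. It is not
kernel code: pglist_data, NODE_DATA(), first_online_node and
pick_nid_for_cpu() below are simplified stand-ins used only to illustrate
why a NULL NODE_DATA() for an empty node is fatal and how defaulting to the
first online node avoids the bad dereference.

/* Stand-alone illustration; names are simplified stand-ins, not kernel APIs. */
#include <stdio.h>
#include <stddef.h>

struct pglist_data {
	int node_zonelists;		/* stand-in for the real zonelist array */
};

#define MAX_NUMNODES	4
static struct pglist_data *node_data[MAX_NUMNODES];
#define NODE_DATA(nid)	(node_data[(nid)])

static const int first_online_node = 0;

/* Pick a node for a hot-added CPU, mirroring the check added by the patch. */
static int pick_nid_for_cpu(int nid)
{
	if (nid < 0 || nid >= MAX_NUMNODES)
		nid = first_online_node;

	/* Empty (memory-less) node: no pglist_data was ever allocated for it. */
	if (NODE_DATA(nid) == NULL)
		nid = first_online_node;

	return nid;
}

int main(void)
{
	static struct pglist_data node0 = { .node_zonelists = 1 };

	node_data[0] = &node0;	/* node 0 is online and has memory */
	/* node 3 models the empty node: node_data[3] stays NULL */

	int nid = pick_nid_for_cpu(3);

	/*
	 * Without the NULL check above, nid would still be 3 here and the
	 * dereference below would be the userspace equivalent of the oops
	 * hit via local_memory_node() during CPU onlining.
	 */
	printf("CPU mapped to node %d (node_zonelists = %d)\n",
	       nid, NODE_DATA(nid)->node_zonelists);
	return 0;
}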