From patchwork Thu Sep 12 10:15:27 2019
Subject: [PATCH v3 1/8] arm64: numa: make node_to_cpumask_map() NUMA_NO_NODE aware for arm64
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 1161492
Date: Thu, 12 Sep 2019 18:15:27 +0800
Message-ID: <1568283334-178380-2-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1568283334-178380-1-git-send-email-linyunsheng@huawei.com>

When the return value of dev_to_node() is passed to cpumask_of_node()
without checking whether the node id is NUMA_NO_NODE, KASAN detects a
global-out-of-bounds access.

From the discussion in [1], NUMA_NO_NODE really means no node affinity,
which also means all cpus should be usable. So cpumask_of_node() should
always return all online cpus when the caller passes NUMA_NO_NODE,
similar to the way the page allocator handles NUMA_NO_NODE.

But we cannot simply copy the page allocator logic, because the page
allocator does not enforce node affinity: it only picks the node as a
preferred one and is then free to fall back to any other numa node.
That is not the case here; node_to_cpumask_map() restricts callers to
the particular node's cpus, which would give really non-deterministic
behavior depending on where the code is executed. So we really do want
to return cpu_online_mask for NUMA_NO_NODE.

There is also a debugging version of node_to_cpumask_map(), used only
when CONFIG_DEBUG_PER_CPU_MAPS is defined; this patch changes it to
handle NUMA_NO_NODE the same way as the normal node_to_cpumask_map().
It also "fixes" a sign "bug", since the check is there for debugging
and should catch all the error cases, including negative node ids.
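To illustrate the failure mode, here is a minimal sketch of the caller
pattern that triggers the report. hypothetical_set_irq_affinity() is a
made-up helper; dev_to_node(), cpumask_of_node() and
irq_set_affinity_hint() are the real kernel APIs involved:

	/* A device without firmware-described NUMA affinity reports
	 * dev_to_node(dev) == NUMA_NO_NODE (-1).  Without the check
	 * added by this patch, cpumask_of_node(-1) reads
	 * node_to_cpumask_map[-1], which is the global out-of-bounds
	 * access that KASAN reports.
	 */
	static void hypothetical_set_irq_affinity(struct device *dev, int irq)
	{
		int node = dev_to_node(dev);	/* may be NUMA_NO_NODE */

		/* After this patch, NUMA_NO_NODE yields cpu_online_mask. */
		irq_set_affinity_hint(irq, cpumask_of_node(node));
	}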
[1] https://lore.kernel.org/patchwork/patch/1125789/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Michal Hocko
---
V3: Change to only handle NUMA_NO_NODE, and return cpu_online_mask for
the NUMA_NO_NODE case, and change the commit log to better justify the
change.
---
 arch/arm64/include/asm/numa.h | 3 +++
 arch/arm64/mm/numa.c          | 5 ++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/numa.h b/arch/arm64/include/asm/numa.h
index 626ad01..c8a4b31 100644
--- a/arch/arm64/include/asm/numa.h
+++ b/arch/arm64/include/asm/numa.h
@@ -25,6 +25,9 @@ const struct cpumask *cpumask_of_node(int node);
 /* Returns a pointer to the cpumask of CPUs on Node 'node'. */
 static inline const struct cpumask *cpumask_of_node(int node)
 {
+	if (node == NUMA_NO_NODE)
+		return cpu_online_mask;
+
 	return node_to_cpumask_map[node];
 }
 #endif
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index 4f241cc..bef4bdd 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -46,7 +46,10 @@ EXPORT_SYMBOL(node_to_cpumask_map);
  */
 const struct cpumask *cpumask_of_node(int node)
 {
-	if (WARN_ON(node >= nr_node_ids))
+	if (node == NUMA_NO_NODE)
+		return cpu_online_mask;
+
+	if (WARN_ON((unsigned int)node >= nr_node_ids))
 		return cpu_none_mask;
 
 	if (WARN_ON(node_to_cpumask_map[node] == NULL))
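The sign "fix" in the hunk above works because casting a negative node
id to unsigned int turns it into a very large value, so a single
unsigned comparison catches both negative ids and ids beyond
nr_node_ids. A standalone userspace demonstration of the arithmetic
(the value 4 for nr_node_ids is just an example):

	#include <stdio.h>

	int main(void)
	{
		int node = -2;			/* bogus id, not NUMA_NO_NODE */
		unsigned int nr_node_ids = 4;	/* example value */

		/* signed: -2 >= 4 is false, the bogus id slips through */
		printf("%d\n", node >= (int)nr_node_ids);
		/* unsigned: (unsigned int)-2 == 4294967294 >= 4, caught */
		printf("%d\n", (unsigned int)node >= nr_node_ids);
		return 0;
	}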

From patchwork Thu Sep 12 10:15:28 2019
Subject: [PATCH v3 2/8] x86: numa: make node_to_cpumask_map() NUMA_NO_NODE aware for x86
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 1161501
Date: Thu, 12 Sep 2019 18:15:28 +0800
Message-ID: <1568283334-178380-3-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1568283334-178380-1-git-send-email-linyunsheng@huawei.com>

When the return value of dev_to_node() is passed to cpumask_of_node()
without checking whether the node id is NUMA_NO_NODE, KASAN detects a
global-out-of-bounds access.

From the discussion in [1], NUMA_NO_NODE really means no node affinity,
which also means all cpus should be usable. So cpumask_of_node() should
always return all online cpus when the caller passes NUMA_NO_NODE,
similar to the way the page allocator handles NUMA_NO_NODE.

But we cannot simply copy the page allocator logic, because the page
allocator does not enforce node affinity: it only picks the node as a
preferred one and is then free to fall back to any other numa node.
That is not the case here; node_to_cpumask_map() restricts callers to
the particular node's cpus, which would give really non-deterministic
behavior depending on where the code is executed. So we really do want
to return cpu_online_mask for NUMA_NO_NODE.

There is also a debugging version of node_to_cpumask_map(), used only
when CONFIG_DEBUG_PER_CPU_MAPS is defined; this patch changes it to
handle NUMA_NO_NODE the same way as the normal node_to_cpumask_map().
It also "fixes" a sign "bug", since the check is there for debugging
and should catch all the error cases, including negative node ids.

[1] https://lore.kernel.org/patchwork/patch/1125789/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Michal Hocko
---
V3: Change to only handle NUMA_NO_NODE, and return cpu_online_mask for
the NUMA_NO_NODE case, and change the commit log to better justify the
change.
---
 arch/x86/include/asm/topology.h | 3 +++
 arch/x86/mm/numa.c              | 7 +++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 4b14d23..7fa82e1 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -69,6 +69,9 @@ extern const struct cpumask *cpumask_of_node(int node);
 /* Returns a pointer to the cpumask of CPUs on Node 'node'. */
 static inline const struct cpumask *cpumask_of_node(int node)
 {
+	if (node == NUMA_NO_NODE)
+		return cpu_online_mask;
+
 	return node_to_cpumask_map[node];
 }
 #endif
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index e6dad60..c676ffb 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -861,9 +861,12 @@ void numa_remove_cpu(int cpu)
  */
 const struct cpumask *cpumask_of_node(int node)
 {
-	if (node >= nr_node_ids) {
+	if (node == NUMA_NO_NODE)
+		return cpu_online_mask;
+
+	if ((unsigned int)node >= nr_node_ids) {
 		printk(KERN_WARNING
-			"cpumask_of_node(%d): node > nr_node_ids(%u)\n",
+			"cpumask_of_node(%d): node >= nr_node_ids(%u)\n",
 			node, nr_node_ids);
 		dump_stack();
 		return cpu_none_mask;
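The check has to be added in two places because the header selects
between two implementations: the diff context above shows both an
extern declaration and an inline definition of cpumask_of_node(). A
simplified sketch of that arrangement, inferred from the context lines
(the real header may differ in detail):

	#ifdef CONFIG_DEBUG_PER_CPU_MAPS
	/* checked, out-of-line version in mm/numa.c */
	extern const struct cpumask *cpumask_of_node(int node);
	#else
	/* fast inline version, no range checking */
	static inline const struct cpumask *cpumask_of_node(int node)
	{
		if (node == NUMA_NO_NODE)
			return cpu_online_mask;
		return node_to_cpumask_map[node];
	}
	#endif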

From patchwork Thu Sep 12 10:15:29 2019
Subject: [PATCH v3 3/8] alpha: numa: make node_to_cpumask_map() NUMA_NO_NODE aware for alpha
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 1161497
Date: Thu, 12 Sep 2019 18:15:29 +0800
Message-ID: <1568283334-178380-4-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1568283334-178380-1-git-send-email-linyunsheng@huawei.com>

When the return value of dev_to_node() is passed to cpumask_of_node()
without checking whether the node id is NUMA_NO_NODE, KASAN detects a
global-out-of-bounds access.

From the discussion in [1], NUMA_NO_NODE really means no node affinity,
which also means all cpus should be usable. So cpumask_of_node() should
always return all online cpus when the caller passes NUMA_NO_NODE,
similar to the way the page allocator handles NUMA_NO_NODE.

But we cannot simply copy the page allocator logic, because the page
allocator does not enforce node affinity: it only picks the node as a
preferred one and is then free to fall back to any other numa node.
That is not the case here; node_to_cpumask_map() restricts callers to
the particular node's cpus, which would give really non-deterministic
behavior depending on where the code is executed. So we really do want
to return cpu_online_mask for NUMA_NO_NODE.

Since this arch was already NUMA_NO_NODE aware, this patch only changes
it to return cpu_online_mask.

[1] https://lore.kernel.org/patchwork/patch/1125789/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Michal Hocko
---
V3: Change to only handle NUMA_NO_NODE, and return cpu_online_mask for
the NUMA_NO_NODE case, and change the commit log to better justify the
change.
---
 arch/alpha/include/asm/topology.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/alpha/include/asm/topology.h b/arch/alpha/include/asm/topology.h
index 5a77a40..836c9e2 100644
--- a/arch/alpha/include/asm/topology.h
+++ b/arch/alpha/include/asm/topology.h
@@ -31,7 +31,7 @@ static const struct cpumask *cpumask_of_node(int node)
 	int cpu;
 
 	if (node == NUMA_NO_NODE)
-		return cpu_all_mask;
+		return cpu_online_mask;
 
 	cpumask_clear(&node_to_cpumask_map[node]);

From patchwork Thu Sep 12 10:15:30 2019
Subject: [PATCH v3 4/8] powerpc: numa: make node_to_cpumask_map() NUMA_NO_NODE aware for powerpc
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 1161498
Date: Thu, 12 Sep 2019 18:15:30 +0800
Message-ID: <1568283334-178380-5-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1568283334-178380-1-git-send-email-linyunsheng@huawei.com>

When the return value of dev_to_node() is passed to cpumask_of_node()
without checking whether the node id is NUMA_NO_NODE, KASAN detects a
global-out-of-bounds access.

From the discussion in [1], NUMA_NO_NODE really means no node affinity,
which also means all cpus should be usable. So cpumask_of_node() should
always return all online cpus when the caller passes NUMA_NO_NODE,
similar to the way the page allocator handles NUMA_NO_NODE.

But we cannot simply copy the page allocator logic, because the page
allocator does not enforce node affinity: it only picks the node as a
preferred one and is then free to fall back to any other numa node.
That is not the case here; node_to_cpumask_map() restricts callers to
the particular node's cpus, which would give really non-deterministic
behavior depending on where the code is executed. So we really do want
to return cpu_online_mask for NUMA_NO_NODE.

Since this arch was already NUMA_NO_NODE aware, this patch only changes
it to return cpu_online_mask and to use NUMA_NO_NODE instead of "-1".

[1] https://lore.kernel.org/patchwork/patch/1125789/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Michal Hocko
---
V3: Change to only handle NUMA_NO_NODE, and return cpu_online_mask for
the NUMA_NO_NODE case, and change the commit log to better justify the
change.
---
 arch/powerpc/include/asm/topology.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
index 2f7e1ea..107f5cd 100644
--- a/arch/powerpc/include/asm/topology.h
+++ b/arch/powerpc/include/asm/topology.h
@@ -17,8 +17,8 @@ struct device_node;
 
 #include <asm/mmzone.h>
 
-#define cpumask_of_node(node) ((node) == -1 ?			\
-			       cpu_all_mask :			\
+#define cpumask_of_node(node) ((node) == NUMA_NO_NODE ?		\
+			       cpu_online_mask :		\
 			       node_to_cpumask_map[node])
 
 struct pci_bus;
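Several of these platforms previously returned cpu_all_mask rather than
cpu_online_mask for the no-node case. The distinction matters because
callers often pick a cpu directly out of the returned mask, and
cpu_all_mask can contain cpus that are possible but not online. A
minimal sketch (hypothetical_pick_cpu() is a made-up helper;
cpumask_first(), cpumask_of_node() and dev_to_node() are real APIs):

	/* With cpu_all_mask this could return a cpu that is not
	 * actually online; with cpu_online_mask the result is always
	 * a usable cpu.
	 */
	static int hypothetical_pick_cpu(struct device *dev)
	{
		return cpumask_first(cpumask_of_node(dev_to_node(dev)));
	}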

From patchwork Thu Sep 12 10:15:31 2019
Subject: [PATCH v3 5/8] s390: numa: make node_to_cpumask_map() NUMA_NO_NODE aware for s390
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 1161495
Date: Thu, 12 Sep 2019 18:15:31 +0800
Message-ID: <1568283334-178380-6-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1568283334-178380-1-git-send-email-linyunsheng@huawei.com>

When the return value of dev_to_node() is passed to cpumask_of_node()
without checking whether the node id is NUMA_NO_NODE, KASAN detects a
global-out-of-bounds access.

From the discussion in [1], NUMA_NO_NODE really means no node affinity,
which also means all cpus should be usable. So cpumask_of_node() should
always return all online cpus when the caller passes NUMA_NO_NODE,
similar to the way the page allocator handles NUMA_NO_NODE.

But we cannot simply copy the page allocator logic, because the page
allocator does not enforce node affinity: it only picks the node as a
preferred one and is then free to fall back to any other numa node.
That is not the case here; node_to_cpumask_map() restricts callers to
the particular node's cpus, which would give really non-deterministic
behavior depending on where the code is executed. So we really do want
to return cpu_online_mask for NUMA_NO_NODE.

[1] https://lore.kernel.org/patchwork/patch/1125789/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Michal Hocko
---
V3: Change to only handle NUMA_NO_NODE, and return cpu_online_mask for
the NUMA_NO_NODE case, and change the commit log to better justify the
change.
---
 arch/s390/include/asm/topology.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/s390/include/asm/topology.h b/arch/s390/include/asm/topology.h
index cca406f..1bd2e73 100644
--- a/arch/s390/include/asm/topology.h
+++ b/arch/s390/include/asm/topology.h
@@ -78,6 +78,9 @@ static inline int cpu_to_node(int cpu)
 #define cpumask_of_node cpumask_of_node
 static inline const struct cpumask *cpumask_of_node(int node)
 {
+	if (node == NUMA_NO_NODE)
+		return cpu_online_mask;
+
 	return &node_to_cpumask_map[node];
 }

From patchwork Thu Sep 12 10:15:32 2019
Subject: [PATCH v3 6/8] sparc64: numa: make node_to_cpumask_map() NUMA_NO_NODE aware for sparc64
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 1161496
Date: Thu, 12 Sep 2019 18:15:32 +0800
Message-ID: <1568283334-178380-7-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1568283334-178380-1-git-send-email-linyunsheng@huawei.com>

When the return value of dev_to_node() is passed to cpumask_of_node()
without checking whether the node id is NUMA_NO_NODE, KASAN detects a
global-out-of-bounds access.

From the discussion in [1], NUMA_NO_NODE really means no node affinity,
which also means all cpus should be usable. So cpumask_of_node() should
always return all online cpus when the caller passes NUMA_NO_NODE,
similar to the way the page allocator handles NUMA_NO_NODE.

But we cannot simply copy the page allocator logic, because the page
allocator does not enforce node affinity: it only picks the node as a
preferred one and is then free to fall back to any other numa node.
That is not the case here; node_to_cpumask_map() restricts callers to
the particular node's cpus, which would give really non-deterministic
behavior depending on where the code is executed. So we really do want
to return cpu_online_mask for NUMA_NO_NODE.

Since this arch was already NUMA_NO_NODE aware, this patch only changes
it to return cpu_online_mask and to use NUMA_NO_NODE instead of "-1".

[1] https://lore.kernel.org/patchwork/patch/1125789/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Michal Hocko
---
V3: Change to only handle NUMA_NO_NODE, and return cpu_online_mask for
the NUMA_NO_NODE case, and change the commit log to better justify the
change.
---
 arch/sparc/include/asm/topology_64.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/include/asm/topology_64.h b/arch/sparc/include/asm/topology_64.h
index 34c628a..34f9240 100644
--- a/arch/sparc/include/asm/topology_64.h
+++ b/arch/sparc/include/asm/topology_64.h
@@ -11,8 +11,8 @@ static inline int cpu_to_node(int cpu)
 	return numa_cpu_lookup_table[cpu];
 }
 
-#define cpumask_of_node(node) ((node) == -1 ?			\
-	cpu_all_mask :						\
+#define cpumask_of_node(node) ((node) == NUMA_NO_NODE ?		\
+	cpu_online_mask :					\
 	&numa_cpumask_lookup_table[node])
 
 struct pci_bus;

From patchwork Thu Sep 12 10:15:33 2019
Subject: [PATCH v3 7/8] mips: numa: make node_to_cpumask_map() NUMA_NO_NODE aware for mips
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 1161499
Date: Thu, 12 Sep 2019 18:15:33 +0800
Message-ID: <1568283334-178380-8-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1568283334-178380-1-git-send-email-linyunsheng@huawei.com>

When the return value of dev_to_node() is passed to cpumask_of_node()
without checking whether the node id is NUMA_NO_NODE, KASAN detects a
global-out-of-bounds access.

From the discussion in [1], NUMA_NO_NODE really means no node affinity,
which also means all cpus should be usable. So cpumask_of_node() should
always return all online cpus when the caller passes NUMA_NO_NODE,
similar to the way the page allocator handles NUMA_NO_NODE.

But we cannot simply copy the page allocator logic, because the page
allocator does not enforce node affinity: it only picks the node as a
preferred one and is then free to fall back to any other numa node.
That is not the case here; node_to_cpumask_map() restricts callers to
the particular node's cpus, which would give really non-deterministic
behavior depending on where the code is executed. So we really do want
to return cpu_online_mask for NUMA_NO_NODE.

Since this arch was already NUMA_NO_NODE aware, this patch only changes
it to return cpu_online_mask and to use NUMA_NO_NODE instead of "-1".
[1] https://lore.kernel.org/patchwork/patch/1125789/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Michal Hocko
---
V3: Change to only handle NUMA_NO_NODE, and return cpu_online_mask for
the NUMA_NO_NODE case, and change the commit log to better justify the
change.
---
 arch/mips/include/asm/mach-ip27/topology.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/mips/include/asm/mach-ip27/topology.h b/arch/mips/include/asm/mach-ip27/topology.h
index 965f079..04505e6 100644
--- a/arch/mips/include/asm/mach-ip27/topology.h
+++ b/arch/mips/include/asm/mach-ip27/topology.h
@@ -15,8 +15,8 @@ struct cpuinfo_ip27 {
 extern struct cpuinfo_ip27 sn_cpu_info[NR_CPUS];
 
 #define cpu_to_node(cpu)	(sn_cpu_info[(cpu)].p_nodeid)
-#define cpumask_of_node(node)	((node) == -1 ?			\
-				 cpu_all_mask :			\
+#define cpumask_of_node(node)	((node) == NUMA_NO_NODE ?	\
+				 cpu_online_mask :		\
 				 &hub_data(node)->h_cpus)
 struct pci_bus;
 extern int pcibus_to_node(struct pci_bus *);

From patchwork Thu Sep 12 10:15:34 2019
Subject: [PATCH v3 8/8] mips: numa: make node_to_cpumask_map() NUMA_NO_NODE aware for loongson64
From: Yunsheng Lin <linyunsheng@huawei.com>
X-Patchwork-Id: 1161500
Date: Thu, 12 Sep 2019 18:15:34 +0800
Message-ID: <1568283334-178380-9-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1568283334-178380-1-git-send-email-linyunsheng@huawei.com>

When the return value of dev_to_node() is passed to cpumask_of_node()
without checking whether the node id is NUMA_NO_NODE, KASAN detects a
global-out-of-bounds access.

From the discussion in [1], NUMA_NO_NODE really means no node affinity,
which also means all cpus should be usable.
So cpumask_of_node() should always return all online cpus when the
caller passes NUMA_NO_NODE, similar to the way the page allocator
handles NUMA_NO_NODE.

But we cannot simply copy the page allocator logic, because the page
allocator does not enforce node affinity: it only picks the node as a
preferred one and is then free to fall back to any other numa node.
That is not the case here; node_to_cpumask_map() restricts callers to
the particular node's cpus, which would give really non-deterministic
behavior depending on where the code is executed. So we really do want
to return cpu_online_mask for NUMA_NO_NODE.

[1] https://lore.kernel.org/patchwork/patch/1125789/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Suggested-by: Michal Hocko
---
V3: Change to only handle NUMA_NO_NODE, and return cpu_online_mask for
the NUMA_NO_NODE case, and change the commit log to better justify the
change.
---
 arch/mips/include/asm/mach-loongson64/topology.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/mips/include/asm/mach-loongson64/topology.h b/arch/mips/include/asm/mach-loongson64/topology.h
index 7ff819a..2207e2e 100644
--- a/arch/mips/include/asm/mach-loongson64/topology.h
+++ b/arch/mips/include/asm/mach-loongson64/topology.h
@@ -5,7 +5,9 @@
 #ifdef CONFIG_NUMA
 
 #define cpu_to_node(cpu)	(cpu_logical_map(cpu) >> 2)
-#define cpumask_of_node(node)	(&__node_data[(node)]->cpumask)
+#define cpumask_of_node(node)	((node) == NUMA_NO_NODE ?	\
+				 cpu_online_mask :		\
+				 &__node_data[(node)]->cpumask)
 
 struct pci_bus;
 extern int pcibus_to_node(struct pci_bus *);
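Because cpumask_of_node() expands to a ternary expression on these
platforms, the whole macro body must stay wrapped in a single balanced
pair of parentheses so it composes safely inside larger expressions. A
minimal usage sketch (the loop body is hypothetical; for_each_cpu(),
cpumask_of_node() and dev_to_node() are real kernel APIs):

	int cpu;
	const struct cpumask *mask = cpumask_of_node(dev_to_node(dev));

	/* Iterates the node's cpus, or all online cpus when the
	 * device has no NUMA affinity (NUMA_NO_NODE).
	 */
	for_each_cpu(cpu, mask) {
		/* e.g. spread queues or irqs across these cpus */
	}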