From patchwork Mon Aug 6 16:22:44 2018
X-Patchwork-Submitter: Gautham R Shenoy
X-Patchwork-Id: 953982
Shenoy" To: Michael Ellerman , Benjamin Herrenschmidt , Michael Neuling , Vaidyanathan Srinivasan , Akshay Adiga , Shilpasri G Bhat , "Oliver O'Halloran" , Nicholas Piggin , Murilo Opsfelder Araujo , Anton Blanchard Subject: [PATCH v5 1/2] powerpc: Detect the presence of big-cores via "ibm, thread-groups" Date: Mon, 6 Aug 2018 21:52:44 +0530 X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1533572565-17357-1-git-send-email-ego@linux.vnet.ibm.com> References: <1533572565-17357-1-git-send-email-ego@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 18080616-2213-0000-0000-000002D614AA X-IBM-SpamModules-Scores: X-IBM-SpamModules-Versions: BY=3.00009495; HX=3.00000241; KW=3.00000007; PH=3.00000004; SC=3.00000266; SDB=6.01070374; UDB=6.00550830; IPR=6.00849588; MB=3.00022540; MTD=3.00000008; XFM=3.00000015; UTC=2018-08-06 16:23:02 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 18080616-2214-0000-0000-00005B1B57C7 Message-Id: <1533572565-17357-2-git-send-email-ego@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2018-08-06_08:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 malwarescore=0 suspectscore=0 phishscore=0 bulkscore=0 spamscore=0 clxscore=1015 lowpriorityscore=0 mlxscore=0 impostorscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1807170000 definitions=main-1808060169 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.27 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "Gautham R. Shenoy" , linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: "Gautham R. Shenoy" On IBM POWER9, the device tree exposes a property array identifed by "ibm,thread-groups" which will indicate which groups of threads share a particular set of resources. As of today we only have one form of grouping identifying the group of threads in the core that share the L1 cache, translation cache and instruction data flow. This patch defines the helper function to parse the contents of "ibm,thread-groups" and a new structure to contain the parsed output. The patch also creates the sysfs file named "small_core_siblings" that returns the physical ids of the threads in the core that share the L1 cache, translation cache and instruction data flow. Signed-off-by: Gautham R. 
Signed-off-by: Gautham R. Shenoy
---
 Documentation/ABI/testing/sysfs-devices-system-cpu |   8 ++
 arch/powerpc/include/asm/cputhreads.h              |  22 +++
 arch/powerpc/kernel/setup-common.c                 | 154 +++++++++++++++++++++
 arch/powerpc/kernel/sysfs.c                        |  35 +++++
 4 files changed, 219 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 9c5e7732..52c9b50 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -487,3 +487,11 @@ Description:	Information about CPU vulnerabilities
 		"Not affected"	  CPU is not affected by the vulnerability
 		"Vulnerable"	  CPU is affected and no mitigation in effect
 		"Mitigation: $M"  CPU is affected and mitigation $M is in effect
+
+What:		/sys/devices/system/cpu/cpu[0-9]+/small_core_siblings
+Date:		06-Aug-2018
+KernelVersion:	v4.19.0
+Contact:	Linux for PowerPC mailing list
+Description:	List of Physical ids of CPUs which share the L1 cache,
+		translation cache and instruction data-flow with this CPU.
+Values:		Comma separated list of decimal integers.
diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
index d71a909..33226d7 100644
--- a/arch/powerpc/include/asm/cputhreads.h
+++ b/arch/powerpc/include/asm/cputhreads.h
@@ -23,11 +23,13 @@
 extern int threads_per_core;
 extern int threads_per_subcore;
 extern int threads_shift;
+extern bool has_big_cores;
 extern cpumask_t threads_core_mask;
 #else
 #define threads_per_core	1
 #define threads_per_subcore	1
 #define threads_shift		0
+#define has_big_cores		0
 #define threads_core_mask	(*get_cpu_mask(0))
 #endif

@@ -69,12 +71,32 @@ static inline cpumask_t cpu_online_cores_map(void)
 	return cpu_thread_mask_to_cores(cpu_online_mask);
 }

+#define MAX_THREAD_LIST_SIZE	8
+struct thread_groups {
+	unsigned int property;
+	unsigned int nr_groups;
+	unsigned int threads_per_group;
+	unsigned int thread_list[MAX_THREAD_LIST_SIZE];
+};
+
 #ifdef CONFIG_SMP
 int cpu_core_index_of_thread(int cpu);
 int cpu_first_thread_of_core(int core);
+int parse_thread_groups(struct device_node *dn, struct thread_groups *tg);
+int get_cpu_thread_group_start(int cpu, struct thread_groups *tg);
 #else
 static inline int cpu_core_index_of_thread(int cpu) { return cpu; }
 static inline int cpu_first_thread_of_core(int core) { return core; }
+static inline int parse_thread_groups(struct device_node *dn,
+				      struct thread_groups *tg)
+{
+	return -ENODATA;
+}
+
+static inline int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
+{
+	return -1;
+}
 #endif

 static inline int cpu_thread_in_core(int cpu)
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 40b44bb..989edc1 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -402,10 +402,12 @@ void __init check_for_initrd(void)

 #ifdef CONFIG_SMP

 int threads_per_core, threads_per_subcore, threads_shift;
+bool has_big_cores;
 cpumask_t threads_core_mask;
 EXPORT_SYMBOL_GPL(threads_per_core);
 EXPORT_SYMBOL_GPL(threads_per_subcore);
 EXPORT_SYMBOL_GPL(threads_shift);
+EXPORT_SYMBOL_GPL(has_big_cores);
 EXPORT_SYMBOL_GPL(threads_core_mask);

 static void __init cpu_init_thread_core_maps(int tpc)
@@ -433,6 +435,152 @@ static void __init cpu_init_thread_core_maps(int tpc)

 u32 *cpu_to_phys_id = NULL;

+/*
+ * parse_thread_groups: Parses the "ibm,thread-groups" device tree
+ *                      property for the CPU device node @dn and stores
+ *                      the parsed output in the thread_groups
+ *                      structure @tg.
+ *
+ * @dn: The device node of the CPU device.
+ * @tg: Pointer to a thread group structure into which the parsed
+ *      output of "ibm,thread-groups" is stored.
+ *
+ * ibm,thread-groups[0..N-1] array defines which group of threads in
+ * the CPU-device node can be grouped together based on the property.
+ *
+ * ibm,thread-groups[0] tells us the property based on which the
+ * threads are being grouped together. If this value is 1, it implies
+ * that the threads in the same group share L1, translation cache.
+ *
+ * ibm,thread-groups[1] tells us how many such thread groups exist.
+ *
+ * ibm,thread-groups[2] tells us the number of threads in each such
+ * group.
+ *
+ * ibm,thread-groups[3..N-1] is the list of threads identified by
+ * "ibm,ppc-interrupt-server#s" arranged as per their membership in
+ * the grouping.
+ *
+ * Example: If ibm,thread-groups = [1,2,4,5,6,7,8,9,10,11,12] it
+ * implies that there are 2 groups of 4 threads each, where each group
+ * of threads share L1, translation cache.
+ *
+ * The "ibm,ppc-interrupt-server#s" of the first group is {5,6,7,8}
+ * and the "ibm,ppc-interrupt-server#s" of the second group is
+ * {9, 10, 11, 12}.
+ *
+ * Returns 0 on success, -EINVAL if the property does not exist,
+ * -ENODATA if property does not have a value, and -EOVERFLOW if the
+ * property data isn't large enough.
+ */
+int parse_thread_groups(struct device_node *dn,
+			struct thread_groups *tg)
+{
+	unsigned int nr_groups, threads_per_group, property;
+	int i;
+	u32 thread_group_array[3 + MAX_THREAD_LIST_SIZE];
+	u32 *thread_list;
+	size_t total_threads;
+	int ret;
+
+	ret = of_property_read_u32_array(dn, "ibm,thread-groups",
+					 thread_group_array, 3);
+	if (ret)
+		goto out_err;
+
+	property = thread_group_array[0];
+	nr_groups = thread_group_array[1];
+	threads_per_group = thread_group_array[2];
+	total_threads = nr_groups * threads_per_group;
+
+	ret = of_property_read_u32_array(dn, "ibm,thread-groups",
+					 thread_group_array,
+					 3 + total_threads);
+	if (ret)
+		goto out_err;
+
+	thread_list = &thread_group_array[3];
+
+	for (i = 0; i < total_threads; i++)
+		tg->thread_list[i] = thread_list[i];
+
+	tg->property = property;
+	tg->nr_groups = nr_groups;
+	tg->threads_per_group = threads_per_group;
+
+	return 0;
+out_err:
+	tg->property = 0;
+	tg->nr_groups = 0;
+	tg->threads_per_group = 0;
+	return ret;
+}
+
+/*
+ * dt_has_big_core : Parses the device tree property
+ *                   "ibm,thread-groups" for device node pointed by @dn
+ *                   and stores the parsed output in the structure
+ *                   pointed to by @tg. Then checks if the output in
+ *                   @tg corresponds to a big-core.
+ *
+ * @dn: Device node pointer of the CPU node being checked for a
+ *      big-core.
+ * @tg: Pointer to thread_groups struct in which parsed output of
+ *      "ibm,thread-groups" is recorded.
+ *
+ * Returns true if the @dn points to a big-core.
+ * Returns false if there is an error in parsing "ibm,thread-groups"
+ * or the parsed output doesn't correspond to a big-core.
+ */
+static inline bool dt_has_big_core(struct device_node *dn,
+				   struct thread_groups *tg)
+{
+	if (parse_thread_groups(dn, tg))
+		return false;
+
+	if (tg->property != 1)
+		return false;
+
+	if (tg->nr_groups < 1)
+		return false;
+
+	return true;
+}
+
+/*
+ * get_cpu_thread_group_start : Searches the thread group in tg->thread_list
+ *                              that @cpu belongs to.
+ *
+ * @cpu : The logical CPU whose thread group is being searched.
+ * @tg  : The thread-group structure of the CPU node which @cpu belongs
+ *        to.
+ *
+ * Returns the index to tg->thread_list that points to the start
+ * of the thread_group that @cpu belongs to.
+ *
+ * Returns -1 if cpu doesn't belong to any of the groups pointed to by
+ * tg->thread_list.
+ */
+int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
+{
+	int hw_cpu_id = get_hard_smp_processor_id(cpu);
+	int i, j;
+
+	for (i = 0; i < tg->nr_groups; i++) {
+		int group_start = i * tg->threads_per_group;
+
+		for (j = 0; j < tg->threads_per_group; j++) {
+			int idx = group_start + j;
+
+			if (tg->thread_list[idx] == hw_cpu_id)
+				return group_start;
+		}
+	}
+
+	return -1;
+}
+
 /**
  * setup_cpu_maps - initialize the following cpu maps:
  *                  cpu_possible_mask
@@ -457,6 +605,7 @@ void __init smp_setup_cpu_maps(void)
 	int cpu = 0;
 	int nthreads = 1;

+	has_big_cores = true;
 	DBG("smp_setup_cpu_maps()\n");

 	cpu_to_phys_id = __va(memblock_alloc(nr_cpu_ids * sizeof(u32),
@@ -467,6 +616,7 @@ void __init smp_setup_cpu_maps(void)
 		const __be32 *intserv;
 		__be32 cpu_be;
 		int j, len;
+		struct thread_groups tg;

 		DBG("  * %pOF...\n", dn);

@@ -505,6 +655,10 @@ void __init smp_setup_cpu_maps(void)
 			cpu++;
 		}

+		if (has_big_cores && !dt_has_big_core(dn, &tg)) {
+			has_big_cores = false;
+		}
+
 		if (cpu >= nr_cpu_ids) {
 			of_node_put(dn);
 			break;
diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index 755dc98..f5717de 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include

 #include "cacheinfo.h"
 #include "setup.h"
@@ -1025,6 +1026,33 @@ static ssize_t show_physical_id(struct device *dev,
 }
 static DEVICE_ATTR(physical_id, 0444, show_physical_id, NULL);

+static ssize_t show_small_core_siblings(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct cpu *cpu = container_of(dev, struct cpu, dev);
+	struct device_node *dn = of_get_cpu_node(cpu->dev.id, NULL);
+	struct thread_groups tg;
+	int i, j;
+	ssize_t ret = 0;
+
+	if (parse_thread_groups(dn, &tg))
+		return -ENODATA;
+
+	i = get_cpu_thread_group_start(cpu->dev.id, &tg);
+
+	if (i == -1)
+		return -ENODATA;
+
+	for (j = 0; j < tg.threads_per_group - 1; j++)
+		ret += sprintf(buf + ret, "%d,", tg.thread_list[i + j]);
+
+	ret += sprintf(buf + ret, "%d\n", tg.thread_list[i + j]);
+
+	return ret;
+}
+static DEVICE_ATTR(small_core_siblings, 0444, show_small_core_siblings, NULL);
+
 static int __init topology_init(void)
 {
 	int cpu, r;
@@ -1048,6 +1076,13 @@ static int __init topology_init(void)
 			register_cpu(c, cpu);

 			device_create_file(&c->dev, &dev_attr_physical_id);
+
+			if (has_big_cores) {
+				const struct device_attribute *attr =
+					&dev_attr_small_core_siblings;
+
+				device_create_file(&c->dev, attr);
+			}
 		}
 	}
 	r = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "powerpc/topology:online",
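Editor's illustration (not part of the series): the following
stand-alone sketch decodes the sample array from the
parse_thread_groups() comment above, using the documented layout
(element 0 = property, element 1 = number of groups, element 2 =
threads per group, remaining elements = thread list). The values are
the ones used in that example; everything else is hypothetical.

  #include <stdio.h>

  int main(void)
  {
          /* Sample "ibm,thread-groups" value from the comment above. */
          unsigned int tg[] = { 1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
          unsigned int property = tg[0];
          unsigned int nr_groups = tg[1];
          unsigned int threads_per_group = tg[2];
          unsigned int i, j;

          printf("property=%u, %u groups of %u threads\n",
                 property, nr_groups, threads_per_group);

          /* Walk the thread list group by group. */
          for (i = 0; i < nr_groups; i++) {
                  printf("group %u:", i);
                  for (j = 0; j < threads_per_group; j++)
                          printf(" %u", tg[3 + i * threads_per_group + j]);
                  printf("\n");
          }
          return 0;
  }

With the sample value this prints two groups, {5,6,7,8} and
{9,10,11,12}, matching the worked example in the comment.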
From patchwork Mon Aug 6 16:22:45 2018
X-Patchwork-Submitter: Gautham R Shenoy
X-Patchwork-Id: 953981
Shenoy" To: Michael Ellerman , Benjamin Herrenschmidt , Michael Neuling , Vaidyanathan Srinivasan , Akshay Adiga , Shilpasri G Bhat , "Oliver O'Halloran" , Nicholas Piggin , Murilo Opsfelder Araujo , Anton Blanchard Subject: [PATCH v5 2/2] powerpc: Use cpu_smallcore_sibling_mask at SMT level on bigcores Date: Mon, 6 Aug 2018 21:52:45 +0530 X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1533572565-17357-1-git-send-email-ego@linux.vnet.ibm.com> References: <1533572565-17357-1-git-send-email-ego@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 18080616-0064-0000-0000-0000033614C0 X-IBM-SpamModules-Scores: X-IBM-SpamModules-Versions: BY=3.00009495; HX=3.00000241; KW=3.00000007; PH=3.00000004; SC=3.00000266; SDB=6.01070373; UDB=6.00550830; IPR=6.00849589; MB=3.00022540; MTD=3.00000008; XFM=3.00000015; UTC=2018-08-06 16:23:01 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 18080616-0065-0000-0000-00003A355698 Message-Id: <1533572565-17357-3-git-send-email-ego@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2018-08-06_08:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 malwarescore=0 suspectscore=0 phishscore=0 bulkscore=0 spamscore=0 clxscore=1015 lowpriorityscore=0 mlxscore=0 impostorscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1807170000 definitions=main-1808060169 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.27 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "Gautham R. Shenoy" , linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: "Gautham R. Shenoy" Each of the SMT4 cores forming a big-core are more or less independent units. Thus when multiple tasks are scheduled to run on the fused core, we get the best performance when the tasks are spread across the pair of SMT4 cores. This patch achieves this by setting the SMT level mask to correspond to the smallcore sibling mask on big-core systems. With this patch, the SMT sched-domain with SMT=8,4,2 on big-core systems are as follows: 1) ppc64_cpu --smt=8 CPU0 attaching sched-domain(s): domain-0: span=0,2,4,6 level=SMT groups: 0:{ span=0 cap=294 }, 2:{ span=2 cap=294 }, 4:{ span=4 cap=294 }, 6:{ span=6 cap=294 } CPU1 attaching sched-domain(s): domain-0: span=1,3,5,7 level=SMT groups: 1:{ span=1 cap=294 }, 3:{ span=3 cap=294 }, 5:{ span=5 cap=294 }, 7:{ span=7 cap=294 } 2) ppc64_cpu --smt=4 CPU0 attaching sched-domain(s): domain-0: span=0,2 level=SMT groups: 0:{ span=0 cap=589 }, 2:{ span=2 cap=589 } CPU1 attaching sched-domain(s): domain-0: span=1,3 level=SMT groups: 1:{ span=1 cap=589 }, 3:{ span=3 cap=589 } 3) ppc64_cpu --smt=2 SMT domain ceases to exist as each domain consists of just one group. Signed-off-by: Gautham R. 
Signed-off-by: Gautham R. Shenoy
---
 arch/powerpc/include/asm/smp.h |  6 +++++
 arch/powerpc/kernel/smp.c      | 55 +++++++++++++++++++++++++++++++++++++++---
 2 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
index 29ffaab..30798c7 100644
--- a/arch/powerpc/include/asm/smp.h
+++ b/arch/powerpc/include/asm/smp.h
@@ -99,6 +99,7 @@ static inline void set_hard_smp_processor_id(int cpu, int phys)
 #endif

 DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_map);
+DECLARE_PER_CPU(cpumask_var_t, cpu_smallcore_sibling_map);
 DECLARE_PER_CPU(cpumask_var_t, cpu_l2_cache_map);
 DECLARE_PER_CPU(cpumask_var_t, cpu_core_map);

@@ -107,6 +108,11 @@ static inline struct cpumask *cpu_sibling_mask(int cpu)
 	return per_cpu(cpu_sibling_map, cpu);
 }

+static inline struct cpumask *cpu_smallcore_sibling_mask(int cpu)
+{
+	return per_cpu(cpu_smallcore_sibling_map, cpu);
+}
+
 static inline struct cpumask *cpu_core_mask(int cpu)
 {
 	return per_cpu(cpu_core_map, cpu);
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 4794d6b..ea3b306 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -76,10 +76,12 @@ struct thread_info *secondary_ti;

 DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map);
+DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_sibling_map);
 DEFINE_PER_CPU(cpumask_var_t, cpu_l2_cache_map);
 DEFINE_PER_CPU(cpumask_var_t, cpu_core_map);

 EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
+EXPORT_PER_CPU_SYMBOL(cpu_smallcore_sibling_map);
 EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map);
 EXPORT_PER_CPU_SYMBOL(cpu_core_map);

@@ -689,6 +691,9 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	for_each_possible_cpu(cpu) {
 		zalloc_cpumask_var_node(&per_cpu(cpu_sibling_map, cpu),
 					GFP_KERNEL, cpu_to_node(cpu));
+		zalloc_cpumask_var_node(&per_cpu(cpu_smallcore_sibling_map,
+						 cpu),
+					GFP_KERNEL, cpu_to_node(cpu));
 		zalloc_cpumask_var_node(&per_cpu(cpu_l2_cache_map, cpu),
 					GFP_KERNEL, cpu_to_node(cpu));
 		zalloc_cpumask_var_node(&per_cpu(cpu_core_map, cpu),
@@ -707,6 +712,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	cpumask_set_cpu(boot_cpuid, cpu_sibling_mask(boot_cpuid));
 	cpumask_set_cpu(boot_cpuid, cpu_l2_cache_mask(boot_cpuid));
 	cpumask_set_cpu(boot_cpuid, cpu_core_mask(boot_cpuid));
+	if (has_big_cores) {
+		cpumask_set_cpu(boot_cpuid,
+				cpu_smallcore_sibling_mask(boot_cpuid));
+	}

 	if (smp_ops && smp_ops->probe)
 		smp_ops->probe();
@@ -991,6 +1000,10 @@ static void remove_cpu_from_masks(int cpu)
 		set_cpus_unrelated(cpu, i, cpu_core_mask);
 		set_cpus_unrelated(cpu, i, cpu_l2_cache_mask);
 		set_cpus_unrelated(cpu, i, cpu_sibling_mask);
+		if (has_big_cores) {
+			set_cpus_unrelated(cpu, i,
+					   cpu_smallcore_sibling_mask);
+		}
 	}
 }
 #endif
@@ -999,7 +1012,17 @@ static void add_cpu_to_masks(int cpu)
 {
 	int first_thread = cpu_first_thread_sibling(cpu);
 	int chipid = cpu_to_chip_id(cpu);
-	int i;
+
+	struct thread_groups tg;
+	int i, cpu_group_start = -1;
+
+	if (has_big_cores) {
+		struct device_node *dn = of_get_cpu_node(cpu, NULL);
+
+		parse_thread_groups(dn, &tg);
+		cpu_group_start = get_cpu_thread_group_start(cpu, &tg);
+		cpumask_set_cpu(cpu, cpu_smallcore_sibling_mask(cpu));
+	}

 	/*
 	 * This CPU will not be in the online mask yet so we need to manually
 	 */
 	cpumask_set_cpu(cpu, cpu_sibling_mask(cpu));

-	for (i = first_thread; i < first_thread + threads_per_core; i++)
-		if (cpu_online(i))
-			set_cpus_related(i, cpu, cpu_sibling_mask);
+	for (i = first_thread; i < first_thread + threads_per_core; i++) {
+		int i_group_start;
+
+		if (!cpu_online(i))
+			continue;
+
+		set_cpus_related(i, cpu, cpu_sibling_mask);
+
+		if (!has_big_cores)
+			continue;
+
+		i_group_start = get_cpu_thread_group_start(i, &tg);
+		if (i_group_start == cpu_group_start)
+			set_cpus_related(i, cpu, cpu_smallcore_sibling_mask);
+	}

 	/*
 	 * Copy the thread sibling mask into the cache sibling mask
@@ -1136,6 +1171,11 @@ static const struct cpumask *shared_cache_mask(int cpu)
 	return cpu_l2_cache_mask(cpu);
 }

+static const struct cpumask *smallcore_smt_mask(int cpu)
+{
+	return cpu_smallcore_sibling_mask(cpu);
+}
+
 static struct sched_domain_topology_level power9_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
@@ -1158,6 +1198,13 @@ void __init smp_cpus_done(unsigned int max_cpus)

 	dump_numa_cpu_topology();

+#ifdef CONFIG_SCHED_SMT
+	if (has_big_cores) {
+		pr_info("Using small cores at SMT level\n");
+		power9_topology[0].mask = smallcore_smt_mask;
+		powerpc_topology[0].mask = smallcore_smt_mask;
+	}
+#endif
 	/*
 	 * If any CPU detects that it's sharing a cache with another CPU then
 	 * use the deeper topology that is aware of this sharing.