From: "Hong H. Pham" <hong.pham@windriver.com>
Date: Fri, 22 May 2009 17:55:16 -0400
To: David Miller <davem@davemloft.net>
CC: sparclinux@vger.kernel.org, sam@ravnborg.org
Subject: Re: [PATCH v2] sparc64: fix and optimize irq distribution
Message-ID: <4A171F44.6090608@windriver.com>
In-Reply-To: <20090521.171424.77528494.davem@davemloft.net>

David Miller wrote:
> There is absolutely no connection between virtual cpu numbers
> and the hierarchy in which they sit in the cores and higher
> level hierarchy of the processor.  So you can't just say
> (cpu_id / 4) is the core number or anything like that.
>
> You must use the machine description to determine this kind of
> information, just as we do in arch/sparc/kernel/mdesc.c to figure out
> the CPU scheduler grouping maps.  (see mark_proc_ids() and
> mark_core_ids())

Thanks for pointing me in this direction.  mark_proc_ids() and
mark_core_ids() set the core_id and proc_id members in the per cpu
__cpu_data, so it looks like I can use cpu_data() to figure out the
CPU distribution.
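For example, something along these lines could enumerate the distinct
virtual processors from the per cpu data (untested sketch, separate
from the debug patch below; count_virtual_procs() is a made-up name,
and the 64-entry bound on proc_id is only an assumption for
illustration):

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/cpumask.h>
#include <asm/cpudata.h>

/*
 * Untested sketch: count the distinct proc_id values that
 * mark_proc_ids() filled in from the machine description.
 * The 64-bit bound on proc_id is assumed for illustration only.
 */
static int count_virtual_procs(void)
{
	DECLARE_BITMAP(seen, 64);	/* assumes proc_id < 64 */
	int i, nprocs = 0;

	bitmap_zero(seen, 64);
	for_each_online_cpu(i) {
		int pid = cpu_data(i).proc_id;

		if (!test_bit(pid, seen)) {
			__set_bit(pid, seen);
			nprocs++;
		}
	}
	return nprocs;
}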
As a side note, here's a dump of cpu_data() on a 2 way T5440.  There's
a hole between 48 and 71.

[714162.134215] Brought up 96 CPUs
[714162.135440] CPU 0: node=0 core_id=1 proc_id=0
[714162.135452] CPU 1: node=0 core_id=1 proc_id=0
[714162.135464] CPU 2: node=0 core_id=1 proc_id=0
[714162.135475] CPU 3: node=0 core_id=1 proc_id=0
[714162.135487] CPU 4: node=0 core_id=1 proc_id=1
[714162.135498] CPU 5: node=0 core_id=1 proc_id=1
[714162.135509] CPU 6: node=0 core_id=1 proc_id=1
[714162.135521] CPU 7: node=0 core_id=1 proc_id=1
[714162.135532] CPU 8: node=0 core_id=2 proc_id=2
[714162.135544] CPU 9: node=0 core_id=2 proc_id=2
[714162.135555] CPU 10: node=0 core_id=2 proc_id=2
...
[714162.135961] CPU 45: node=0 core_id=6 proc_id=11
[714162.135973] CPU 46: node=0 core_id=6 proc_id=11
[714162.135984] CPU 47: node=0 core_id=6 proc_id=11
[714162.135996] CPU 72: node=1 core_id=7 proc_id=12
[714162.136008] CPU 73: node=1 core_id=7 proc_id=12
[714162.136019] CPU 74: node=1 core_id=7 proc_id=12
[714162.136031] CPU 75: node=1 core_id=7 proc_id=12
[714162.136043] CPU 76: node=1 core_id=7 proc_id=13
...
[714162.136554] CPU 119: node=1 core_id=12 proc_id=23

Regards,
Hong

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 54906aa..7fa909f 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -1353,8 +1353,20 @@ void __cpu_die(unsigned int cpu)
 }
 #endif
 
+static void dump_cpu_data(void)
+{
+	int i;
+
+	for_each_online_cpu(i) {
+		printk(KERN_DEBUG "CPU %i: node=%i core_id=%i proc_id=%i\n",
+		       i, cpu_to_node(i),
+		       cpu_data(i).core_id, cpu_data(i).proc_id);
+	}
+}
+
 void __init smp_cpus_done(unsigned int max_cpus)
 {
+	dump_cpu_data();
 }
 
 void smp_send_reschedule(int cpu)
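P.S. For the irq distribution itself, I'm thinking of grouping the
online CPUs by proc_id roughly as below (again an untested sketch, not
part of any patch; MAX_VPROCS, proc_mask[] and build_proc_masks() are
made-up names for illustration):

#include <linux/cpumask.h>
#include <asm/cpudata.h>

/*
 * Untested sketch: build one cpumask per virtual processor so irqs
 * can be spread across procs instead of by raw cpu number, which
 * would mishandle holes like the 48-71 gap above.  MAX_VPROCS and
 * proc_mask[] are made up for illustration.
 */
#define MAX_VPROCS	64

static cpumask_t proc_mask[MAX_VPROCS];

static void build_proc_masks(void)
{
	int i;

	for (i = 0; i < MAX_VPROCS; i++)
		cpumask_clear(&proc_mask[i]);

	for_each_online_cpu(i) {
		int pid = cpu_data(i).proc_id;

		if (pid >= 0 && pid < MAX_VPROCS)
			cpumask_set_cpu(i, &proc_mask[pid]);
	}
}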