From patchwork Wed Jun 23 06:04:03 2010
X-Patchwork-Submitter: Vaidyanathan Srinivasan
X-Patchwork-Id: 56682
X-Patchwork-Delegate: benh@kernel.crashing.org
Subject: [PATCH v3 
1/2] powerpc: cleanup APIs for cpu/thread/core mappings
To: Benjamin Herrenschmidt, Paul Mackerras, Anton Blanchard
From: Vaidyanathan Srinivasan
Date: Wed, 23 Jun 2010 11:34:03 +0530
Message-ID: <20100623060403.4957.93682.stgit@drishya.in.ibm.com>
In-Reply-To: <20100623060122.4957.33819.stgit@drishya.in.ibm.com>
References: <20100623060122.4957.33819.stgit@drishya.in.ibm.com>
User-Agent: StGit/0.15
Cc: linuxppc-dev@ozlab.org
List-Id: Linux on PowerPC Developers Mail List

These APIs take a logical cpu number as input:

* Change cpu_first_thread_in_core() to cpu_leftmost_thread_sibling()
* Change cpu_last_thread_in_core() to cpu_rightmost_thread_sibling()

These APIs convert a core number (index) to logical cpu/thread numbers:

* Add cpu_first_thread_of_core(int core)
* Change cpu_thread_to_core() to cpu_core_of_thread(int cpu)

The goal is to make 'threads_per_core' accessible to the
pseries_energy module.  Instead of adding an API to read
threads_per_core directly, these higher level wrapper functions
convert between a logical cpu number and a core number.

The current APIs cpu_first_thread_in_core() and
cpu_last_thread_in_core() return logical CPU numbers, while
cpu_thread_to_core() returns a core number (index), which is not a
logical CPU number.  The APIs are now clearly named to distinguish
the 'core number' from the first and last 'logical cpu number' in
that core.

The new APIs cpu_{left,right}most_thread_sibling() work on logical
cpu numbers, while cpu_first_thread_of_core() and cpu_core_of_thread()
convert between a core index and a logical cpu number.
Example usage: (4 threads per core system)

cpu_leftmost_thread_sibling(5) = 4
cpu_rightmost_thread_sibling(5) = 7
cpu_core_of_thread(5) = 1
cpu_first_thread_of_core(1) = 4

cpu_core_of_thread() is used in cpu_to_drc_index() in the module and
cpu_first_thread_of_core() is used in drc_index_to_cpu() in the
module.

Made API changes to a few callers.  Exported symbols for use in
modules.

Signed-off-by: Vaidyanathan Srinivasan 
---
 arch/powerpc/include/asm/cputhreads.h |   15 +++++++++------
 arch/powerpc/kernel/smp.c             |   19 ++++++++++++++++---
 arch/powerpc/mm/mmu_context_nohash.c  |   12 ++++++------
 3 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
index a8e1844..26dc6bd 100644
--- a/arch/powerpc/include/asm/cputhreads.h
+++ b/arch/powerpc/include/asm/cputhreads.h
@@ -61,22 +61,25 @@ static inline cpumask_t cpu_online_cores_map(void)
 	return cpu_thread_mask_to_cores(cpu_online_map);
 }
 
-static inline int cpu_thread_to_core(int cpu)
-{
-	return cpu >> threads_shift;
-}
+#ifdef CONFIG_SMP
+int cpu_core_of_thread(int cpu);
+int cpu_first_thread_of_core(int core);
+#else
+static inline int cpu_core_of_thread(int cpu) { return cpu; }
+static inline int cpu_first_thread_of_core(int core) { return core; }
+#endif
 
 static inline int cpu_thread_in_core(int cpu)
 {
 	return cpu & (threads_per_core - 1);
 }
 
-static inline int cpu_first_thread_in_core(int cpu)
+static inline int cpu_leftmost_thread_sibling(int cpu)
 {
 	return cpu & ~(threads_per_core - 1);
 }
 
-static inline int cpu_last_thread_in_core(int cpu)
+static inline int cpu_rightmost_thread_sibling(int cpu)
 {
 	return cpu | (threads_per_core - 1);
 }
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 5c196d1..da4c2f8 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -468,7 +468,20 @@ out:
 	return id;
 }
 
-/* Must be called when no change can occur to cpu_present_mask,
+/* Helper routines for cpu to core mapping */
+int cpu_core_of_thread(int cpu)
+{
+	return cpu >> threads_shift;
+}
+EXPORT_SYMBOL_GPL(cpu_core_of_thread);
+
+int cpu_first_thread_of_core(int core)
+{
+	return core << threads_shift;
+}
+EXPORT_SYMBOL_GPL(cpu_first_thread_of_core);
+
+/* Must be called when no change can occur to cpu_present_map,
  * i.e. during cpu online or offline.
  */
 static struct device_node *cpu_to_l2cache(int cpu)
@@ -527,7 +540,7 @@ int __devinit start_secondary(void *unused)
 	notify_cpu_starting(cpu);
 	set_cpu_online(cpu, true);
 	/* Update sibling maps */
-	base = cpu_first_thread_in_core(cpu);
+	base = cpu_leftmost_thread_sibling(cpu);
 	for (i = 0; i < threads_per_core; i++) {
 		if (cpu_is_offline(base + i))
 			continue;
@@ -606,7 +619,7 @@ int __cpu_disable(void)
 		return err;
 
 	/* Update sibling maps */
-	base = cpu_first_thread_in_core(cpu);
+	base = cpu_leftmost_thread_sibling(cpu);
 	for (i = 0; i < threads_per_core; i++) {
 		cpumask_clear_cpu(cpu, cpu_sibling_mask(base + i));
 		cpumask_clear_cpu(base + i, cpu_sibling_mask(cpu));
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index ddfd7ad..22f3bc5 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -111,8 +111,8 @@ static unsigned int steal_context_smp(unsigned int id)
 	 * a core map instead but this will do for now.
 	 */
 	for_each_cpu(cpu, mm_cpumask(mm)) {
-		for (i = cpu_first_thread_in_core(cpu);
-		     i <= cpu_last_thread_in_core(cpu); i++)
+		for (i = cpu_leftmost_thread_sibling(cpu);
+		     i <= cpu_rightmost_thread_sibling(cpu); i++)
 			__set_bit(id, stale_map[i]);
 		cpu = i - 1;
 	}
@@ -264,14 +264,14 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	 */
 	if (test_bit(id, stale_map[cpu])) {
 		pr_hardcont(" | stale flush %d [%d..%d]",
-			    id, cpu_first_thread_in_core(cpu),
-			    cpu_last_thread_in_core(cpu));
+			    id, cpu_leftmost_thread_sibling(cpu),
+			    cpu_rightmost_thread_sibling(cpu));
 
 		local_flush_tlb_mm(next);
 
 		/* XXX This clear should ultimately be part of local_flush_tlb_mm */
-		for (i = cpu_first_thread_in_core(cpu);
-		     i <= cpu_last_thread_in_core(cpu); i++) {
+		for (i = cpu_leftmost_thread_sibling(cpu);
+		     i <= cpu_rightmost_thread_sibling(cpu); i++) {
 			__clear_bit(id, stale_map[i]);
 		}
 	}