From patchwork Tue Jun 2 15:59:32 2015
X-Patchwork-Submitter: maddy
X-Patchwork-Id: 479544
From: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Madhavan Srinivasan, Peter Zijlstra, Stephane Eranian,
 Paul Mackerras, Preeti U Murthy, Sukadev Bhattiprolu, Ingo Molnar,
 Anshuman Khandual
Subject: [PATCH v1 3/9] powerpc/powernv: Add cpu hotplug support
Date: Tue, 2 Jun 2015 21:29:32 +0530
Message-Id: <1433260778-26497-4-git-send-email-maddy@linux.vnet.ibm.com>
In-Reply-To: <1433260778-26497-1-git-send-email-maddy@linux.vnet.ibm.com>
References: <1433260778-26497-1-git-send-email-maddy@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

This patch adds CPU hotplug support to the Nest PMU driver. The first
online CPU in a node is picked as the designated CPU to read the Nest
PMU counter data. When that CPU goes offline, the next online CPU on
the same chip in that node is picked to take over, and the perf event
context is migrated to it. (A standalone sketch of this successor
selection follows the patch below.)
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Sukadev Bhattiprolu
Cc: Anshuman Khandual
Cc: Stephane Eranian
Cc: Preeti U Murthy
Cc: Ingo Molnar
Cc: Peter Zijlstra
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/perf/nest-pmu.c | 87 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/arch/powerpc/perf/nest-pmu.c b/arch/powerpc/perf/nest-pmu.c
index d4413bb..3e7010e 100644
--- a/arch/powerpc/perf/nest-pmu.c
+++ b/arch/powerpc/perf/nest-pmu.c
@@ -30,6 +30,89 @@ static struct attribute_group cpumask_nest_pmu_attr_group = {
 	.attrs = cpumask_nest_pmu_attrs,
 };
 
+static void nest_init(void *dummy)
+{
+	opal_nest_ima_control(P8_NEST_ENGINE_START);
+}
+
+static void nest_change_cpu_context(int old_cpu, int new_cpu)
+{
+	int i;
+
+	if (old_cpu < 0 || new_cpu < 0)
+		return;
+
+	for (i = 0; per_nestpmu_arr[i] != NULL; i++)
+		perf_pmu_migrate_context(&per_nestpmu_arr[i]->pmu,
+					 old_cpu, new_cpu);
+}
+
+static void nest_exit_cpu(int cpu)
+{
+	int i, nid, target = -1;
+	const struct cpumask *l_cpumask;
+	int src_chipid;
+
+	if (!cpumask_test_and_clear_cpu(cpu, &cpu_mask_nest_pmu))
+		return;
+
+	nid = cpu_to_node(cpu);
+	src_chipid = topology_physical_package_id(cpu);
+	l_cpumask = cpumask_of_node(nid);
+	for_each_cpu(i, l_cpumask) {
+		if (i == cpu)
+			continue;
+		if (src_chipid == topology_physical_package_id(i)) {
+			target = i;
+			break;
+		}
+	}
+
+	/* The whole chip is going offline; no CPU can take over. */
+	if (target == -1)
+		return;
+	cpumask_set_cpu(target, &cpu_mask_nest_pmu);
+	nest_change_cpu_context(cpu, target);
+}
+
+static void nest_init_cpu(int cpu)
+{
+	int i, src_chipid;
+
+	src_chipid = topology_physical_package_id(cpu);
+	for_each_cpu(i, &cpu_mask_nest_pmu)
+		if (src_chipid == topology_physical_package_id(i))
+			return;
+
+	cpumask_set_cpu(cpu, &cpu_mask_nest_pmu);
+	nest_change_cpu_context(-1, cpu);
+}
+
+static int nest_cpu_notifier(struct notifier_block *self,
+			     unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (long)hcpu;
+
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_DOWN_FAILED:
+	case CPU_STARTING:
+		nest_init_cpu(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		nest_exit_cpu(cpu);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block nest_cpu_nb = {
+	.notifier_call = nest_cpu_notifier,
+	.priority = CPU_PRI_PERF + 1,
+};
+
 void cpumask_chip(void)
 {
 	const struct cpumask *l_cpumask;
@@ -47,6 +130,10 @@ void cpumask_chip(void)
 		cpumask_set_cpu(cpu, &cpu_mask_nest_pmu);
 	}
 
+	on_each_cpu_mask(&cpu_mask_nest_pmu, nest_init, NULL, 1);
+
+	__register_cpu_notifier(&nest_cpu_nb);
+
 	cpu_notifier_register_done();
 }
 
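
The successor selection implemented by nest_exit_cpu() above can be read
in isolation. The sketch below is illustrative only; the helper name
pick_sibling_cpu() is hypothetical and not part of the patch. It restates
the policy: among the online CPUs of the dying CPU's node, pick another
CPU on the same physical chip, since the Nest counters are per-chip and
only a CPU on that chip should read them.

#include <linux/cpumask.h>
#include <linux/topology.h>

/*
 * Hypothetical helper (not in the patch): restates the successor
 * selection done in nest_exit_cpu().
 */
static int pick_sibling_cpu(int dying_cpu)
{
	const struct cpumask *node_cpus = cpumask_of_node(cpu_to_node(dying_cpu));
	int chipid = topology_physical_package_id(dying_cpu);
	int cpu;

	for_each_cpu(cpu, node_cpus) {
		if (cpu == dying_cpu)
			continue;
		/* Nest counters are per-chip; the successor must share the chip. */
		if (topology_physical_package_id(cpu) == chipid)
			return cpu;
	}
	return -1;	/* whole chip going offline: no successor */
}

A -1 return corresponds to the entire chip going offline, in which case
no remaining CPU can read that chip's counters and the designated-reader
slot is simply dropped.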
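
Likewise, the registration flow in cpumask_chip() is easier to follow as
a whole. The sketch below assumes, as the hunk's unchanged context
suggests, that a cpu_notifier_register_begin() call precedes the lines
shown; the wrapper name nest_pmu_hotplug_setup() is hypothetical. Holding
the begin()/done() bracket while the initial cpu_mask_nest_pmu is
populated and the notifier installed ensures no hotplug event slips in
between the two steps; __register_cpu_notifier() is the variant meant to
be called inside that bracket.

#include <linux/cpu.h>

/* Hypothetical wrapper (not in the patch), showing the full bracket. */
static int __init nest_pmu_hotplug_setup(void)
{
	cpu_notifier_register_begin();

	/* ... pick one online CPU per chip into cpu_mask_nest_pmu ... */

	/* Start the Nest engine on each designated CPU. */
	on_each_cpu_mask(&cpu_mask_nest_pmu, nest_init, NULL, 1);

	/* Installed while hotplug is held off, so no event is missed. */
	__register_cpu_notifier(&nest_cpu_nb);

	cpu_notifier_register_done();
	return 0;
}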