get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
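The verbs above map to ordinary HTTP requests against this endpoint. A minimal sketch of how such requests could be built with the Python standard library — the token value is a placeholder (a real Patchwork API token is needed for write operations), and the requests are only constructed here, not sent:

```python
import json
import urllib.request

BASE = "http://patchwork.ozlabs.org/api"  # instance shown on this page
TOKEN = "0000000000"  # placeholder; replace with a real Patchwork API token

# get: show a patch
get_req = urllib.request.Request(f"{BASE}/patches/783386/")

# patch: partial update -- e.g. change only the "state" field
body = json.dumps({"state": "accepted"}).encode()
patch_req = urllib.request.Request(
    f"{BASE}/patches/783386/",
    data=body,
    method="PATCH",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Token {TOKEN}",
    },
)

# To actually issue a request, pass it to urllib.request.urlopen();
# writes will be rejected (403) without a valid token.
print(get_req.get_method(), get_req.full_url)
print(patch_req.get_method())
```

Note that `urllib.request.Request` normalizes header names (e.g. `Content-type`); a `put` request would look the same with `method="PUT"` and a complete object as the body.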

GET /api/patches/783386/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 783386,
    "url": "http://patchwork.ozlabs.org/api/patches/783386/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1499074728-30680-1-git-send-email-anju@linux.vnet.ibm.com/",
    "project": {
        "id": 2,
        "url": "http://patchwork.ozlabs.org/api/projects/2/?format=api",
        "name": "Linux PPC development",
        "link_name": "linuxppc-dev",
        "list_id": "linuxppc-dev.lists.ozlabs.org",
        "list_email": "linuxppc-dev@lists.ozlabs.org",
        "web_url": "https://github.com/linuxppc/wiki/wiki",
        "scm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git",
        "webscm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/",
        "list_archive_url": "https://lore.kernel.org/linuxppc-dev/",
        "list_archive_url_format": "https://lore.kernel.org/linuxppc-dev/{}/",
        "commit_url_format": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?id={}"
    },
    "msgid": "<1499074728-30680-1-git-send-email-anju@linux.vnet.ibm.com>",
    "list_archive_url": "https://lore.kernel.org/linuxppc-dev/1499074728-30680-1-git-send-email-anju@linux.vnet.ibm.com/",
    "date": "2017-07-03T09:38:46",
    "name": "[v12,05/10] powerpc/perf: IMC pmu cpumask and cpuhotplug support",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "c5028043c849d4a28fef73d6ad4512e8d1d10708",
    "submitter": {
        "id": 67491,
        "url": "http://patchwork.ozlabs.org/api/people/67491/?format=api",
        "name": "Anju T Sudhakar",
        "email": "anju@linux.vnet.ibm.com"
    },
    "delegate": null,
    "mbox": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1499074728-30680-1-git-send-email-anju@linux.vnet.ibm.com/mbox/",
    "series": [],
    "comments": "http://patchwork.ozlabs.org/api/patches/783386/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/783386/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>",
        "X-Original-To": [
            "patchwork-incoming@ozlabs.org",
            "linuxppc-dev@lists.ozlabs.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@ozlabs.org",
            "linuxppc-dev@lists.ozlabs.org"
        ],
        "Received": [
            "from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\t(using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3x1Mmj3n7Vz9rxl\n\tfor <patchwork-incoming@ozlabs.org>;\n\tMon,  3 Jul 2017 19:49:41 +1000 (AEST)",
            "from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 3x1Mmj2nlhzDrXZ\n\tfor <patchwork-incoming@ozlabs.org>;\n\tMon,  3 Jul 2017 19:49:41 +1000 (AEST)",
            "from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com\n\t[148.163.156.1])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 3x1MYV4w61zDr8t\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tMon,  3 Jul 2017 19:39:58 +1000 (AEST)",
            "from pps.filterd (m0098394.ppops.net [127.0.0.1])\n\tby mx0a-001b2d01.pphosted.com (8.16.0.20/8.16.0.20) with SMTP id\n\tv639dNnf089833\n\tfor <linuxppc-dev@lists.ozlabs.org>; Mon, 3 Jul 2017 05:39:56 -0400",
            "from e23smtp08.au.ibm.com (e23smtp08.au.ibm.com [202.81.31.141])\n\tby mx0a-001b2d01.pphosted.com with ESMTP id 2bfc02ygg0-1\n\t(version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT)\n\tfor <linuxppc-dev@lists.ozlabs.org>; Mon, 03 Jul 2017 05:39:56 -0400",
            "from localhost\n\tby e23smtp08.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use\n\tOnly! Violators will be prosecuted\n\tfor <linuxppc-dev@lists.ozlabs.org> from <anju@linux.vnet.ibm.com>;\n\tMon, 3 Jul 2017 19:39:53 +1000",
            "from d23relay10.au.ibm.com (202.81.31.229)\n\tby e23smtp08.au.ibm.com (202.81.31.205) with IBM ESMTP SMTP Gateway:\n\tAuthorized Use Only! Violators will be prosecuted; \n\tMon, 3 Jul 2017 19:39:50 +1000",
            "from d23av05.au.ibm.com (d23av05.au.ibm.com [9.190.234.119])\n\tby d23relay10.au.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id\n\tv639dg9F8519994\n\tfor <linuxppc-dev@lists.ozlabs.org>; Mon, 3 Jul 2017 19:39:50 +1000",
            "from d23av05.au.ibm.com (localhost [127.0.0.1])\n\tby d23av05.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id\n\tv639dGt9025172\n\tfor <linuxppc-dev@lists.ozlabs.org>; Mon, 3 Jul 2017 19:39:17 +1000",
            "from xenial-xerus.in.ibm.com (xenial-xerus.in.ibm.com [9.124.35.20]\n\t(may be forged))\n\tby d23av05.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id\n\tv639dCsu024325; Mon, 3 Jul 2017 19:39:13 +1000"
        ],
        "From": "Anju T Sudhakar <anju@linux.vnet.ibm.com>",
        "To": "mpe@ellerman.id.au",
        "Subject": "[PATCH v12 05/10] powerpc/perf: IMC pmu cpumask and cpuhotplug\n\tsupport",
        "Date": "Mon,  3 Jul 2017 15:08:46 +0530",
        "X-Mailer": "git-send-email 2.7.4",
        "X-TM-AS-MML": "disable",
        "x-cbid": "17070309-0048-0000-0000-0000024EFA99",
        "X-IBM-AV-DETECTION": "SAVI=unused REMOTE=unused XFE=unused",
        "x-cbparentid": "17070309-0049-0000-0000-0000480025DC",
        "Message-Id": "<1499074728-30680-1-git-send-email-anju@linux.vnet.ibm.com>",
        "X-Proofpoint-Virus-Version": "vendor=fsecure engine=2.50.10432:, ,\n\tdefinitions=2017-07-03_06:, , signatures=0",
        "X-Proofpoint-Spam-Details": "rule=outbound_notspam policy=outbound score=0\n\tspamscore=0 suspectscore=3\n\tmalwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam\n\tadjust=0 reason=mlx scancount=1 engine=8.0.1-1703280000\n\tdefinitions=main-1707030161",
        "X-BeenThere": "linuxppc-dev@lists.ozlabs.org",
        "X-Mailman-Version": "2.1.23",
        "Precedence": "list",
        "List-Id": "Linux on PowerPC Developers Mail List\n\t<linuxppc-dev.lists.ozlabs.org>",
        "List-Unsubscribe": "<https://lists.ozlabs.org/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=unsubscribe>",
        "List-Archive": "<http://lists.ozlabs.org/pipermail/linuxppc-dev/>",
        "List-Post": "<mailto:linuxppc-dev@lists.ozlabs.org>",
        "List-Help": "<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=help>",
        "List-Subscribe": "<https://lists.ozlabs.org/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=subscribe>",
        "Cc": "stewart@linux.vnet.ibm.com, ego@linux.vnet.ibm.com, mikey@neuling.org,\n\tmaddy@linux.vnet.ibm.com, hemant@linux.vnet.ibm.com,\n\tlinux-kernel@vger.kernel.org, eranian@google.com,\n\tanju@linux.vnet.ibm.com, \n\tanton@samba.org, tglx@linutronix.de, sukadev@linux.vnet.ibm.com,\n\tlinuxppc-dev@lists.ozlabs.org, dja@axtens.net",
        "Errors-To": "linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org",
        "Sender": "\"Linuxppc-dev\"\n\t<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>"
    },
    "content": "Adds cpumask attribute to be used by each IMC pmu. Only one cpu (any            \nonline CPU) from each chip for nest PMUs is designated to read counters.        \n                                                                                \nOn CPU hotplug, dying CPU is checked to see whether it is one of the            \ndesignated cpus, if yes, next online cpu from the same chip (for nest           \nunits) is designated as new cpu to read counters. For this purpose, we          \nintroduce a new state : CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE.                  \n                                                                                \nSigned-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>                        \nSigned-off-by: Hemant Kumar <hemant@linux.vnet.ibm.com>                         \nSigned-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> \n---\n arch/powerpc/include/asm/imc-pmu.h             |  11 +\n arch/powerpc/include/asm/opal-api.h            |  10 +-\n arch/powerpc/include/asm/opal.h                |   4 +\n arch/powerpc/perf/imc-pmu.c                    | 280 ++++++++++++++++++++++++-\n arch/powerpc/platforms/powernv/opal-imc.c      |  21 +-\n arch/powerpc/platforms/powernv/opal-wrappers.S |   3 +\n include/linux/cpuhotplug.h                     |   1 +\n 7 files changed, 324 insertions(+), 6 deletions(-)",
    "diff": "diff --git a/arch/powerpc/include/asm/imc-pmu.h b/arch/powerpc/include/asm/imc-pmu.h\nindex 25d0c57d14fe..aeed903b2a79 100644\n--- a/arch/powerpc/include/asm/imc-pmu.h\n+++ b/arch/powerpc/include/asm/imc-pmu.h\n@@ -24,6 +24,7 @@\n  * For static allocation of some of the structures.\n  */\n #define IMC_MAX_PMUS\t\t\t32\n+#define IMC_MAX_CHIPS\t\t\t32\n \n /*\n  * This macro is used for memory buffer allocation of\n@@ -94,6 +95,16 @@ struct imc_pmu {\n \tconst struct attribute_group *attr_groups[4];\n };\n \n+/*\n+ * Structure to hold id, lock and reference count for the imc events which\n+ * are inited.\n+ */\n+struct imc_pmu_ref {\n+       unsigned int id;\n+       struct mutex lock;\n+       int refc;\n+};\n+\n /* In-Memory Collection Counters Type */\n enum {\n \tIMC_COUNTER_PER_CHIP            = 0x10,\ndiff --git a/arch/powerpc/include/asm/opal-api.h b/arch/powerpc/include/asm/opal-api.h\nindex cb3e6242a78c..fdacb030cd77 100644\n--- a/arch/powerpc/include/asm/opal-api.h\n+++ b/arch/powerpc/include/asm/opal-api.h\n@@ -190,7 +190,10 @@\n #define OPAL_NPU_INIT_CONTEXT\t\t\t146\n #define OPAL_NPU_DESTROY_CONTEXT\t\t147\n #define OPAL_NPU_MAP_LPAR\t\t\t148\n-#define OPAL_LAST\t\t\t\t148\n+#define OPAL_IMC_COUNTERS_INIT\t\t\t149\n+#define OPAL_IMC_COUNTERS_START\t\t\t150\n+#define OPAL_IMC_COUNTERS_STOP\t\t\t151\n+#define OPAL_LAST\t\t\t\t151\n \n /* Device tree flags */\n \n@@ -1003,6 +1006,11 @@ enum {\n \tXIVE_DUMP_EMU_STATE\t= 5,\n };\n \n+/* Argument to OPAL_IMC_COUNTERS_*  */\n+enum {\n+\tOPAL_IMC_COUNTERS_NEST = 1,\n+};\n+\n #endif /* __ASSEMBLY__ */\n \n #endif /* __OPAL_API_H */\ndiff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h\nindex 588fb1c23af9..48842d2d465c 100644\n--- a/arch/powerpc/include/asm/opal.h\n+++ b/arch/powerpc/include/asm/opal.h\n@@ -268,6 +268,10 @@ int64_t opal_xive_free_irq(uint32_t girq);\n int64_t opal_xive_sync(uint32_t type, uint32_t id);\n int64_t opal_xive_dump(uint32_t type, uint32_t id);\n 
\n+int64_t opal_imc_counters_init(uint32_t type, uint64_t address, uint64_t cpu);\n+int64_t opal_imc_counters_start(uint32_t type, uint64_t cpu_pir);\n+int64_t opal_imc_counters_stop(uint32_t type, uint64_t cpu_pir);\n+\n /* Internal functions */\n extern int early_init_dt_scan_opal(unsigned long node, const char *uname,\n \t\t\t\t   int depth, void *data);\ndiff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c\nindex 4e2f837b8bb7..ca9662bea7d6 100644\n--- a/arch/powerpc/perf/imc-pmu.c\n+++ b/arch/powerpc/perf/imc-pmu.c\n@@ -20,6 +20,16 @@\n \n /* Needed for sanity check */\n struct imc_pmu *per_nest_pmu_arr[IMC_MAX_PMUS];\n+static cpumask_t nest_imc_cpumask;\n+static int nest_imc_cpumask_initialized;\n+static int nest_pmus;\n+/*\n+ * Used to avoid races in counting the nest-pmu units during hotplug\n+ * register and unregister\n+ */\n+static DEFINE_MUTEX(imc_nest_inited_reserve);\n+\n+struct imc_pmu_ref *nest_imc_refc;\n \n struct imc_pmu *imc_event_to_pmu(struct perf_event *event)\n {\n@@ -43,12 +53,183 @@ static struct attribute_group imc_format_group = {\n \t.attrs = nest_imc_format_attrs,\n };\n \n+/* Get the cpumask printed to a buffer \"buf\" */\n+static ssize_t imc_pmu_cpumask_get_attr(struct device *dev,\n+\t\t\t\t\tstruct device_attribute *attr,\n+\t\t\t\t\tchar *buf)\n+{\n+\tcpumask_t *active_mask;\n+\n+\tactive_mask = &nest_imc_cpumask;\n+\treturn cpumap_print_to_pagebuf(true, buf, active_mask);\n+}\n+\n+static DEVICE_ATTR(cpumask, S_IRUGO, imc_pmu_cpumask_get_attr, NULL);\n+\n+static struct attribute *imc_pmu_cpumask_attrs[] = {\n+\t&dev_attr_cpumask.attr,\n+\tNULL,\n+};\n+\n+static struct attribute_group imc_pmu_cpumask_attr_group = {\n+\t.attrs = imc_pmu_cpumask_attrs,\n+};\n+\n+static void nest_change_cpu_context(int old_cpu, int new_cpu)\n+{\n+\tstruct imc_pmu **pn = per_nest_pmu_arr;\n+\tint i;\n+\n+\tif (old_cpu < 0 || new_cpu < 0)\n+\t\treturn;\n+\n+\tfor (i = 0; *pn && i < IMC_MAX_PMUS; i++, 
pn++)\n+\t\tperf_pmu_migrate_context(&(*pn)->pmu, old_cpu, new_cpu);\n+}\n+\n+/* get_nest_pmu_ref: Return the imc_pmu_ref struct for the given node */\n+static struct imc_pmu_ref *get_nest_pmu_ref(unsigned int node_id)\n+{\n+\tint nid, i = 0;\n+\n+\tif (!nest_imc_refc)\n+\t\treturn NULL;\n+\n+\tfor_each_online_node(nid) {\n+\t\tif (nest_imc_refc[i].id == node_id)\n+\t\t\treturn &nest_imc_refc[i];\n+\t\ti++;\n+\t}\n+\treturn NULL;\n+}\n+\n+static int ppc_nest_imc_cpu_offline(unsigned int cpu)\n+{\n+\tint nid, target = -1;\n+\tconst struct cpumask *l_cpumask;\n+\tstruct imc_pmu_ref *ref;\n+\n+\t/*\n+\t * Check in the designated list for this cpu. Dont bother\n+\t * if not one of them.\n+\t */\n+\tif (!cpumask_test_and_clear_cpu(cpu, &nest_imc_cpumask))\n+\t\treturn 0;\n+\n+\t/*\n+\t * Now that this cpu is one of the designated,\n+\t * find a next cpu a) which is online and b) in same chip.\n+\t */\n+\tnid = cpu_to_node(cpu);\n+\tl_cpumask = cpumask_of_node(nid);\n+\ttarget = cpumask_any_but(l_cpumask, cpu);\n+\n+\t/*\n+\t * Update the cpumask with the target cpu and\n+\t * migrate the context if needed\n+\t */\n+\tif (target >= 0 && target < nr_cpu_ids) {\n+\t\tcpumask_set_cpu(target, &nest_imc_cpumask);\n+\t\tnest_change_cpu_context(cpu, target);\n+\t} else {\n+\t\topal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST,\n+\t\t\t\t       get_hard_smp_processor_id(cpu));\n+\t\t/*\n+\t\t * If this is the last cpu in this chip then, skip the lock and\n+\t\t * make the reference count on this chip zero.\n+\t\t */\n+\t\tref = get_nest_pmu_ref(nid);\n+\t\tif (!ref)\n+\t\t\treturn -EINVAL;\n+\n+\t\tref->refc = 0;\n+\t}\n+\treturn 0;\n+}\n+\n+static int ppc_nest_imc_cpu_online(unsigned int cpu)\n+{\n+\tconst struct cpumask *l_cpumask;\n+\tstatic struct cpumask tmp_mask;\n+\tint res;\n+\n+\t/* Get the cpumask of this node */\n+\tl_cpumask = cpumask_of_node(cpu_to_node(cpu));\n+\n+\t/*\n+\t * If this is not the first online CPU on this node, then\n+\t * just return.\n+\t */\n+\tif 
(cpumask_and(&tmp_mask, l_cpumask, &nest_imc_cpumask))\n+\t\treturn 0;\n+\n+\t/*\n+\t * If this is the first online cpu on this node\n+\t * disable the nest counters by making an OPAL call.\n+\t */\n+\tres = opal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST,\n+\t\t\t\t     get_hard_smp_processor_id(cpu));\n+\tif (res)\n+\t\treturn res;\n+\n+\t/* Make this CPU the designated target for counter collection */\n+\tcpumask_set_cpu(cpu, &nest_imc_cpumask);\n+\treturn 0;\n+}\n+\n+static int nest_pmu_cpumask_init(void)\n+{\n+\treturn cpuhp_setup_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE,\n+\t\t\t\t \"perf/powerpc/imc:online\",\n+\t\t\t\t ppc_nest_imc_cpu_online,\n+\t\t\t\t ppc_nest_imc_cpu_offline);\n+}\n+\n+static void nest_imc_counters_release(struct perf_event *event)\n+{\n+\tint rc, node_id;\n+\tstruct imc_pmu_ref *ref;\n+\n+\tif (event->cpu < 0)\n+\t\treturn;\n+\n+\tnode_id = cpu_to_node(event->cpu);\n+\n+\t/*\n+\t * See if we need to disable the nest PMU.\n+\t * If no events are currently in use, then we have to take a\n+\t * mutex to ensure that we don't race with another task doing\n+\t * enable or disable the nest counters.\n+\t */\n+\tref = get_nest_pmu_ref(node_id);\n+\tif (!ref)\n+\t\treturn;\n+\n+\t/* Take the mutex lock for this node and then decrement the reference count */\n+\tmutex_lock(&ref->lock);\n+\tref->refc--;\n+\tif (ref->refc == 0) {\n+\t\trc = opal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST,\n+\t\t\t\t\t    get_hard_smp_processor_id(event->cpu));\n+\t\tif (rc) {\n+\t\t\tmutex_unlock(&nest_imc_refc[node_id].lock);\n+\t\t\tpr_err(\"IMC: Unable to stop the counters for core %d\\n\", node_id);\n+\t\t\treturn;\n+\t\t}\n+\t} else if (ref->refc < 0) {\n+\t\tWARN(1, \"nest-imc: Invalid event reference count\\n\");\n+\t\tref->refc = 0;\n+\t}\n+\tmutex_unlock(&ref->lock);\n+}\n+\n static int nest_imc_event_init(struct perf_event *event)\n {\n-\tint chip_id;\n+\tint chip_id, rc, node_id;\n \tu32 l_config, config = event->attr.config;\n \tstruct imc_mem_info 
*pcni;\n \tstruct imc_pmu *pmu;\n+\tstruct imc_pmu_ref *ref;\n \tbool flag = false;\n \n \tif (event->attr.type != event->pmu->type)\n@@ -102,6 +283,31 @@ static int nest_imc_event_init(struct perf_event *event)\n \tl_config = config & IMC_EVENT_OFFSET_MASK;\n \tevent->hw.event_base = (u64)pcni->vbase[l_config/PAGE_SIZE] +\n \t\t\t       (l_config & ~PAGE_MASK);\n+\tnode_id = cpu_to_node(event->cpu);\n+\n+\t/*\n+\t * Get the imc_pmu_ref struct for this node.\n+\t * Take the mutex lock and then increment the count of nest pmu events\n+\t * inited.\n+\t */\n+\tref = get_nest_pmu_ref(node_id);\n+\tif (!ref)\n+\t\treturn -EINVAL;\n+\n+\tmutex_lock(&ref->lock);\n+\tif (ref->refc == 0) {\n+\t\trc = opal_imc_counters_start(OPAL_IMC_COUNTERS_NEST,\n+\t\t\t\t\t     get_hard_smp_processor_id(event->cpu));\n+\t\tif (rc) {\n+\t\t\tmutex_unlock(&nest_imc_refc[node_id].lock);\n+\t\t\tpr_err(\"IMC: Unable to start the counters for node %d\\n\", node_id);\n+\t\t\treturn rc;\n+\t\t}\n+\t}\n+\t++ref->refc;\n+\tmutex_unlock(&ref->lock);\n+\n+\tevent->destroy = nest_imc_counters_release;\n \treturn 0;\n }\n \n@@ -179,6 +385,7 @@ static int update_pmu_ops(struct imc_pmu *pmu)\n \tpmu->pmu.start = imc_event_start;\n \tpmu->pmu.stop = imc_event_stop;\n \tpmu->pmu.read = imc_perf_event_update;\n+\tpmu->attr_groups[IMC_CPUMASK_ATTR] = &imc_pmu_cpumask_attr_group;\n \tpmu->attr_groups[IMC_FORMAT_ATTR] = &imc_format_group;\n \tpmu->pmu.attr_groups = pmu->attr_groups;\n \n@@ -242,18 +449,71 @@ static int update_events_in_group(struct imc_events *events,\n \treturn 0;\n }\n \n+/* init_nest_pmu_ref: Initialize the imc_pmu_ref struct for all the nodes */\n+static int init_nest_pmu_ref(void)\n+{\n+\tint nid, i = 0;\n+\n+\tnest_imc_refc = kzalloc((sizeof(struct imc_pmu_ref) *\n+\t\t\t\t IMC_MAX_CHIPS), GFP_KERNEL);\n+\n+\tif (!nest_imc_refc)\n+\t\treturn -ENOMEM;\n+\n+\tfor_each_online_node(nid) {\n+\t\tnest_imc_refc[i].id = nid;\n+\t\t/*\n+\t\t * Mutex lock to avoid races while tracking the 
number of\n+\t\t * sessions using the chip's nest pmu units.\n+\t\t */\n+\t\tmutex_init(&nest_imc_refc[i].lock);\n+\t\ti++;\n+\t}\n+\treturn 0;\n+}\n+\n /*\n  * init_imc_pmu : Setup and register the IMC pmu device.\n  *\n  * @events:\tevents memory for this pmu.\n  * @idx:\tnumber of event entries created.\n  * @pmu_ptr:\tmemory allocated for this pmu.\n+ *\n+ * init_imc_pmu() setup the cpu mask information for these pmus and setup\n+ * the state machine hotplug notifiers as well.\n  */\n int init_imc_pmu(struct imc_events *events, int idx,\n \t\t struct imc_pmu *pmu_ptr)\n {\n \tint ret;\n \n+\t/*\n+\t * Register for cpu hotplug notification.\n+\t *\n+\t * Nest imc pmu need only one cpu per chip, we initialize the cpumask\n+\t * for the first nest imc pmu and use the same for the rest.\n+\t * To handle the cpuhotplug callback unregister, we track the number of\n+\t * nest pmus in \"nest_pmus\".\n+\t * \"nest_imc_cpumask_initialized\" is set to zero during cpuhotplug\n+\t * callback unregister.\n+\t */\n+\tmutex_lock(&imc_nest_inited_reserve);\n+\tif (nest_pmus == 0) {\n+\t\tret = init_nest_pmu_ref();\n+\t\tif (ret) {\n+\t\t\tmutex_unlock(&imc_nest_inited_reserve);\n+\t\t\tgoto err_free;\n+\t\t}\n+\t\tret = nest_pmu_cpumask_init();\n+\t\tif (ret) {\n+\t\t\tmutex_unlock(&imc_nest_inited_reserve);\n+\t\t\tgoto err_free;\n+\t\t}\n+\t\tnest_imc_cpumask_initialized = 1;\n+\t}\n+\tnest_pmus++;\n+\tmutex_unlock(&imc_nest_inited_reserve);\n+\n \tret = update_events_in_group(events, idx, pmu_ptr);\n \tif (ret)\n \t\tgoto err_free;\n@@ -278,6 +538,22 @@ int init_imc_pmu(struct imc_events *events, int idx,\n \t\t\tkfree(pmu_ptr->attr_groups[IMC_EVENT_ATTR]->attrs);\n \t\tkfree(pmu_ptr->attr_groups[IMC_EVENT_ATTR]);\n \t}\n-\n+\tif (pmu_ptr->domain == IMC_DOMAIN_NEST) {\n+\t\t/*\n+\t\t * If no nest pmu units are registered, then obtain the mutex\n+\t\t * lock and unregister the hotplug callback.\n+\t\t */\n+\t\tmutex_lock(&imc_nest_inited_reserve);\n+\t\t--nest_pmus;\n+\t\tif 
(nest_pmus <= 0) {\n+\t\t\tif (nest_imc_cpumask_initialized == 1) {\n+\t\t\t\tcpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE);\n+\t\t\t\tnest_imc_cpumask_initialized = 0;\n+\t\t\t}\n+\t\t\tkfree(nest_imc_refc);\n+\t\t\tnest_pmus = 0;\n+\t\t}\n+\t\tmutex_unlock(&imc_nest_inited_reserve);\n+\t}\n \treturn ret;\n }\ndiff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c\nindex a68d66d1ddb1..406f7c10850a 100644\n--- a/arch/powerpc/platforms/powernv/opal-imc.c\n+++ b/arch/powerpc/platforms/powernv/opal-imc.c\n@@ -467,6 +467,19 @@ static int imc_pmu_create(struct device_node *parent, int pmu_index, int domain)\n \treturn ret;\n }\n \n+static void disable_nest_pmu_counters(void)\n+{\n+\tint nid, cpu;\n+\tstruct cpumask *l_cpumask;\n+\n+\tfor_each_online_node(nid) {\n+\t\tl_cpumask = cpumask_of_node(nid);\n+\t\tcpu = cpumask_first(l_cpumask);\n+\t\topal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST,\n+\t\t\t\t       get_hard_smp_processor_id(cpu));\n+\t}\n+}\n+\n static int opal_imc_counters_probe(struct platform_device *pdev)\n {\n \tstruct device_node *imc_dev = NULL;\n@@ -477,11 +490,13 @@ static int opal_imc_counters_probe(struct platform_device *pdev)\n \t\treturn -ENODEV;\n \n \t/*\n-\t * Check whether this is kdump kernel. If yes, just return.\n+\t * Check whether this is kdump kernel. 
If yes, force the engines to\n+\t * stop and return.\n \t */\n-\tif (is_kdump_kernel())\n+\tif (is_kdump_kernel()) {\n+\t\tdisable_nest_pmu_counters();\n \t\treturn -ENODEV;\n-\n+\t}\n \timc_dev = pdev->dev.of_node;\n \tif (!imc_dev)\n \t\treturn -ENODEV;\ndiff --git a/arch/powerpc/platforms/powernv/opal-wrappers.S b/arch/powerpc/platforms/powernv/opal-wrappers.S\nindex f620572f891f..1828b24fbb53 100644\n--- a/arch/powerpc/platforms/powernv/opal-wrappers.S\n+++ b/arch/powerpc/platforms/powernv/opal-wrappers.S\n@@ -310,3 +310,6 @@ OPAL_CALL(opal_xive_dump,\t\t\tOPAL_XIVE_DUMP);\n OPAL_CALL(opal_npu_init_context,\t\tOPAL_NPU_INIT_CONTEXT);\n OPAL_CALL(opal_npu_destroy_context,\t\tOPAL_NPU_DESTROY_CONTEXT);\n OPAL_CALL(opal_npu_map_lpar,\t\t\tOPAL_NPU_MAP_LPAR);\n+OPAL_CALL(opal_imc_counters_init,              OPAL_IMC_COUNTERS_INIT);\n+OPAL_CALL(opal_imc_counters_start,             OPAL_IMC_COUNTERS_START);\n+OPAL_CALL(opal_imc_counters_stop,              OPAL_IMC_COUNTERS_STOP);\ndiff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h\nindex 0f2a80377520..dca7f2b07f93 100644\n--- a/include/linux/cpuhotplug.h\n+++ b/include/linux/cpuhotplug.h\n@@ -139,6 +139,7 @@ enum cpuhp_state {\n \tCPUHP_AP_PERF_ARM_L2X0_ONLINE,\n \tCPUHP_AP_PERF_ARM_QCOM_L2_ONLINE,\n \tCPUHP_AP_PERF_ARM_QCOM_L3_ONLINE,\n+\tCPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE,\n \tCPUHP_AP_WORKQUEUE_ONLINE,\n \tCPUHP_AP_RCUTREE_ONLINE,\n \tCPUHP_AP_ONLINE_DYN,\n",
    "prefixes": [
        "v12",
        "05/10"
    ]
}
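The response above is plain JSON and can be consumed with any JSON parser. A small sketch pulling out the identifying fields — the embedded sample is an abridged copy of the object shown above, not a live API call:

```python
import json

# Abridged copy of the patch object returned above
sample = """
{
    "id": 783386,
    "msgid": "<1499074728-30680-1-git-send-email-anju@linux.vnet.ibm.com>",
    "state": "superseded",
    "archived": true,
    "submitter": {"name": "Anju T Sudhakar", "email": "anju@linux.vnet.ibm.com"},
    "prefixes": ["v12", "05/10"]
}
"""

patch = json.loads(sample)

# JSON true/false/null map to Python True/False/None
summary = "{} [{}] by {} -- state: {}".format(
    patch["id"],
    ",".join(patch["prefixes"]),
    patch["submitter"]["name"],
    patch["state"],
)
print(summary)  # 783386 [v12,05/10] by Anju T Sudhakar -- state: superseded
```

The full object also carries ready-made URLs (`mbox`, `comments`, `checks`) that can be fetched directly, so no URL construction is needed for follow-up requests.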