Patch Detail
get: Show a patch.
patch: Update a patch (partial update of the supplied fields).
put: Update a patch (full update).
GET /api/1.1/patches/2228805/?format=api
{ "id": 2228805, "url": "http://patchwork.ozlabs.org/api/1.1/patches/2228805/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20260427122742.210074-7-mkchauras@gmail.com/", "project": { "id": 2, "url": "http://patchwork.ozlabs.org/api/1.1/projects/2/?format=api", "name": "Linux PPC development", "link_name": "linuxppc-dev", "list_id": "linuxppc-dev.lists.ozlabs.org", "list_email": "linuxppc-dev@lists.ozlabs.org", "web_url": "https://github.com/linuxppc/wiki/wiki", "scm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git", "webscm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/" }, "msgid": "<20260427122742.210074-7-mkchauras@gmail.com>", "date": "2026-04-27T12:27:40", "name": "[v5,6/8] powerpc: Prepare for IRQ entry exit", "commit_ref": null, "pull_url": null, "state": "new", "archived": false, "hash": "c5ce505e11c03eccd6de4ed203f33dab75ac8356", "submitter": { "id": 92575, "url": "http://patchwork.ozlabs.org/api/1.1/people/92575/?format=api", "name": "Mukesh Kumar Chaurasiya", "email": "mkchauras@gmail.com" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/20260427122742.210074-7-mkchauras@gmail.com/mbox/", "series": [ { "id": 501638, "url": "http://patchwork.ozlabs.org/api/1.1/series/501638/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=501638", "date": "2026-04-27T12:27:34", "name": "Generic IRQ entry/exit support for powerpc", "version": 5, "mbox": "http://patchwork.ozlabs.org/series/501638/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/2228805/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/2228805/checks/", "tags": {}, "headers": { "Return-Path": "\n <linuxppc-dev+bounces-20165-incoming=patchwork.ozlabs.org@lists.ozlabs.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "linuxppc-dev@lists.ozlabs.org" ], "Delivered-To": 
"patchwork-incoming@legolas.ozlabs.org", "Authentication-Results": [ "legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256\n header.s=20251104 header.b=RdikmkhT;\n\tdkim-atps=neutral", "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n (client-ip=112.213.38.117; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-20165-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)", "lists.ozlabs.org;\n arc=none smtp.remote-ip=\"2607:f8b0:4864:20::431\"", "lists.ozlabs.org;\n dmarc=pass (p=none dis=none) header.from=gmail.com", "lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256\n header.s=20251104 header.b=RdikmkhT;\n\tdkim-atps=neutral", "lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=gmail.com\n (client-ip=2607:f8b0:4864:20::431; helo=mail-pf1-x431.google.com;\n envelope-from=mkchauras@gmail.com; receiver=lists.ozlabs.org)" ], "Received": [ "from lists.ozlabs.org (lists.ozlabs.org [112.213.38.117])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g42th61BDz1yHv\n\tfor <incoming@patchwork.ozlabs.org>; Mon, 27 Apr 2026 22:29:16 +1000 (AEST)", "from boromir.ozlabs.org (localhost [127.0.0.1])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 4g42th4wRPz2yLG;\n\tMon, 27 Apr 2026 22:29:16 +1000 (AEST)", "from mail-pf1-x431.google.com (mail-pf1-x431.google.com\n [IPv6:2607:f8b0:4864:20::431])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 4g42tg5rYhz2y2B\n\tfor <linuxppc-dev@lists.ozlabs.org>; Mon, 27 Apr 2026 22:29:15 +1000 (AEST)", "by 
mail-pf1-x431.google.com with SMTP id\n d2e1a72fcca58-82f8cebc935so4331069b3a.0\n for <linuxppc-dev@lists.ozlabs.org>;\n Mon, 27 Apr 2026 05:29:15 -0700 (PDT)", "from li-1a3e774c-28e4-11b2-a85c-acc9f2883e29.ibm.com ([129.41.58.4])\n by smtp.gmail.com with ESMTPSA id\n d2e1a72fcca58-82f8e9f7735sm32733466b3a.21.2026.04.27.05.29.00\n (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n Mon, 27 Apr 2026 05:29:13 -0700 (PDT)" ], "ARC-Seal": "i=1; a=rsa-sha256; d=lists.ozlabs.org; s=201707; t=1777292956;\n\tcv=none;\n b=cnIjT10mYj2nNo2TciQIA2HVRwdhKmg5VSKG4Ol7iV4M/OV8sXHPOfyKiGnGv8oM6n07rgLM7pgvXmR5Z0C+7OIskVkkaY15tkYYYcTaQ65Z/JqDUieeZ2SvdXAWdENp7TEa1bx6S1pOB3EzNi6gruttbm2TmpotX1R5+RYomDbewrclUSYceoj4P93Kk+7aKDbp1CtadrTsyj4EKXPFe9lWInSLrK7fv/KGa59bQIpazUz94duFyoFvOBbPjLzyVhmYcAmrsLK/0/J/fiAXCYJNUho+uXn9dELNH7+Qj3GQtH2r4G8tIHcX7WdH74JOnQWhzOGAkQMYwy1/wKEqLg==", "ARC-Message-Signature": "i=1; a=rsa-sha256; d=lists.ozlabs.org; s=201707;\n\tt=1777292956; c=relaxed/relaxed;\n\tbh=XARyapmtrilvR5w12XJiFpVGShufbq9BKuLcUgswpko=;\n\th=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:\n\t MIME-Version;\n b=B+DsmEO6CikgD4pTjyYrHz6WOUtaR6JY3W+mYg/5vxiuw2kK1ugHPd00xPOA1uETF/d9HXe3qA5uWdSBuyS1L7Py4IcNBGEkOUoYHUeBzCZZzMOiCfvFqFgRB/5OFPIitj1/7WrTlZTN1zk8YO9mobdZnkAkRNmmu3VgPoxZli2VYCCsAP0jFHCmkisUcmWWuE6tqVp5XunTh8568FcIJrvboL+sCOPZO/1Zndtyy1MIEprMdjK1AJxx9apsUc1YdKFeF0T0nnuWDGrCKtKrKvFgesT2xwzJqIJ/GmOTvQ35iHqXZ1dGPMRmjaWAiXKavDNZ2YC+j7+kosb/kQxURg==", "ARC-Authentication-Results": "i=1; lists.ozlabs.org;\n dmarc=pass (p=none dis=none) header.from=gmail.com; dkim=pass (2048-bit key;\n unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256\n header.s=20251104 header.b=RdikmkhT; dkim-atps=neutral;\n spf=pass (client-ip=2607:f8b0:4864:20::431; helo=mail-pf1-x431.google.com;\n envelope-from=mkchauras@gmail.com;\n receiver=lists.ozlabs.org) smtp.mailfrom=gmail.com", "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=gmail.com; s=20251104; 
t=1777292954; x=1777897754;\n darn=lists.ozlabs.org;\n h=content-transfer-encoding:mime-version:references:in-reply-to\n :message-id:date:subject:cc:to:from:from:to:cc:subject:date\n :message-id:reply-to;\n bh=XARyapmtrilvR5w12XJiFpVGShufbq9BKuLcUgswpko=;\n b=RdikmkhTi5xWZWXTxnXHxoXX++R8v64nOMqOzg9caV2OJXjmOlznAGL6l0JYlBXxQg\n wTqMdBDWQCPqN2vTJBYI9INfw4ccipvwhIhRzjPVIhcJ6HZvC2XNGGAVbV3zW+UVPIoC\n E0cANJdEhYY1wKsYBjLSa1abEAsuVn9qwfsjh5ayOYlhC/Yybi5HgY0MhfBEibRy3HYH\n KlAKbnNa3669sEKS1I0A8oUNPyER6DGc5EwGjFQqoAXZdA34N5rQcZFFjKSqRwP8VzWj\n pZ1pNuiq7TipobZDVs02h2M4PoceWwa/ivShq84TW9n7jwmoEGBEoyRN+HClOE6yRSjy\n IL5w==", "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=1e100.net; s=20251104; t=1777292954; x=1777897754;\n h=content-transfer-encoding:mime-version:references:in-reply-to\n :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from\n :to:cc:subject:date:message-id:reply-to;\n bh=XARyapmtrilvR5w12XJiFpVGShufbq9BKuLcUgswpko=;\n b=C10Q23AKcZsxottXNT0UkkhkxCMH+2S27f2tFow2ZachwmMedOcIsLAzEhl4CjTrbI\n RiSfvyTThXK2angNlXMXVwqsF+eBqPrOJNBcB/CunCeMuei4seiBgPBP+bEvGPxqUqPk\n R6YiamKrc0vCz/3wd3BIPGvDJUDNOQahNprfLF/7RTKEjjDV4+trkvcE8BmhA5hAxAQR\n yW8/qqpCBmcktsQwksMg74q59TDfsveLp+yRz2zlhV4UCw7Wqhh2RPxuUlEEFS6iT7Ic\n FDxBQIVb+BpkrjoCGQqelzEr6547PDIzhnHkguDaWL9Tn0RUTuvBXQeShWdGsYFQlIA8\n +qGg==", "X-Forwarded-Encrypted": "i=1;\n AFNElJ+SD7a7n8+fD1MrsUQZEQMiTygBCT2M3lWz/el4poSuDUSmNRXAOuGFtsIQzJc/adHAbTONjoQfht/Apbo=@lists.ozlabs.org", "X-Gm-Message-State": "AOJu0Yx9eIJLRLnB25tjsincUygiQwmVG89bGdGOcJTtu3dE59B5OoXX\n\t9Sy78ykfS9C6z9kCaL9FJX7Trknco7QCaNoGFxaxe6zwnSmI1T5fsrGf", "X-Gm-Gg": 
"AeBDievyBS0s/fyHWwCpPNFX5a9YQbICwwXoL2k7/UdHIsKdiOvkpxwolPSijsxgsvS\n\twzeKstpL7NYgbzl6Ytn4kfk7EOWs2VOwjqNbwSS4QKG5nbI1RsvSadTMZ3kh7sU1MVDd/6wDsaQ\n\thcM9UWFcJj7B6LnhaS0GQQP645AK8Neejx2UTSJvlLr97GYcvB/R1/3JzMke5GopfQdkOVaMoq9\n\t6Dn+mCS5ldd5P2VjPDqRhTWtNuO1u5ZFiIH5YE07ukG5F/swyMzPyAcyKx1p/jrokmlSfwuJxtu\n\tYoioP0EAOV4HO+YMwG8Qchc/+vafdx1Pm0/z/MNYq+QqAx4KKOi38BmYW1Lc1bsQQTJmuOqMZf8\n\tN3l7I32WPLRPkNnLAB7xRKTD5RS7huZU41uyd2INRc96JAeZ1pfTLNia4D6DmPH81JmwiEAKSoh\n\tdq55dNg+iVv0yG+KfXqQs1rjPEq6QY9IYtq+afRhV5AOpvVGif8P5wnOPfdWN0jpSwe6y+a7aV3\n\tHU/Yg==", "X-Received": "by 2002:a05:6a00:1307:b0:82a:7046:86a2 with SMTP id\n d2e1a72fcca58-82f8c7d1011mr43021858b3a.10.1777292953443;\n Mon, 27 Apr 2026 05:29:13 -0700 (PDT)", "From": "\"Mukesh Kumar Chaurasiya (IBM)\" <mkchauras@gmail.com>", "To": "maddy@linux.ibm.com,\n\tmpe@ellerman.id.au,\n\tnpiggin@gmail.com,\n\tchleroy@kernel.org,\n\tryabinin.a.a@gmail.com,\n\tglider@google.com,\n\tandreyknvl@gmail.com,\n\tdvyukov@google.com,\n\tvincenzo.frascino@arm.com,\n\toleg@redhat.com,\n\tkees@kernel.org,\n\tluto@amacapital.net,\n\twad@chromium.org,\n\tmchauras@linux.ibm.com,\n\tsshegde@linux.ibm.com,\n\tthuth@redhat.com,\n\truanjinjie@huawei.com,\n\takpm@linux-foundation.org,\n\tmacro@orcam.me.uk,\n\tldv@strace.io,\n\tcharlie@rivosinc.com,\n\tdeller@gmx.de,\n\tkevin.brodsky@arm.com,\n\tritesh.list@gmail.com,\n\tyeoreum.yun@arm.com,\n\tagordeev@linux.ibm.com,\n\tsegher@kernel.crashing.org,\n\tmark.rutland@arm.com,\n\tryan.roberts@arm.com,\n\tpmladek@suse.com,\n\tfeng.tang@linux.alibaba.com,\n\tpeterz@infradead.org,\n\tkan.liang@linux.intel.com,\n\tlinuxppc-dev@lists.ozlabs.org,\n\tlinux-kernel@vger.kernel.org,\n\tkasan-dev@googlegroups.com", "Cc": "Samir M <samir@linux.ibm.com>,\n\tDavid Gow <davidgow@google.com>,\n\tVenkat Rao Bagalkote <venkat88@linux.ibm.com>", "Subject": "[PATCH v5 6/8] powerpc: Prepare for IRQ entry exit", "Date": "Mon, 27 Apr 2026 17:57:40 +0530", "Message-ID": "<20260427122742.210074-7-mkchauras@gmail.com>", 
"X-Mailer": "git-send-email 2.53.0", "In-Reply-To": "<20260427122742.210074-1-mkchauras@gmail.com>", "References": "<20260427122742.210074-1-mkchauras@gmail.com>", "X-Mailing-List": "linuxppc-dev@lists.ozlabs.org", "List-Id": "<linuxppc-dev.lists.ozlabs.org>", "List-Help": "<mailto:linuxppc-dev+help@lists.ozlabs.org>", "List-Owner": "<mailto:linuxppc-dev+owner@lists.ozlabs.org>", "List-Post": "<mailto:linuxppc-dev@lists.ozlabs.org>", "List-Archive": "<https://lore.kernel.org/linuxppc-dev/>,\n <https://lists.ozlabs.org/pipermail/linuxppc-dev/>", "List-Subscribe": "<mailto:linuxppc-dev+subscribe@lists.ozlabs.org>,\n <mailto:linuxppc-dev+subscribe-digest@lists.ozlabs.org>,\n <mailto:linuxppc-dev+subscribe-nomail@lists.ozlabs.org>", "List-Unsubscribe": "<mailto:linuxppc-dev+unsubscribe@lists.ozlabs.org>", "Precedence": "list", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "X-Spam-Status": "No, score=-0.2 required=3.0 tests=DKIM_SIGNED,DKIM_VALID,\n\tDKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM,RCVD_IN_DNSWL_NONE,\n\tSPF_HELO_NONE,SPF_PASS autolearn=disabled version=4.0.1 OzLabs 8", "X-Spam-Checker-Version": "SpamAssassin 4.0.1 (2024-03-25) on lists.ozlabs.org" }, "content": "From: Mukesh Kumar Chaurasiya <mchauras@linux.ibm.com>\n\nMove interrupt entry and exit helper routines from interrupt.h into the\nPowerPC-specific entry-common.h header as a preparatory step for enabling\nthe generic entry/exit framework.\n\nThis consolidation places all PowerPC interrupt entry/exit handling in a\nsingle common header, aligning with the generic entry infrastructure.\nThe helpers provide architecture-specific handling for interrupt and NMI\nentry/exit sequences, including:\n\n - arch_interrupt_enter/exit_prepare()\n - arch_interrupt_async_enter/exit_prepare()\n - arch_interrupt_nmi_enter/exit_prepare()\n - Supporting helpers such as nap_adjust_return(), check_return_regs_valid(),\n debug register maintenance, and soft mask handling.\n\nThe functions are copied verbatim 
from interrupt.h.Subsequent patches will\nintegrate these routines into the generic entry/exit flow.\n\nNo functional change intended.\n\nSigned-off-by: Mukesh Kumar Chaurasiya <mchauras@linux.ibm.com>\nTested-by: Samir M <samir@linux.ibm.com>\nTested-by: David Gow <davidgow@google.com>\nTested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>\nReviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>\n---\n arch/powerpc/include/asm/entry-common.h | 358 ++++++++++++++++++++++++\n 1 file changed, 358 insertions(+)", "diff": "diff --git a/arch/powerpc/include/asm/entry-common.h b/arch/powerpc/include/asm/entry-common.h\nindex ff0625e04778..de5601282755 100644\n--- a/arch/powerpc/include/asm/entry-common.h\n+++ b/arch/powerpc/include/asm/entry-common.h\n@@ -5,10 +5,75 @@\n \n #include <asm/cputime.h>\n #include <asm/interrupt.h>\n+#include <asm/runlatch.h>\n #include <asm/stacktrace.h>\n #include <asm/switch_to.h>\n #include <asm/tm.h>\n \n+#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG\n+/*\n+ * WARN/BUG is handled with a program interrupt so minimise checks here to\n+ * avoid recursion and maximise the chance of getting the first oops handled.\n+ */\n+#define INT_SOFT_MASK_BUG_ON(regs, cond)\t\t\t\t\\\n+do {\t\t\t\t\t\t\t\t\t\\\n+\tif ((user_mode(regs) || (TRAP(regs) != INTERRUPT_PROGRAM)))\t\\\n+\t\tBUG_ON(cond);\t\t\t\t\t\t\\\n+} while (0)\n+#else\n+#define INT_SOFT_MASK_BUG_ON(regs, cond)\n+#endif\n+\n+#ifdef CONFIG_PPC_BOOK3S_64\n+extern char __end_soft_masked[];\n+bool search_kernel_soft_mask_table(unsigned long addr);\n+unsigned long search_kernel_restart_table(unsigned long addr);\n+\n+DECLARE_STATIC_KEY_FALSE(interrupt_exit_not_reentrant);\n+\n+static inline bool is_implicit_soft_masked(struct pt_regs *regs)\n+{\n+\tif (user_mode(regs))\n+\t\treturn false;\n+\n+\tif (regs->nip >= (unsigned long)__end_soft_masked)\n+\t\treturn false;\n+\n+\treturn search_kernel_soft_mask_table(regs->nip);\n+}\n+\n+static inline void srr_regs_clobbered(void)\n+{\n+\tlocal_paca->srr_valid = 
0;\n+\tlocal_paca->hsrr_valid = 0;\n+}\n+#else\n+static inline unsigned long search_kernel_restart_table(unsigned long addr)\n+{\n+\treturn 0;\n+}\n+\n+static inline bool is_implicit_soft_masked(struct pt_regs *regs)\n+{\n+\treturn false;\n+}\n+\n+static inline void srr_regs_clobbered(void)\n+{\n+}\n+#endif\n+\n+static inline void nap_adjust_return(struct pt_regs *regs)\n+{\n+#ifdef CONFIG_PPC_970_NAP\n+\tif (unlikely(test_thread_local_flags(_TLF_NAPPING))) {\n+\t\t/* Can avoid a test-and-clear because NMIs do not call this */\n+\t\tclear_thread_local_flags(_TLF_NAPPING);\n+\t\tregs_set_return_ip(regs, (unsigned long)power4_idle_nap_return);\n+\t}\n+#endif\n+}\n+\n static __always_inline void booke_load_dbcr0(void)\n {\n #ifdef CONFIG_PPC_ADV_DEBUG_REGS\n@@ -31,6 +96,299 @@ static __always_inline void booke_load_dbcr0(void)\n #endif\n }\n \n+static inline void booke_restore_dbcr0(void)\n+{\n+#ifdef CONFIG_PPC_ADV_DEBUG_REGS\n+\tunsigned long dbcr0 = current->thread.debug.dbcr0;\n+\n+\tif (IS_ENABLED(CONFIG_PPC32) && unlikely(dbcr0 & DBCR0_IDM)) {\n+\t\tmtspr(SPRN_DBSR, -1);\n+\t\tmtspr(SPRN_DBCR0, global_dbcr0[smp_processor_id()]);\n+\t}\n+#endif\n+}\n+\n+static inline void check_return_regs_valid(struct pt_regs *regs)\n+{\n+#ifdef CONFIG_PPC_BOOK3S_64\n+\tunsigned long trap, srr0, srr1;\n+\tstatic bool warned;\n+\tu8 *validp;\n+\tchar *h;\n+\n+\tif (trap_is_scv(regs))\n+\t\treturn;\n+\n+\ttrap = TRAP(regs);\n+\t// EE in HV mode sets HSRRs like 0xea0\n+\tif (cpu_has_feature(CPU_FTR_HVMODE) && trap == INTERRUPT_EXTERNAL)\n+\t\ttrap = 0xea0;\n+\n+\tswitch (trap) {\n+\tcase 0x980:\n+\tcase INTERRUPT_H_DATA_STORAGE:\n+\tcase 0xe20:\n+\tcase 0xe40:\n+\tcase INTERRUPT_HMI:\n+\tcase 0xe80:\n+\tcase 0xea0:\n+\tcase INTERRUPT_H_FAC_UNAVAIL:\n+\tcase 0x1200:\n+\tcase 0x1500:\n+\tcase 0x1600:\n+\tcase 0x1800:\n+\t\tvalidp = &local_paca->hsrr_valid;\n+\t\tif (!READ_ONCE(*validp))\n+\t\t\treturn;\n+\n+\t\tsrr0 = mfspr(SPRN_HSRR0);\n+\t\tsrr1 = mfspr(SPRN_HSRR1);\n+\t\th = 
\"H\";\n+\n+\t\tbreak;\n+\tdefault:\n+\t\tvalidp = &local_paca->srr_valid;\n+\t\tif (!READ_ONCE(*validp))\n+\t\t\treturn;\n+\n+\t\tsrr0 = mfspr(SPRN_SRR0);\n+\t\tsrr1 = mfspr(SPRN_SRR1);\n+\t\th = \"\";\n+\t\tbreak;\n+\t}\n+\n+\tif (srr0 == regs->nip && srr1 == regs->msr)\n+\t\treturn;\n+\n+\t/*\n+\t * A NMI / soft-NMI interrupt may have come in after we found\n+\t * srr_valid and before the SRRs are loaded. The interrupt then\n+\t * comes in and clobbers SRRs and clears srr_valid. Then we load\n+\t * the SRRs here and test them above and find they don't match.\n+\t *\n+\t * Test validity again after that, to catch such false positives.\n+\t *\n+\t * This test in general will have some window for false negatives\n+\t * and may not catch and fix all such cases if an NMI comes in\n+\t * later and clobbers SRRs without clearing srr_valid, but hopefully\n+\t * such things will get caught most of the time, statistically\n+\t * enough to be able to get a warning out.\n+\t */\n+\tif (!READ_ONCE(*validp))\n+\t\treturn;\n+\n+\tif (!data_race(warned)) {\n+\t\tdata_race(warned = true);\n+\t\tpr_warn(\"%sSRR0 was: %lx should be: %lx\\n\", h, srr0, regs->nip);\n+\t\tpr_warn(\"%sSRR1 was: %lx should be: %lx\\n\", h, srr1, regs->msr);\n+\t\tshow_regs(regs);\n+\t}\n+\n+\tWRITE_ONCE(*validp, 0); /* fixup */\n+#endif\n+}\n+\n+static inline void arch_interrupt_enter_prepare(struct pt_regs *regs)\n+{\n+#ifdef CONFIG_PPC64\n+\tirq_soft_mask_set(IRQS_ALL_DISABLED);\n+\n+\t/*\n+\t * If the interrupt was taken with HARD_DIS clear, then enable MSR[EE].\n+\t * Asynchronous interrupts get here with HARD_DIS set (see below), so\n+\t * this enables MSR[EE] for synchronous interrupts. IRQs remain\n+\t * soft-masked. 
The interrupt handler may later call\n+\t * interrupt_cond_local_irq_enable() to achieve a regular process\n+\t * context.\n+\t */\n+\tif (!(local_paca->irq_happened & PACA_IRQ_HARD_DIS)) {\n+\t\tINT_SOFT_MASK_BUG_ON(regs, !(regs->msr & MSR_EE));\n+\t\t__hard_irq_enable();\n+\t} else {\n+\t\t__hard_RI_enable();\n+\t}\n+\t/* Enable MSR[RI] early, to support kernel SLB and hash faults */\n+#endif\n+\n+\tif (!regs_irqs_disabled(regs))\n+\t\ttrace_hardirqs_off();\n+\n+\tif (user_mode(regs)) {\n+\t\tkuap_lock();\n+\t\taccount_cpu_user_entry();\n+\t\taccount_stolen_time();\n+\t} else {\n+\t\tkuap_save_and_lock(regs);\n+\t\t/*\n+\t\t * CT_WARN_ON comes here via program_check_exception,\n+\t\t * so avoid recursion.\n+\t\t */\n+\t\tif (TRAP(regs) != INTERRUPT_PROGRAM)\n+\t\t\tCT_WARN_ON(ct_state() != CT_STATE_KERNEL &&\n+\t\t\t\t ct_state() != CT_STATE_IDLE);\n+\t\tINT_SOFT_MASK_BUG_ON(regs, is_implicit_soft_masked(regs));\n+\t\tINT_SOFT_MASK_BUG_ON(regs, regs_irqs_disabled(regs) &&\n+\t\t\t\t search_kernel_restart_table(regs->nip));\n+\t}\n+\tINT_SOFT_MASK_BUG_ON(regs, !regs_irqs_disabled(regs) &&\n+\t\t\t !(regs->msr & MSR_EE));\n+\n+\tbooke_restore_dbcr0();\n+}\n+\n+/*\n+ * Care should be taken to note that arch_interrupt_exit_prepare and\n+ * arch_interrupt_async_exit_prepare do not necessarily return immediately to\n+ * regs context (e.g., if regs is usermode, we don't necessarily return to\n+ * user mode). 
Other interrupts might be taken between here and return,\n+ * context switch / preemption may occur in the exit path after this, or a\n+ * signal may be delivered, etc.\n+ *\n+ * The real interrupt exit code is platform specific, e.g.,\n+ * interrupt_exit_user_prepare / interrupt_exit_kernel_prepare for 64s.\n+ *\n+ * However arch_interrupt_nmi_exit_prepare does return directly to regs, because\n+ * NMIs do not do \"exit work\" or replay soft-masked interrupts.\n+ */\n+static inline void arch_interrupt_exit_prepare(struct pt_regs *regs)\n+{\n+\tif (user_mode(regs)) {\n+\t\tBUG_ON(regs_is_unrecoverable(regs));\n+\t\tBUG_ON(regs_irqs_disabled(regs));\n+\t\t/*\n+\t\t * We don't need to restore AMR on the way back to userspace for KUAP.\n+\t\t * AMR can only have been unlocked if we interrupted the kernel.\n+\t\t */\n+\t\tkuap_assert_locked();\n+\n+\t\tlocal_irq_disable();\n+\t}\n+}\n+\n+static inline void arch_interrupt_async_enter_prepare(struct pt_regs *regs)\n+{\n+#ifdef CONFIG_PPC64\n+\t/* Ensure arch_interrupt_enter_prepare does not enable MSR[EE] */\n+\tlocal_paca->irq_happened |= PACA_IRQ_HARD_DIS;\n+#endif\n+\tarch_interrupt_enter_prepare(regs);\n+#ifdef CONFIG_PPC_BOOK3S_64\n+\t/*\n+\t * RI=1 is set by arch_interrupt_enter_prepare, so this thread flags access\n+\t * has to come afterward (it can cause SLB faults).\n+\t */\n+\tif (cpu_has_feature(CPU_FTR_CTRL) &&\n+\t !test_thread_local_flags(_TLF_RUNLATCH))\n+\t\t__ppc64_runlatch_on();\n+#endif\n+}\n+\n+static inline void arch_interrupt_async_exit_prepare(struct pt_regs *regs)\n+{\n+\t/*\n+\t * Adjust at exit so the main handler sees the true NIA. 
This must\n+\t * come before irq_exit() because irq_exit can enable interrupts, and\n+\t * if another interrupt is taken before nap_adjust_return has run\n+\t * here, then that interrupt would return directly to idle nap return.\n+\t */\n+\tnap_adjust_return(regs);\n+\n+\tarch_interrupt_exit_prepare(regs);\n+}\n+\n+struct interrupt_nmi_state {\n+#ifdef CONFIG_PPC64\n+\tu8 irq_soft_mask;\n+\tu8 irq_happened;\n+\tu8 ftrace_enabled;\n+\tu64 softe;\n+#endif\n+};\n+\n+static inline bool nmi_disables_ftrace(struct pt_regs *regs)\n+{\n+\t/* Allow DEC and PMI to be traced when they are soft-NMI */\n+\tif (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) {\n+\t\tif (TRAP(regs) == INTERRUPT_DECREMENTER)\n+\t\t\treturn false;\n+\t\tif (TRAP(regs) == INTERRUPT_PERFMON)\n+\t\t\treturn false;\n+\t}\n+\tif (IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {\n+\t\tif (TRAP(regs) == INTERRUPT_PERFMON)\n+\t\t\treturn false;\n+\t}\n+\n+\treturn true;\n+}\n+\n+static inline void arch_interrupt_nmi_enter_prepare(struct pt_regs *regs,\n+\t\t\t\t\t\t struct interrupt_nmi_state *state)\n+{\n+#ifdef CONFIG_PPC64\n+\tstate->irq_soft_mask = local_paca->irq_soft_mask;\n+\tstate->irq_happened = local_paca->irq_happened;\n+\tstate->softe = regs->softe;\n+\n+\t/*\n+\t * Set IRQS_ALL_DISABLED unconditionally so irqs_disabled() does\n+\t * the right thing, and set IRQ_HARD_DIS. We do not want to reconcile\n+\t * because that goes through irq tracing which we don't want in NMI.\n+\t */\n+\tlocal_paca->irq_soft_mask = IRQS_ALL_DISABLED;\n+\tlocal_paca->irq_happened |= PACA_IRQ_HARD_DIS;\n+\n+\tif (!(regs->msr & MSR_EE) || is_implicit_soft_masked(regs)) {\n+\t\t/*\n+\t\t * Adjust regs->softe to be soft-masked if it had not been\n+\t\t * reconcied (e.g., interrupt entry with MSR[EE]=0 but softe\n+\t\t * not yet set disabled), or if it was in an implicit soft\n+\t\t * masked state. 
This makes regs_irqs_disabled(regs)\n+\t\t * behave as expected.\n+\t\t */\n+\t\tregs->softe = IRQS_ALL_DISABLED;\n+\t}\n+\n+\t__hard_RI_enable();\n+\n+\t/* Don't do any per-CPU operations until interrupt state is fixed */\n+\n+\tif (nmi_disables_ftrace(regs)) {\n+\t\tstate->ftrace_enabled = this_cpu_get_ftrace_enabled();\n+\t\tthis_cpu_set_ftrace_enabled(0);\n+\t}\n+#endif\n+}\n+\n+static inline void arch_interrupt_nmi_exit_prepare(struct pt_regs *regs,\n+\t\t\t\t\t\t struct interrupt_nmi_state *state)\n+{\n+\t/*\n+\t * nmi does not call nap_adjust_return because nmi should not create\n+\t * new work to do (must use irq_work for that).\n+\t */\n+\n+#ifdef CONFIG_PPC64\n+#ifdef CONFIG_PPC_BOOK3S\n+\tif (regs_irqs_disabled(regs)) {\n+\t\tunsigned long rst = search_kernel_restart_table(regs->nip);\n+\n+\t\tif (rst)\n+\t\t\tregs_set_return_ip(regs, rst);\n+\t}\n+#endif\n+\n+\tif (nmi_disables_ftrace(regs))\n+\t\tthis_cpu_set_ftrace_enabled(state->ftrace_enabled);\n+\n+\t/* Check we didn't change the pending interrupt mask. */\n+\tWARN_ON_ONCE((state->irq_happened | PACA_IRQ_HARD_DIS) != local_paca->irq_happened);\n+\tregs->softe = state->softe;\n+\tlocal_paca->irq_happened = state->irq_happened;\n+\tlocal_paca->irq_soft_mask = state->irq_soft_mask;\n+#endif\n+}\n+\n static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs)\n {\n \tkuap_lock();\n", "prefixes": [ "v5", "6/8" ] }
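The `patch:` action listed at the top accepts updates to writable fields such as `state`. As a hedged sketch, the snippet below builds (but does not send) such a request: the endpoint URL and `Authorization: Token` scheme follow the Patchwork REST API, while the token value is a placeholder and the `accepted` state name is an assumption (valid states are configured per Patchwork instance).

```python
import json
import urllib.request

# Build a PATCH request for the patch shown on this page, without sending it.
# "Token <your-api-token>" is a placeholder; "accepted" assumes such a state
# exists on this Patchwork instance.
url = "http://patchwork.ozlabs.org/api/1.1/patches/2228805/"
body = json.dumps({"state": "accepted"}).encode()
req = urllib.request.Request(
    url,
    data=body,
    method="PATCH",  # partial update; PUT would replace all writable fields
    headers={
        "Content-Type": "application/json",
        "Authorization": "Token <your-api-token>",
    },
)
print(req.get_method(), req.full_url)
```

Passing the request to `urllib.request.urlopen(req)` would perform the update; without a valid token the server responds with an authentication error rather than changing the patch.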