Revert "powerpc/64: irq_work avoid interrupt when called with hardware irqs enabled"

Message ID 20200402120401.1115883-1-npiggin@gmail.com (mailing list archive)
State Accepted
Commit abc3fce76adbdfa8f87272c784b388cd20b46049
Series Revert "powerpc/64: irq_work avoid interrupt when called with hardware irqs enabled"

Checks

Context Check Description
snowpatch_ozlabs/apply_patch success Successfully applied on branch powerpc/merge (d0c12846a3a24cd6d68b608c866712bc7e471634)
snowpatch_ozlabs/build-ppc64le success Build succeeded
snowpatch_ozlabs/build-ppc64be success Build succeeded
snowpatch_ozlabs/build-ppc64e success Build succeeded
snowpatch_ozlabs/build-pmac32 fail build failed!
snowpatch_ozlabs/checkpatch success total: 0 errors, 0 warnings, 0 checks, 64 lines checked
snowpatch_ozlabs/needsstable warning Please consider tagging this patch for stable!

Commit Message

Nicholas Piggin April 2, 2020, 12:04 p.m. UTC
This reverts commit ebb37cf3ffd39fdb6ec5b07111f8bb2f11d92c5f.

That commit does not play well with soft-masked irq state manipulations
in idle, interrupt replay, and possibly other paths, because tracing
code sometimes uses irq_work_queue() (e.g., in trace_hardirqs_on()).
That can cause PACA_IRQ_DEC to become set when it is not expected, and
then be ignored, cleared, or trigger warnings.
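
To illustrate the kind of trouble this causes, here is a rough,
purely hypothetical userspace model (not kernel code; the
PACA_IRQ_DEC value and function names are made up for the example)
of how a bit set behind the back of a read-modify-write of
irq_happened can end up cleared:

#include <stdio.h>

#define PACA_IRQ_DEC 0x08        /* placeholder value, illustration only */

static unsigned char irq_happened;

/* Stands in for the reverted soft-disabled path of arch_irq_work_raise(),
 * which set PACA_IRQ_DEC directly instead of firing the decrementer. */
static void raise_irq_work_soft_disabled(void)
{
	irq_happened |= PACA_IRQ_DEC;
}

int main(void)
{
	/* Replay/idle-style code takes a snapshot of the pending bits ... */
	unsigned char pending = irq_happened;

	/* ... tracing runs in the middle (think trace_hardirqs_on()) and
	 * queues irq_work, setting PACA_IRQ_DEC behind its back ... */
	raise_irq_work_soft_disabled();

	/* ... and the stale snapshot is written back, clearing the bit. */
	irq_happened = pending;

	printf("PACA_IRQ_DEC %s\n",
	       (irq_happened & PACA_IRQ_DEC) ? "still pending" : "was lost");
	return 0;
}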

The net result seems to be missing an irq_work until the next timer
interrupt in the worst case, which is usually not going to be noticed;
however, that could be a long time if the tick is disabled, which is
against the spirit of irq_work and might cause real problems.
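
For context, a minimal sketch of typical irq_work usage (the generic
kernel API, not code from this patch; my_work, my_work_fn, my_init and
my_hot_path are made-up names) showing why a raise that silently waits
for the next timer tick defeats the purpose:

#include <linux/irq_work.h>

static void my_work_fn(struct irq_work *work)
{
	/* Runs in hard interrupt context shortly after being queued,
	 * courtesy of arch_irq_work_raise(). */
}

static struct irq_work my_work;

static void my_init(void)
{
	init_irq_work(&my_work, my_work_fn);
}

static void my_hot_path(void)
{
	/* Safe to call from NMI or irq-disabled context; the whole point
	 * is that the work runs promptly, not at the next timer tick. */
	irq_work_queue(&my_work);
}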

The idea is still solid, but it would need more work. It's not really
clear whether it would be worth the added complexity, so revert this for
now (not a straight revert: the removed code is replaced with a comment
explaining why we might see interrupts happening, which also gives git
blame something to find).

Fixes: ebb37cf3ffd3 ("powerpc/64: irq_work avoid interrupt when called with hardware irqs enabled")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
This started tripping some warnings while testing the latest interrupt
code, which juggled a few things around, but it looks like the problem
may have existed before that too.

 arch/powerpc/kernel/time.c | 44 +++++++++++---------------------------
 1 file changed, 13 insertions(+), 31 deletions(-)

Comments

Michael Ellerman April 6, 2020, 1:05 p.m. UTC | #1
On Thu, 2020-04-02 at 12:04:01 UTC, Nicholas Piggin wrote:
> This reverts commit ebb37cf3ffd39fdb6ec5b07111f8bb2f11d92c5f.
> 
> That commit does not play well with soft-masked irq state manipulations
> in idle, interrupt replay, and possibly other paths, because tracing
> code sometimes uses irq_work_queue() (e.g., in trace_hardirqs_on()).
> That can cause PACA_IRQ_DEC to become set when it is not expected, and
> then be ignored, cleared, or trigger warnings.
> 
> The net result seems to be missing an irq_work until the next timer
> interrupt in the worst case, which is usually not going to be noticed;
> however, that could be a long time if the tick is disabled, which is
> against the spirit of irq_work and might cause real problems.
> 
> The idea is still solid, but it would need more work. It's not really
> clear whether it would be worth the added complexity, so revert this for
> now (not a straight revert: the removed code is replaced with a comment
> explaining why we might see interrupts happening, which also gives git
> blame something to find).
> 
> Fixes: ebb37cf3ffd3 ("powerpc/64: irq_work avoid interrupt when called with hardware irqs enabled")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/abc3fce76adbdfa8f87272c784b388cd20b46049

cheers

Patch

diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 1168e8b37e30..716f8d0960a7 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -522,35 +522,6 @@  static inline void clear_irq_work_pending(void)
 		"i" (offsetof(struct paca_struct, irq_work_pending)));
 }
 
-void arch_irq_work_raise(void)
-{
-	preempt_disable();
-	set_irq_work_pending_flag();
-	/*
-	 * Non-nmi code running with interrupts disabled will replay
-	 * irq_happened before it re-enables interrupts, so setthe
-	 * decrementer there instead of causing a hardware exception
-	 * which would immediately hit the masked interrupt handler
-	 * and have the net effect of setting the decrementer in
-	 * irq_happened.
-	 *
-	 * NMI interrupts can not check this when they return, so the
-	 * decrementer hardware exception is raised, which will fire
-	 * when interrupts are next enabled.
-	 *
-	 * BookE does not support this yet, it must audit all NMI
-	 * interrupt handlers to ensure they call nmi_enter() so this
-	 * check would be correct.
-	 */
-	if (IS_ENABLED(CONFIG_BOOKE) || !irqs_disabled() || in_nmi()) {
-		set_dec(1);
-	} else {
-		hard_irq_disable();
-		local_paca->irq_happened |= PACA_IRQ_DEC;
-	}
-	preempt_enable();
-}
-
 #else /* 32-bit */
 
 DEFINE_PER_CPU(u8, irq_work_pending);
@@ -559,16 +530,27 @@  DEFINE_PER_CPU(u8, irq_work_pending);
 #define test_irq_work_pending()		__this_cpu_read(irq_work_pending)
 #define clear_irq_work_pending()	__this_cpu_write(irq_work_pending, 0)
 
+#endif /* 32 vs 64 bit */
+
 void arch_irq_work_raise(void)
 {
+	/*
+	 * 64-bit code that uses irq soft-mask can just cause an immediate
+	 * interrupt here that gets soft masked, if this is called under
+	 * local_irq_disable(). It might be possible to prevent that happening
+	 * by noticing interrupts are disabled and setting decrementer pending
+	 * to be replayed when irqs are enabled. The problem there is that
+	 * tracing can call irq_work_raise, including in code that does low
+	 * level manipulations of irq soft-mask state (e.g., trace_hardirqs_on)
+	 * which could get tangled up if we're messing with the same state
+	 * here.
+	 */
 	preempt_disable();
 	set_irq_work_pending_flag();
 	set_dec(1);
 	preempt_enable();
 }
 
-#endif /* 32 vs 64 bit */
-
 #else  /* CONFIG_IRQ_WORK */
 
 #define test_irq_work_pending()	0