
[v2] powerpc: handle simultaneous interrupts at once

Message ID: 20170316085545.EEE4A68481@localhost.localdomain (mailing list archive)
State: Accepted
Commit: 45cb08f4791ce6a15c54598b4cb73db4b4b8294f

Commit Message

Christophe Leroy March 16, 2017, 8:55 a.m. UTC
Simultaneous interrupts occur frequently, for instance with a
double Ethernet attachment. With the current implementation, we
suffer the cost of a kernel entry/exit for each interrupt.

This patch introduces a loop in __do_irq() to handle all interrupts
at once before returning.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
Changed from v1 (RFC): simplified following a remark from benh

 arch/powerpc/kernel/irq.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Comments

Michael Ellerman June 5, 2017, 10:21 a.m. UTC | #1
On Thu, 2017-03-16 at 08:55:45 UTC, Christophe Leroy wrote:
> Simultaneous interrupts occur frequently, for instance with a
> double Ethernet attachment. With the current implementation, we
> suffer the cost of a kernel entry/exit for each interrupt.
> 
> This patch introduces a loop in __do_irq() to handle all interrupts
> at once before returning.
> 
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/45cb08f4791ce6a15c54598b4cb73d

cheers
Benjamin Herrenschmidt June 5, 2017, 11:17 a.m. UTC | #2
On Mon, 2017-06-05 at 20:21 +1000, Michael Ellerman wrote:
> On Thu, 2017-03-16 at 08:55:45 UTC, Christophe Leroy wrote:
> > Simultaneous interrupts occur frequently, for instance with a
> > double Ethernet attachment. With the current implementation, we
> > suffer the cost of a kernel entry/exit for each interrupt.
> > 
> > This patch introduces a loop in __do_irq() to handle all interrupts
> > at once before returning.
> > 
> > Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> 
> Applied to powerpc next, thanks.
> 
> https://git.kernel.org/powerpc/c/45cb08f4791ce6a15c54598b4cb73d

Hrm, I hadn't noticed that patch...

We used to do that and then removed the code for it. There's a cost,
sometimes noticeable, to an extra call to ppc_md.get_irq.

Why not have your get_irq (or eoi) implementation set a per-cpu
flag requesting a new spin of the loop?

We could move the xive force replay stuff to use the same thing.

Ben.
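
A minimal sketch of the per-cpu flag approach Ben suggests (the flag
name irq_more_pending and the idea of setting it from the platform's
get_irq()/EOI path are illustrative assumptions, not code from the tree):

/* Sketch: the platform's get_irq()/EOI implementation would set this
 * per-cpu flag whenever it knows another interrupt is already pending. */
static DEFINE_PER_CPU(bool, irq_more_pending);

/* In __do_irq(), replay only when the platform asked for it: */
do {
	__this_cpu_write(irq_more_pending, false);
	irq = ppc_md.get_irq();
	if (unlikely(!irq))
		__this_cpu_inc(irq_stat.spurious_irqs);
	else
		generic_handle_irq(irq);
} while (__this_cpu_read(irq_more_pending));

Unlike the applied patch, the common single-interrupt case then pays
for only one ppc_md.get_irq() call, rather than a second call that
returns nothing.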

Patch

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index a018f5cae899..ba0cb6c2ee7d 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -522,7 +522,11 @@ void __do_irq(struct pt_regs *regs)
 	if (unlikely(!irq))
 		__this_cpu_inc(irq_stat.spurious_irqs);
 	else
-		generic_handle_irq(irq);
+		do {
+			generic_handle_irq(irq);
+
+			irq = ppc_md.get_irq();
+		} while (irq);
 
 	trace_irq_exit(regs);
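
For readability, the resulting dispatch in __do_irq() once this patch
is applied (reconstructed from the hunk above):

	if (unlikely(!irq))
		__this_cpu_inc(irq_stat.spurious_irqs);
	else
		do {
			generic_handle_irq(irq);

			irq = ppc_md.get_irq();
		} while (irq);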