| Message ID | 20150330092410.24979.59887.stgit@preeti.in.ibm.com (mailing list archive) |
|---|---|
| State | Not Applicable |
On Mon, 30 Mar 2015, Preeti U Murthy wrote:

> It was found when doing a hotplug stress test on POWER, that the machine either hit softlockups or rcu_sched stall warnings. The issue was traced to commit 7cba160ad789a powernv/cpuidle: Redesign idle states management, which exposed the cpu down race with hrtimer based broadcast mode(Commit 5d1638acb9f6(tick: Introduce hrtimer based broadcast). This is explained below.
>
> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before it is taken down.
>
>     CPU0                              CPU1
>
>     cpu_down()                        take_cpu_down()
>                                           disable_interrupts()
>
>     cpu_die()
>
>       while(CPU1 != CPU_DEAD) {
>           msleep(100);
>           switch_to_idle();
>           stop_cpu_timer();
>           schedule_broadcast();
>       }
>
>     tick_cleanup_cpu_dead()
>         take_over_broadcast()
>
> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer anymore, so CPU0 will be stuck forever.
>
> Fix this by explicitly taking over broadcast duty before cpu_die(). This is a temporary workaround. What we really want is a callback in the clockevent device which allows us to do that from the dying CPU by pushing the hrtimer onto a different cpu. That might involve an IPI and is definitely more complex than this immediate fix.
>
> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
> [Changelog drawn from: https://lkml.org/lkml/2015/2/16/213]

The lock-up I was experiencing with v1 of this patch is no longer reproducible with this one.

Tested-by: Nicolas Pitre <nico@linaro.org>

> ---
> Change from V1: https://lkml.org/lkml/2015/2/26/11
> 1. Decoupled this fix from the kernel/time cleanup patches. V1 had a fail related to the cleanup which needs to be fixed. But since this bug fix is independent of this and needs to go in quickly, the patch is being posted out separately to be merged.
>
>  include/linux/tick.h         | 10 +++++++---
>  kernel/cpu.c                 |  2 ++
>  kernel/time/tick-broadcast.c | 19 +++++++++++--------
>  3 files changed, 20 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/tick.h b/include/linux/tick.h
> index 9c085dc..3069256 100644
> --- a/include/linux/tick.h
> +++ b/include/linux/tick.h
> @@ -94,14 +94,18 @@ extern void tick_cancel_sched_timer(int cpu);
>  static inline void tick_cancel_sched_timer(int cpu) { }
>  # endif
>
> -# ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
> +# if defined CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
>  extern struct tick_device *tick_get_broadcast_device(void);
>  extern struct cpumask *tick_get_broadcast_mask(void);
>
> -# ifdef CONFIG_TICK_ONESHOT
> +# if defined CONFIG_TICK_ONESHOT
>  extern struct cpumask *tick_get_broadcast_oneshot_mask(void);
> +extern void tick_takeover(int deadcpu);
> +# else
> +static inline void tick_takeover(int deadcpu) {}
>  # endif
> -
> +# else
> +static inline void tick_takeover(int deadcpu) {}
>  # endif /* BROADCAST */
>
>  # ifdef CONFIG_TICK_ONESHOT
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 1972b16..f9ca351 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -20,6 +20,7 @@
>  #include <linux/gfp.h>
>  #include <linux/suspend.h>
>  #include <linux/lockdep.h>
> +#include <linux/tick.h>
>  #include <trace/events/power.h>
>
>  #include "smpboot.h"
> @@ -411,6 +412,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
>  	while (!idle_cpu(cpu))
>  		cpu_relax();
>
> +	tick_takeover(cpu);
>  	/* This actually kills the CPU. */
>  	__cpu_die(cpu);
>
> diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
> index 066f0ec..0fd6634 100644
> --- a/kernel/time/tick-broadcast.c
> +++ b/kernel/time/tick-broadcast.c
> @@ -669,14 +669,19 @@ static void broadcast_shutdown_local(struct clock_event_device *bc,
>  	clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
>  }
>
> -static void broadcast_move_bc(int deadcpu)
> +void tick_takeover(int deadcpu)
>  {
> -	struct clock_event_device *bc = tick_broadcast_device.evtdev;
> +	struct clock_event_device *bc;
> +	unsigned long flags;
>
> -	if (!bc || !broadcast_needs_cpu(bc, deadcpu))
> -		return;
> -	/* This moves the broadcast assignment to this cpu */
> -	clockevents_program_event(bc, bc->next_event, 1);
> +	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
> +	bc = tick_broadcast_device.evtdev;
> +
> +	if (bc && broadcast_needs_cpu(bc, deadcpu)) {
> +		/* This moves the broadcast assignment to this cpu */
> +		clockevents_program_event(bc, bc->next_event, 1);
> +	}
> +	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
>  }
>
>  /*
> @@ -913,8 +918,6 @@ void tick_shutdown_broadcast_oneshot(unsigned int *cpup)
>  	cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);
>  	cpumask_clear_cpu(cpu, tick_broadcast_force_mask);
>
> -	broadcast_move_bc(cpu);
> -
>  	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
>  }
* Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:

> It was found when doing a hotplug stress test on POWER, that the machine either hit softlockups or rcu_sched stall warnings. The issue was traced to commit 7cba160ad789a powernv/cpuidle: Redesign idle states management, which exposed the cpu down race with hrtimer based broadcast mode(Commit 5d1638acb9f6(tick: Introduce hrtimer based broadcast). This is explained below.
>
> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before it is taken down.
>
>     CPU0                              CPU1
>
>     cpu_down()                        take_cpu_down()
>                                           disable_interrupts()
>
>     cpu_die()
>
>       while(CPU1 != CPU_DEAD) {
>           msleep(100);
>           switch_to_idle();
>           stop_cpu_timer();
>           schedule_broadcast();
>       }
>
>     tick_cleanup_cpu_dead()
>         take_over_broadcast()
>
> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer anymore, so CPU0 will be stuck forever.
>
> Fix this by explicitly taking over broadcast duty before cpu_die(). This is a temporary workaround. What we really want is a callback in the clockevent device which allows us to do that from the dying CPU by pushing the hrtimer onto a different cpu. That might involve an IPI and is definitely more complex than this immediate fix.

So why not use a suitable CPU_DOWN* notifier for this, instead of open coding it all into a random place in the hotplug machinery?

Also, I improved the changelog (attached below), but decided against applying it until these questions are cleared - please use that for future versions of this patch.

Thanks,

	Ingo

===================>
From 413fbf5193b330c5f478ef7aaeaaee08907a993e Mon Sep 17 00:00:00 2001
From: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Date: Mon, 30 Mar 2015 14:59:19 +0530
Subject: [PATCH] clockevents: Fix cpu_down() race for hrtimer based broadcasting

It was found when doing a hotplug stress test on POWER, that the machine either hit softlockups or rcu_sched stall warnings. The issue was traced to commit:

  7cba160ad789 ("powernv/cpuidle: Redesign idle states management")

which exposed the cpu_down() race with hrtimer based broadcast mode:

  5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")

The race is the following:

Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before it is taken down.

    CPU0                              CPU1

    cpu_down()                        take_cpu_down()
                                          disable_interrupts()

    cpu_die()

      while (CPU1 != CPU_DEAD) {
          msleep(100);
          switch_to_idle();
          stop_cpu_timer();
          schedule_broadcast();
      }

    tick_cleanup_cpu_dead()
        take_over_broadcast()

So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer anymore, so CPU0 will be stuck forever.

Fix this by explicitly taking over broadcast duty before cpu_die().

This is a temporary workaround. What we really want is a callback in the clockevent device which allows us to do that from the dying CPU by pushing the hrtimer onto a different cpu. That might involve an IPI and is definitely more complex than this immediate fix.

Changelog was picked up from:

  https://lkml.org/lkml/2015/2/16/213

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au
Cc: nicolas.pitre@linaro.org
Cc: peterz@infradead.org
Cc: rjw@rjwysocki.net
Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
On 04/02/2015 04:12 PM, Ingo Molnar wrote:
>
> * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
>
>> It was found when doing a hotplug stress test on POWER, that the machine either hit softlockups or rcu_sched stall warnings. The issue was traced to commit 7cba160ad789a powernv/cpuidle: Redesign idle states management, which exposed the cpu down race with hrtimer based broadcast mode(Commit 5d1638acb9f6(tick: Introduce hrtimer based broadcast). This is explained below.
>>
>> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before it is taken down.
>>
>>     CPU0                              CPU1
>>
>>     cpu_down()                        take_cpu_down()
>>                                           disable_interrupts()
>>
>>     cpu_die()
>>
>>       while(CPU1 != CPU_DEAD) {
>>           msleep(100);
>>           switch_to_idle();
>>           stop_cpu_timer();
>>           schedule_broadcast();
>>       }
>>
>>     tick_cleanup_cpu_dead()
>>         take_over_broadcast()
>>
>> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer anymore, so CPU0 will be stuck forever.
>>
>> Fix this by explicitly taking over broadcast duty before cpu_die(). This is a temporary workaround. What we really want is a callback in the clockevent device which allows us to do that from the dying CPU by pushing the hrtimer onto a different cpu. That might involve an IPI and is definitely more complex than this immediate fix.
>
> So why not use a suitable CPU_DOWN* notifier for this, instead of open coding it all into a random place in the hotplug machinery?

This is because each of them is unsuitable for a reason:

1. CPU_DOWN_PREPARE stage allows for a fail. The cpu in question may not successfully go down. So we may pull the hrtimer unnecessarily.

2. CPU_DYING notifiers are run on the cpu that is going down. So the alternative would be to IPI an online cpu to take up the broadcast duty.

3. CPU_DEAD and CPU_POST_DEAD stages both have the drawback described in the changelog.

I hope I got your question right.

Regards
Preeti U Murthy

> Also, I improved the changelog (attached below), but decided against applying it until these questions are cleared - please use that for future versions of this patch.
>
> Thanks,
>
> 	Ingo
>
> ===================>
> From 413fbf5193b330c5f478ef7aaeaaee08907a993e Mon Sep 17 00:00:00 2001
> From: Preeti U Murthy <preeti@linux.vnet.ibm.com>
> Date: Mon, 30 Mar 2015 14:59:19 +0530
> Subject: [PATCH] clockevents: Fix cpu_down() race for hrtimer based broadcasting
>
> It was found when doing a hotplug stress test on POWER, that the machine either hit softlockups or rcu_sched stall warnings. The issue was traced to commit:
>
>   7cba160ad789 ("powernv/cpuidle: Redesign idle states management")
>
> which exposed the cpu_down() race with hrtimer based broadcast mode:
>
>   5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")
>
> The race is the following:
>
> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before it is taken down.
>
>     CPU0                              CPU1
>
>     cpu_down()                        take_cpu_down()
>                                           disable_interrupts()
>
>     cpu_die()
>
>       while (CPU1 != CPU_DEAD) {
>           msleep(100);
>           switch_to_idle();
>           stop_cpu_timer();
>           schedule_broadcast();
>       }
>
>     tick_cleanup_cpu_dead()
>         take_over_broadcast()
>
> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer anymore, so CPU0 will be stuck forever.
>
> Fix this by explicitly taking over broadcast duty before cpu_die().
>
> This is a temporary workaround. What we really want is a callback in the clockevent device which allows us to do that from the dying CPU by pushing the hrtimer onto a different cpu. That might involve an IPI and is definitely more complex than this immediate fix.
>
> Changelog was picked up from:
>
>   https://lkml.org/lkml/2015/2/16/213
>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Nicolas Pitre <nico@linaro.org>
> Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: mpe@ellerman.id.au
> Cc: nicolas.pitre@linaro.org
> Cc: peterz@infradead.org
> Cc: rjw@rjwysocki.net
> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
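For readers following the notifier discussion, a rough sketch of what Ingo's suggested approach could look like with the hotplug notifier API of that era is shown below. It is purely illustrative: the function and variable names are invented here and nothing like this is part of the posted patch. It does, however, make Preeti's first objection concrete: a CPU_DOWN_PREPARE callback can run even though the offline may still fail, so the hrtimer might be pulled over needlessly (Ingo's reply below argues this is harmless), and a CPU_DOWN_FAILED leg would then have nothing it strictly needs to undo.

```c
#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/tick.h>

/* Hypothetical notifier variant of the takeover (illustration only). */
static int tick_bc_hotplug_notify(struct notifier_block *nb,
				  unsigned long action, void *hcpu)
{
	int cpu = (long)hcpu;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_DOWN_PREPARE:
		/*
		 * Runs on the CPU doing cpu_down(), before the victim is
		 * taken down -- and possibly even if the offline later
		 * fails, in which case the pull was unnecessary work.
		 */
		tick_takeover(cpu);
		break;
	case CPU_DOWN_FAILED:
		/* Nothing to undo: broadcast duty can stay where it is. */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block tick_bc_hotplug_nb = {
	.notifier_call = tick_bc_hotplug_notify,
};

static int __init tick_bc_hotplug_init(void)
{
	return register_cpu_notifier(&tick_bc_hotplug_nb);
}
```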
* Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:

> On 04/02/2015 04:12 PM, Ingo Molnar wrote:
> >
> > * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
> >
> >> It was found when doing a hotplug stress test on POWER, that the machine either hit softlockups or rcu_sched stall warnings. The issue was traced to commit 7cba160ad789a powernv/cpuidle: Redesign idle states management, which exposed the cpu down race with hrtimer based broadcast mode(Commit 5d1638acb9f6(tick: Introduce hrtimer based broadcast). This is explained below.
> >>
> >> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before it is taken down.
> >>
> >>     CPU0                              CPU1
> >>
> >>     cpu_down()                        take_cpu_down()
> >>                                           disable_interrupts()
> >>
> >>     cpu_die()
> >>
> >>       while(CPU1 != CPU_DEAD) {
> >>           msleep(100);
> >>           switch_to_idle();
> >>           stop_cpu_timer();
> >>           schedule_broadcast();
> >>       }
> >>
> >>     tick_cleanup_cpu_dead()
> >>         take_over_broadcast()
> >>
> >> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer anymore, so CPU0 will be stuck forever.
> >>
> >> Fix this by explicitly taking over broadcast duty before cpu_die(). This is a temporary workaround. What we really want is a callback in the clockevent device which allows us to do that from the dying CPU by pushing the hrtimer onto a different cpu. That might involve an IPI and is definitely more complex than this immediate fix.
> >
> > So why not use a suitable CPU_DOWN* notifier for this, instead of open coding it all into a random place in the hotplug machinery?
>
> This is because each of them is unsuitable for a reason:
>
> 1. CPU_DOWN_PREPARE stage allows for a fail. The cpu in question may not successfully go down. So we may pull the hrtimer unnecessarily.

Failure is really rare - and as long as things will continue to work afterwards it's not a problem to pull the hrtimer to this CPU. Right?

Thanks,

	Ingo
On 04/02/2015 05:01 PM, Ingo Molnar wrote:
>
> * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
>
>> On 04/02/2015 04:12 PM, Ingo Molnar wrote:
>>>
>>> * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
>>>
>>>> It was found when doing a hotplug stress test on POWER, that the machine either hit softlockups or rcu_sched stall warnings. The issue was traced to commit 7cba160ad789a powernv/cpuidle: Redesign idle states management, which exposed the cpu down race with hrtimer based broadcast mode(Commit 5d1638acb9f6(tick: Introduce hrtimer based broadcast). This is explained below.
>>>>
>>>> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before it is taken down.
>>>>
>>>>     CPU0                              CPU1
>>>>
>>>>     cpu_down()                        take_cpu_down()
>>>>                                           disable_interrupts()
>>>>
>>>>     cpu_die()
>>>>
>>>>       while(CPU1 != CPU_DEAD) {
>>>>           msleep(100);
>>>>           switch_to_idle();
>>>>           stop_cpu_timer();
>>>>           schedule_broadcast();
>>>>       }
>>>>
>>>>     tick_cleanup_cpu_dead()
>>>>         take_over_broadcast()
>>>>
>>>> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer anymore, so CPU0 will be stuck forever.
>>>>
>>>> Fix this by explicitly taking over broadcast duty before cpu_die(). This is a temporary workaround. What we really want is a callback in the clockevent device which allows us to do that from the dying CPU by pushing the hrtimer onto a different cpu. That might involve an IPI and is definitely more complex than this immediate fix.
>>>
>>> So why not use a suitable CPU_DOWN* notifier for this, instead of open coding it all into a random place in the hotplug machinery?
>>
>> This is because each of them is unsuitable for a reason:
>>
>> 1. CPU_DOWN_PREPARE stage allows for a fail. The cpu in question may not successfully go down. So we may pull the hrtimer unnecessarily.
>
> Failure is really rare - and as long as things will continue to work afterwards it's not a problem to pull the hrtimer to this CPU. Right?

We will need to move this function to the clockevents_notify() call under CPU_DOWN_PREPARE. But I see that Tglx wanted to get rid of the clockevents_notify() function because it is more of a multiplex call and less of a notification mechanism and get rid of this function explicitly.

Regards
Preeti U Murthy

>
> Thanks,
>
> 	Ingo
> _______________________________________________
> Linuxppc-dev mailing list
> Linuxppc-dev@lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/linuxppc-dev
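For context on why routing this through clockevents_notify() was unattractive: that interface was a single entry point that multiplexed unrelated operations on a "reason" code, roughly as sketched below. This is condensed and paraphrased from the kernel of that era, not the exact upstream source; hanging yet another hotplug stage off it would only have grown the switch, which is the opposite of the direction Thomas wanted.

```c
/* Condensed sketch of the 3.x-era clockevents_notify(): one function
 * dispatching on a reason code, i.e. a command multiplexer rather than a
 * notification mechanism. Paraphrased for illustration. */
void clockevents_notify(unsigned long reason, void *arg)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&clockevents_lock, flags);

	switch (reason) {
	case CLOCK_EVT_NOTIFY_BROADCAST_ON:
	case CLOCK_EVT_NOTIFY_BROADCAST_OFF:
	case CLOCK_EVT_NOTIFY_BROADCAST_FORCE:
		tick_broadcast_on_off(reason, arg);
		break;
	case CLOCK_EVT_NOTIFY_BROADCAST_ENTER:
	case CLOCK_EVT_NOTIFY_BROADCAST_EXIT:
		tick_broadcast_oneshot_control(reason);
		break;
	case CLOCK_EVT_NOTIFY_CPU_DYING:
		tick_handover_do_timer(arg);
		break;
	case CLOCK_EVT_NOTIFY_CPU_DEAD:
		tick_shutdown_broadcast_oneshot(arg);
		tick_shutdown_broadcast(arg);
		tick_shutdown(arg);
		/* the real code also releases orphaned devices here */
		break;
	default:
		break;
	}

	raw_spin_unlock_irqrestore(&clockevents_lock, flags);
}
```

Either way, the takeover itself is the one-line clockevents_program_event() call from the patch; the disagreement is only about where that call should live.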
On Thu, Apr 02, 2015 at 12:42:27PM +0200, Ingo Molnar wrote:

> So why not use a suitable CPU_DOWN* notifier for this, instead of open coding it all into a random place in the hotplug machinery?

Because notifiers are crap? ;-)

Its entirely impossible to figure out what's happening to core code in hotplug. You need to go chase down and random order notifier things.

I'm planning on taking out many of the core hotplug notifiers and hard coding their callbacks into the hotplug code. That way at least its clear wtf happens when.

> Also, I improved the changelog (attached below), but decided against applying it until these questions are cleared - please use that for future versions of this patch.
>
> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html

You forgot to fix the Fixes line ;-) My copy has:

Fixes: 5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")
diff --git a/include/linux/tick.h b/include/linux/tick.h
index 9c085dc..3069256 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -94,14 +94,18 @@ extern void tick_cancel_sched_timer(int cpu);
 static inline void tick_cancel_sched_timer(int cpu) { }
 # endif
 
-# ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
+# if defined CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 extern struct tick_device *tick_get_broadcast_device(void);
 extern struct cpumask *tick_get_broadcast_mask(void);
 
-# ifdef CONFIG_TICK_ONESHOT
+# if defined CONFIG_TICK_ONESHOT
 extern struct cpumask *tick_get_broadcast_oneshot_mask(void);
+extern void tick_takeover(int deadcpu);
+# else
+static inline void tick_takeover(int deadcpu) {}
 # endif
-
+# else
+static inline void tick_takeover(int deadcpu) {}
 # endif /* BROADCAST */
 
 # ifdef CONFIG_TICK_ONESHOT
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 1972b16..f9ca351 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -20,6 +20,7 @@
 #include <linux/gfp.h>
 #include <linux/suspend.h>
 #include <linux/lockdep.h>
+#include <linux/tick.h>
 #include <trace/events/power.h>
 
 #include "smpboot.h"
@@ -411,6 +412,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
 	while (!idle_cpu(cpu))
 		cpu_relax();
 
+	tick_takeover(cpu);
 	/* This actually kills the CPU. */
 	__cpu_die(cpu);
 
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 066f0ec..0fd6634 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -669,14 +669,19 @@ static void broadcast_shutdown_local(struct clock_event_device *bc,
 	clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
 }
 
-static void broadcast_move_bc(int deadcpu)
+void tick_takeover(int deadcpu)
 {
-	struct clock_event_device *bc = tick_broadcast_device.evtdev;
+	struct clock_event_device *bc;
+	unsigned long flags;
 
-	if (!bc || !broadcast_needs_cpu(bc, deadcpu))
-		return;
-	/* This moves the broadcast assignment to this cpu */
-	clockevents_program_event(bc, bc->next_event, 1);
+	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
+	bc = tick_broadcast_device.evtdev;
+
+	if (bc && broadcast_needs_cpu(bc, deadcpu)) {
+		/* This moves the broadcast assignment to this cpu */
+		clockevents_program_event(bc, bc->next_event, 1);
+	}
+	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
 
 /*
@@ -913,8 +918,6 @@ void tick_shutdown_broadcast_oneshot(unsigned int *cpup)
 	cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);
 	cpumask_clear_cpu(cpu, tick_broadcast_force_mask);
 
-	broadcast_move_bc(cpu);
-
 	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
It was found when doing a hotplug stress test on POWER that the machine either hit softlockups or rcu_sched stall warnings. The issue was traced to commit 7cba160ad789 ("powernv/cpuidle: Redesign idle states management"), which exposed the cpu_down() race with hrtimer based broadcast mode (commit 5d1638acb9f6 "tick: Introduce hrtimer based broadcast"). This is explained below.

Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before it is taken down.

    CPU0                              CPU1

    cpu_down()                        take_cpu_down()
                                          disable_interrupts()

    cpu_die()

      while (CPU1 != CPU_DEAD) {
          msleep(100);
          switch_to_idle();
          stop_cpu_timer();
          schedule_broadcast();
      }

    tick_cleanup_cpu_dead()
        take_over_broadcast()

So after CPU1 has disabled interrupts it cannot handle the broadcast hrtimer anymore, and CPU0 will be stuck forever.

Fix this by explicitly taking over broadcast duty before cpu_die(). This is a temporary workaround. What we really want is a callback in the clockevent device which allows us to do that from the dying CPU by pushing the hrtimer onto a different cpu. That might involve an IPI and is definitely more complex than this immediate fix.

Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
[Changelog drawn from: https://lkml.org/lkml/2015/2/16/213]
---
Change from V1: https://lkml.org/lkml/2015/2/26/11

1. Decoupled this fix from the kernel/time cleanup patches. V1 had a failure related to the cleanup which needs to be fixed. But since this bug fix is independent of that and needs to go in quickly, the patch is being posted separately to be merged.

 include/linux/tick.h         | 10 +++++++---
 kernel/cpu.c                 |  2 ++
 kernel/time/tick-broadcast.c | 19 +++++++++++--------
 3 files changed, 20 insertions(+), 11 deletions(-)
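To make the "callback in the clockevent device" idea from the changelog concrete, here is one possible shape it could take — entirely hypothetical, with invented names (bc_hrtimer_handover, bc_takeover_ipi), and with locking and the exact point in the dying CPU's path glossed over. The dying CPU picks another online CPU and uses an IPI to have it reprogram, and thereby take over, the broadcast hrtimer; only the clockevents_program_event() trick is taken from the patch above.

```c
#include <linux/clockchips.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

/* Runs on the chosen new CPU via IPI: reprogramming the broadcast device
 * from here binds the hrtimer to this CPU (the same trick tick_takeover()
 * relies on in the patch above). */
static void bc_takeover_ipi(void *info)
{
	struct clock_event_device *bc = info;

	clockevents_program_event(bc, bc->next_event, 1);
}

/* Hypothetical handover, meant to be called on the dying CPU while it can
 * still send IPIs: pick any other online CPU and ask it to take the
 * broadcast hrtimer over. */
static void bc_hrtimer_handover(struct clock_event_device *bc, int deadcpu)
{
	unsigned int newcpu = cpumask_any_but(cpu_online_mask, deadcpu);

	if (newcpu < nr_cpu_ids)
		smp_call_function_single(newcpu, bc_takeover_ipi, bc, 1);
}
```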