
[RFC,4/5] cpuidle/ppc: CPU goes tickless if there are no arch-specific constraints

Message ID 20130725090302.12500.42998.stgit@preeti.in.ibm.com (mailing list archive)
State Superseded

Commit Message

Preeti U Murthy July 25, 2013, 9:03 a.m. UTC
In the current design of the timer offload framework, the broadcast CPU should
*not* go into tickless idle, so as to avoid missed wakeups on CPUs in deep idle states.

Since we prevent the CPUs entering deep idle states from programming the lapic of the
broadcast cpu for their respective next local events, for the reasons mentioned in
PATCH[3/5], the broadcast CPU instead checks, on each of its timer interrupts
programmed for its own local events, whether there are any CPUs to be woken up.

With tickless idle, the broadcast CPU might not get a timer interrupt for
many ticks, which can result in missed wakeups on CPUs in deep idle states. With
tickless idle disabled, in the worst case the tick_sched hrtimer will trigger a
timer interrupt every period, so the check for broadcast work runs at least that often.

However, the current setup of tickless idle does not let us make the choice
of tickless on individual cpus: NOHZ_MODE_INACTIVE, which disables tickless idle,
is a system-wide setting. Hence we resort to an arch-specific call to check whether a cpu
can go into tickless idle.
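The actual diff body is not shown in this archive view, but the gate described above might look roughly like the following userspace sketch. The hook name `arch_can_enter_tickless_idle()` and the broadcast-CPU bookkeeping are invented for illustration and are not the kernel's actual interface:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical id of the nominated broadcast CPU. */
static int bc_cpu = 2;

/*
 * Arch-specific veto: the broadcast CPU must keep its periodic tick so
 * it can check for deep-idle CPUs needing a wakeup; every other CPU is
 * free to go tickless.
 */
static bool arch_can_enter_tickless_idle(int cpu)
{
	return cpu != bc_cpu;
}

/*
 * Sketch of the per-CPU decision tick_nohz_stop_sched_tick() would make
 * before switching a CPU to tickless idle (the existing system-wide
 * NOHZ_MODE_INACTIVE check is elided).
 */
static bool can_stop_tick(int cpu)
{
	return arch_can_enter_tickless_idle(cpu);
}
```

The point of routing this through an arch call is that the generic NOHZ machinery only knows a system-wide on/off switch, while the constraint here is per CPU.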

Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/time.c |    5 +++++
 kernel/time/tick-sched.c   |    7 +++++++
 2 files changed, 12 insertions(+)

Comments

Frédéric Weisbecker July 25, 2013, 1:30 p.m. UTC | #1
On Thu, Jul 25, 2013 at 02:33:02PM +0530, Preeti U Murthy wrote:
> In the current design of timer offload framework, the broadcast cpu should
> *not* go into tickless idle so as to avoid missed wakeups on CPUs in deep idle states.
> 
> Since we prevent the CPUs entering deep idle states from programming the lapic of the
> broadcast cpu for their respective next local events for reasons mentioned in
> PATCH[3/5], the broadcast CPU checks if there are any CPUs to be woken up during
> each of its timer interrupt programmed to its local events.
> 
> With tickless idle, the broadcast CPU might not get a timer interrupt till after
> many ticks which can result in missed wakeups on CPUs in deep idle states. By
> disabling tickless idle, worst case, the tick_sched hrtimer will trigger a
> timer interrupt every period to check for broadcast.
> 
> However the current setup of tickless idle does not let us make the choice
> of tickless on individual cpus. NOHZ_MODE_INACTIVE which disables tickless idle,
> is a system wide setting. Hence resort to an arch specific call to check if a cpu
> can go into tickless idle.

Hi Preeti,

I'm not exactly sure why you can't let the broadcast CPU enter dynticks idle mode.
I read in the previous patch that it's because in dynticks idle mode the broadcast
CPU deactivates its lapic so it doesn't receive the IPI. But maybe I misunderstood.
Anyway, that's not good for power saving.

Also, when an arch wants to prevent a CPU from entering dynticks idle mode, it typically
uses arch_needs_cpu(). Maybe that could fit for you as well?

Thanks.
Preeti U Murthy July 26, 2013, 2:39 a.m. UTC | #2
Hi Frederic,

On 07/25/2013 07:00 PM, Frederic Weisbecker wrote:
> On Thu, Jul 25, 2013 at 02:33:02PM +0530, Preeti U Murthy wrote:
>> In the current design of timer offload framework, the broadcast cpu should
>> *not* go into tickless idle so as to avoid missed wakeups on CPUs in deep idle states.
>>
>> Since we prevent the CPUs entering deep idle states from programming the lapic of the
>> broadcast cpu for their respective next local events for reasons mentioned in
>> PATCH[3/5], the broadcast CPU checks if there are any CPUs to be woken up during
>> each of its timer interrupt programmed to its local events.
>>
>> With tickless idle, the broadcast CPU might not get a timer interrupt till after
>> many ticks which can result in missed wakeups on CPUs in deep idle states. By
>> disabling tickless idle, worst case, the tick_sched hrtimer will trigger a
>> timer interrupt every period to check for broadcast.
>>
>> However the current setup of tickless idle does not let us make the choice
>> of tickless on individual cpus. NOHZ_MODE_INACTIVE which disables tickless idle,
>> is a system wide setting. Hence resort to an arch specific call to check if a cpu
>> can go into tickless idle.
> 
> Hi Preeti,
> 
> I'm not exactly sure why you can't enter the broadcast CPU in dynticks idle mode.
> I read in the previous patch that's because in dynticks idle mode the broadcast
> CPU deactivates its lapic so it doesn't receive the IPI. But may be I misunderstood.
> Anyway that's not good for powersaving.

Let me elaborate. The CPUs in deep idle states have their lapics
deactivated. This means the next timer event, which would typically have
been handled by the lapic firing at the appropriate moment, does not
get handled in deep idle states, due to the lapic being switched off.

Hence such CPUs offload their next timer event to the broadcast CPU,
which should *not* enter deep idle states. The broadcast CPU has the
responsibility of waking the CPUs in deep idle states.

*The lapic of a broadcast CPU is always active*. Say CPUX wants the
broadcast CPU to wake it up at timeX.  Since we cannot program the lapic
of a remote CPU, CPUX will need to send an IPI to the broadcast CPU,
asking it to program its lapic to fire at timeX so as to wake up CPUX.
*With multiple CPUs, the overhead of sending these IPIs could result in
performance bottlenecks and may not scale well.*

Hence the workaround is that the broadcast CPU, on each of its timer
interrupts, checks whether the next timer event of any CPU in a deep idle
state has expired, which can very well be found from dev->next_event of
that CPU; for example, the timeX mentioned above has
expired. If so, the broadcast handler is called to send an IPI to the
idling CPU to wake it up.
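The check described above might be sketched in userspace C as follows (the `next_event` table standing in for each CPU's `dev->next_event`, and the mask bookkeeping, are assumptions for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS 4

/* Hypothetical per-CPU "next timer event" table (dev->next_event stand-in). */
static uint64_t next_event[NR_CPUS];
/* Bit i set: CPU i is in deep idle, relying on the broadcast CPU. */
static unsigned int deep_idle_mask;

/*
 * Run on each of the broadcast CPU's timer interrupts: return the mask
 * of deep-idle CPUs whose next event has expired; the caller would then
 * send each of them a wakeup IPI.
 */
static unsigned int broadcast_check(uint64_t now)
{
	unsigned int wake = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if ((deep_idle_mask & (1u << cpu)) && next_event[cpu] <= now)
			wake |= 1u << cpu;

	deep_idle_mask &= ~wake;	/* woken CPUs leave the mask */
	return wake;
}
```

Because this scan only happens when the broadcast CPU takes a timer interrupt, a tickless broadcast CPU could sleep past `next_event[cpu]`, which is exactly the missed-wakeup hazard the patch avoids.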

*If the broadcast CPU is in tickless idle, its timer interrupt could be
many ticks away. It could miss waking up a CPU in deep idle*, if that
wakeup falls well before this timer interrupt of the broadcast CPU. But
without tickless idle, we are assured of a timer interrupt at least every
period, at which time broadcast handling is done as stated in the
previous paragraph, and we will not miss the wakeup of CPUs in deep idle states.

Yeah, it is true that not allowing the broadcast CPU to enter tickless
idle is bad for power savings, but for the use case that we are aiming
at in this patch series, the current approach seems to be the best, with
minimal trade-offs in performance, power savings and scalability, and no
change to the broadcast framework that exists today in the kernel.

> 
> Also when an arch wants to prevent a CPU from entering dynticks idle mode, it typically
> use arch_needs_cpu(). May be that could fit for you as well?

Oh ok, thanks :) I will look into this and get back to you on whether we can use it.

Regards
Preeti U Murthy
Preeti U Murthy July 26, 2013, 3:03 a.m. UTC | #3
Hi Frederic,

On 07/25/2013 07:00 PM, Frederic Weisbecker wrote:
> On Thu, Jul 25, 2013 at 02:33:02PM +0530, Preeti U Murthy wrote:
>> In the current design of timer offload framework, the broadcast cpu should
>> *not* go into tickless idle so as to avoid missed wakeups on CPUs in deep idle states.
>>
>> Since we prevent the CPUs entering deep idle states from programming the lapic of the
>> broadcast cpu for their respective next local events for reasons mentioned in
>> PATCH[3/5], the broadcast CPU checks if there are any CPUs to be woken up during
>> each of its timer interrupt programmed to its local events.
>>
>> With tickless idle, the broadcast CPU might not get a timer interrupt till after
>> many ticks which can result in missed wakeups on CPUs in deep idle states. By
>> disabling tickless idle, worst case, the tick_sched hrtimer will trigger a
>> timer interrupt every period to check for broadcast.
>>
>> However the current setup of tickless idle does not let us make the choice
>> of tickless on individual cpus. NOHZ_MODE_INACTIVE which disables tickless idle,
>> is a system wide setting. Hence resort to an arch specific call to check if a cpu
>> can go into tickless idle.
> 
> Hi Preeti,
> 
> I'm not exactly sure why you can't enter the broadcast CPU in dynticks idle mode.
> I read in the previous patch that's because in dynticks idle mode the broadcast
> CPU deactivates its lapic so it doesn't receive the IPI. But may be I misunderstood.
> Anyway that's not good for powersaving.
> 
> Also when an arch wants to prevent a CPU from entering dynticks idle mode, it typically
> use arch_needs_cpu(). May be that could fit for you as well?

Yes this will suit our requirement perfectly. I will note down this
change for the next version of this patchset. Thank you very much for
pointing this out :)

Regards
Preeti U Murthy
Paul Mackerras July 26, 2013, 3:19 a.m. UTC | #4
On Fri, Jul 26, 2013 at 08:09:23AM +0530, Preeti U Murthy wrote:
> Hi Frederic,
> 
> On 07/25/2013 07:00 PM, Frederic Weisbecker wrote:
> > Hi Preeti,
> > 
> > I'm not exactly sure why you can't enter the broadcast CPU in dynticks idle mode.
> > I read in the previous patch that's because in dynticks idle mode the broadcast
> > CPU deactivates its lapic so it doesn't receive the IPI. But may be I misunderstood.
> > Anyway that's not good for powersaving.
> 
> Let me elaborate. The CPUs in deep idle states have their lapics
> deactivated. This means the next timer event which would typically have
> been taken care of by a lapic firing at the appropriate moment does not
> get taken care of in deep idle states, due to the lapic being switched off.

I really don't think it's helpful to use the term "lapic" in
connection with Power systems.  There is nothing that is called a
"lapic" in a Power machine.  The nearest equivalent of the LAPIC on
x86 machines is the ICP, the interrupt-controller presentation
element, of which there is one per CPU thread.

However, I don't believe the ICP gets disabled in deep sleep modes.
What does get disabled is the "decrementer", which is a register that
normally counts down (at 512MHz) and generates an exception when it is
negative.  The decrementer *is* part of the CPU core, unlike the ICP.
That's why we can still get IPIs but not timer interrupts.

Please reword your patch description to not use the term "lapic",
which is not defined in the Power context and is therefore just
causing confusion.

Paul.
Preeti U Murthy July 26, 2013, 3:35 a.m. UTC | #5
Hi Paul,

On 07/26/2013 08:49 AM, Paul Mackerras wrote:
> On Fri, Jul 26, 2013 at 08:09:23AM +0530, Preeti U Murthy wrote:
>> Hi Frederic,
>>
>> On 07/25/2013 07:00 PM, Frederic Weisbecker wrote:
>>> Hi Preeti,
>>>
>>> I'm not exactly sure why you can't enter the broadcast CPU in dynticks idle mode.
>>> I read in the previous patch that's because in dynticks idle mode the broadcast
>>> CPU deactivates its lapic so it doesn't receive the IPI. But may be I misunderstood.
>>> Anyway that's not good for powersaving.
>>
>> Let me elaborate. The CPUs in deep idle states have their lapics
>> deactivated. This means the next timer event which would typically have
>> been taken care of by a lapic firing at the appropriate moment does not
>> get taken care of in deep idle states, due to the lapic being switched off.
> 
> I really don't think it's helpful to use the term "lapic" in
> connection with Power systems.  There is nothing that is called a
> "lapic" in a Power machine.  The nearest equivalent of the LAPIC on
> x86 machines is the ICP, the interrupt-controller presentation
> element, of which there is one per CPU thread.
> 
> However, I don't believe the ICP gets disabled in deep sleep modes.
> What does get disabled is the "decrementer", which is a register that
> normally counts down (at 512MHz) and generates an exception when it is
> negative.  The decrementer *is* part of the CPU core, unlike the ICP.
> That's why we can still get IPIs but not timer interrupts.
> 
> Please reword your patch description to not use the term "lapic",
> which is not defined in the Power context and is therefore just
> causing confusion.

Noted. Thank you :) Perhaps I should send out a fresh patchset with the
appropriate changelog to avoid this confusion?
> 
> Paul.
> 
Regards
Preeti U murthy
Preeti U Murthy July 26, 2013, 4:11 a.m. UTC | #6
Hi Frederic,

I apologise for the confusion. As Paul pointed out, the usage of the
term lapic may be causing a lot of confusion. So please see the
clarification below; maybe it will help answer your question.

On 07/26/2013 08:09 AM, Preeti U Murthy wrote:
> Hi Frederic,
> 
> On 07/25/2013 07:00 PM, Frederic Weisbecker wrote:
>> On Thu, Jul 25, 2013 at 02:33:02PM +0530, Preeti U Murthy wrote:
>>> In the current design of timer offload framework, the broadcast cpu should
>>> *not* go into tickless idle so as to avoid missed wakeups on CPUs in deep idle states.
>>>
>>> Since we prevent the CPUs entering deep idle states from programming the lapic of the
>>> broadcast cpu for their respective next local events for reasons mentioned in
>>> PATCH[3/5], the broadcast CPU checks if there are any CPUs to be woken up during
>>> each of its timer interrupt programmed to its local events.
>>>
>>> With tickless idle, the broadcast CPU might not get a timer interrupt till after
>>> many ticks which can result in missed wakeups on CPUs in deep idle states. By
>>> disabling tickless idle, worst case, the tick_sched hrtimer will trigger a
>>> timer interrupt every period to check for broadcast.
>>>
>>> However the current setup of tickless idle does not let us make the choice
>>> of tickless on individual cpus. NOHZ_MODE_INACTIVE which disables tickless idle,
>>> is a system wide setting. Hence resort to an arch specific call to check if a cpu
>>> can go into tickless idle.
>>
>> Hi Preeti,
>>
>> I'm not exactly sure why you can't enter the broadcast CPU in dynticks idle mode.
>> I read in the previous patch that's because in dynticks idle mode the broadcast
>> CPU deactivates its lapic so it doesn't receive the IPI. But may be I misunderstood.
>> Anyway that's not good for powersaving.

Firstly, when CPUs enter deep idle states, their local clock event
devices get switched off. In the case of powerpc, the local clock event
device is the decrementer. Hence such CPUs *do not get timer interrupts*
but are still *capable of taking IPIs*.

So we need to ensure that some other CPU, in this case the broadcast
CPU, makes note of when the timer interrupt of a CPU in such deep idle
states is due to trigger, and at that moment issues an IPI to that CPU.

*The broadcast CPU however should have its decrementer active always*,
meaning it is disallowed from entering deep idle states, where the
decrementer switches off, precisely because the other idling CPUs bank
on it for the above mentioned reason.

> *The lapic of a broadcast CPU is active always*. Say CPUX, wants the
> broadcast CPU to wake it up at timeX.  Since we cannot program the lapic
> of a remote CPU, CPUX will need to send an IPI to the broadcast CPU,
> asking it to program its lapic to fire at timeX so as to wake up CPUX.
> *With multiple CPUs the overhead of sending IPI, could result in
> performance bottlenecks and may not scale well.*

Rewording the above: the decrementer of the broadcast CPU is always
active. Since we cannot program the clock event device
of a remote CPU, CPUX will need to send an IPI to the broadcast CPU
(which the broadcast CPU is very well capable of receiving), asking it
to program its decrementer to fire at timeX so as to wake up CPUX.
*With multiple CPUs, the overhead of sending these IPIs could result in
performance bottlenecks and may not scale well.*

> 
> Hence the workaround is that the broadcast CPU on each of its timer
> interrupt checks if any of the next timer event of a CPU in deep idle
> state has expired, which can very well be found from dev->next_event of
> that CPU. For example the timeX that has been mentioned above has
> expired. If so the broadcast handler is called to send an IPI to the
> idling CPU to wake it up.
> 
> *If the broadcast CPU, is in tickless idle, its timer interrupt could be
> many ticks away. It could miss waking up a CPU in deep idle*, if its
> wakeup is much before this timer interrupt of the broadcast CPU. But
> without tickless idle, atleast at each period we are assured of a timer
> interrupt. At which time broadcast handling is done as stated in the
> previous paragraph and we will not miss wakeup of CPUs in deep idle states.
> 
> Yeah it is true that not allowing the broadcast CPU to enter tickless
> idle is bad for power savings, but for the use case that we are aiming
> at in this patch series, the current approach seems to be the best, with
> minimal trade-offs in performance, power savings, scalability and no
> change in the broadcast framework that exists today in the kernel.
> 

Regards
Preeti U Murthy
Benjamin Herrenschmidt July 27, 2013, 6:30 a.m. UTC | #7
On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote:
> *The lapic of a broadcast CPU is active always*. Say CPUX, wants the
> broadcast CPU to wake it up at timeX.  Since we cannot program the lapic
> of a remote CPU, CPUX will need to send an IPI to the broadcast CPU,
> asking it to program its lapic to fire at timeX so as to wake up CPUX.
> *With multiple CPUs the overhead of sending IPI, could result in
> performance bottlenecks and may not scale well.*
> 
> Hence the workaround is that the broadcast CPU on each of its timer
> interrupt checks if any of the next timer event of a CPU in deep idle
> state has expired, which can very well be found from dev->next_event of
> that CPU. For example the timeX that has been mentioned above has
> expired. If so the broadcast handler is called to send an IPI to the
> idling CPU to wake it up.
> 
> *If the broadcast CPU, is in tickless idle, its timer interrupt could be
> many ticks away. It could miss waking up a CPU in deep idle*, if its
> wakeup is much before this timer interrupt of the broadcast CPU. But
> without tickless idle, atleast at each period we are assured of a timer
> interrupt. At which time broadcast handling is done as stated in the
> previous paragraph and we will not miss wakeup of CPUs in deep idle states.

But that means a great loss of power saving on the broadcast CPU when the machine
is basically completely idle. We might be able to come up with some thing better.

(Note: I do not know the timer offload code, if it exists already; I'm describing
how things could happen "out of the blue", without any knowledge of a pre-existing
framework here.)

We can know when the broadcast CPU expects to wake up next. When a CPU goes to
a deep sleep state, it can then

 - Indicate to the broadcast CPU when it intends to be woken up by queuing
itself into an ordered queue (ordered by target wakeup time). (OPTIMISATION:
Play with the locality of that: have one queue (and one "broadcast CPU") per
chip or per node instead of a global one to limit cache bouncing).

 - Check if that happens before the broadcast CPU's intended wake time (we
need statistics to see how often that happens), and in that case send an IPI
to wake it up now. When the broadcast CPU goes to sleep, it limits its sleep
time to the min of its intended sleep time and the new sleeper time.
(OPTIMISATION: Dynamically choose a broadcast CPU based on closest expiry?)

 - We can probably limit spurious wakeups a *LOT* by aligning that target time
to a global jiffy boundary, meaning that several CPUs going idle are likely
to be choosing the same one. Or, maybe better, an adaptive alignment, essentially
getting more coarse-grained as we go further into the future.

 - When the "broadcast" CPU goes to sleep, it can play the same game of alignment.
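Two pieces of the scheme sketched above can be modelled roughly as follows (a userspace sketch; the queue layout and all names are invented, not existing kernel code): an insertion into a wakeup queue ordered by target time, the broadcast CPU sleeping until the earlier of its own wakeup and the queue head, and rounding a target down to a jiffy boundary so nearby sleepers coalesce:

```c
#include <assert.h>
#include <stdint.h>

#define QLEN 8

/* Wakeup requests ordered by target time, earliest first. */
static uint64_t wakeq[QLEN];
static int wakeq_n;

/* Queue a sleeper's target wakeup time, keeping the queue sorted. */
static void queue_wakeup(uint64_t t)
{
	int i = wakeq_n++;

	while (i > 0 && wakeq[i - 1] > t) {
		wakeq[i] = wakeq[i - 1];	/* shift later entries up */
		i--;
	}
	wakeq[i] = t;
}

/*
 * The broadcast CPU limits its sleep to the earlier of its own intended
 * wakeup and the head of the sleeper queue.
 */
static uint64_t bc_sleep_until(uint64_t own_wakeup)
{
	if (wakeq_n > 0 && wakeq[0] < own_wakeup)
		return wakeq[0];
	return own_wakeup;
}

/* Round a target down to a jiffy boundary so sleepers coalesce. */
static uint64_t align_to_jiffy(uint64_t t, uint64_t jiffy)
{
	return t - (t % jiffy);
}
```

Rounding down (never up) wakes a sleeper slightly early rather than late, which matches the constraint that a deep-idle CPU must not miss its deadline.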

I don't like the concept of a dedicated broadcast CPU however. I'd rather have a
general queue (or per node) of sleepers needing a wakeup and more/less dynamically
pick a waker to be the last man standing, but it does make things a bit more
tricky with tickless scheduler (non-idle).

Still, I wonder if we could just have some algorithm to actually pick wakers
more dynamically based on who ever has the closest "next wakeup" planned,
that sort of thing. A fixed broadcaster will create an imbalance in
power/thermal within the chip in addition to needing to be moved around on
hotplug etc...

Cheers,
Ben.
Preeti U Murthy July 27, 2013, 7:50 a.m. UTC | #8
Hi Ben,

On 07/27/2013 12:00 PM, Benjamin Herrenschmidt wrote:
> On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote:
>> *The lapic of a broadcast CPU is active always*. Say CPUX, wants the
>> broadcast CPU to wake it up at timeX.  Since we cannot program the lapic
>> of a remote CPU, CPUX will need to send an IPI to the broadcast CPU,
>> asking it to program its lapic to fire at timeX so as to wake up CPUX.
>> *With multiple CPUs the overhead of sending IPI, could result in
>> performance bottlenecks and may not scale well.*
>>
>> Hence the workaround is that the broadcast CPU on each of its timer
>> interrupt checks if any of the next timer event of a CPU in deep idle
>> state has expired, which can very well be found from dev->next_event of
>> that CPU. For example the timeX that has been mentioned above has
>> expired. If so the broadcast handler is called to send an IPI to the
>> idling CPU to wake it up.
>>
>> *If the broadcast CPU, is in tickless idle, its timer interrupt could be
>> many ticks away. It could miss waking up a CPU in deep idle*, if its
>> wakeup is much before this timer interrupt of the broadcast CPU. But
>> without tickless idle, atleast at each period we are assured of a timer
>> interrupt. At which time broadcast handling is done as stated in the
>> previous paragraph and we will not miss wakeup of CPUs in deep idle states.
> 
> But that means a great loss of power saving on the broadcast CPU when the machine
> is basically completely idle. We might be able to come up with some thing better.
> 
> (Note : I do no know the timer offload code if it exists already, I'm describing
> how things could happen "out of the blue" without any knowledge of pre-existing
> framework here)
> 
> We can know when the broadcast CPU expects to wake up next. When a CPU goes to
> a deep sleep state, it can then
> 
>  - Indicate to the broadcast CPU when it intends to be woken up by queuing
> itself into an ordered queue (ordered by target wakeup time). (OPTIMISATION:
> Play with the locality of that: have one queue (and one "broadcast CPU") per
> chip or per node instead of a global one to limit cache bouncing).
> 
>  - Check if that happens before the broadcast CPU intended wake time (we
> need statistics to see how often that happens), and in that case send an IPI
> to wake it up now. When the broadcast CPU goes to sleep, it limits its sleep
> time to the min of it's intended sleep time and the new sleeper time.
> (OPTIMISATION: Dynamically chose a broadcast CPU based on closest expiry ?)
> 
>  - We can probably limit spurrious wakeups a *LOT* by aligning that target time
> to a global jiffy boundary, meaning that several CPUs going to idle are likely
> to be choosing the same. Or maybe better, an adaptative alignment by essentially
> getting more coarse grained as we go further in the future
> 
>  - When the "broadcast" CPU goes to sleep, it can play the same game of alignment.
> 
> I don't like the concept of a dedicated broadcast CPU however. I'd rather have a
> general queue (or per node) of sleepers needing a wakeup and more/less dynamically
> pick a waker to be the last man standing, but it does make things a bit more
> tricky with tickless scheduler (non-idle).
> 
> Still, I wonder if we could just have some algorithm to actually pick wakers
> more dynamically based on who ever has the closest "next wakeup" planned,
> that sort of thing. A fixed broadcaster will create an imbalance in
> power/thermal within the chip in addition to needing to be moved around on
> hotplug etc...

Thank you for having listed out the above suggestions. Below, I will
bring out some ideas about how the concerns that you have raised can be
addressed in the increasing order of priority.

- To begin with, I think we can have the following model, which lets the
responsibility of the broadcast CPU float around certain CPUs, i.e. not
have a dedicated broadcast CPU. I will refer to the broadcast CPU as the
bc_cpu henceforth for convenience.

1. The first CPU that intends to enter deep sleep state will be the bc_cpu.

2. Every other CPU that intends to enter deep idle state will enter
themselves into a mask, say the bc_mask, which is already being done
today, after they check that a bc_cpu has been assigned.

3. The bc_cpu should not enter tickless idle, until step 5a holds true.

4. So on every timer interrupt, which is at least every period, it
checks the bc_mask to see if any CPUs need to be woken up.

5. The bc_cpu should not enter tickless idle *until* it is de-nominated
as the bc_cpu. The de-nomination occurs when:
  a. In one of its timer interrupts, it does broadcast handling and finds
that there are no CPUs left to be woken up.

6. So if 5a holds, then there is no bc_cpu anymore until a CPU decides
to enter deep idle state again, in which case steps 1 to 5 repeat.
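Steps 1 to 6 above can be modelled, very roughly, by this userspace sketch (all names invented for illustration; the actual wakeup of expired CPUs is elided):

```c
#include <assert.h>

#define NO_BC (-1)

static int bc_cpu = NO_BC;	/* currently nominated bc_cpu, if any */
static unsigned int bc_mask;	/* CPUs in deep idle awaiting wakeup */

/* Steps 1/2: a CPU that intends to enter deep idle. */
static void enter_deep_idle(int cpu)
{
	if (bc_cpu == NO_BC)
		bc_cpu = cpu;		/* first sleeper is nominated bc_cpu */
	else
		bc_mask |= 1u << cpu;	/* the rest just join the mask */
}

/* Steps 4/5a: run on each of the bc_cpu's timer interrupts. */
static void bc_tick(void)
{
	/* (broadcast handling that wakes expired CPUs and clears
	 * their bits in bc_mask is elided here) */
	if (bc_mask == 0)
		bc_cpu = NO_BC;		/* step 5a: de-nominate */
}

/* Steps 3/5: a CPU may go tickless only while it is not the bc_cpu. */
static int can_go_tickless(int cpu)
{
	return cpu != bc_cpu;
}
```

Once de-nominated, there is no bc_cpu until the next CPU calls `enter_deep_idle()`, at which point the cycle restarts (step 6).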


- We could optimize this further, to allow the bc_cpu to enter tickless
idle, even while it is nominated as one. This can be the next step, if
we can get the above to work stably.

You have already brought out this point, so I will just reword it. Each
time broadcast handling is done, the bc_cpu needs to check whether the
wakeup time of any CPU that has entered a deep idle state, and is yet to
be woken up, is before the bc_cpu's own wakeup time, which was programmed
for its local events.

If so, then reprogram the decrementer to the wakeup time of that CPU in
deep idle state.

But we need to keep in mind one point. When CPUs go into deep idle, they
cannot program the local timer of the bc_cpu to their wakeup time. This
is because a CPU cannot program the timer of a remote CPU.

Therefore the only time we can check if 'wakeup time of the CPU that
enters deep idle state is before broadcast CPU's intended wake time so
as to reprogram the decrementer', is in the broadcast handler itself,
which is done *on* the bc_cpu alone.
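The reprogramming decision made inside the broadcast handler, on the bc_cpu alone, might look like this sketch (the `cpu_wakeup` table and all names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS 4
#define TIME_MAX UINT64_MAX

static uint64_t cpu_wakeup[NR_CPUS];	/* pending deep-idle wakeup times */
static unsigned int bc_mask;		/* CPUs still waiting to be woken */

/*
 * Called from the broadcast handler on the bc_cpu: pick the earlier of
 * the bc_cpu's own next local event and the earliest pending deep-idle
 * wakeup, and program the decrementer to that time.
 */
static uint64_t next_decrementer(uint64_t own_next_event)
{
	uint64_t earliest = TIME_MAX;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if ((bc_mask & (1u << cpu)) && cpu_wakeup[cpu] < earliest)
			earliest = cpu_wakeup[cpu];

	return earliest < own_next_event ? earliest : own_next_event;
}
```

Since a remote CPU cannot program the bc_cpu's timer, this comparison can only be made here, which is why the bc_cpu cannot simply stay in tickless idle while CPUs are queued.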



What do you think?


- Coming to your third suggestion of aligning the wakeup time of CPUs, I
will spend some time on this and get back regarding the same.


> 
> Cheers,
> Ben.
> 

Thank you

Regards
Preeti U Murthy
Vaidyanathan Srinivasan July 29, 2013, 5:11 a.m. UTC | #9
* Benjamin Herrenschmidt <benh@kernel.crashing.org> [2013-07-27 16:30:05]:

> On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote:
> > *The lapic of a broadcast CPU is active always*. Say CPUX, wants the
> > broadcast CPU to wake it up at timeX.  Since we cannot program the lapic
> > of a remote CPU, CPUX will need to send an IPI to the broadcast CPU,
> > asking it to program its lapic to fire at timeX so as to wake up CPUX.
> > *With multiple CPUs the overhead of sending IPI, could result in
> > performance bottlenecks and may not scale well.*
> > 
> > Hence the workaround is that the broadcast CPU on each of its timer
> > interrupt checks if any of the next timer event of a CPU in deep idle
> > state has expired, which can very well be found from dev->next_event of
> > that CPU. For example the timeX that has been mentioned above has
> > expired. If so the broadcast handler is called to send an IPI to the
> > idling CPU to wake it up.
> > 
> > *If the broadcast CPU, is in tickless idle, its timer interrupt could be
> > many ticks away. It could miss waking up a CPU in deep idle*, if its
> > wakeup is much before this timer interrupt of the broadcast CPU. But
> > without tickless idle, atleast at each period we are assured of a timer
> > interrupt. At which time broadcast handling is done as stated in the
> > previous paragraph and we will not miss wakeup of CPUs in deep idle states.
> 
> But that means a great loss of power saving on the broadcast CPU when the machine
> is basically completely idle. We might be able to come up with some thing better.

Hi Ben,

Yes, we will need to improve on this case in stages.  In the current
design, we will have to hold one of the CPUs in a shallow idle state
(nap) to wake up the other deep-idle cpus.  The cost of keeping the
periodic tick ON on the broadcast CPU is minimal (but not zero), since
we would not allow that CPU to enter any deep idle states anyway, even
if there were no periodic timers queued.
 
> (Note : I do no know the timer offload code if it exists already, I'm describing
> how things could happen "out of the blue" without any knowledge of pre-existing
> framework here)
> 
> We can know when the broadcast CPU expects to wake up next. When a CPU goes to
> a deep sleep state, it can then
> 
>  - Indicate to the broadcast CPU when it intends to be woken up by queuing
> itself into an ordered queue (ordered by target wakeup time). (OPTIMISATION:
> Play with the locality of that: have one queue (and one "broadcast CPU") per
> chip or per node instead of a global one to limit cache bouncing).
> 
>  - Check if that happens before the broadcast CPU intended wake time (we
> need statistics to see how often that happens), and in that case send an IPI
> to wake it up now. When the broadcast CPU goes to sleep, it limits its sleep
> time to the min of it's intended sleep time and the new sleeper time.
> (OPTIMISATION: Dynamically chose a broadcast CPU based on closest expiry ?)

This will be an improvement, and the idea we have is to use
a hierarchical method of finding a waking CPU within the core/socket/node
in order to find a better fit, and ultimately send an IPI to wake up
the broadcast CPU only if there is no other fit.  That condition would
imply that more CPUs are in deep idle state, so the cost of sending an
IPI to reprogram the broadcast cpu's local timer may well pay off.

>  - We can probably limit spurrious wakeups a *LOT* by aligning that target time
> to a global jiffy boundary, meaning that several CPUs going to idle are likely
> to be choosing the same. Or maybe better, an adaptative alignment by essentially
> getting more coarse grained as we go further in the future
> 
>  - When the "broadcast" CPU goes to sleep, it can play the same game of alignment.

CPUs entering a deep idle state would need to wake up only at a jiffy
boundary, or at the jiffy boundary just before the target wakeup time.
Your point is: can the broadcast cpu wake up the sleeping CPU *around*
the designated wakeup time (earlier), so as to avoid reprogramming its
timer?
 
> I don't like the concept of a dedicated broadcast CPU however. I'd rather have a
> general queue (or per node) of sleepers needing a wakeup and more/less dynamically
> pick a waker to be the last man standing, but it does make things a bit more
> tricky with tickless scheduler (non-idle).
> 
> Still, I wonder if we could just have some algorithm to actually pick wakers
> more dynamically based on who ever has the closest "next wakeup" planned,
> that sort of thing. A fixed broadcaster will create an imbalance in
> power/thermal within the chip in addition to needing to be moved around on
> hotplug etc...

Right, Ben.  The hierarchical way of selecting the waker will help us
have multiple wakers in different sockets/cores.  The broadcast
framework allows us to decouple the cpu going to idle and the waker to
be selected independently.  This patch series is the start, where we
pick one waker and allow the role to move around.  The ideal goal to
achieve would be to have multiple wakers serving wakeup requests from
a queue (mask), where the wakers are generally busy or just-idle cpus
that need not prevent themselves from going into tickless idle or deep
idle states just to wake up another one.  These optimizations can adapt
to the case where we need the last cpu standing to stay in shallow idle
mode and keep its tick, with the wakeup events queued to target one of
the deep-idle cpus.

--Vaidy
Vaidyanathan Srinivasan July 29, 2013, 5:28 a.m. UTC | #10
* Preeti U Murthy <preeti@linux.vnet.ibm.com> [2013-07-27 13:20:37]:

> Hi Ben,
> 
> On 07/27/2013 12:00 PM, Benjamin Herrenschmidt wrote:
> > On Fri, 2013-07-26 at 08:09 +0530, Preeti U Murthy wrote:
> >> *The lapic of a broadcast CPU is active always*. Say CPUX, wants the
> >> broadcast CPU to wake it up at timeX.  Since we cannot program the lapic
> >> of a remote CPU, CPUX will need to send an IPI to the broadcast CPU,
> >> asking it to program its lapic to fire at timeX so as to wake up CPUX.
> >> *With multiple CPUs the overhead of sending IPI, could result in
> >> performance bottlenecks and may not scale well.*
> >>
> >> Hence the workaround is that the broadcast CPU on each of its timer
> >> interrupt checks if any of the next timer event of a CPU in deep idle
> >> state has expired, which can very well be found from dev->next_event of
> >> that CPU. For example the timeX that has been mentioned above has
> >> expired. If so the broadcast handler is called to send an IPI to the
> >> idling CPU to wake it up.
> >>
> >> *If the broadcast CPU, is in tickless idle, its timer interrupt could be
> >> many ticks away. It could miss waking up a CPU in deep idle*, if its
> >> wakeup is much before this timer interrupt of the broadcast CPU. But
> >> without tickless idle, atleast at each period we are assured of a timer
> >> interrupt. At which time broadcast handling is done as stated in the
> >> previous paragraph and we will not miss wakeup of CPUs in deep idle states.
> > 
> > But that means a great loss of power saving on the broadcast CPU when the machine
> > is basically completely idle. We might be able to come up with some thing better.
> > 
> > (Note : I do no know the timer offload code if it exists already, I'm describing
> > how things could happen "out of the blue" without any knowledge of pre-existing
> > framework here)
> > 
> > We can know when the broadcast CPU expects to wake up next. When a CPU goes to
> > a deep sleep state, it can then
> > 
> >  - Indicate to the broadcast CPU when it intends to be woken up by queuing
> > itself into an ordered queue (ordered by target wakeup time). (OPTIMISATION:
> > Play with the locality of that: have one queue (and one "broadcast CPU") per
> > chip or per node instead of a global one to limit cache bouncing).
> > 
> >  - Check if that happens before the broadcast CPU intended wake time (we
> > need statistics to see how often that happens), and in that case send an IPI
> > to wake it up now. When the broadcast CPU goes to sleep, it limits its sleep
> > time to the min of it's intended sleep time and the new sleeper time.
> > (OPTIMISATION: Dynamically chose a broadcast CPU based on closest expiry ?)
> > 
> >  - We can probably limit spurrious wakeups a *LOT* by aligning that target time
> > to a global jiffy boundary, meaning that several CPUs going to idle are likely
> > to be choosing the same. Or maybe better, an adaptative alignment by essentially
> > getting more coarse grained as we go further in the future
> > 
> >  - When the "broadcast" CPU goes to sleep, it can play the same game of alignment.
> > 
> > I don't like the concept of a dedicated broadcast CPU however. I'd rather have a
> > general queue (or per node) of sleepers needing a wakeup and more/less dynamically
> > pick a waker to be the last man standing, but it does make things a bit more
> > tricky with tickless scheduler (non-idle).
> > 
> > Still, I wonder if we could just have some algorithm to actually pick wakers
> > more dynamically based on who ever has the closest "next wakeup" planned,
> > that sort of thing. A fixed broadcaster will create an imbalance in
> > power/thermal within the chip in addition to needing to be moved around on
> > hotplug etc...
> 
> Thank you for having listed out the above suggestions. Below, I will
> bring out some ideas about how the concerns that you have raised can be
> addressed in the increasing order of priority.
> 
> - To begin with, I think we can have the following model to have the
> responsibility of the broadcast CPU float around certain CPUs. i.e. Not
> have a dedicated broadcast CPU. I will refer to the broadcast CPU as the
> bc_cpu henceforth for convenience.
> 
> 1. The first CPU that intends to enter deep sleep state will be the bc_cpu.
> 
> 2. Every other CPU that intends to enter deep idle state will enter
> themselves into a mask, say the bc_mask, which is already being done
> today, after they check that a bc_cpu has been assigned.
> 
> 3. The bc_cpu should not enter tickless idle, until step 5a holds true.
> 
> 4. So on every timer interrupt, which is at-least every period, it
> checks the bc_mask to see if any CPUs need to be woken up.
> 
> 5. The bc cpu should not enter tickless idle *until* it is de-nominated
> as the bc_cpu. The de-nomination occurs when:
>   a. In one of its timer interrupts, it does broadcast handling to find
> out that there are no CPUs to be woken up.
> 
> 6. So if 5a holds, then there is no bc_cpu anymore until a CPU decides
> to enter deep idle state again, in which case steps 1 to 5 repeat.
> 
> 
> - We could optimize this further, to allow the bc_cpu to enter tickless
> idle, even while it is nominated as one. This can be the next step, if
> we can get the above to work stably.
> 
> You have already brought out this point, so I will just reword it. Each
> time broadcast handling is done, the bc_cpu needs to check if the wakeup
> time of a CPU, that has entered deep idle state, and is yet to be woken
> up, is before the bc_cpu's wakeup time, which was programmed to its
> local events.
> 
> If so, then reprogram the decrementer to the wakeup time of a CPU that
> is in deep idle state.
> 
> But we need to keep in mind one point. When CPUs go into deep idle, they
> cannot program the local timer of the bc_cpu to their wakeup time. This
> is because a CPU cannot program the timer of a remote CPU.
> 
> Therefore the only time we can check if 'wakeup time of the CPU that
> enters deep idle state is before broadcast CPU's intended wake time so
> as to reprogram the decrementer', is in the broadcast handler itself,
> which is done *on* the bc_cpu alone.
> 
> 
> 
> What do you think?
> 
> 
> - Coming to your third suggestion of aligning the wakeup time of CPUs, I
> will spend some time on this and get back regarding the same.

Hi Preeti,

One of Ben's suggestions is to coarse-grain the waker's timer event.
The trade-off is whether we issue an IPI for each CPU needing a wakeup
or let the bc_cpu wake up periodically and *see* that there is a new
request.  The interval for a wakeup request will be much coarser
grained than a tick.  We may be able to easily reduce the power impact
of not letting the bc_cpu go tickless by choosing the right coarse
grained period.  For example, we can let the bc_cpu look for new
wakeup requests once every 10 or 20 jiffies rather than every jiffy,
and align the wakeup requests to this coarse grained boundary.  We do
pay a power penalty by waking up a few jiffies early, which we can
mitigate by re-evaluating the situation and queueing a fine grained
timer for the right jiffy on the bc_cpu if such a situation arises.

The point is that a new wakeup request will generally *ask* for a
wakeup later than the coarse grained period.  So the bc_cpu can wake
up at the coarse time period and reprogram its timer to the right
jiffy.

--Vaidy
Preeti U Murthy July 29, 2013, 10:11 a.m. UTC | #11
Hi,

On 07/29/2013 10:58 AM, Vaidyanathan Srinivasan wrote:
> * Preeti U Murthy <preeti@linux.vnet.ibm.com> [2013-07-27 13:20:37]:
> 
>> [... nested quotes from the earlier messages in this thread snipped ...]
> 
> Hi Preeti,
> 
> One of Ben's suggestions is to coarse grain the waker's timer event.
> The trade off is whether we issue an IPI for each CPU needing a wakeup
> or let the bc_cpu wakeup periodically and *see* that there is a new
> request.  The interval for a wakeup request will be much coarse grain
> than a tick.  We maybe able to easily reduce the power impact of not
> letting bc_cpu go tickless by choosing a right coarse grain period.
> For example we can let the bc_cpu look for new wakeup requests once in
> every 10 or 20 jiffies rather than every jiffy and align the wakeup
> requests at this coarse grain wakeup.  We do pay a power penalty by
> waking up few jiffies earlier which we can mitigate by reevaluating
> the situation and queueing a fine grain timer to the right jiffy on
> the bc_cpu if such a situation arises.
> 
> The point is a new wakeup request will *ask* for a wakeup later than
> the coarse grain period.  So the bc_cpu can wakeup at the coarse time
> period and reprogram its timer to the right jiffy.
> 
> --Vaidy

Thanks Ben, Vaidy for your suggestions.

I will work on the second version of this patchset, which will address
two major issues that are brought out in this thread:

1. Dynamically choosing a broadcast CPU and floating this
responsibility around.  This has a lot of scope for optimization and
can be done in steps.  To begin with, the first CPU that sleeps could
be nominated the broadcast CPU.  It would be relieved of this duty
when there are no more CPUs in deep idle states left to be woken up.
The next time a CPU enters deep idle, it will be nominated the
broadcast CPU.

However, although the above addresses the problems associated with a
dedicated broadcast CPU, it still does not solve the issue of the
broadcast CPU having to refrain from entering tickless idle.

Point 2 will address this issue to an extent.

2. Have a timer on the broadcast CPU that is specifically intended to
wake up CPUs in deep idle states.  This timer will have a fixed period
much larger than a jiffy, yet frequent enough not to miss the wakeups
of CPUs in deep idle states.  But if the wakeup of a CPU entering deep
idle falls before this fixed-period event, that CPU will send an IPI
to the broadcast CPU asking it to reprogram its decrementer for this
wakeup.  This will allow the broadcast CPU to enter tickless idle.

This is as opposed to the current approach, where the broadcast CPU
wakes up on every periodic tick to check whether broadcast is
required.

Do let me know if I have missed any points that need to be considered
as pressing next steps.

Thank you

Regards
Preeti U Murthy

Patch

diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 8ed0fb3..68a636f 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -862,6 +862,11 @@  static void decrementer_timer_broadcast(const struct cpumask *mask)
 	arch_send_tick_broadcast(mask);
 }
 
+int arch_can_stop_idle_tick(int cpu)
+{
+	return cpu != bc_cpu;
+}
+
 static void register_decrementer_clockevent(int cpu)
 {
 	struct clock_event_device *dec = &per_cpu(decrementers, cpu);
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 6960172..e9ffa84 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -700,8 +700,15 @@  static void tick_nohz_full_stop_tick(struct tick_sched *ts)
 #endif
 }
 
+int __weak arch_can_stop_idle_tick(int cpu)
+{
+	return 1;
+}
+
 static bool can_stop_idle_tick(int cpu, struct tick_sched *ts)
 {
+	if (!arch_can_stop_idle_tick(cpu))
+		return false;
 	/*
 	 * If this cpu is offline and it is the one which updates
 	 * jiffies, then give up the assignment and let it be taken by