Message ID:   1271771163-2779-1-git-send-email-chase.douglas@canonical.com
State:        Accepted
Delegated to: Stefan Bader
I think I am a little confused by the other discussions going on around this. It sounds a bit like, the moment you submitted this, Peter responded with yet another idea. Or am I just confused?

-Stefan
On Tue, Apr 20, 2010 at 2:14 PM, Stefan Bader <stefan.bader@canonical.com> wrote:
> I think I am a little confused by other discussions going on on this. A bit it
> sounds like the moment you submitted this there was Peter responding with yet
> another idea. Or am I just confused?

Peter Zijlstra has another idea: using the idle load balancing (ILB) mechanism to do the load accounting. However, his idea is so far not fully thought out, and I presume not tested at all. I pointed out a few concerns with his approach, and he agreed there are issues that would need to be worked around. Right now, the only tested and reasonable solution is my patch. I'm waiting to hear back from him on why he feels using the ILB mechanism is a better approach, but I've also made clear that I don't understand that code well enough to create a solution myself. I think you really need to understand ILB deep down to do it right.

I think it makes sense to put this in Karmic and leave the patch in Lucid for now, until upstream decides its final direction.

P.S.: I CC'd kernel-team because I figured we would all like to see the discussion, but Peter un-CC'd it because of bounces. Is there a proper way to do this, or should we just not CC our list?

-- Chase
On 04/20/2010 12:28 PM, Chase Douglas wrote:
> P.S.: I CC'd kernel-team cause I figured we all would like to see the
> discussion, but Peter un-CC'd it cause of bounces. Is there a proper
> way to do this, or should we just not CC our list?
>
> -- Chase

I turned off the option in the list serv that annoys unsubscribed email submitters.

rtg
Based on previous review and the current state of the discussion, I have applied this to Karmic master.
diff --git a/kernel/sched.c b/kernel/sched.c
index 81ede13..c372249 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2967,6 +2967,7 @@ unsigned long nr_iowait(void)
 
 /* Variables and functions for calc_load */
 static atomic_long_t calc_load_tasks;
+static atomic_long_t calc_load_tasks_deferred;
 static unsigned long calc_load_update;
 unsigned long avenrun[3];
 EXPORT_SYMBOL(avenrun);
@@ -3021,7 +3022,7 @@ void calc_global_load(void)
  */
 static void calc_load_account_active(struct rq *this_rq)
 {
-	long nr_active, delta;
+	long nr_active, delta, deferred;
 
 	nr_active = this_rq->nr_running;
 	nr_active += (long) this_rq->nr_uninterruptible;
@@ -3029,6 +3030,25 @@ static void calc_load_account_active(struct rq *this_rq)
 	if (nr_active != this_rq->calc_load_active) {
 		delta = nr_active - this_rq->calc_load_active;
 		this_rq->calc_load_active = nr_active;
+
+		/*
+		 * Update calc_load_tasks only once per cpu in 10 tick update
+		 * window.
+		 */
+		if (unlikely(time_before(jiffies, this_rq->calc_load_update) &&
+			     time_after_eq(jiffies, calc_load_update))) {
+			if (delta)
+				atomic_long_add(delta,
+						&calc_load_tasks_deferred);
+			return;
+		}
+
+		if (atomic_long_read(&calc_load_tasks_deferred)) {
+			deferred = atomic_long_xchg(&calc_load_tasks_deferred,
+						    0);
+			delta += deferred;
+		}
+
 		atomic_long_add(delta, &calc_load_tasks);
 	}
 }
@@ -3072,8 +3092,8 @@ static void update_cpu_load(struct rq *this_rq)
 	}
 
 	if (time_after_eq(jiffies, this_rq->calc_load_update)) {
-		this_rq->calc_load_update += LOAD_FREQ;
 		calc_load_account_active(this_rq);
+		this_rq->calc_load_update += LOAD_FREQ;
 	}
 }