From patchwork Thu Apr 1 20:23:41 2010
X-Patchwork-Submitter: Chase Douglas
X-Patchwork-Id: 49241
X-Patchwork-Delegate: apw@canonical.com
From: Chase Douglas
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Peter Zijlstra, "Rafael J. Wysocki", kernel-team,
 Thomas Gleixner, Ingo Molnar
Subject: [REGRESSION 2.6.30][PATCH v2] sched: update load count only once
 per cpu in 10 tick update window
Date: Thu, 1 Apr 2010 16:23:41 -0400
Message-Id: <1270153421-9199-1-git-send-email-chase.douglas@canonical.com>

A task that often runs for less than 10 ticks at a time is likely to be
left out of the load avg calculation. It is possible to craft a task that
is runnable 90% of the time but sleeps at least once every 10 ticks. When
run on an otherwise idle system, the load avg will remain near 0.00 even
though the cpu usage is 90%.
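To make the scenario concrete, here is a minimal userspace sketch of such a
task (an illustration added for this writeup, not part of the original
submission). It assumes HZ=250, i.e. a 4 ms tick, so the 10 tick window is
40 ms: the loop stays runnable for 36 ms and then blocks for 4 ms, giving
roughly 90% cpu usage while sleeping at least once every 10 ticks. On older
glibc, link with -lrt for clock_gettime().

/* Hypothetical reproducer sketch; assumes HZ=250 (4 ms tick). */
#include <time.h>

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	const struct timespec nap = { 0, 4 * 1000 * 1000 };	/* one tick */

	for (;;) {
		long long start = now_ns();

		/* Stay runnable for 9 ticks' worth of wall time... */
		while (now_ns() - start < 36 * 1000 * 1000LL)
			;
		/* ...then sleep across a tick, dodging the idle sampling. */
		nanosleep(&nap, NULL);
	}
}

On an affected kernel, top should show this task using ~90% cpu while the
load avg reported by uptime stays near 0.00 on an otherwise idle system.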
There's a period of 10 ticks where calc_load_tasks is updated by all the
cpus for the load avg. Usually all the cpus do this during the first tick.
If any cpus go idle, calc_load_tasks is decremented accordingly. However,
if they wake back up, calc_load_tasks is not incremented. Thus, if cpus go
idle during the 10 tick period, calc_load_tasks may be decremented to a
non-representative value. This issue can lead to systems having a load avg
of exactly 0, even though the real load avg could theoretically be up to
NR_CPUS.

This is a regression since 2.6.30. The offending commit is:
dce48a84adf1806676319f6f480e30a6daa012f9.

This change defers calc_load_tasks accounting: once a cpu has updated the
count within the 10 tick update window, any further changes from that cpu
are deferred until after the window has passed. A few points:

* A global atomic deferral counter, rather than per-cpu variables, is
  needed because a cpu may go NOHZ idle and then be unable to fold its
  deferred delta into the global calc_load_tasks variable for subsequent
  load calculations.

* It is not enough to add calls to account for the load when a cpu is
  awakened:
  - The load avg calculation must be independent of cpu load.
  - If a cpu is awakened by one task, but has more tasks scheduled on it
    before the end of the update window, only the first task would be
    accounted for.

BugLink: http://bugs.launchpad.net/bugs/513848
Signed-off-by: Chase Douglas
---
 kernel/sched.c |   24 ++++++++++++++++++++++--
 1 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 81ede13..bc5233f 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2967,6 +2967,7 @@ unsigned long nr_iowait(void)
 
 /* Variables and functions for calc_load */
 static atomic_long_t calc_load_tasks;
+static atomic_long_t calc_load_tasks_deferred;
 static unsigned long calc_load_update;
 unsigned long avenrun[3];
 EXPORT_SYMBOL(avenrun);
@@ -3021,7 +3022,7 @@ void calc_global_load(void)
  */
 static void calc_load_account_active(struct rq *this_rq)
 {
-	long nr_active, delta;
+	long nr_active, delta, deferred;
 
 	nr_active = this_rq->nr_running;
 	nr_active += (long) this_rq->nr_uninterruptible;
@@ -3029,6 +3030,25 @@ static void calc_load_account_active(struct rq *this_rq)
 	if (nr_active != this_rq->calc_load_active) {
 		delta = nr_active - this_rq->calc_load_active;
 		this_rq->calc_load_active = nr_active;
+
+		/*
+		 * Update calc_load_tasks only once per cpu in 10 tick update
+		 * window.
+		 */
+		if (unlikely(time_before(jiffies, this_rq->calc_load_update) &&
+			     time_after_eq(jiffies, calc_load_update))) {
+			if (delta)
+				atomic_long_add(delta,
+						&calc_load_tasks_deferred);
+			return;
+		}
+
+		if (calc_load_tasks_deferred.counter) {
+			deferred = atomic_long_xchg(&calc_load_tasks_deferred,
+						    0);
+			delta += deferred;
+		}
+
 		atomic_long_add(delta, &calc_load_tasks);
 	}
 }
@@ -3072,8 +3092,8 @@ static void update_cpu_load(struct rq *this_rq)
 	}
 
 	if (time_after_eq(jiffies, this_rq->calc_load_update)) {
-		this_rq->calc_load_update += LOAD_FREQ;
 		calc_load_account_active(this_rq);
+		this_rq->calc_load_update += LOAD_FREQ;
 	}
 }
 
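For context (not part of the patch): calc_global_load() reads the
calc_load_tasks sample once per LOAD_FREQ window and folds it into
avenrun[] with a fixed-point exponential decay. The standalone sketch below
approximates that consumer; the constants mirror include/linux/sched.h of
this era, but the comparison harness is hypothetical. It shows why a sample
that momentarily reads 0 at the window boundary keeps the reported load avg
pinned at 0.00:

#include <stdio.h>

#define FSHIFT		11			/* bits of fixed-point precision */
#define FIXED_1		(1UL << FSHIFT)		/* 1.0 in fixed-point */
#define EXP_1		1884			/* 1/exp(5sec/1min) */

#define LOAD_INT(x)	((x) >> FSHIFT)
#define LOAD_FRAC(x)	LOAD_INT(((x) & (FIXED_1 - 1)) * 100)

/* Same shape as the kernel's calc_load() helper of this era. */
static unsigned long
calc_load(unsigned long load, unsigned long exp, unsigned long active)
{
	load *= exp;
	load += active * (FIXED_1 - exp);
	return load >> FSHIFT;
}

int main(void)
{
	unsigned long good = 0, bad = 0;
	int i;

	/* 60 LOAD_FREQ windows, i.e. 5 minutes of samples. */
	for (i = 0; i < 60; i++) {
		/* correct sampling sees the ~1 runnable task */
		good = calc_load(good, EXP_1, 1 * FIXED_1);
		/* buggy sampling catches the cpu momentarily idle */
		bad = calc_load(bad, EXP_1, 0);
	}

	printf("1-min load: %lu.%02lu vs %lu.%02lu\n",
	       LOAD_INT(good), LOAD_FRAC(good),
	       LOAD_INT(bad), LOAD_FRAC(bad));
	return 0;
}

Run, this prints a 1-min load converging toward 1.00 for the correct
sampling and exactly 0.00 for the buggy one, matching the symptom described
above.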