From patchwork Tue Jan 18 15:34:22 2011
X-Patchwork-Submitter: Stefan Bader
X-Patchwork-Id: 79323
X-Patchwork-Delegate: stefan.bader@canonical.com
From: Stefan Bader
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 1/2] sched: Prevent divide by zero when cpu_power is 0
Date: Tue, 18 Jan 2011 16:34:22 +0100
Message-Id: <1295364863-9028-2-git-send-email-stefan.bader@canonical.com>
In-Reply-To: <1295364863-9028-1-git-send-email-stefan.bader@canonical.com>
References: <1295364863-9028-1-git-send-email-stefan.bader@canonical.com>

From: Andrew Dickinson

This patch fixes a corner case where we crash with a divide_error in
find_busiest_group(). I don't fully understand what causes
sds.total_pwr to be zero in find_busiest_group(), but this patch guards
against the divide-by-zero bug. I also added safeguards around the
other routines in the scheduler code where we divide by power; that is
more of a just-in-case, and I'm definitely open to debate on it.
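
For context on the failure mode, here is a minimal user-space sketch
(purely illustrative, not taken from the original report; the variable
names mirror the scheduler's, and SCHED_LOAD_SCALE is assumed to be
1024 as in kernels of this era). On x86, an integer division by zero
raises a hardware divide error, which is the divide_error oops in
kernel context; run stand-alone, this program traps with SIGFPE:

	#include <stdio.h>

	#define SCHED_LOAD_SCALE 1024UL	/* assumed fixed-point scale */

	int main(void)
	{
		volatile unsigned long cpu_power = 0;	/* pathological case */
		unsigned long group_load = 2048;

		/*
		 * Mirrors the unguarded expression this patch replaces;
		 * with cpu_power == 0 the division traps (SIGFPE here,
		 * a divide_error oops in the kernel).
		 */
		unsigned long avg_load =
			(group_load * SCHED_LOAD_SCALE) / cpu_power;

		printf("avg_load = %lu\n", avg_load);
		return 0;
	}
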
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=16991
BugLink: http://bugs.launchpad.net/bugs/614853

Signed-off-by: Andrew Dickinson
Signed-off-by: Stefan Bader
Acked-by: Andy Whitcroft
---
 kernel/sched.c      |   10 +++++++---
 kernel/sched_fair.c |    4 +++-
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 7dd8aad..d4a4b14 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3836,7 +3836,9 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 	}
 
 	/* Adjust by relative CPU power of the group */
-	sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;
+	sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE);
+	if (group->cpu_power)
+		sgs->avg_load /= group->cpu_power;
 
 	/*
 	 * Consider the group unbalanced when the imbalance is larger
@@ -4119,7 +4121,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 	if (balance && !(*balance))
 		goto ret;
 
-	if (!sds.busiest || sds.busiest_nr_running == 0)
+	if (!sds.busiest || sds.busiest_nr_running == 0 || sds.total_pwr == 0)
 		goto out_balanced;
 
 	if (sds.this_load >= sds.max_load)
@@ -4184,7 +4186,9 @@ find_busiest_queue(struct sched_group *group, enum cpu_idle_type idle,
 		 * the load can be moved away from the cpu that is potentially
 		 * running at a lower capacity.
 		 */
-		wl = (wl * SCHED_LOAD_SCALE) / power;
+		wl = (wl * SCHED_LOAD_SCALE);
+		if (power)
+			wl /= power;
 
 		if (wl > max_load) {
 			max_load = wl;
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 01e311e..3087249 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1344,7 +1344,9 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 	}
 
 	/* Adjust by relative CPU power of the group */
-	avg_load = (avg_load * SCHED_LOAD_SCALE) / group->cpu_power;
+	avg_load = (avg_load * SCHED_LOAD_SCALE);
+	if (group->cpu_power)
+		avg_load /= group->cpu_power;
 
 	if (local_group) {
 		this_load = avg_load;
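
To illustrate the pattern the three scaling hunks share, here is a
minimal user-space sketch (scaled_load() is a made-up helper, not a
kernel function; SCHED_LOAD_SCALE of 1024 is again an assumption):
divide only when the divisor is non-zero, so a zero cpu_power leaves
the value at load * SCHED_LOAD_SCALE instead of trapping. The
find_busiest_group() hunk takes the other route and simply bails out
to out_balanced when sds.total_pwr is zero.

	#include <stdio.h>

	#define SCHED_LOAD_SCALE 1024UL	/* assumed fixed-point scale */

	/*
	 * Hypothetical helper mirroring the guarded form used in
	 * update_sg_lb_stats(), find_busiest_queue() and
	 * find_idlest_group() above.
	 */
	static unsigned long scaled_load(unsigned long load,
					 unsigned long power)
	{
		unsigned long scaled = load * SCHED_LOAD_SCALE;

		if (power)		/* guard against divide-by-zero */
			scaled /= power;
		return scaled;
	}

	int main(void)
	{
		printf("%lu\n", scaled_load(2048, 1024));  /* normal: 2048 */
		printf("%lu\n", scaled_load(2048, 0));	   /* guarded: 2097152 */
		return 0;
	}

Note the consequence of this form: with power == 0 the group's load is
left scaled but undivided, so the group still looks heavily loaded
rather than crashing the balancer.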