From patchwork Mon Nov 12 21:28:40 2012
X-Patchwork-Submitter: Herton Ronaldo Krzesinski
X-Patchwork-Id: 198465
From: Herton Ronaldo Krzesinski
To: Mike Galbraith
Cc: Peter Zijlstra, linux-kernel@vger.kernel.org, stable@vger.kernel.org,
 kernel-team@lists.ubuntu.com, Thomas Gleixner
Subject: [ 3.5.yuz extended stable] Patch "sched: Fix migration thread runtime bogosity" has been added to staging queue
Date: Mon, 12 Nov 2012 19:28:40 -0200
Message-Id: <1352755720-15887-1-git-send-email-herton.krzesinski@canonical.com>

This is a note to let you know that I have just added a patch titled

    sched: Fix migration thread runtime bogosity

to the linux-3.5.y-queue branch of the 3.5.yuz extended stable tree
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feel it should not be added to the 3.5 Linux
kernel, or have any other feedback related to it, please reply to this
email.

For more information on extended stable, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Herton

------

From e1ad6614897d7b13cb5531d8cdc0e905bb0964d3 Mon Sep 17 00:00:00 2001
From: Mike Galbraith
Date: Sat, 4 Aug 2012 05:44:14 +0200
Subject: [PATCH] sched: Fix migration thread runtime bogosity

commit 8f6189684eb4e85e6c593cd710693f09c944450a upstream.

Make the stop scheduler class do the same accounting as other classes.
Migration threads can be caught in the act while doing exec balancing,
leading to the below due to use of unmaintained ->se.exec_start. The
load that triggered this particular instance was an apparently
out-of-control, heavily threaded application that does system
monitoring in what equated to an exec bomb, with one of the VERY
frequently migrated tasks being ps.
 %CPU   PID USER     CMD
 99.3    45 root     [migration/10]
 97.7    53 root     [migration/12]
 97.0    57 root     [migration/13]
 90.1    49 root     [migration/11]
 89.6    65 root     [migration/15]
 88.7    17 root     [migration/3]
 80.4    37 root     [migration/8]
 78.1    41 root     [migration/9]
 44.2    13 root     [migration/2]

Signed-off-by: Mike Galbraith
Signed-off-by: Peter Zijlstra
Link: http://lkml.kernel.org/r/1344051854.6739.19.camel@marge.simpson.net
Signed-off-by: Thomas Gleixner
Signed-off-by: Herton Ronaldo Krzesinski
---
 kernel/sched/stop_task.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

--
1.7.9.5

diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index 7b386e8..da5eb5b 100644
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -27,8 +27,10 @@ static struct task_struct *pick_next_task_stop(struct rq *rq)
 {
        struct task_struct *stop = rq->stop;
 
-       if (stop && stop->on_rq)
+       if (stop && stop->on_rq) {
+               stop->se.exec_start = rq->clock_task;
                return stop;
+       }
 
        return NULL;
 }
@@ -52,6 +54,21 @@ static void yield_task_stop(struct rq *rq)
 
 static void put_prev_task_stop(struct rq *rq, struct task_struct *prev)
 {
+       struct task_struct *curr = rq->curr;
+       u64 delta_exec;
+
+       delta_exec = rq->clock_task - curr->se.exec_start;
+       if (unlikely((s64)delta_exec < 0))
+               delta_exec = 0;
+
+       schedstat_set(curr->se.statistics.exec_max,
+                       max(curr->se.statistics.exec_max, delta_exec));
+
+       curr->se.sum_exec_runtime += delta_exec;
+       account_group_exec_runtime(curr, delta_exec);
+
+       curr->se.exec_start = rq->clock_task;
+       cpuacct_charge(curr, delta_exec);
 }
 
 static void task_tick_stop(struct rq *rq, struct task_struct *curr, int queued)
@@ -60,6 +77,9 @@ static void task_tick_stop(struct rq *rq, struct task_struct *curr, int queued)
 
 static void set_curr_task_stop(struct rq *rq)
 {
+       struct task_struct *stop = rq->stop;
+
+       stop->se.exec_start = rq->clock_task;
 }
 
 static void switched_to_stop(struct rq *rq, struct task_struct *p)
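
------

As a simplified, user-space illustration of why an unmaintained
->se.exec_start yields bogus %CPU figures like the ones above: a reader
that credits the currently running task with "clock_task - exec_start"
of recent runtime sees an arbitrarily large delta when exec_start was
last set long ago, while the per-pick/per-put updates this patch adds
keep the delta on the order of the actual slice. The sketch below is
plain C with invented nanosecond values, not kernel code.

/* Illustration only -- not kernel code; all values are invented. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* Hypothetical clock and exec_start snapshots, in nanoseconds. */
        uint64_t clock_task  = 3600ULL * 1000000000ULL;         /* "now": 1 hour of uptime */
        uint64_t stale_start = 5ULL * 1000000000ULL;            /* never refreshed since ~5 s uptime */
        uint64_t fresh_start = clock_task - 2ULL * 1000000ULL;  /* refreshed 2 ms ago, as after this patch */

        /* Runtime delta a reader would credit to the running task. */
        printf("stale exec_start delta: %llu ns (~%llu s of bogus runtime)\n",
               (unsigned long long)(clock_task - stale_start),
               (unsigned long long)((clock_task - stale_start) / 1000000000ULL));
        printf("fresh exec_start delta: %llu ns\n",
               (unsigned long long)(clock_task - fresh_start));
        return 0;
}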