From patchwork Thu Oct 17 17:51:32 2019
X-Patchwork-Submitter: Connor Kuehl
X-Patchwork-Id: 1178863
From: Connor Kuehl
To: kernel-team@lists.ubuntu.com
Subject: [Xenial][SRU][CVE-2018-20784][PATCH 5/6] sched/fair: Optimize update_blocked_averages()
Date: Thu, 17 Oct 2019 10:51:32 -0700
Message-Id: <20191017175133.11149-6-connor.kuehl@canonical.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20191017175133.11149-1-connor.kuehl@canonical.com>
References: <20191017175133.11149-1-connor.kuehl@canonical.com>

From: Vincent Guittot

CVE-2018-20784

Removing a cfs_rq from rq->leaf_cfs_rq_list can break the parent/child
ordering of the list when it is added back. To remove an empty and
fully decayed cfs_rq, we must therefore remove its children too, so
that they are added back in the right order next time.

With normal PELT decay, a parent will be empty and fully decayed only
if all of its children are empty and fully decayed too. In that case,
we just have to ensure that the whole branch is added back when a new
task is enqueued. This has been the default behavior since:

  commit f6783319737f ("sched/fair: Fix insertion in rq->leaf_cfs_rq_list")

In the case of throttling, however, the PELT of a throttled cfs_rq is
not updated while the parent's is. This breaks the assumption above
unless we also remove the children of a cfs_rq that becomes throttled;
they are then added back when it is unthrottled and a sched_entity is
enqueued.

As throttled cfs_rqs are now removed from the list, we can also remove
the associated test in update_blocked_averages().

Signed-off-by: Vincent Guittot
Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: sargun@sargun.me
Cc: tj@kernel.org
Cc: xiexiuqi@huawei.com
Cc: xiezhipeng1@huawei.com
Link: https://lkml.kernel.org/r/1549469662-13614-2-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar
(backported from commit 31bc6aeaab1d1de8959b67edbed5c7a4b3cdbe7c)
[ Connor Kuehl: offset adjustments and the hunk in
  'update_blocked_averages' required manual placement. ]
Signed-off-by: Connor Kuehl
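A note on the first hunk of the diff below, since it is the subtle one:
rq->tmp_alone_branch marks the splice point of a branch that is still
being added to rq->leaf_cfs_rq_list, so deleting the very cfs_rq it
points at would leave the marker dangling. The following is a minimal
user-space sketch of that fix-up, assuming a toy doubly linked list; it
is not kernel code, and the names merely mirror kernel/sched/fair.c:

/* Toy model only: a plain doubly linked list standing in for the
 * kernel's, to show why the marker must move to ->prev before the
 * node it points at is deleted. */
#include <stdio.h>

struct list_head {
	struct list_head *prev, *next;
};

static void list_init(struct list_head *h) { h->prev = h->next = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->prev = n->next = n;
}

struct cfs_rq_model {
	struct list_head leaf_cfs_rq_list;
	const char *name;
};

int main(void)
{
	struct list_head leaf_list;         /* stands in for rq->leaf_cfs_rq_list */
	struct list_head *tmp_alone_branch; /* stands in for rq->tmp_alone_branch */
	struct cfs_rq_model child = { .name = "child" };

	list_init(&leaf_list);

	/* A child is inserted first (children precede parents in the
	 * list); tmp_alone_branch marks the partially built branch. */
	list_add_tail(&child.leaf_cfs_rq_list, &leaf_list);
	tmp_alone_branch = &child.leaf_cfs_rq_list;

	/* If the child is throttled before the branch is completed, the
	 * marker would dangle after deletion -- so move it to ->prev
	 * first, exactly as list_del_leaf_cfs_rq() now does. */
	if (tmp_alone_branch == &child.leaf_cfs_rq_list)
		tmp_alone_branch = child.leaf_cfs_rq_list.prev;
	list_del(&child.leaf_cfs_rq_list);

	printf("marker restored to list head: %s\n",
	       tmp_alone_branch == &leaf_list ? "yes" : "no");
	return 0;
}

Built with any C compiler, the sketch prints "marker restored to list
head: yes", which is the property the new code in
list_del_leaf_cfs_rq() preserves.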
---
 kernel/sched/fair.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d8c0966385de..8bc9a5e24380 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -354,6 +354,18 @@ static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 static inline void list_del_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	if (cfs_rq->on_list) {
+		struct rq *rq = rq_of(cfs_rq);
+
+		/*
+		 * With cfs_rq being unthrottled/throttled during an enqueue,
+		 * it can happen that tmp_alone_branch points to the leaf we
+		 * finally want to delete. In this case, tmp_alone_branch
+		 * moves to the prev element, but it will point to
+		 * rq->leaf_cfs_rq_list at the end of the enqueue.
+		 */
+		if (rq->tmp_alone_branch == &cfs_rq->leaf_cfs_rq_list)
+			rq->tmp_alone_branch = cfs_rq->leaf_cfs_rq_list.prev;
+
 		list_del_rcu(&cfs_rq->leaf_cfs_rq_list);
 		cfs_rq->on_list = 0;
 	}
@@ -3659,6 +3671,10 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 		/* adjust cfs_rq_clock_task() */
 		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
 					     cfs_rq->throttled_clock_task;
+
+		/* Add a cfs_rq with an already running entity to the list */
+		if (cfs_rq->nr_running >= 1)
+			list_add_leaf_cfs_rq(cfs_rq);
 	}
 #endif
 
@@ -3671,8 +3687,10 @@ static int tg_throttle_down(struct task_group *tg, void *data)
 	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
 
 	/* group is entering throttled state, stop time */
-	if (!cfs_rq->throttle_count)
+	if (!cfs_rq->throttle_count) {
 		cfs_rq->throttled_clock_task = rq_clock_task(rq);
+		list_del_leaf_cfs_rq(cfs_rq);
+	}
 	cfs_rq->throttle_count++;
 
 	return 0;
@@ -3775,6 +3793,8 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 			break;
 	}
 
+	assert_list_leaf_cfs_rq(rq);
+
 	if (!se)
 		add_nr_running(rq, task_delta);
 
@@ -6126,11 +6146,6 @@ static void update_blocked_averages(int cpu)
 	 * list_add_leaf_cfs_rq() for details.
 	 */
 	for_each_leaf_cfs_rq(rq, cfs_rq) {
-
-		/* throttled entities do not contribute to load */
-		if (throttled_hierarchy(cfs_rq))
-			continue;
-
 		if (update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq))
 			update_tg_load_avg(cfs_rq, 0);
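For readers tracing the update_blocked_averages() hunk: dropping the
throttled_hierarchy() test is safe only because a throttled hierarchy
is no longer on the list at all. The toy user-space model below (again
not kernel code; the real list_add_leaf_cfs_rq() does ordered insertion
via rq->tmp_alone_branch, and a plain append is only valid here because
the re-add happens bottom-up) shows the child-before-parent order
surviving a throttle/unthrottle cycle:

/* Toy model of the leaf list across a throttle/unthrottle cycle. */
#include <stdio.h>
#include <string.h>

#define MAX_LEAVES 8

static const char *leaf_list[MAX_LEAVES];
static int nr_leaves;

static void add_leaf(const char *name) /* append; no bounds check needed here */
{
	leaf_list[nr_leaves++] = name;
}

static void del_leaf(const char *name) /* remove on throttle */
{
	for (int i = 0; i < nr_leaves; i++) {
		if (strcmp(leaf_list[i], name) == 0) {
			memmove(&leaf_list[i], &leaf_list[i + 1],
				(size_t)(--nr_leaves - i) * sizeof(*leaf_list));
			return;
		}
	}
}

int main(void)
{
	/* Enqueue builds the list child-before-parent. */
	add_leaf("child");
	add_leaf("parent");

	/* Throttle: with this patch, every cfs_rq in the throttled
	 * subtree drops off the list, not just the parent. */
	del_leaf("parent");
	del_leaf("child");

	/* Unthrottle: tg_unthrottle_up() walks bottom-up and re-adds
	 * any cfs_rq that still has running entities, so the child
	 * returns before its parent and the invariant holds. */
	add_leaf("child");
	add_leaf("parent");

	for (int i = 0; i < nr_leaves; i++)
		printf("%s\n", leaf_list[i]); /* prints: child, parent */
	return 0;
}

This is the commit message's point in miniature: if a cfs_rq left the
list while its children stayed on it, a later re-add could place it
ahead of them, breaking the bottom-up order that
for_each_leaf_cfs_rq() relies on.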