From patchwork Thu Jul 27 23:22:26 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1814007
From: Cengiz Can
To: kernel-team@lists.ubuntu.com
Subject: [SRU OEM-5.17/OEM-6.0 2/2] net/sched: sch_qfq: account for stab overhead in qfq_enqueue
Date: Fri, 28 Jul 2023 02:22:26 +0300
Message-Id: <20230727232220.972472-6-cengiz.can@canonical.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230727232220.972472-1-cengiz.can@canonical.com>
References: <20230727232220.972472-1-cengiz.can@canonical.com>
MIME-Version: 1.0

From: Pedro Tammela

Lion says:
-------
In the QFQ scheduler a similar issue to CVE-2023-31436 persists.

Consider the following code in net/sched/sch_qfq.c:

static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                       struct sk_buff **to_free)
{
        unsigned int len = qdisc_pkt_len(skb), gso_segs;
        // ...
        if (unlikely(cl->agg->lmax < len)) {
                pr_debug("qfq: increasing maxpkt from %u to %u for class %u",
                         cl->agg->lmax, len, cl->common.classid);
                err = qfq_change_agg(sch, cl, cl->agg->class_weight, len);
                if (err) {
                        cl->qstats.drops++;
                        return qdisc_drop(skb, sch, to_free);
                }
        // ...
}

Similarly to CVE-2023-31436, "lmax" is increased without any bounds checks
according to the packet length "len". Usually this would not impose a
problem because packet sizes are naturally limited.

This is however not the actual packet length, rather the
"qdisc_pkt_len(skb)" which might apply size transformations according to
"struct qdisc_size_table" as created by "qdisc_get_stab()" in
net/sched/sch_api.c if the TCA_STAB option was set when modifying the
qdisc. A user may choose virtually any size using such a table.

As a result the same issue as in CVE-2023-31436 can occur, allowing heap
out-of-bounds read / writes in the kmalloc-8192 cache.
-------

We can create the issue with the following commands:

tc qdisc add dev $DEV root handle 1: stab mtu 2048 tsize 512 mpu 0 \
   overhead 999999999 linklayer ethernet qfq
tc class add dev $DEV parent 1: classid 1:1 htb rate 6mbit burst 15k
tc filter add dev $DEV parent 1: matchall classid 1:1
ping -I $DEV 1.1.1.2

This is caused by incorrectly assuming that qdisc_pkt_len() returns a
length within the range QFQ_MIN_LMAX < len < QFQ_MAX_LMAX.
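To make the effect of the stab concrete, here is a minimal user-space sketch
(not kernel code) of only the overhead step applied by a qdisc size table;
the full logic lives in __qdisc_calculate_pkt_len() in net/sched/sch_api.c.
The 98-byte ICMP length and the QFQ_MAX_LMAX value of 1 << 16 are assumptions
chosen to match the reproducer above, not values taken from this patch.

/*
 * User-space sketch (not kernel code) of the overhead step of a qdisc
 * size table.  Only the "+ overhead" part is modelled; the table lookup
 * and cell shifts of the real stab math are omitted.
 */
#include <stdio.h>

#define QFQ_MAX_LMAX	(1UL << 16)	/* assumed upper bound on lmax */

static unsigned long stab_pkt_len(unsigned long skb_len, long overhead)
{
	long pkt_len = (long)skb_len + overhead;	/* stab overhead is added first */

	return pkt_len < 1 ? 1UL : (unsigned long)pkt_len;
}

int main(void)
{
	/* "overhead 999999999" comes from the tc qdisc command above */
	unsigned long len = stab_pkt_len(98, 999999999);

	printf("qdisc_pkt_len(skb) ~= %lu, QFQ_MAX_LMAX = %lu -> %s\n",
	       len, QFQ_MAX_LMAX,
	       len > QFQ_MAX_LMAX ? "lmax blows past the bound" : "within bounds");
	return 0;
}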
Fixes: 462dbc9101ac ("pkt_sched: QFQ Plus: fair-queueing service at DRR cost")
Reported-by: Lion
Reviewed-by: Eric Dumazet
Signed-off-by: Jamal Hadi Salim
Signed-off-by: Pedro Tammela
Reviewed-by: Simon Horman
Signed-off-by: Paolo Abeni
CVE-2023-3611
(cherry picked from commit 3e337087c3b5805fe0b8a46ba622a962880b5d64)
Signed-off-by: Cengiz Can
---
 net/sched/sch_qfq.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 6bd803c0b244..a505b0c51310 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -381,8 +381,13 @@ static int qfq_change_agg(struct Qdisc *sch, struct qfq_class *cl, u32 weight,
 			   u32 lmax)
 {
 	struct qfq_sched *q = qdisc_priv(sch);
-	struct qfq_aggregate *new_agg = qfq_find_agg(q, lmax, weight);
+	struct qfq_aggregate *new_agg;
 
+	/* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */
+	if (lmax > QFQ_MAX_LMAX)
+		return -EINVAL;
+
+	new_agg = qfq_find_agg(q, lmax, weight);
 	if (new_agg == NULL) { /* create new aggregate */
 		new_agg = kzalloc(sizeof(*new_agg), GFP_ATOMIC);
 		if (new_agg == NULL)
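For context, the sketch below (again plain user-space C, with the same assumed
QFQ_MAX_LMAX of 1 << 16) mirrors the control flow the hunk introduces: an lmax
inflated by the stab is rejected with -EINVAL before any aggregate is looked
up or created. The change_agg() helper is a simplified stand-in, not the
kernel's qfq_change_agg().

/*
 * Stand-alone illustration of the bounds check added above.  The lmax
 * value is pktlen + stab overhead from the reproducer; QFQ_MAX_LMAX is
 * an assumed value.
 */
#include <errno.h>
#include <stdio.h>

#define QFQ_MAX_LMAX	(1UL << 16)	/* assumed upper bound on lmax */

static int change_agg(unsigned long lmax)
{
	/* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */
	if (lmax > QFQ_MAX_LMAX)
		return -EINVAL;

	return 0;	/* would go on to find or create the aggregate */
}

int main(void)
{
	unsigned long lmax = 98UL + 999999999UL;	/* pktlen + stab overhead */

	if (change_agg(lmax) == -EINVAL)
		puts("rejected: qfq_enqueue() takes its qdisc_drop() path");
	else
		puts("accepted: aggregate would be resized to lmax");
	return 0;
}

With the check in place, the reproducer's ping traffic should simply be
dropped by the qdisc rather than resizing the aggregate beyond the bounds
the scheduler's group structures can handle.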