From patchwork Fri May 12 18:01:32 2023
X-Patchwork-Submitter: Yuxuan Luo
X-Patchwork-Id: 1780750
From: Yuxuan Luo
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Jammy][PATCH v2 1/2] net/sched: act_mirred: better wording on protection against excessive stack growth
Date: Fri, 12 May 2023 14:01:32 -0400
Message-Id: <20230512180139.27507-2-yuxuan.luo@canonical.com>
In-Reply-To: <20230512180139.27507-1-yuxuan.luo@canonical.com>
References: <20230512180139.27507-1-yuxuan.luo@canonical.com>

From: Davide Caratti

With commit e2ca070f89ec ("net: sched: protect against stack overflow in TC act_mirred"), act_mirred protected itself against excessive stack growth using a per-CPU counter of nested calls to tcf_mirred_act(), capped at MIRRED_RECURSION_LIMIT. However, this protection does not detect recursion/loops when the packet is enqueued to the backlog (for example, when the mirred target device has RPS or skb timestamping enabled). Change the wording from "recursion" to "nesting" to make this clearer to readers.
CC: Jamal Hadi Salim
Signed-off-by: Davide Caratti
Reviewed-by: Marcelo Ricardo Leitner
Acked-by: Jamal Hadi Salim
Signed-off-by: Paolo Abeni
(cherry picked from commit 78dcdffe0418ac8f3f057f26fe71ccf4d8ed851f)
CVE-2022-4269
Signed-off-by: Yuxuan Luo
---
 net/sched/act_mirred.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
index efc963ab995a3..b28d49495de09 100644
--- a/net/sched/act_mirred.c
+++ b/net/sched/act_mirred.c
@@ -28,8 +28,8 @@
 static LIST_HEAD(mirred_list);
 static DEFINE_SPINLOCK(mirred_list_lock);
 
-#define MIRRED_RECURSION_LIMIT	4
-static DEFINE_PER_CPU(unsigned int, mirred_rec_level);
+#define MIRRED_NEST_LIMIT	4
+static DEFINE_PER_CPU(unsigned int, mirred_nest_level);
 
 static bool tcf_mirred_is_act_redirect(int action)
 {
@@ -223,7 +223,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 	struct sk_buff *skb2 = skb;
 	bool m_mac_header_xmit;
 	struct net_device *dev;
-	unsigned int rec_level;
+	unsigned int nest_level;
 	int retval, err = 0;
 	bool use_reinsert;
 	bool want_ingress;
@@ -234,11 +234,11 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 	int mac_len;
 	bool at_nh;
 
-	rec_level = __this_cpu_inc_return(mirred_rec_level);
-	if (unlikely(rec_level > MIRRED_RECURSION_LIMIT)) {
+	nest_level = __this_cpu_inc_return(mirred_nest_level);
+	if (unlikely(nest_level > MIRRED_NEST_LIMIT)) {
 		net_warn_ratelimited("Packet exceeded mirred recursion limit on dev %s\n",
 				     netdev_name(skb->dev));
-		__this_cpu_dec(mirred_rec_level);
+		__this_cpu_dec(mirred_nest_level);
 		return TC_ACT_SHOT;
 	}
 
@@ -308,7 +308,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 			err = tcf_mirred_forward(res->ingress, skb);
 			if (err)
 				tcf_action_inc_overlimit_qstats(&m->common);
-			__this_cpu_dec(mirred_rec_level);
+			__this_cpu_dec(mirred_nest_level);
 			return TC_ACT_CONSUMED;
 		}
 	}
@@ -320,7 +320,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
 		if (tcf_mirred_is_act_redirect(m_eaction))
 			retval = TC_ACT_SHOT;
 	}
-	__this_cpu_dec(mirred_rec_level);
+	__this_cpu_dec(mirred_nest_level);
 
 	return retval;
 }