From patchwork Thu Dec 19 22:34:33 2019
X-Patchwork-Submitter: Mat Martineau
X-Patchwork-Id: 1213811
From: Mat Martineau
To: netdev@vger.kernel.org, mptcp@lists.01.org
Cc: Paolo Abeni, Florian Westphal, Mat Martineau
Date: Thu, 19 Dec 2019 14:34:33 -0800
Message-Id: <20191219223434.19722-11-mathew.j.martineau@linux.intel.com>
In-Reply-To: <20191219223434.19722-1-mathew.j.martineau@linux.intel.com>
References: <20191219223434.19722-1-mathew.j.martineau@linux.intel.com>
Subject: [MPTCP] [PATCH net-next v5 10/11] tcp: clean ext on tx recycle
List-Id: Discussions regarding MPTCP upstreaming

From: Paolo Abeni

Otherwise we will find stray/unexpected/old extension values on the
next iteration.

In tcp_write_xmit() we can end up splitting an already queued skb in
two parts, via tso_fragment(). The newly created skb can be allocated
via the tx cache, and the upper layer will not be aware of it, so the
upper layer cannot set the ext properly. Resetting the ext on recycle
ensures that stale data is not propagated into packet headers or
elsewhere.
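[For reference, skb_ext_reset() is the existing helper declared in
include/linux/skbuff.h that drops all extensions attached to an skb.
With CONFIG_SKB_EXTENSIONS enabled it behaves roughly like the sketch
below; this is paraphrased from kernels of this era and is not part of
this patch:

	/* Paraphrased sketch of skb_ext_reset(); release any attached
	 * extension area and mark the skb as having none.
	 */
	static inline void skb_ext_reset(struct sk_buff *skb)
	{
		if (unlikely(skb->active_extensions)) {
			__skb_ext_put(skb->extensions);
			skb->active_extensions = 0;
		}
	}

This is why a single call in the recycle path is enough: it covers
every extension type at once, with no per-extension bookkeeping.]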
An alternative would be to add an additional hook in tso_fragment() or
in sk_stream_alloc_skb() to init the ext for the upper layers that
need it.

Co-developed-by: Florian Westphal
Signed-off-by: Florian Westphal
Signed-off-by: Paolo Abeni
Signed-off-by: Mat Martineau
Reviewed-by: Eric Dumazet
---
 include/net/sock.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/net/sock.h b/include/net/sock.h
index b93cadba1a3b..23efed7f4e70 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1474,6 +1474,7 @@ static inline void sk_wmem_free_skb(struct sock *sk, struct sk_buff *skb)
 	sk_mem_uncharge(sk, skb->truesize);
 	if (static_branch_unlikely(&tcp_tx_skb_cache_key) &&
 	    !sk->sk_tx_skb_cache && !skb_cloned(skb)) {
+		skb_ext_reset(skb);
 		skb_zcopy_clear(skb, true);
 		sk->sk_tx_skb_cache = skb;
 		return;
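[For context on why the reset belongs in the free/recycle path rather
than the allocation path: the sk_tx_skb_cache reuse fast path in
sk_stream_alloc_skb() (net/ipv4/tcp.c) reinitializes the TCP control
block and shinfo flags but never touches skb extensions, so anything
still attached at free time would be handed to the next user of the
cached skb. A simplified, paraphrased sketch of that fast path follows;
it is not part of this patch and field details may vary by kernel
config:

	/* Simplified sketch of the cache-reuse path in
	 * sk_stream_alloc_skb(): take the cached skb, trim it, and
	 * reinitialize its TCP-visible state for the next write.
	 */
	skb = sk->sk_tx_skb_cache;
	if (skb) {
		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
		sk->sk_tx_skb_cache = NULL;
		pskb_trim(skb, 0);
		INIT_LIST_HEAD(&skb->tcp_tsorted_anchor);
		skb_shinfo(skb)->tx_flags = 0;
		memset(TCP_SKB_CB(skb), 0, sizeof(struct tcp_skb_cb));
		/* Note: nothing here clears skb extensions -- hence
		 * the skb_ext_reset() added at recycle time in
		 * sk_wmem_free_skb() by this patch.
		 */
		return skb;
	}
]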