From patchwork Mon Jul 23 16:28:21 2018
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 947865
X-Patchwork-Delegate: davem@davemloft.net
From: Eric Dumazet
Miller" , Juha-Matti Tilli , Yuchung Cheng , Soheil Hassas Yeganeh Cc: netdev , Eric Dumazet , Eric Dumazet Subject: [PATCH net 5/5] tcp: add tcp_ooo_try_coalesce() helper Date: Mon, 23 Jul 2018 09:28:21 -0700 Message-Id: <20180723162821.11556-6-edumazet@google.com> X-Mailer: git-send-email 2.18.0.233.g985f88cf7e-goog In-Reply-To: <20180723162821.11556-1-edumazet@google.com> References: <20180723162821.11556-1-edumazet@google.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org In case skb in out_or_order_queue is the result of multiple skbs coalescing, we would like to get a proper gso_segs counter tracking, so that future tcp_drop() can report an accurate number. I chose to not implement this tracking for skbs in receive queue, since they are not dropped, unless socket is disconnected. Signed-off-by: Eric Dumazet Acked-by: Soheil Hassas Yeganeh Acked-by: Yuchung Cheng --- net/ipv4/tcp_input.c | 25 +++++++++++++++++++++---- 1 file changed, 21 insertions(+), 4 deletions(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index b062a76922384f6199563af7cf30a30c5baa7601..3bcd30a2ba06827e061d86ba22680986824e3ee4 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -4358,6 +4358,23 @@ static bool tcp_try_coalesce(struct sock *sk, return true; } +static bool tcp_ooo_try_coalesce(struct sock *sk, + struct sk_buff *to, + struct sk_buff *from, + bool *fragstolen) +{ + bool res = tcp_try_coalesce(sk, to, from, fragstolen); + + /* In case tcp_drop() is called later, update to->gso_segs */ + if (res) { + u32 gso_segs = max_t(u16, 1, skb_shinfo(to)->gso_segs) + + max_t(u16, 1, skb_shinfo(from)->gso_segs); + + skb_shinfo(to)->gso_segs = min_t(u32, gso_segs, 0xFFFF); + } + return res; +} + static void tcp_drop(struct sock *sk, struct sk_buff *skb) { sk_drops_add(sk, skb); @@ -4481,8 +4498,8 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb) /* In the typical case, we are adding an skb to the end of the list. * Use of ooo_last_skb avoids the O(Log(N)) rbtree lookup. */ - if (tcp_try_coalesce(sk, tp->ooo_last_skb, - skb, &fragstolen)) { + if (tcp_ooo_try_coalesce(sk, tp->ooo_last_skb, + skb, &fragstolen)) { coalesce_done: tcp_grow_window(sk, skb); kfree_skb_partial(skb, fragstolen); @@ -4532,8 +4549,8 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb) tcp_drop(sk, skb1); goto merge_right; } - } else if (tcp_try_coalesce(sk, skb1, - skb, &fragstolen)) { + } else if (tcp_ooo_try_coalesce(sk, skb1, + skb, &fragstolen)) { goto coalesce_done; } p = &parent->rb_right;