From patchwork Fri Jan 13 06:11:31 2017
X-Patchwork-Submitter: Yuchung Cheng
X-Patchwork-Id: 714822
X-Patchwork-Delegate: davem@davemloft.net
From: Yuchung Cheng
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, ncardwell@google.com,
	nanditad@google.com, Yuchung Cheng
Subject: [PATCH net-next v2 02/13] tcp: new helper for RACK to detect loss
Date: Thu, 12 Jan 2017 22:11:31 -0800
Message-Id: <20170113061142.127344-3-ycheng@google.com>
In-Reply-To: <20170113061142.127344-1-ycheng@google.com>
References: <20170113061142.127344-1-ycheng@google.com>
X-Mailer: git-send-email 2.11.0.483.g087da7b7c-goog

Create a new helper tcp_rack_detect_loss to prepare for the upcoming
RACK reordering timer patch.
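
For reviewers, a condensed sketch of how the pieces fit together after
this patch (function bodies abbreviated; the authoritative code is in
the diff below): tcp_rack_mark_lost() keeps only the cheap early-return
checks and delegates the queue scan to the new static helper
tcp_rack_detect_loss(), while the caller in tcp_fastretrans_alert()
now infers newly marked losses from the drop in tp->retrans_out rather
than from a return value:

  /* New static helper: walks the write queue and marks lost skbs
   * (body abbreviated here; see net/ipv4/tcp_recovery.c below).
   */
  static void tcp_rack_detect_loss(struct sock *sk)
  {
          /* ... compute reo_wnd and scan the write queue ... */
  }

  /* Exported entry point: only the early-return checks remain. */
  void tcp_rack_mark_lost(struct sock *sk)
  {
          struct tcp_sock *tp = tcp_sk(sk);

          if (inet_csk(sk)->icsk_ca_state < TCP_CA_Recovery || !tp->rack.advanced)
                  return;
          /* Reset the advanced flag to avoid unnecessary queue scanning */
          tp->rack.advanced = 0;
          tcp_rack_detect_loss(sk);
  }

  /* Caller in tcp_fastretrans_alert(): losses are detected by comparing
   * retrans_out before and after the call, not via a return value.
   */
  if (sysctl_tcp_recovery & TCP_RACK_LOST_RETRANS) {
          u32 prior_retrans = tp->retrans_out;

          tcp_rack_mark_lost(sk);
          if (prior_retrans > tp->retrans_out) {
                  flag |= FLAG_LOST_RETRANS;
                  *ack_flag |= FLAG_LOST_RETRANS;
          }
  }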
Signed-off-by: Yuchung Cheng
Signed-off-by: Neal Cardwell
Acked-by: Eric Dumazet
---
 include/net/tcp.h       |  3 +--
 net/ipv4/tcp_input.c    | 12 ++++++++----
 net/ipv4/tcp_recovery.c | 22 +++++++++++++---------
 3 files changed, 22 insertions(+), 15 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 1da0aa724929..51183bba3835 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1863,8 +1863,7 @@ extern int sysctl_tcp_recovery;
 /* Use TCP RACK to detect (some) tail and retransmit losses */
 #define TCP_RACK_LOST_RETRANS  0x1
 
-extern int tcp_rack_mark_lost(struct sock *sk);
-
+extern void tcp_rack_mark_lost(struct sock *sk);
 extern void tcp_rack_advance(struct tcp_sock *tp,
 			     const struct skb_mstamp *xmit_time, u8 sacked);
 
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index ec6d84363024..bb24b93e64bc 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2865,10 +2865,14 @@ static void tcp_fastretrans_alert(struct sock *sk, const int acked,
 	}
 
 	/* Use RACK to detect loss */
-	if (sysctl_tcp_recovery & TCP_RACK_LOST_RETRANS &&
-	    tcp_rack_mark_lost(sk)) {
-		flag |= FLAG_LOST_RETRANS;
-		*ack_flag |= FLAG_LOST_RETRANS;
+	if (sysctl_tcp_recovery & TCP_RACK_LOST_RETRANS) {
+		u32 prior_retrans = tp->retrans_out;
+
+		tcp_rack_mark_lost(sk);
+		if (prior_retrans > tp->retrans_out) {
+			flag |= FLAG_LOST_RETRANS;
+			*ack_flag |= FLAG_LOST_RETRANS;
+		}
 	}
 
 	/* E. Process state. */
diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c
index f38dba5aed7a..7ea0377229c0 100644
--- a/net/ipv4/tcp_recovery.c
+++ b/net/ipv4/tcp_recovery.c
@@ -32,17 +32,11 @@ static void tcp_rack_mark_skb_lost(struct sock *sk, struct sk_buff *skb)
  * The current version is only used after recovery starts but can be
  * easily extended to detect the first loss.
  */
-int tcp_rack_mark_lost(struct sock *sk)
+static void tcp_rack_detect_loss(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct sk_buff *skb;
-	u32 reo_wnd, prior_retrans = tp->retrans_out;
-
-	if (inet_csk(sk)->icsk_ca_state < TCP_CA_Recovery || !tp->rack.advanced)
-		return 0;
-
-	/* Reset the advanced flag to avoid unnecessary queue scanning */
-	tp->rack.advanced = 0;
+	u32 reo_wnd;
 
 	/* To be more reordering resilient, allow min_rtt/4 settling delay
 	 * (lower-bounded to 1000uS). We use min_rtt instead of the smoothed
@@ -82,7 +76,17 @@ int tcp_rack_mark_lost(struct sock *sk)
 			break;
 		}
 	}
-	return prior_retrans - tp->retrans_out;
+}
+
+void tcp_rack_mark_lost(struct sock *sk)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	if (inet_csk(sk)->icsk_ca_state < TCP_CA_Recovery || !tp->rack.advanced)
+		return;
+	/* Reset the advanced flag to avoid unnecessary queue scanning */
+	tp->rack.advanced = 0;
+	tcp_rack_detect_loss(sk);
 }
 
 /* Record the most recently (re)sent time among the (s)acked packets */