From patchwork Tue Feb 2 18:33:09 2016
X-Patchwork-Submitter: Yuchung Cheng
X-Patchwork-Id: 577511
X-Patchwork-Delegate: davem@davemloft.net
From: Yuchung Cheng
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, Yuchung Cheng, Neal Cardwell
Subject: [PATCH net-next 6/6] tcp: tcp_cong_control helper
Date: Tue, 2 Feb 2016 10:33:09 -0800
Message-Id: <1454437989-3842-7-git-send-email-ycheng@google.com>
In-Reply-To: <1454437989-3842-1-git-send-email-ycheng@google.com>
References: <1454437989-3842-1-git-send-email-ycheng@google.com>
X-Mailing-List: netdev@vger.kernel.org

Refactor and consolidate cwnd and rate updates into a new function
tcp_cong_control().

Signed-off-by: Yuchung Cheng
Signed-off-by: Neal Cardwell
Signed-off-by: Eric Dumazet
---
 net/ipv4/tcp_input.c | 31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 39c5326..52aa5df 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3325,6 +3325,24 @@ static inline bool tcp_may_raise_cwnd(const struct sock *sk, const int flag)
 	return flag & FLAG_DATA_ACKED;
 }
 
+/* The "ultimate" congestion control function that aims to replace the rigid
+ * cwnd increase and decrease control (tcp_cong_avoid, tcp_*cwnd_reduction).
+ * It's called toward the end of processing an ACK with precise rate
+ * information. All transmissions or retransmissions are delayed until
+ * afterwards.
+ */
+static void tcp_cong_control(struct sock *sk, u32 ack, u32 acked_sacked,
+			     int flag)
+{
+	if (tcp_in_cwnd_reduction(sk)) {
+		/* Reduce cwnd if state mandates */
+		tcp_cwnd_reduction(sk, acked_sacked, flag);
+	} else if (tcp_may_raise_cwnd(sk, flag)) {
+		/* Advance cwnd if state allows */
+		tcp_cong_avoid(sk, ack, acked_sacked);
+	}
+	tcp_update_pacing_rate(sk);
+}
+
 /* Check that window update is acceptable.
  * The function assumes that snd_una<=ack<=snd_next.
  */
@@ -3555,7 +3573,6 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	int prior_packets = tp->packets_out;
 	u32 prior_delivered = tp->delivered;
 	int acked = 0; /* Number of packets newly acked */
-	u32 acked_sacked; /* Number of packets newly acked or sacked */
 	int rexmit = REXMIT_NONE; /* Flag to (re)transmit to recover losses */
 
 	sack_state.first_sackt.v64 = 0;
@@ -3655,16 +3672,6 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	if (tp->tlp_high_seq)
 		tcp_process_tlp_ack(sk, ack, flag);
 
-	acked_sacked = tp->delivered - prior_delivered;
-	/* Advance cwnd if state allows */
-	if (tcp_in_cwnd_reduction(sk)) {
-		/* Reduce cwnd if state mandates */
-		tcp_cwnd_reduction(sk, acked_sacked, flag);
-	} else if (tcp_may_raise_cwnd(sk, flag)) {
-		/* Advance cwnd if state allows */
-		tcp_cong_avoid(sk, ack, acked_sacked);
-	}
-
 	if ((flag & FLAG_FORWARD_PROGRESS) || !(flag & FLAG_NOT_DUP)) {
 		struct dst_entry *dst = __sk_dst_get(sk);
 
 		if (dst)
@@ -3673,7 +3680,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	if (icsk->icsk_pending == ICSK_TIME_RETRANS)
 		tcp_schedule_loss_probe(sk);
 
-	tcp_update_pacing_rate(sk);
+	tcp_cong_control(sk, ack, tp->delivered - prior_delivered, flag);
 	tcp_xmit_recovery(sk, rexmit);
 	return 1;