From patchwork Fri Jul 14 21:49:22 2017
X-Patchwork-Submitter: Neal Cardwell
X-Patchwork-Id: 788815
X-Patchwork-Delegate: davem@davemloft.net
From: Neal Cardwell
To: David Miller
Cc: netdev@vger.kernel.org, Neal Cardwell, Yuchung Cheng, Soheil Hassas Yeganeh
Subject: [PATCH net 2/5] tcp_bbr: introduce bbr_bw_to_pacing_rate() helper
Date: Fri, 14 Jul 2017 17:49:22 -0400
Message-Id: <20170714214925.30720-2-ncardwell@google.com>
In-Reply-To: <20170714214925.30720-1-ncardwell@google.com>
References: <20170714214925.30720-1-ncardwell@google.com>
X-Mailing-List: netdev@vger.kernel.org

Introduce a helper to convert a BBR bandwidth and gain factor to a
pacing rate in bytes per second. This is a pure refactor, but it is
needed for the two fixes that follow in this series.
Fixes: 0f8782ea1497 ("tcp_bbr: add BBR congestion control")
Signed-off-by: Neal Cardwell
Signed-off-by: Yuchung Cheng
Signed-off-by: Soheil Hassas Yeganeh
---
 net/ipv4/tcp_bbr.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 743e97511dc8..29e23b851b97 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -211,6 +211,16 @@ static u64 bbr_rate_bytes_per_sec(struct sock *sk, u64 rate, int gain)
 	return rate >> BW_SCALE;
 }
 
+/* Convert a BBR bw and gain factor to a pacing rate in bytes per second. */
+static u32 bbr_bw_to_pacing_rate(struct sock *sk, u32 bw, int gain)
+{
+	u64 rate = bw;
+
+	rate = bbr_rate_bytes_per_sec(sk, rate, gain);
+	rate = min_t(u64, rate, sk->sk_max_pacing_rate);
+	return rate;
+}
+
 /* Pace using current bw estimate and a gain factor. In order to help drive the
  * network toward lower queues while maintaining high utilization and low
  * latency, the average pacing rate aims to be slightly (~1%) lower than the
@@ -220,10 +230,8 @@ static u64 bbr_rate_bytes_per_sec(struct sock *sk, u64 rate, int gain)
  */
 static void bbr_set_pacing_rate(struct sock *sk, u32 bw, int gain)
 {
-	u64 rate = bw;
+	u32 rate = bbr_bw_to_pacing_rate(sk, bw, gain);
 
-	rate = bbr_rate_bytes_per_sec(sk, rate, gain);
-	rate = min_t(u64, rate, sk->sk_max_pacing_rate);
 	if (bbr_full_bw_reached(sk) || rate > sk->sk_pacing_rate)
 		sk->sk_pacing_rate = rate;
 }