| Message ID | 1326944879-22780-1-git-send-email-ncardwell@google.com |
|---|---|
| State | Accepted, archived |
| Delegated to | David Miller |
From: Neal Cardwell <ncardwell@google.com>
Date: Wed, 18 Jan 2012 22:47:58 -0500

> This patch fixes BIC so that cwnd reductions made during RTOs can be
> undone (just as they already can be undone when using the default/Reno
> behavior).
>
> When undoing cwnd reductions, BIC-derived congestion control modules
> were restoring the cwnd from last_max_cwnd. There were two problems
> with using last_max_cwnd to restore a cwnd during undo:
>
> (a) last_max_cwnd was set to 0 on state transitions into TCP_CA_Loss
> (by calling the module's reset() functions), so cwnd reductions from
> RTOs could not be undone.
>
> (b) when fast_convergence is enabled (which it is by default)
> last_max_cwnd does not actually hold the value of snd_cwnd before the
> loss; instead, it holds a scaled-down version of snd_cwnd.
>
> This patch makes the following changes:
>
> (1) upon undo, revert snd_cwnd to ca->loss_cwnd, which is already, as
> the existing comment notes, the "congestion window at last loss"
>
> (2) stop forgetting ca->loss_cwnd on TCP_CA_Loss events
>
> (3) use ca->last_max_cwnd to check if we're in slow start
>
> Signed-off-by: Neal Cardwell <ncardwell@google.com>

Applied.
```diff
diff --git a/net/ipv4/tcp_bic.c b/net/ipv4/tcp_bic.c
index 6187eb4..f45e1c2 100644
--- a/net/ipv4/tcp_bic.c
+++ b/net/ipv4/tcp_bic.c
@@ -63,7 +63,6 @@ static inline void bictcp_reset(struct bictcp *ca)
 {
 	ca->cnt = 0;
 	ca->last_max_cwnd = 0;
-	ca->loss_cwnd = 0;
 	ca->last_cwnd = 0;
 	ca->last_time = 0;
 	ca->epoch_start = 0;
@@ -72,7 +71,11 @@ static inline void bictcp_reset(struct bictcp *ca)
 
 static void bictcp_init(struct sock *sk)
 {
-	bictcp_reset(inet_csk_ca(sk));
+	struct bictcp *ca = inet_csk_ca(sk);
+
+	bictcp_reset(ca);
+	ca->loss_cwnd = 0;
+
 	if (initial_ssthresh)
 		tcp_sk(sk)->snd_ssthresh = initial_ssthresh;
 }
@@ -127,7 +130,7 @@ static inline void bictcp_update(struct bictcp *ca, u32 cwnd)
 	}
 
 	/* if in slow start or link utilization is very low */
-	if (ca->loss_cwnd == 0) {
+	if (ca->last_max_cwnd == 0) {
 		if (ca->cnt > 20) /* increase cwnd 5% per RTT */
 			ca->cnt = 20;
 	}
@@ -185,7 +188,7 @@ static u32 bictcp_undo_cwnd(struct sock *sk)
 {
 	const struct tcp_sock *tp = tcp_sk(sk);
 	const struct bictcp *ca = inet_csk_ca(sk);
-	return max(tp->snd_cwnd, ca->last_max_cwnd);
+	return max(tp->snd_cwnd, ca->loss_cwnd);
 }
 
 static void bictcp_state(struct sock *sk, u8 new_state)
```
---
 net/ipv4/tcp_bic.c | 11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)