Message ID:   1411233550.26859.76.camel@edumazet-glaptop2.roam.corp.google.com
State:        Superseded, archived
Delegated to: David Miller
On Sat, 2014-09-20 at 10:19 -0700, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@google.com>
>
> icsk_rto is an 32bit field, and icsk_backoff can reach 15 by default,
> or more if some sysctl (eg tcp_retries2) are changed.
>
> Better use 64bit to perform icsk_rto << icsk_backoff operations

Maybe better to use a helper function for this?

something like:

static inline u64 icsk_rto_backoff(const struct inet_connection_sock *icsk)
{
	u64 when = (u64)icsk->icsk_rto;

	return when << icsk->icsk_backoff;
}

> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
[]
> @@ -3208,9 +3208,12 @@ static void tcp_ack_probe(struct sock *sk)
>  		 * This function is not for random using!
>  		 */
>  	} else {
> +		unsigned long when;
> +
> +		when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
> +			   (u64)TCP_RTO_MAX);

Maybe:
	u32 when = (u32)min_t(u64, icsk_rto_backoff(icsk), TCP_RTO_MAX);

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
On Sat, Sep 20, 2014 at 11:01 AM, Joe Perches <joe@perches.com> wrote:
> On Sat, 2014-09-20 at 10:19 -0700, Eric Dumazet wrote:
>> From: Eric Dumazet <edumazet@google.com>
>>
>> icsk_rto is an 32bit field, and icsk_backoff can reach 15 by default,
>> or more if some sysctl (eg tcp_retries2) are changed.
>>
>> Better use 64bit to perform icsk_rto << icsk_backoff operations
>
> Maybe better to use a helper function for this?
>
> something like:
>
> static inline u64 icsk_rto_backoff(const struct inet_connection_sock *icsk)
> {
> 	u64 when = (u64)icsk->icsk_rto;
>
> 	return when << icsk->icsk_backoff;
> }

Thanks for the fix Eric. I second Joe's idea to use a helper function.

>
>> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> []
>> @@ -3208,9 +3208,12 @@ static void tcp_ack_probe(struct sock *sk)
>>  		 * This function is not for random using!
>>  		 */
>>  	} else {
>> +		unsigned long when;
>> +
>> +		when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
>> +			   (u64)TCP_RTO_MAX);
>
> Maybe:
> 	u32 when = (u32)min_t(u64, icsk_rto_backoff(icsk), TCP_RTO_MAX);
On Sat, 2014-09-20 at 12:46 -0700, Yuchung Cheng wrote:
> On Sat, Sep 20, 2014 at 11:01 AM, Joe Perches <joe@perches.com> wrote:
> > On Sat, 2014-09-20 at 10:19 -0700, Eric Dumazet wrote:
> >> From: Eric Dumazet <edumazet@google.com>
> >>
> >> icsk_rto is an 32bit field, and icsk_backoff can reach 15 by default,
> >> or more if some sysctl (eg tcp_retries2) are changed.
> >>
> >> Better use 64bit to perform icsk_rto << icsk_backoff operations
> >
> > Maybe better to use a helper function for this?
> >
> > something like:
> >
> > static inline u64 icsk_rto_backoff(const struct inet_connection_sock *icsk)
> > {
> > 	u64 when = (u64)icsk->icsk_rto;
> >
> > 	return when << icsk->icsk_backoff;
> > }
>
> Thanks for the fix Eric. I second Joe's idea to use a helper function.

Yep.

Given the timeout functions in the kernel use 'unsigned long', I prefer
to keep the u64 magic private to this helper.

I will probably use

static inline unsigned long icsk_rto_backoff(const struct inet_connection_sock *icsk)
{
	u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;

	return min_t(u64, when, ~0UL);
}

On 64bit arches, the min_t() should be a nop.
On Sat, 2014-09-20 at 12:55 -0700, Eric Dumazet wrote:
> On Sat, 2014-09-20 at 12:46 -0700, Yuchung Cheng wrote:
> > Thanks for the fix Eric. I second Joe's idea to use a helper function.
>
> Yep.
>
> Given the timeout functions in the kernel use 'unsigned long', I prefer
> to keep the u64 magic private to this helper.
>
> I will probably use
>
> static inline unsigned long icsk_rto_backoff(const struct inet_connection_sock *icsk)
> {
> 	u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;
>
> 	return min_t(u64, when, ~0UL);
> }

OK.  I think an explicit cast to unsigned long after the min_t
to avoid the implicit downcast would be better

	return (unsigned long)min_t(etc...)

so that no warning is produced if someone does make W=3
From: Eric Dumazet
> Given the timeout functions in the kernel use 'unsigned long', I prefer
> to keep the u64 magic private to this helper.
>
> I will probably use
>
> static inline unsigned long icsk_rto_backoff(const struct inet_connection_sock *icsk)
> {
> 	u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;
>
> 	return min_t(u64, when, ~0UL);
> }
>
> On 64bit arches, the min_t() should be a nop.

Isn't it likely to generate a 'condition is always true/false' warning?
(that may depend on whether min_t contains < or <=)

	David
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 02fb66d4a018..1ea3847c62fc 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3208,9 +3208,12 @@ static void tcp_ack_probe(struct sock *sk)
 		 * This function is not for random using!
 		 */
 	} else {
+		unsigned long when;
+
+		when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
+			   (u64)TCP_RTO_MAX);
 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
-					  TCP_RTO_MAX);
+					  when, TCP_RTO_MAX);
 	}
 }
 
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 7f1280dcad57..2231b400f3ce 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3279,6 +3279,7 @@ void tcp_send_probe0(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
+	unsigned long when;
 	int err;
 
 	err = tcp_write_wakeup(sk);
@@ -3294,9 +3295,8 @@ void tcp_send_probe0(struct sock *sk)
 		if (icsk->icsk_backoff < sysctl_tcp_retries2)
 			icsk->icsk_backoff++;
 		icsk->icsk_probes_out++;
-		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff, TCP_RTO_MAX),
-					  TCP_RTO_MAX);
+		when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
+			   (u64)TCP_RTO_MAX);
 	} else {
 		/* If packet was not sent due to local congestion,
 		 * do not backoff and do not remember icsk_probes_out.
@@ -3306,11 +3306,10 @@ void tcp_send_probe0(struct sock *sk)
 		 */
 		if (!icsk->icsk_probes_out)
 			icsk->icsk_probes_out = 1;
-		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
-					  min(icsk->icsk_rto << icsk->icsk_backoff,
-					      TCP_RESOURCE_PROBE_INTERVAL),
-					  TCP_RTO_MAX);
+		when = min((u64)icsk->icsk_rto << icsk->icsk_backoff,
+			   (u64)TCP_RESOURCE_PROBE_INTERVAL);
 	}
+	inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0, when, TCP_RTO_MAX);
 }
 
 int tcp_rtx_synack(struct sock *sk, struct request_sock *req)
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index a339e7ba05a4..05e1d0723233 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -180,7 +180,7 @@ static int tcp_write_timeout(struct sock *sk)
 		retry_until = sysctl_tcp_retries2;
 		if (sock_flag(sk, SOCK_DEAD)) {
-			const int alive = (icsk->icsk_rto < TCP_RTO_MAX);
+			const int alive = icsk->icsk_rto < TCP_RTO_MAX;
 
 			retry_until = tcp_orphan_retries(sk, alive);
 			do_reset = alive ||
@@ -294,7 +294,8 @@ static void tcp_probe_timer(struct sock *sk)
 	max_probes = sysctl_tcp_retries2;
 
 	if (sock_flag(sk, SOCK_DEAD)) {
-		const int alive = ((icsk->icsk_rto << icsk->icsk_backoff) < TCP_RTO_MAX);
+		u64 exp_rto = (u64)icsk->icsk_rto << icsk->icsk_backoff;
+		const int alive = exp_rto < TCP_RTO_MAX;
 
 		max_probes = tcp_orphan_retries(sk, alive);