[net,v2] tcp: Prevent low rmem stalls with SO_RCVLOWAT.

Message ID 20201023184709.217614-1-arjunroy.kdev@gmail.com
State Accepted
Delegated to: David Miller
Headers show
Series [net,v2] tcp: Prevent low rmem stalls with SO_RCVLOWAT. | expand

Checks

Context Check Description
jkicinski/cover_letter success Link
jkicinski/fixes_present success Link
jkicinski/patch_count success Link
jkicinski/tree_selection success Clearly marked for net
jkicinski/subject_prefix success Link
jkicinski/source_inline success Was 0 now: 0
jkicinski/verify_signedoff success Link
jkicinski/module_param success Was 0 now: 0
jkicinski/build_32bit success Errors and warnings before: 47 this patch: 47
jkicinski/kdoc success Errors and warnings before: 0 this patch: 0
jkicinski/verify_fixes success Link
jkicinski/checkpatch success total: 0 errors, 0 warnings, 0 checks, 17 lines checked
jkicinski/build_allmodconfig_warn success Errors and warnings before: 51 this patch: 51
jkicinski/header_inline success Link
jkicinski/stable success Stable not CCed

Commit Message

Arjun Roy Oct. 23, 2020, 6:47 p.m. UTC
From: Arjun Roy <arjunroy@google.com>

With SO_RCVLOWAT, under memory pressure,
it is possible to enter a state where:

1. We have not received enough bytes to satisfy SO_RCVLOWAT.
2. We have not entered buffer pressure (see tcp_rmem_pressure()).
3. But, we do not have enough buffer space to accept more packets.

In this case, we advertise 0 rwnd (due to #3) but the application does
not drain the receive queue (no wakeup because of #1 and #2) so the
flow stalls.

Modify the heuristic for SO_RCVLOWAT so that, if we are advertising
rwnd<=rcv_mss, force a wakeup to prevent a stall.

Without this patch, setting tcp_rmem to 6143 and disabling TCP
autotune causes a stalled flow. With this patch, no stall occurs. This
is with RPC-style traffic with large messages.
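The reproduction described above corresponds roughly to the following sysctl settings (a sketch; 6143 is the value named above, and tcp_moderate_rcvbuf=0 is the usual knob for disabling receive-buffer autotuning):

```shell
# Shrink the TCP receive buffer (min/default/max) to the size named above.
sysctl -w net.ipv4.tcp_rmem="6143 6143 6143"
# Disable receive-buffer autotuning so the window cannot grow out of trouble.
sysctl -w net.ipv4.tcp_moderate_rcvbuf=0
```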

Fixes: 03f45c883c6f ("tcp: avoid extra wakeups for SO_RCVLOWAT users")
Signed-off-by: Arjun Roy <arjunroy@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/ipv4/tcp.c       | 2 ++
 net/ipv4/tcp_input.c | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)

Comments

Jakub Kicinski Oct. 24, 2020, 2:13 a.m. UTC | #1
On Fri, 23 Oct 2020 11:47:09 -0700 Arjun Roy wrote:
> From: Arjun Roy <arjunroy@google.com>
> 
> With SO_RCVLOWAT, under memory pressure,
> it is possible to enter a state where:
> 
> 1. We have not received enough bytes to satisfy SO_RCVLOWAT.
> 2. We have not entered buffer pressure (see tcp_rmem_pressure()).
> 3. But, we do not have enough buffer space to accept more packets.
> 
> In this case, we advertise 0 rwnd (due to #3) but the application does
> not drain the receive queue (no wakeup because of #1 and #2) so the
> flow stalls.
> 
> Modify the heuristic for SO_RCVLOWAT so that, if we are advertising
> rwnd<=rcv_mss, force a wakeup to prevent a stall.
> 
> Without this patch, setting tcp_rmem to 6143 and disabling TCP
> autotune causes a stalled flow. With this patch, no stall occurs. This
> is with RPC-style traffic with large messages.
> 
> Fixes: 03f45c883c6f ("tcp: avoid extra wakeups for SO_RCVLOWAT users")
> Signed-off-by: Arjun Roy <arjunroy@google.com>
> Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
> Acked-by: Neal Cardwell <ncardwell@google.com>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Applied, thank you!
patchwork-bot+netdevbpf@kernel.org Oct. 24, 2020, 2:20 a.m. UTC | #2
Hello:

This patch was applied to netdev/net.git (refs/heads/master):

On Fri, 23 Oct 2020 11:47:09 -0700 you wrote:
> From: Arjun Roy <arjunroy@google.com>
> 
> With SO_RCVLOWAT, under memory pressure,
> it is possible to enter a state where:
> 
> 1. We have not received enough bytes to satisfy SO_RCVLOWAT.
> 2. We have not entered buffer pressure (see tcp_rmem_pressure()).
> 3. But, we do not have enough buffer space to accept more packets.
> 
> [...]

Here is the summary with links:
  - [net,v2] tcp: Prevent low rmem stalls with SO_RCVLOWAT.
    https://git.kernel.org/netdev/net/c/435ccfa894e3

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
Arjun Roy Oct. 24, 2020, 6:26 a.m. UTC | #3
On Fri, Oct 23, 2020 at 7:13 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Fri, 23 Oct 2020 11:47:09 -0700 Arjun Roy wrote:
> > From: Arjun Roy <arjunroy@google.com>
> >
> > [...]
>
> Applied, thank you!

Ack, thanks for the quick review!

-Arjun
Patch

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index bae4284bf542..b2bc3d7fe9e8 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -485,6 +485,8 @@  static inline bool tcp_stream_is_readable(const struct tcp_sock *tp,
 			return true;
 		if (tcp_rmem_pressure(sk))
 			return true;
+		if (tcp_receive_window(tp) <= inet_csk(sk)->icsk_ack.rcv_mss)
+			return true;
 	}
 	if (sk->sk_prot->stream_memory_read)
 		return sk->sk_prot->stream_memory_read(sk);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index fc445833b5e5..389d1b340248 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4908,7 +4908,8 @@  void tcp_data_ready(struct sock *sk)
 	int avail = tp->rcv_nxt - tp->copied_seq;
 
 	if (avail < sk->sk_rcvlowat && !tcp_rmem_pressure(sk) &&
-	    !sock_flag(sk, SOCK_DONE))
+	    !sock_flag(sk, SOCK_DONE) &&
+	    tcp_receive_window(tp) > inet_csk(sk)->icsk_ack.rcv_mss)
 		return;
 
 	sk->sk_data_ready(sk);