Message ID | 20200820131209.23430-1-fw@strlen.de
---|---
State | Accepted, archived
Delegated to | Matthieu Baerts
Series | [mptcp-next,v3] mptcp: adjust mptcp receive buffer limit if subflow has larger one
On Thu, 20 Aug 2020, Florian Westphal wrote:

> In addition to tcp autotuning during read, it may also increase the
> receive buffer in tcp_clamp_window().
>
> In this case, mptcp should adjust its receive buffer size as well so
> it can move all pending skbs from the subflow socket to the mptcp socket.
>
> At this time, TCP can have more skbs ready for processing than what the
> mptcp receive buffer size allows.
>
> In the mptcp case, the receive window announced is based on the free
> space of the mptcp parent socket instead of the individual subflows.
>
> Following the subflow allows mptcp to grow its receive buffer.
>
> This is especially noticeable for loopback traffic where two skbs are
> enough to fill the initial receive window.
>
> In mptcp_data_ready() we do not hold the mptcp socket lock, so modifying
> mptcp_sk->sk_rcvbuf is racy. Do it when moving skbs from subflow to
> mptcp socket, both sockets are locked in this case.
>
> v2: move rcvbuf update to __mptcp_move_skbs_from_subflow where both
> mptcp and subflow sk locks are held (pointed out by Mat Martineau).
> v3: only adjust if receive buffer size isn't fixed. (Mat Martineau).
>
> Signed-off-by: Florian Westphal <fw@strlen.de>
> ---

Looks good to me. Thanks Florian.

Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>

Mat

> net/mptcp/protocol.c | 27 ++++++++++++++++++++++-----
> 1 file changed, 22 insertions(+), 5 deletions(-)
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 77d655b0650c..91718dbc89e1 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -457,6 +457,18 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
>  	bool more_data_avail;
>  	struct tcp_sock *tp;
>  	bool done = false;
> +	int sk_rbuf;
> +
> +	sk_rbuf = READ_ONCE(sk->sk_rcvbuf);
> +
> +	if (!(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
> +		int ssk_rbuf = READ_ONCE(ssk->sk_rcvbuf);
> +
> +		if (unlikely(ssk_rbuf > sk_rbuf)) {
> +			WRITE_ONCE(sk->sk_rcvbuf, ssk_rbuf);
> +			sk_rbuf = ssk_rbuf;
> +		}
> +	}
>
>  	pr_debug("msk=%p ssk=%p", msk, ssk);
>  	tp = tcp_sk(ssk);
> @@ -511,7 +523,7 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
>  		WRITE_ONCE(tp->copied_seq, seq);
>  		more_data_avail = mptcp_subflow_data_available(ssk);
>
> -		if (atomic_read(&sk->sk_rmem_alloc) > READ_ONCE(sk->sk_rcvbuf)) {
> +		if (atomic_read(&sk->sk_rmem_alloc) > sk_rbuf) {
>  			done = true;
>  			break;
>  		}
> @@ -603,6 +615,7 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
>  {
>  	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
>  	struct mptcp_sock *msk = mptcp_sk(sk);
> +	int sk_rbuf, ssk_rbuf;
>  	bool wake;
>
>  	/* move_skbs_to_msk below can legitly clear the data_avail flag,
> @@ -613,12 +626,16 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
>  	if (wake)
>  		set_bit(MPTCP_DATA_READY, &msk->flags);
>
> -	if (atomic_read(&sk->sk_rmem_alloc) < READ_ONCE(sk->sk_rcvbuf) &&
> -	    move_skbs_to_msk(msk, ssk))
> +	ssk_rbuf = READ_ONCE(ssk->sk_rcvbuf);
> +	sk_rbuf = READ_ONCE(sk->sk_rcvbuf);
> +	if (unlikely(ssk_rbuf > sk_rbuf))
> +		sk_rbuf = ssk_rbuf;
> +
> +	/* over limit? can't append more skbs to msk */
> +	if (atomic_read(&sk->sk_rmem_alloc) > sk_rbuf)
>  		goto wake;
>
> -	/* don't schedule if mptcp sk is (still) over limit */
> -	if (atomic_read(&sk->sk_rmem_alloc) > READ_ONCE(sk->sk_rcvbuf))
> +	if (move_skbs_to_msk(msk, ssk))
>  		goto wake;
>
>  	/* mptcp socket is owned, release_cb should retry */
> --
> 2.26.2
> _______________________________________________
> mptcp mailing list -- mptcp@lists.01.org
> To unsubscribe send an email to mptcp-leave@lists.01.org

--
Mat Martineau
Intel
Hi Florian, Mat,

On 21/08/2020 02:25, Mat Martineau wrote:
>
> On Thu, 20 Aug 2020, Florian Westphal wrote:
>
>> In addition to tcp autotuning during read, it may also increase the
>> receive buffer in tcp_clamp_window().
>>
>> In this case, mptcp should adjust its receive buffer size as well so
>> it can move all pending skbs from the subflow socket to the mptcp socket.
>>
>> At this time, TCP can have more skbs ready for processing than what the
>> mptcp receive buffer size allows.
>>
>> In the mptcp case, the receive window announced is based on the free
>> space of the mptcp parent socket instead of the individual subflows.
>>
>> Following the subflow allows mptcp to grow its receive buffer.
>>
>> This is especially noticeable for loopback traffic where two skbs are
>> enough to fill the initial receive window.
>>
>> In mptcp_data_ready() we do not hold the mptcp socket lock, so modifying
>> mptcp_sk->sk_rcvbuf is racy. Do it when moving skbs from subflow to
>> mptcp socket, both sockets are locked in this case.
>>
>> v2: move rcvbuf update to __mptcp_move_skbs_from_subflow where both
>> mptcp and subflow sk locks are held (pointed out by Mat Martineau).
>> v3: only adjust if receive buffer size isn't fixed. (Mat Martineau).
>>
>> Signed-off-by: Florian Westphal <fw@strlen.de>
>> ---
>
> Looks good to me. Thanks Florian.
>
> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>

Thank you for the patch and the review!

Applied at the end of the series. Tests + export are in progress!

Cheers,
Matt
```diff
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 77d655b0650c..91718dbc89e1 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -457,6 +457,18 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
 	bool more_data_avail;
 	struct tcp_sock *tp;
 	bool done = false;
+	int sk_rbuf;
+
+	sk_rbuf = READ_ONCE(sk->sk_rcvbuf);
+
+	if (!(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
+		int ssk_rbuf = READ_ONCE(ssk->sk_rcvbuf);
+
+		if (unlikely(ssk_rbuf > sk_rbuf)) {
+			WRITE_ONCE(sk->sk_rcvbuf, ssk_rbuf);
+			sk_rbuf = ssk_rbuf;
+		}
+	}
 
 	pr_debug("msk=%p ssk=%p", msk, ssk);
 	tp = tcp_sk(ssk);
@@ -511,7 +523,7 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
 		WRITE_ONCE(tp->copied_seq, seq);
 		more_data_avail = mptcp_subflow_data_available(ssk);
 
-		if (atomic_read(&sk->sk_rmem_alloc) > READ_ONCE(sk->sk_rcvbuf)) {
+		if (atomic_read(&sk->sk_rmem_alloc) > sk_rbuf) {
 			done = true;
 			break;
 		}
@@ -603,6 +615,7 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct mptcp_sock *msk = mptcp_sk(sk);
+	int sk_rbuf, ssk_rbuf;
 	bool wake;
 
 	/* move_skbs_to_msk below can legitly clear the data_avail flag,
@@ -613,12 +626,16 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 	if (wake)
 		set_bit(MPTCP_DATA_READY, &msk->flags);
 
-	if (atomic_read(&sk->sk_rmem_alloc) < READ_ONCE(sk->sk_rcvbuf) &&
-	    move_skbs_to_msk(msk, ssk))
+	ssk_rbuf = READ_ONCE(ssk->sk_rcvbuf);
+	sk_rbuf = READ_ONCE(sk->sk_rcvbuf);
+	if (unlikely(ssk_rbuf > sk_rbuf))
+		sk_rbuf = ssk_rbuf;
+
+	/* over limit? can't append more skbs to msk */
+	if (atomic_read(&sk->sk_rmem_alloc) > sk_rbuf)
 		goto wake;
 
-	/* don't schedule if mptcp sk is (still) over limit */
-	if (atomic_read(&sk->sk_rmem_alloc) > READ_ONCE(sk->sk_rcvbuf))
+	if (move_skbs_to_msk(msk, ssk))
 		goto wake;
 
 	/* mptcp socket is owned, release_cb should retry */
```
In addition to tcp autotuning during read, it may also increase the receive buffer in tcp_clamp_window().

In this case, mptcp should adjust its receive buffer size as well so it can move all pending skbs from the subflow socket to the mptcp socket.

At this time, TCP can have more skbs ready for processing than what the mptcp receive buffer size allows.

In the mptcp case, the receive window announced is based on the free space of the mptcp parent socket instead of the individual subflows.

Following the subflow allows mptcp to grow its receive buffer.

This is especially noticeable for loopback traffic where two skbs are enough to fill the initial receive window.

In mptcp_data_ready() we do not hold the mptcp socket lock, so modifying mptcp_sk->sk_rcvbuf is racy. Do it when moving skbs from subflow to mptcp socket, both sockets are locked in this case.

v2: move rcvbuf update to __mptcp_move_skbs_from_subflow where both mptcp and subflow sk locks are held (pointed out by Mat Martineau).
v3: only adjust if receive buffer size isn't fixed. (Mat Martineau).

Signed-off-by: Florian Westphal <fw@strlen.de>
---
 net/mptcp/protocol.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)