From patchwork Fri Jan 8 11:50:01 2021
X-Patchwork-Id: 1423743
From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.01.org
Date: Fri, 8 Jan 2021 12:50:01 +0100
Message-Id: <8ddbfd8593fc67aa873d00b622fcf69008fcf0ed.1610106588.git.pabeni@redhat.com>
Subject: [MPTCP] [PATCH mptcp-next 1/3] mptcp: re-enable sndbuf autotune

After commit 6e628cd3a8f7 ("mptcp: use mptcp release_cb for delayed
tasks"), MPTCP never sets the SOCK_NOSPACE flag bit on its subflows.
As a side effect, sndbuf autotune never takes place: autotune happens
inside tcp_new_space(), which in turn is called only when that bit is
set.

Let sendmsg() set the subflows' NOSPACE bit when looking for more
memory. Additionally, clean up the sndbuf propagation from the
subflows into the msk, leveraging the subflow write-space callback
and dropping a bunch of duplicate code. This also makes the
SNDBUF_LIMITED chrono relevant again for MPTCP subflows.

Fixes: 6e628cd3a8f7 ("mptcp: use mptcp release_cb for delayed tasks")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/protocol.c | 70 ++++++++++++++++++++++++--------------------
 net/mptcp/protocol.h | 20 +++++++++++++
 net/mptcp/subflow.c  | 10 ++++++-
 3 files changed, 67 insertions(+), 33 deletions(-)
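For context on the gating described in the changelog: TCP grows the send
buffer only from tcp_new_space(), which is reached via tcp_check_space()
only when user space has previously marked the socket with SOCK_NOSPACE.
A paraphrased sketch of that path (modeled on net/ipv4/tcp_input.c of
this era; not verbatim kernel source):

/* tcp_new_space() is where tcp_should_expand_sndbuf() and
 * tcp_sndbuf_expand() autotune sk_sndbuf; without SOCK_NOSPACE neither
 * it nor the SNDBUF_LIMITED chrono accounting below ever runs.
 */
static void tcp_check_space(struct sock *sk)
{
	if (sock_flag(sk, SOCK_QUEUE_SHRUNK)) {
		sock_reset_flag(sk, SOCK_QUEUE_SHRUNK);
		/* pairs with tcp_poll() */
		smp_mb();
		if (sk->sk_socket &&
		    test_bit(SOCK_NOSPACE, &sk->sk_socket->flags)) {
			tcp_new_space(sk);	/* sndbuf autotune lives here */
			if (!test_bit(SOCK_NOSPACE, &sk->sk_socket->flags))
				tcp_chrono_stop(sk, TCP_CHRONO_SNDBUF_LIMITED);
		}
	}
}

This is why the patch has sendmsg() set SOCK_NOSPACE on the subflows:
without it, the subflow's tcp_new_space() never fires, and neither does
the write-space callback chain the rest of the patch builds on.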
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 5977e9e083be..eb6bb6b78d6f 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -734,10 +734,14 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 
 void __mptcp_flush_join_list(struct mptcp_sock *msk)
 {
+	struct mptcp_subflow_context *subflow;
+
 	if (likely(list_empty(&msk->join_list)))
 		return;
 
 	spin_lock_bh(&msk->join_list_lock);
+	list_for_each_entry(subflow, &msk->join_list, node)
+		mptcp_propagate_sndbuf((struct sock *)msk, mptcp_subflow_tcp_sock(subflow));
 	list_splice_tail_init(&msk->join_list, &msk->conn_list);
 	spin_unlock_bh(&msk->join_list_lock);
 }
@@ -1037,13 +1041,7 @@ static void __mptcp_clean_una(struct sock *sk)
 			__mptcp_update_wmem(sk);
 			sk_mem_reclaim_partial(sk);
 		}
-
-		if (sk_stream_is_writeable(sk)) {
-			/* pairs with memory barrier in mptcp_poll */
-			smp_mb();
-			if (test_and_clear_bit(MPTCP_NOSPACE, &msk->flags))
-				sk_stream_write_space(sk);
-		}
+		mptcp_write_space(sk);
 	}
 
 	if (snd_una == READ_ONCE(msk->snd_nxt)) {
@@ -1351,6 +1349,26 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
 	return ret;
 }
 
+static void mptcp_start_sndbuf_autotune(struct mptcp_sock *msk)
+{
+	struct mptcp_subflow_context *subflow;
+
+	mptcp_for_each_subflow(msk, subflow) {
+		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+		struct socket *sock;
+
+		/* on all active, unorphaned, subflows matching the current
+		 * msk backup status
+		 */
+		if (!mptcp_subflow_active(subflow) ||
+		    subflow->backup != msk->use_backup ||
+		    !(sock = READ_ONCE(ssk->sk_socket)))
+			continue;
+
+		set_bit(SOCK_NOSPACE, &sock->flags);
+	}
+}
+
 #define MPTCP_SEND_BURST_SIZE	((1 << 16) - \
 				 sizeof(struct tcphdr) - \
 				 MAX_TCP_OPTION_SPACE - \
@@ -1362,8 +1380,7 @@ struct subflow_send_info {
 	u64 ratio;
 };
 
-static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk,
-					   u32 *sndbuf)
+static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
 {
 	struct subflow_send_info send_info[2];
 	struct mptcp_subflow_context *subflow;
@@ -1374,24 +1391,17 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk,
 
 	sock_owned_by_me((struct sock *)msk);
 
-	*sndbuf = 0;
 	if (__mptcp_check_fallback(msk)) {
 		if (!msk->first)
 			return NULL;
-		*sndbuf = msk->first->sk_sndbuf;
 		return sk_stream_memory_free(msk->first) ? msk->first : NULL;
 	}
 
 	/* re-use last subflow, if the burst allow that */
 	if (msk->last_snd && msk->snd_burst > 0 &&
 	    sk_stream_memory_free(msk->last_snd) &&
-	    mptcp_subflow_active(mptcp_subflow_ctx(msk->last_snd))) {
-		mptcp_for_each_subflow(msk, subflow) {
-			ssk = mptcp_subflow_tcp_sock(subflow);
-			*sndbuf = max(tcp_sk(ssk)->snd_wnd, *sndbuf);
-		}
+	    mptcp_subflow_active(mptcp_subflow_ctx(msk->last_snd)))
 		return msk->last_snd;
-	}
 
 	/* pick the subflow with the lower wmem/wspace ratio */
 	for (i = 0; i < 2; ++i) {
@@ -1404,7 +1414,6 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk,
 			continue;
 
 		nr_active += !subflow->backup;
-		*sndbuf = max(tcp_sk(ssk)->snd_wnd, *sndbuf);
 		if (!sk_stream_memory_free(subflow->tcp_sock))
 			continue;
 
@@ -1425,7 +1434,8 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk,
 		 send_info[1].ssk, send_info[1].ratio);
 
 	/* pick the best backup if no other subflow is active */
-	if (!nr_active)
+	msk->use_backup = !nr_active;
+	if (msk->use_backup)
 		send_info[0].ssk = send_info[1].ssk;
 
 	if (send_info[0].ssk) {
@@ -1434,6 +1444,7 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk,
 			       sk_stream_wspace(msk->last_snd));
 		return msk->last_snd;
 	}
+
 	return NULL;
 }
 
@@ -1454,7 +1465,6 @@ static void mptcp_push_pending(struct sock *sk, unsigned int flags)
 	};
 	struct mptcp_data_frag *dfrag;
 	int len, copied = 0;
-	u32 sndbuf;
 
 	while ((dfrag = mptcp_send_head(sk))) {
 		info.sent = dfrag->already_sent;
@@ -1465,12 +1475,7 @@ static void mptcp_push_pending(struct sock *sk, unsigned int flags)
 			prev_ssk = ssk;
 			__mptcp_flush_join_list(msk);
-			ssk = mptcp_subflow_get_send(msk, &sndbuf);
-
-			/* do auto tuning */
-			if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK) &&
-			    sndbuf > READ_ONCE(sk->sk_sndbuf))
-				WRITE_ONCE(sk->sk_sndbuf, sndbuf);
+			ssk = mptcp_subflow_get_send(msk);
 
 			/* try to keep the subflow socket lock across
 			 * consecutive xmit on the same socket
@@ -1537,11 +1542,6 @@ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk)
 		while (len > 0) {
 			int ret = 0;
 
-			/* do auto tuning */
-			if (!(sk->sk_userlocks & SOCK_SNDBUF_LOCK) &&
-			    ssk->sk_sndbuf > READ_ONCE(sk->sk_sndbuf))
-				WRITE_ONCE(sk->sk_sndbuf, ssk->sk_sndbuf);
-
 			if (unlikely(mptcp_must_reclaim_memory(sk, ssk))) {
 				__mptcp_update_wmem(sk);
 				sk_mem_reclaim_partial(sk);
@@ -1680,6 +1680,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		continue;
 
 wait_for_memory:
+		mptcp_start_sndbuf_autotune(msk);
 		set_bit(MPTCP_NOSPACE, &msk->flags);
 		mptcp_push_pending(sk, msg->msg_flags);
 		ret = sk_stream_wait_memory(sk, &timeo);
@@ -1687,8 +1688,11 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 			goto out;
 	}
 
-	if (copied)
+	if (copied) {
+		if (!sk_stream_is_writeable(sk))
+			mptcp_start_sndbuf_autotune(msk);
 		mptcp_push_pending(sk, msg->msg_flags);
+	}
 
 out:
 	release_sock(sk);
@@ -2359,6 +2363,7 @@ static int __mptcp_init_sock(struct sock *sk)
 	msk->tx_pending_data = 0;
 	msk->size_goal_cache = TCP_BASE_MSS;
 
+	msk->use_backup = false;
 	msk->ack_hint = NULL;
 	msk->first = NULL;
 	inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
@@ -3285,6 +3290,7 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
 
 		mptcp_copy_inaddrs(newsk, msk->first);
 		mptcp_rcv_space_init(msk, msk->first);
+		mptcp_propagate_sndbuf(newsk, msk->first);
 
 		/* set ssk->sk_socket of accept()ed flows to mptcp socket.
 		 * This is needed so NOSPACE flag can be set from tcp stack.
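The protocol.h hunk below adds the mptcp_propagate_sndbuf() helper the
diff above already calls. Its logic reduces to a monotonic,
lock-respecting update; a standalone userspace model of it (illustrative
only, with made-up types -- the kernel helper operates on struct sock):

#include <stdbool.h>
#include <stdio.h>

struct model_sock {
	int sndbuf;          /* models sk->sk_sndbuf */
	bool sndbuf_locked;  /* models sk->sk_userlocks & SOCK_SNDBUF_LOCK */
};

/* the msk sndbuf only ever grows toward the largest subflow sndbuf, and
 * never moves once the application pinned it via setsockopt(SO_SNDBUF)
 */
static bool propagate_sndbuf(struct model_sock *msk,
			     const struct model_sock *ssk)
{
	if (msk->sndbuf_locked || ssk->sndbuf <= msk->sndbuf)
		return false;

	msk->sndbuf = ssk->sndbuf;
	return true;
}

int main(void)
{
	struct model_sock msk = { .sndbuf = 16384 };
	struct model_sock ssk = { .sndbuf = 65536 };

	printf("%d %d\n", propagate_sndbuf(&msk, &ssk), msk.sndbuf); /* 1 65536 */
	printf("%d %d\n", propagate_sndbuf(&msk, &ssk), msk.sndbuf); /* 0 65536 */
	return 0;
}

The boolean return matters: subflow_write_space() below uses it to wake
msk-level writers only when the subflow actually pushed the send buffer up.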
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index d6400ad2d615..676f57ebd0f8 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -248,6 +248,7 @@ struct mptcp_sock {
 	bool		snd_data_fin_enable;
 	bool		rcv_fastclose;
 	bool		use_64bit_ack; /* Set when we received a 64-bit DSN */
+	bool		use_backup; /* only backup subflow available */
 	spinlock_t	join_list_lock;
 	struct sock	*ack_hint;
 	struct work_struct work;
@@ -521,6 +522,25 @@ static inline bool mptcp_data_fin_enabled(const struct mptcp_sock *msk)
 	       READ_ONCE(msk->write_seq) == READ_ONCE(msk->snd_nxt);
 }
 
+static inline bool mptcp_propagate_sndbuf(struct sock *sk, struct sock *ssk)
+{
+	if ((sk->sk_userlocks & SOCK_SNDBUF_LOCK) || ssk->sk_sndbuf <= READ_ONCE(sk->sk_sndbuf))
+		return false;
+
+	WRITE_ONCE(sk->sk_sndbuf, ssk->sk_sndbuf);
+	return true;
+}
+
+static inline void mptcp_write_space(struct sock *sk)
+{
+	if (sk_stream_is_writeable(sk)) {
+		/* pairs with memory barrier in mptcp_poll */
+		smp_mb();
+		if (test_and_clear_bit(MPTCP_NOSPACE, &mptcp_sk(sk)->flags))
+			sk_stream_write_space(sk);
+	}
+}
+
 void mptcp_destroy_common(struct mptcp_sock *msk);
 
 void __init mptcp_token_init(void);
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 278cbe3e539e..1352f9a7ede8 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -343,6 +343,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
 	if (subflow->conn_finished)
 		return;
 
+	mptcp_propagate_sndbuf(parent, sk);
 	subflow->rel_write_seq = 1;
 	subflow->conn_finished = 1;
 	subflow->ssn_offset = TCP_SKB_CB(skb)->seq;
@@ -1040,7 +1041,13 @@ static void subflow_data_ready(struct sock *sk)
 
 static void subflow_write_space(struct sock *ssk)
 {
-	/* we take action in __mptcp_clean_una() */
+	struct socket *sock = ssk->sk_socket;
+
+	if (mptcp_propagate_sndbuf(mptcp_subflow_ctx(ssk)->conn, ssk))
+		mptcp_write_space(ssk);
+
+	if (sk_stream_is_writeable(ssk) && sock)
+		clear_bit(SOCK_NOSPACE, &sock->flags);
 }
 
 static struct inet_connection_sock_af_ops *
@@ -1299,6 +1306,7 @@ static void subflow_state_change(struct sock *sk)
 	__subflow_state_change(sk);
 
 	if (subflow_simultaneous_connect(sk)) {
+		mptcp_propagate_sndbuf(parent, sk);
 		mptcp_do_fallback(sk);
 		mptcp_rcv_space_init(mptcp_sk(parent), sk);
 		pr_fallback(mptcp_sk(parent));
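With this first patch applied, autotune should again be visible from
user space as a growing SO_SNDBUF on the MPTCP socket during a bulk
transfer. A hypothetical smoke test (not part of the patch; assumes
CONFIG_MPTCP, headers that define IPPROTO_MPTCP, and a sink listening on
127.0.0.1:8080):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262
#endif

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(8080),
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	char buf[65536];
	int i, sndbuf, fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
	socklen_t len = sizeof(sndbuf);

	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("mptcp connect");
		return 1;
	}

	memset(buf, 'x', sizeof(buf));
	for (i = 0; i < 512; i++) {
		if (write(fd, buf, sizeof(buf)) < 0)
			break;
		if (!(i % 64) &&
		    !getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len))
			/* expected to grow over the transfer with the fix */
			printf("after %d writes: SO_SNDBUF = %d\n", i, sndbuf);
	}
	close(fd);
	return 0;
}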
From patchwork Fri Jan 8 11:50:02 2021
X-Patchwork-Id: 1423742
From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.01.org
Date: Fri, 8 Jan 2021 12:50:02 +0100
Subject: [MPTCP] [PATCH mptcp-next 2/3] mptcp: do not queue excessive data on subflows

The current packet scheduler can enqueue up to sndbuf bytes on each
subflow. If the send buffer is large and the subflows are not
symmetric, this can lead to suboptimal aggregate bandwidth
utilization.

Limit the amount of queued data to the subflow's current send window
instead.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/protocol.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index eb6bb6b78d6f..b5b979671d92 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1414,7 +1414,7 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
 			continue;
 
 		nr_active += !subflow->backup;
-		if (!sk_stream_memory_free(subflow->tcp_sock))
+		if (!sk_stream_memory_free(subflow->tcp_sock) || !tcp_sk(ssk)->snd_wnd)
 			continue;
 
 		pace = READ_ONCE(ssk->sk_pacing_rate);
@@ -1441,7 +1441,7 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
 	if (send_info[0].ssk) {
 		msk->last_snd = send_info[0].ssk;
 		msk->snd_burst = min_t(int, MPTCP_SEND_BURST_SIZE,
-				       sk_stream_wspace(msk->last_snd));
+				       tcp_sk(msk->last_snd)->snd_wnd);
 		return msk->last_snd;
 	}
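The behavioral change is easiest to see in the burst computation. A
standalone model (illustrative only; the constant is simplified, the
kernel also subtracts header/option space from the burst size):

#include <stdio.h>

#define MPTCP_SEND_BURST_SIZE (1 << 16)

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* asymmetric subflow: plenty of free sndbuf, tiny peer window */
	int wspace = 400000;  /* sk_stream_wspace(): free send buffer bytes */
	int snd_wnd = 12000;  /* tcp_sk(ssk)->snd_wnd: peer-advertised window */

	/* before: burst sized on free buffer space, so up to 64KB could be
	 * queued on a subflow able to transmit only 12KB right now
	 */
	printf("old burst: %d\n", min_int(MPTCP_SEND_BURST_SIZE, wspace));

	/* after: burst capped by what the subflow can actually send */
	printf("new burst: %d\n", min_int(MPTCP_SEND_BURST_SIZE, snd_wnd));
	return 0;
}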
From patchwork Fri Jan 8 11:50:03 2021
X-Patchwork-Id: 1423744
From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.01.org
Date: Fri, 8 Jan 2021 12:50:03 +0100
Message-Id: <3fb59389d88e5498ce54841148f190f556b3940c.1610106588.git.pabeni@redhat.com>
Subject: [MPTCP] [PATCH mptcp-next 3/3] mptcp: schedule work for better snd subflow selection

Schedule the MPTCP worker when the acked subflow is not the one the
packet scheduler would pick. Otherwise, the packet scheduler policy is
not enforced when pending data is pushed at MPTCP-level ack reception
time.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
I'm very sad to re-introduce worker usage in the datapath, but I could
not find an easy way to avoid it.

I'm thinking of some weird scheme involving a dummy percpu NAPI
instance serving/processing a queue of MPTCP sockets instead of actual
packets: any BH user could enqueue an MPTCP subflow there to delegate
some action to it. That would also require overriding tcp_release_cb()
with something able to process these delegated events. Yep, crazy, but
possibly doable without any core change, and it could also be used to
avoid worker usage for DATA_FIN processing/sending.
---
 net/mptcp/protocol.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index b5b979671d92..3ba927049dd6 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2271,6 +2271,7 @@ static void mptcp_worker(struct work_struct *work)
 	if (unlikely(state == TCP_CLOSE))
 		goto unlock;
 
+	mptcp_push_pending(sk, 0);
 	mptcp_check_data_fin_ack(sk);
 	__mptcp_flush_join_list(msk);
 
@@ -2934,10 +2935,14 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
 	if (!mptcp_send_head(sk))
 		return;
 
-	if (!sock_owned_by_user(sk))
-		__mptcp_subflow_push_pending(sk, ssk);
-	else
+	if (!sock_owned_by_user(sk)) {
+		if (mptcp_subflow_get_send(mptcp_sk(sk)) == ssk)
+			__mptcp_subflow_push_pending(sk, ssk);
+		else
+			mptcp_schedule_work(sk);
+	} else {
 		set_bit(MPTCP_PUSH_PENDING, &mptcp_sk(sk)->flags);
+	}
 }
 
 #define MPTCP_DEFERRED_ALL (TCPF_WRITE_TIMER_DEFERRED)
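The reworked __mptcp_check_push() boils down to a three-way dispatch. A
condensed userspace model (illustrative only; the booleans stand in for
sock_owned_by_user() and the scheduler comparison in the hunk above):

#include <stdbool.h>
#include <stdio.h>

enum push_action {
	PUSH_DIRECT,		/* __mptcp_subflow_push_pending() */
	PUSH_VIA_WORKER,	/* mptcp_schedule_work() */
	PUSH_DEFERRED,		/* MPTCP_PUSH_PENDING, handled by release_cb */
};

static enum push_action check_push(bool owned_by_user, bool ssk_is_sched_pick)
{
	if (owned_by_user)
		return PUSH_DEFERRED;
	/* push inline only when the acked subflow is also the scheduler's
	 * pick; otherwise defer to the worker so the policy is honored
	 */
	return ssk_is_sched_pick ? PUSH_DIRECT : PUSH_VIA_WORKER;
}

int main(void)
{
	printf("%d %d %d\n",
	       check_push(false, true),	/* 0: direct push */
	       check_push(false, false),	/* 1: via worker */
	       check_push(true, false));	/* 2: deferred to release_cb */
	return 0;
}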