From patchwork Mon Jan 11 10:05:42 2021
X-Patchwork-Submitter: Paolo Abeni
X-Patchwork-Id: 1424484
From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.01.org
Date: Mon, 11 Jan 2021 11:05:42 +0100
Subject: [MPTCP] [PATCH v2 mptcp-next 4/4] mptcp: schedule work for better snd subflow selection
List-Id: Discussions regarding MPTCP upstreaming

Otherwise the packet scheduler policy will not be enforced when pushing
pending data at MPTCP-level ack reception time.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
I'm very sad to re-introduce worker usage for the datapath, but I could
not find an easy way to avoid it.

I'm thinking of a somewhat weird scheme involving a dummy per-CPU NAPI
instance that serves/processes a queue of MPTCP sockets instead of
actual packets. Any BH user could enqueue an MPTCP subflow there to
delegate some action to it. This would also require overriding
tcp_release_cb() with something able to process these delegated events.
Yep, crazy, but possibly doable without any core change, and it could
also be used to avoid the worker even for DATA_FIN processing/sending.
A rough sketch of the idea follows after the patch.
---
 net/mptcp/protocol.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 510b87a3553b..0791421a971f 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2244,6 +2244,7 @@ static void mptcp_worker(struct work_struct *work)
 	if (unlikely(state == TCP_CLOSE))
 		goto unlock;
 
+	mptcp_push_pending(sk, 0);
 	mptcp_check_data_fin_ack(sk);
 	__mptcp_flush_join_list(msk);
@@ -2903,10 +2904,14 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
 	if (!mptcp_send_head(sk))
 		return;
 
-	if (!sock_owned_by_user(sk))
-		__mptcp_subflow_push_pending(sk, ssk);
-	else
+	if (!sock_owned_by_user(sk)) {
+		if (mptcp_subflow_get_send(mptcp_sk(sk)) == ssk)
+			__mptcp_subflow_push_pending(sk, ssk);
+		else
+			mptcp_schedule_work(sk);
+	} else {
 		set_bit(MPTCP_PUSH_PENDING, &mptcp_sk(sk)->flags);
+	}
 }
 
 #define MPTCP_DEFERRED_ALL (TCPF_WRITE_TIMER_DEFERRED)
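
Not part of the patch: below is a rough, untested sketch of the "dummy
per-CPU NAPI instance" idea mentioned in the notes above, just to make
it concrete. All names here (mptcp_delegated_action, delegated_node,
mptcp_subflow_delegate, mptcp_napi_poll) are made up for illustration;
NAPI registration against a dummy netdev, the tcp_release_cb() hook and
all locking/lifetime details are omitted.

/* Hypothetical per-CPU queue of subflows with delegated work. */
struct mptcp_delegated_action {
	struct napi_struct napi;
	struct list_head head;		/* subflows with pending work */
	spinlock_t lock;
};

static DEFINE_PER_CPU(struct mptcp_delegated_action, mptcp_delegated_actions);

/* Called from BH context: enqueue the subflow on this CPU's list and
 * kick the dummy NAPI instance.
 */
static void mptcp_subflow_delegate(struct mptcp_subflow_context *subflow)
{
	struct mptcp_delegated_action *delegated;

	delegated = this_cpu_ptr(&mptcp_delegated_actions);
	spin_lock(&delegated->lock);
	if (list_empty(&subflow->delegated_node))
		list_add_tail(&subflow->delegated_node, &delegated->head);
	spin_unlock(&delegated->lock);
	napi_schedule(&delegated->napi);
}

/* NAPI poll callback: process delegated subflows instead of packets. */
static int mptcp_napi_poll(struct napi_struct *napi, int budget)
{
	struct mptcp_delegated_action *delegated;

	delegated = container_of(napi, struct mptcp_delegated_action, napi);
	spin_lock(&delegated->lock);
	while (!list_empty(&delegated->head)) {
		struct mptcp_subflow_context *subflow;

		subflow = list_first_entry(&delegated->head,
					   struct mptcp_subflow_context,
					   delegated_node);
		list_del_init(&subflow->delegated_node);
		spin_unlock(&delegated->lock);

		/* perform the delegated action here, e.g. push pending
		 * data towards mptcp_subflow_tcp_sock(subflow)
		 */

		spin_lock(&delegated->lock);
	}
	spin_unlock(&delegated->lock);
	napi_complete_done(napi, 0);
	return 0;
}

Whether this would really let us drop the worker for the snd subflow
selection case is not verified here; it is only meant to illustrate the
delegation scheme described in the notes.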