From patchwork Mon Feb 17 18:28:31 2020
X-Patchwork-Submitter: Paolo Abeni
X-Patchwork-Id: 1239498
From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.01.org
Date: Mon, 17 Feb 2020 19:28:31 +0100
Subject: [MPTCP] [PATCH 5/7] Squash-to: "mptcp: Implement path manager interface commands"

Implement stubs for PM events, delegating the actual action to the work
queue. This allows acquiring whatever lock is needed to perform the real
work. Try to avoid scheduling the worker if no action is needed or
possible: this relies on the accounting info included in the PM struct
and on a good deal of the double-checked locking [anti-]pattern (a
stand-alone sketch of this pattern follows the patch).

RFC -> v1:
 - simplify/cleanup mptcp_pm_work_pending() - Mat
 - likewise simplify/cleanup mptcp_pm_add_addr()

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/pm.c | 193 ++++++++++++++++++++++++-------------------------
 1 file changed, 87 insertions(+), 106 deletions(-)

diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
index fb66758e3f61..033393176096 100644
--- a/net/mptcp/pm.c
+++ b/net/mptcp/pm.c
@@ -15,117 +15,17 @@ static struct workqueue_struct *pm_wq;
 int mptcp_pm_announce_addr(struct mptcp_sock *msk,
 			   const struct mptcp_addr_info *addr)
 {
-	struct mptcp_sock *msk = mptcp_token_get_sock(token);
-	int err = 0;
+	pr_debug("msk=%p, local_id=%d", msk, addr->id);
 
-	if (!msk)
-		return -EINVAL;
-
-	if (msk->pm.local_valid) {
-		err = -EBADR;
-		goto announce_put;
-	}
-
-	pr_debug("msk=%p, local_id=%d", msk, local_id);
-	msk->pm.local_valid = 1;
-	msk->pm.local_id = local_id;
-	msk->pm.local_family = AF_INET;
-	msk->pm.local_addr = *addr;
-	msk->addr_signal = 1;
-
-announce_put:
-	sock_put((struct sock *)msk);
-	return err;
-}
-
-int mptcp_pm_remove_addr(struct mptcp_sock *msk, u8 local_id)
-{
-	struct mptcp_sock *msk = mptcp_token_get_sock(token);
-
-	if (!msk)
-		return -EINVAL;
-
-	pr_debug("msk=%p", msk);
-	msk->pm.local_valid = 0;
-
-	sock_put((struct sock *)msk);
+	msk->pm.local = *addr;
+	WRITE_ONCE(msk->pm.addr_signal, true);
 	return 0;
 }
 
-int mptcp_pm_create_subflow(u32 token, u8 remote_id, struct in_addr *addr)
-{
-	struct mptcp_sock *msk = mptcp_token_get_sock(token);
-	struct sockaddr_in remote;
-	struct sockaddr_in local;
-	struct sock *sk;
-	int err;
-
-	pr_debug("msk=%p", msk);
-
-	sk = (struct sock *)msk;
-	if (!msk->pm.remote_valid || remote_id != msk->pm.remote_id) {
-		err = -EBADR;
-		goto create_put;
-	}
-
-	local.sin_family = AF_INET;
-	local.sin_port = 0;
-	if (addr)
-		local.sin_addr = *addr;
-	else
-		local.sin_addr.s_addr = htonl(INADDR_ANY);
-
-	remote.sin_family = msk->pm.remote_family;
-	remote.sin_port = inet_sk(sk)->inet_dport;
-	remote.sin_addr = msk->pm.remote_addr;
-
-	err = mptcp_subflow_connect(sk, (struct sockaddr *)&local,
-				    (struct sockaddr *)&remote, remote_id);
-
-create_put:
-	sock_put(sk);
-	return err;
-}
-
-#if IS_ENABLED(CONFIG_MPTCP_IPV6)
-int mptcp_pm_create_subflow6(u32 token, u8 remote_id, struct in6_addr *addr)
+int mptcp_pm_remove_addr(struct mptcp_sock *msk, u8 local_id)
 {
-	struct mptcp_sock *msk = mptcp_token_get_sock(token);
-	struct sockaddr_in6 remote;
-	struct sockaddr_in6 local;
-	struct sock *sk;
-	int err;
-
-	if (!msk)
-		return -EINVAL;
-
-	pr_debug("msk=%p", msk);
-	sk = (struct sock *)msk;
-
-	if (!msk->pm.remote_valid || remote_id != msk->pm.remote_id) {
-		err = -EBADR;
-		goto create_put;
-	}
-
-	local.sin6_family = AF_INET6;
-	local.sin6_port = 0;
-	if (addr)
-		local.sin6_addr = *addr;
-	else
-		local.sin6_addr = in6addr_any;
-
-	remote.sin6_family = msk->pm.remote_family;
-	remote.sin6_port = inet_sk(sk)->inet_dport;
-	remote.sin6_addr = msk->pm.remote_addr6;
-
-	err = mptcp_subflow_connect(sk, (struct sockaddr *)&local,
-				    (struct sockaddr *)&remote, remote_id);
-
-create_put:
-	sock_put(sk);
-	return err;
+	return -ENOTSUPP;
 }
-#endif
 
 int mptcp_pm_remove_subflow(struct mptcp_sock *msk, u8 remote_id)
 {
@@ -143,11 +43,36 @@ void mptcp_pm_new_connection(struct mptcp_sock *msk, int server_side)
 	WRITE_ONCE(pm->server_side, server_side);
 }
 
+static bool mptcp_pm_schedule_work(struct mptcp_sock *msk,
+				   enum mptcp_pm_status new_status)
+{
+	if (msk->pm.status != MPTCP_PM_IDLE)
+		return false;
+
+	if (queue_work(pm_wq, &msk->pm.work)) {
+		msk->pm.status = new_status;
+		sock_hold((struct sock *)msk);
+		return true;
+	}
+	return false;
+}
+
 void mptcp_pm_fully_established(struct mptcp_sock *msk)
 {
 	struct mptcp_pm_data *pm = &msk->pm;
 
 	pr_debug("msk=%p", msk);
+
+	/* try to avoid acquiring the lock below */
+	if (READ_ONCE(pm->fully_established))
+		return;
+
+	spin_lock_bh(&pm->lock);
+	if (!READ_ONCE(pm->fully_established) &&
+	    mptcp_pm_schedule_work(msk, MPTCP_PM_ESTABLISHED))
+		WRITE_ONCE(pm->fully_established, true);
+
+	spin_unlock_bh(&pm->lock);
 }
 
 void mptcp_pm_connection_closed(struct mptcp_sock *msk)
@@ -158,7 +83,19 @@ void mptcp_pm_connection_closed(struct mptcp_sock *msk)
 void mptcp_pm_subflow_established(struct mptcp_sock *msk,
 				  struct mptcp_subflow_context *subflow)
 {
+	struct mptcp_pm_data *pm = &msk->pm;
+
 	pr_debug("msk=%p", msk);
+
+	if (!READ_ONCE(pm->work_pending))
+		return;
+
+	spin_lock_bh(&pm->lock);
+
+	if (READ_ONCE(pm->work_pending))
+		mptcp_pm_schedule_work(msk, MPTCP_PM_SUBFLOW_ESTABLISHED);
+
+	spin_unlock_bh(&pm->lock);
 }
 
 void mptcp_pm_subflow_closed(struct mptcp_sock *msk, u8 id)
@@ -172,6 +109,21 @@ void mptcp_pm_add_addr(struct mptcp_sock *msk,
 	struct mptcp_pm_data *pm = &msk->pm;
 
 	pr_debug("msk=%p, remote_id=%d", msk, addr->id);
+
+	/* avoid acquiring the lock if there is no room for further addresses */
+	if (!READ_ONCE(pm->accept_addr))
+		return;
+
+	spin_lock_bh(&pm->lock);
+
+	/* be sure there is still room, re-checking under the PM lock */
+	if (READ_ONCE(pm->accept_addr) &&
+	    mptcp_pm_schedule_work(msk, MPTCP_PM_ADD_ADDR)) {
+		pm->add_addr_accepted++;
+		pm->remote = *addr;
+	}
+
+	spin_unlock_bh(&pm->lock);
 }
 
 /* path manager helpers */
@@ -179,7 +131,25 @@ void mptcp_pm_add_addr(struct mptcp_sock *msk,
 int mptcp_pm_addr_signal(struct mptcp_sock *msk, unsigned int remaining,
 			 struct mptcp_addr_info *saddr)
 {
-	return 0;
+	int ret = -EINVAL;
+
+	spin_lock_bh(&msk->pm.lock);
+
+	/* double check after the lock is acquired */
+	if (!mptcp_pm_should_signal(msk))
+		goto out_unlock;
+
+	if (remaining < mptcp_add_addr_len(msk->pm.local.family))
+		goto out_unlock;
+
+	/* load real data */
+	*saddr = msk->pm.local;
+	WRITE_ONCE(msk->pm.addr_signal, false);
+	ret = 0;
+
+out_unlock:
+	spin_unlock_bh(&msk->pm.lock);
+	return ret;
 }
 
 int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
@@ -189,6 +159,17 @@ int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc)
 
 static void pm_worker(struct work_struct *work)
 {
+	struct mptcp_pm_data *pm = container_of(work, struct mptcp_pm_data,
+						work);
+	struct mptcp_sock *msk = container_of(pm, struct mptcp_sock, pm);
+	struct sock *sk = (struct sock *)msk;
+
+	switch (pm->status) {
+	default:
+		break;
+	}
+
+	sock_put(sk);
 }
 
 void mptcp_pm_data_init(struct mptcp_sock *msk)
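
As a note for reviewers, here is a minimal user-space sketch of the
double-checked locking pattern the PM event hooks above rely on: a
lockless test first to skip the lock on the fast path, then an
authoritative re-check under the lock before scheduling the worker.
pthreads and C11 atomics stand in for the kernel spinlock,
READ_ONCE()/WRITE_ONCE() and workqueue; the pm_state and
schedule_worker() names are hypothetical, not part of the MPTCP code.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct pm_state {
	pthread_mutex_t lock;
	atomic_bool fully_established;
	bool work_scheduled;		/* mirrors pm->status != MPTCP_PM_IDLE */
};

/* stand-in for queue_work(): only one pending item at a time */
static bool schedule_worker(struct pm_state *pm)
{
	if (pm->work_scheduled)
		return false;
	pm->work_scheduled = true;
	printf("worker scheduled\n");
	return true;
}

static void pm_fully_established(struct pm_state *pm)
{
	/* first check: lockless, only used to skip the lock */
	if (atomic_load(&pm->fully_established))
		return;

	pthread_mutex_lock(&pm->lock);
	/* second check: authoritative, performed under the lock */
	if (!atomic_load(&pm->fully_established) && schedule_worker(pm))
		atomic_store(&pm->fully_established, true);
	pthread_mutex_unlock(&pm->lock);
}

int main(void)
{
	struct pm_state pm = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.fully_established = false,
		.work_scheduled = false,
	};

	pm_fully_established(&pm);	/* slow path: schedules the worker */
	pm_fully_established(&pm);	/* fast path: returns without locking */
	return 0;
}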
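
A similar stand-alone model for the reference counting tied to the work
queue: mptcp_pm_schedule_work() takes a hold on the socket only when
queue_work() actually enqueues the item, and pm_worker() drops that
reference once the deferred action has run, keeping the msk alive in
between. The fake_sock type and the helper names with trailing
underscores are hypothetical stand-ins, not kernel APIs.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_sock {
	atomic_int refcnt;
};

static void sock_hold_(struct fake_sock *sk)
{
	atomic_fetch_add(&sk->refcnt, 1);
}

static void sock_put_(struct fake_sock *sk)
{
	if (atomic_fetch_sub(&sk->refcnt, 1) == 1)
		printf("last reference dropped, sock can be freed\n");
}

static bool queue_work_(void)
{
	return true;			/* pretend the item was enqueued */
}

/* mirrors mptcp_pm_schedule_work(): hold a reference only on success */
static bool schedule_pm_work(struct fake_sock *sk)
{
	if (queue_work_()) {
		sock_hold_(sk);
		return true;
	}
	return false;
}

/* mirrors pm_worker(): the work item owns one reference and drops it */
static void pm_worker_model(struct fake_sock *sk)
{
	/* ... deferred PM action would run here, under the PM lock ... */
	sock_put_(sk);
}

int main(void)
{
	struct fake_sock sk = { .refcnt = 1 };

	if (schedule_pm_work(&sk))
		pm_worker_model(&sk);
	sock_put_(&sk);			/* drop the initial reference */
	return 0;
}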