From patchwork Wed Sep 8 09:47:38 2021
X-Patchwork-Submitter: Gaetan Rivet
X-Patchwork-Id: 1525720
From: Gaetan Rivet
To: ovs-dev@openvswitch.org
Date: Wed, 8 Sep 2021 11:47:38 +0200
Message-Id: <2417516d527007c83a7983041813947124b9c40b.1631094144.git.grive@u256.net>
Cc: Eli Britstein, Maxime Coquelin
Subject: [ovs-dev] [PATCH v5 14/27] netdev-offload: Add multi-thread API

Expose functions reporting user configuration of offloading threads, as
well as utility functions for multithreading.
This only exposes the configuration knob to the user; no datapath
implements the multiple-thread request yet. This allows implementations
to use this API for offload thread management in the relevant layers
before enabling the actual dataplane implementation.

The offload thread ID is lazily allocated and can therefore be in a
different order than the offload thread start sequence.

The RCU thread will sometimes access hardware-offload objects from a
provider for reclamation purposes. In such a case, it gets a default
offload thread ID of 0. Care must be taken that using this thread ID
is safe concurrently with the offload threads.

Signed-off-by: Gaetan Rivet
Reviewed-by: Eli Britstein
Reviewed-by: Maxime Coquelin
---
 lib/netdev-offload-provider.h |  1 +
 lib/netdev-offload.c          | 88 ++++++++++++++++++++++++++++++++++-
 lib/netdev-offload.h          | 19 ++++++++
 vswitchd/vswitch.xml          | 16 +++++++
 4 files changed, 122 insertions(+), 2 deletions(-)

diff --git a/lib/netdev-offload-provider.h b/lib/netdev-offload-provider.h
index bc52a3f61..8ff2de983 100644
--- a/lib/netdev-offload-provider.h
+++ b/lib/netdev-offload-provider.h
@@ -84,6 +84,7 @@ struct netdev_flow_api {
                          struct dpif_flow_stats *);

     /* Get the number of flows offloaded to netdev.
+     * 'n_flows' is an array of counters, one per offload thread.
      * Return 0 if successful, otherwise returns a positive errno value. */
     int (*flow_get_n_flows)(struct netdev *, uint64_t *n_flows);

diff --git a/lib/netdev-offload.c b/lib/netdev-offload.c
index 5ddd4d01d..fc5f815d0 100644
--- a/lib/netdev-offload.c
+++ b/lib/netdev-offload.c
@@ -60,6 +60,12 @@ VLOG_DEFINE_THIS_MODULE(netdev_offload);

 static bool netdev_flow_api_enabled = false;

+#define DEFAULT_OFFLOAD_THREAD_NB 1
+#define MAX_OFFLOAD_THREAD_NB 10
+
+static unsigned int offload_thread_nb = DEFAULT_OFFLOAD_THREAD_NB;
+DEFINE_EXTERN_PER_THREAD_DATA(netdev_offload_thread_id, OVSTHREAD_ID_UNSET);
+
 /* Protects 'netdev_flow_apis'.
  */
 static struct ovs_mutex netdev_flow_api_provider_mutex = OVS_MUTEX_INITIALIZER;

@@ -448,6 +454,64 @@ netdev_is_flow_api_enabled(void)
     return netdev_flow_api_enabled;
 }

+unsigned int
+netdev_offload_thread_nb(void)
+{
+    return offload_thread_nb;
+}
+
+unsigned int
+netdev_offload_ufid_to_thread_id(const ovs_u128 ufid)
+{
+    uint32_t ufid_hash;
+
+    if (netdev_offload_thread_nb() == 1) {
+        return 0;
+    }
+
+    ufid_hash = hash_words64_inline(
+        (const uint64_t [2]){ ufid.u64.lo,
+                              ufid.u64.hi }, 2, 1);
+    return ufid_hash % netdev_offload_thread_nb();
+}
+
+unsigned int
+netdev_offload_thread_init(void)
+{
+    static atomic_count next_id = ATOMIC_COUNT_INIT(0);
+    bool thread_is_hw_offload;
+    bool thread_is_rcu;
+
+    thread_is_hw_offload = !strncmp(get_subprogram_name(),
+                                    "hw_offload", strlen("hw_offload"));
+    thread_is_rcu = !strncmp(get_subprogram_name(), "urcu", strlen("urcu"));
+
+    /* Panic if any other thread besides offload and RCU tries
+     * to initialize their thread ID. */
+    ovs_assert(thread_is_hw_offload || thread_is_rcu);
+
+    if (*netdev_offload_thread_id_get() == OVSTHREAD_ID_UNSET) {
+        unsigned int id;
+
+        if (thread_is_rcu) {
+            /* RCU will compete with other threads for shared object access.
+             * Reclamation functions using a thread ID must be thread-safe.
+             * For that end, and because RCU must consider all potential shared
+             * objects anyway, its thread-id can be whichever, so return 0.
+             */
+            id = 0;
+        } else {
+            /* Only the actual offload threads have their own ID. */
+            id = atomic_count_inc(&next_id);
+        }
+        /* Panic if any offload thread is getting a spurious ID.
         */
+        ovs_assert(id < netdev_offload_thread_nb());
+        return *netdev_offload_thread_id_get() = id;
+    } else {
+        return *netdev_offload_thread_id_get();
+    }
+}
+
 void
 netdev_ports_flow_flush(const char *dpif_type)
 {
@@ -660,7 +724,16 @@ netdev_ports_get_n_flows(const char *dpif_type, odp_port_t port_no,
     ovs_rwlock_rdlock(&netdev_hmap_rwlock);
     data = netdev_ports_lookup(port_no, dpif_type);
     if (data) {
-        ret = netdev_flow_get_n_flows(data->netdev, n_flows);
+        uint64_t thread_n_flows[MAX_OFFLOAD_THREAD_NB] = {0};
+        unsigned int tid;
+
+        ret = netdev_flow_get_n_flows(data->netdev, thread_n_flows);
+        *n_flows = 0;
+        if (!ret) {
+            for (tid = 0; tid < netdev_offload_thread_nb(); tid++) {
+                *n_flows += thread_n_flows[tid];
+            }
+        }
     }
     ovs_rwlock_unlock(&netdev_hmap_rwlock);
     return ret;
@@ -713,7 +786,18 @@ netdev_set_flow_api_enabled(const struct smap *ovs_other_config)
     if (ovsthread_once_start(&once)) {
         netdev_flow_api_enabled = true;

-        VLOG_INFO("netdev: Flow API Enabled");
+        offload_thread_nb = smap_get_ullong(ovs_other_config,
+                                            "n-offload-threads",
+                                            DEFAULT_OFFLOAD_THREAD_NB);
+        if (offload_thread_nb > MAX_OFFLOAD_THREAD_NB) {
+            VLOG_WARN("netdev: Invalid number of threads requested: %u",
+                      offload_thread_nb);
+            offload_thread_nb = DEFAULT_OFFLOAD_THREAD_NB;
+        }
+
+        VLOG_INFO("netdev: Flow API Enabled, using %u thread%s",
+                  offload_thread_nb,
+                  offload_thread_nb > 1 ? "s" : "");

 #ifdef __linux__
         tc_set_policy(smap_get_def(ovs_other_config, "tc-policy",

diff --git a/lib/netdev-offload.h b/lib/netdev-offload.h
index b0a0ead0f..b281d69c9 100644
--- a/lib/netdev-offload.h
+++ b/lib/netdev-offload.h
@@ -21,6 +21,7 @@
 #include "openvswitch/netdev.h"
 #include "openvswitch/types.h"
 #include "ovs-rcu.h"
+#include "ovs-thread.h"
 #include "packets.h"
 #include "flow.h"

@@ -81,6 +82,24 @@ struct offload_info {
     odp_port_t orig_in_port; /* Originating in_port for tnl flows.
  */
 };

+DECLARE_EXTERN_PER_THREAD_DATA(unsigned int, netdev_offload_thread_id);
+
+unsigned int netdev_offload_thread_nb(void);
+unsigned int netdev_offload_thread_init(void);
+unsigned int netdev_offload_ufid_to_thread_id(const ovs_u128 ufid);
+
+static inline unsigned int
+netdev_offload_thread_id(void)
+{
+    unsigned int id = *netdev_offload_thread_id_get();
+
+    if (OVS_UNLIKELY(id == OVSTHREAD_ID_UNSET)) {
+        id = netdev_offload_thread_init();
+    }
+
+    return id;
+}
+
 int netdev_flow_flush(struct netdev *);
 int netdev_flow_dump_create(struct netdev *, struct netdev_flow_dump **dump,
                             bool terse);

diff --git a/vswitchd/vswitch.xml b/vswitchd/vswitch.xml
index 026b5e2ca..1e7444920 100644
--- a/vswitchd/vswitch.xml
+++ b/vswitchd/vswitch.xml
@@ -247,6 +247,22 @@
+
+      <column name="other_config" key="n-offload-threads"
+              type='{"type": "integer", "minInteger": 1, "maxInteger": 10}'>
+        <p>
+          Set this value to the number of threads created to manage hardware
+          offloads.
+        </p>
+        <p>
+          The default value is <code>1</code>. Changing this value requires
+          restarting the daemon.
+        </p>
+        <p>
+          This is only relevant if
+          <ref column="other_config" key="hw-offload"/> is enabled.
+        </p>
+      </column>