From patchwork Wed Jun 9 13:09:27 2021
X-Patchwork-Submitter: Gaetan Rivet
X-Patchwork-Id: 1489875
From: Gaetan Rivet
To: ovs-dev@openvswitch.org
Cc: Maxime Coquelin
Date: Wed, 9 Jun 2021 15:09:27 +0200
Subject: [ovs-dev] [PATCH v4 19/27] dpif-netdev: Execute flush from offload thread

When a port is deleted, its offloads must be flushed. The operation
runs in the thread that initiated it. Offload data is thus accessed
jointly by the port deletion thread(s) and the offload thread, which
complicates the data access model. To simplify this model, as a
pre-step toward introducing parallel offloads, execute the flush
operation in the offload thread.
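To make the ordering argument concrete, here is a minimal, self-contained
sketch, not OVS code: the names (work_item, enqueue, run_offload_thread) are
invented for illustration, and the queue is modeled without locking. It shows
the property this change relies on: a flush request travels through the same
FIFO work queue as flow offload requests, so a single consumer only reaches it
after every previously queued operation has been handled.

/* Single work queue consumed by one offload thread: a FLUSH item
 * queued after FLOW items is handled only once they are done. */
#include <stdio.h>
#include <stdlib.h>

enum item_type { ITEM_FLOW, ITEM_FLUSH };

struct work_item {
    enum item_type type;
    int port_id;                /* Illustrative payload. */
    struct work_item *next;
};

static struct work_item *head;
static struct work_item *tail;

static void
enqueue(enum item_type type, int port_id)
{
    struct work_item *it = malloc(sizeof *it);

    it->type = type;
    it->port_id = port_id;
    it->next = NULL;
    if (tail) {
        tail->next = it;
    } else {
        head = it;
    }
    tail = it;
}

/* Single consumer, like the offload thread main loop: FIFO order
 * guarantees the flush runs after the port's pending flow requests. */
static void
run_offload_thread(void)
{
    while (head) {
        struct work_item *it = head;

        head = it->next;
        if (!head) {
            tail = NULL;
        }
        printf("%s port %d\n",
               it->type == ITEM_FLOW ? "offload flow on" : "flush offloads of",
               it->port_id);
        free(it);
    }
}

int
main(void)
{
    enqueue(ITEM_FLOW, 1);
    enqueue(ITEM_FLOW, 1);
    enqueue(ITEM_FLUSH, 1);    /* Queued last, handled last. */
    run_offload_thread();
    return 0;
}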
Signed-off-by: Gaetan Rivet
Reviewed-by: Maxime Coquelin
---
 lib/dpif-netdev.c | 126 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 122 insertions(+), 4 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 1d7e55d47..82e55e60b 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -422,6 +422,7 @@ enum rxq_cycles_counter_type {
 
 enum dp_offload_type {
     DP_OFFLOAD_FLOW,
+    DP_OFFLOAD_FLUSH,
 };
 
 enum {
@@ -439,8 +440,15 @@ struct dp_offload_flow_item {
     size_t actions_len;
 };
 
+struct dp_offload_flush_item {
+    struct dp_netdev *dp;
+    struct netdev *netdev;
+    struct ovs_barrier *barrier;
+};
+
 union dp_offload_thread_data {
     struct dp_offload_flow_item flow;
+    struct dp_offload_flush_item flush;
 };
 
 struct dp_offload_thread_item {
@@ -905,6 +913,9 @@ static void dp_netdev_del_bond_tx_from_pmd(struct dp_netdev_pmd_thread *pmd,
                                            uint32_t bond_id)
     OVS_EXCLUDED(pmd->bond_mutex);
 
+static void dp_netdev_offload_flush(struct dp_netdev *dp,
+                                    struct dp_netdev_port *port);
+
 static void reconfigure_datapath(struct dp_netdev *dp)
     OVS_REQUIRES(dp->port_mutex);
 static bool dp_netdev_pmd_try_ref(struct dp_netdev_pmd_thread *pmd);
@@ -2305,7 +2316,7 @@ static void
 do_del_port(struct dp_netdev *dp, struct dp_netdev_port *port)
     OVS_REQUIRES(dp->port_mutex)
 {
-    netdev_flow_flush(port->netdev);
+    dp_netdev_offload_flush(dp, port);
     netdev_uninit_flow_api(port->netdev);
     hmap_remove(&dp->ports, &port->node);
     seq_change(dp->port_seq);
@@ -2675,13 +2686,16 @@ dp_netdev_free_offload(struct dp_offload_thread_item *offload)
     case DP_OFFLOAD_FLOW:
         dp_netdev_free_flow_offload(offload);
         break;
+    case DP_OFFLOAD_FLUSH:
+        free(offload);
+        break;
     default:
         OVS_NOT_REACHED();
     };
 }
 
 static void
-dp_netdev_append_flow_offload(struct dp_offload_thread_item *offload)
+dp_netdev_append_offload(struct dp_offload_thread_item *offload)
 {
     ovs_mutex_lock(&dp_offload_thread.mutex);
     ovs_list_push_back(&dp_offload_thread.list, &offload->node);
@@ -2814,6 +2828,23 @@ dp_offload_flow(struct dp_offload_thread_item *item)
               UUID_ARGS((struct uuid *) &flow_offload->flow->mega_ufid));
 }
 
+static void
+dp_offload_flush(struct dp_offload_thread_item *item)
+{
+    struct dp_offload_flush_item *flush = &item->data->flush;
+
+    ovs_mutex_lock(&flush->dp->port_mutex);
+    netdev_flow_flush(flush->netdev);
+    ovs_mutex_unlock(&flush->dp->port_mutex);
+
+    ovs_barrier_block(flush->barrier);
+
+    /* Allow the other thread to take again the port lock, before
+     * continuing offload operations in this thread.
+     */
+    ovs_barrier_block(flush->barrier);
+}
+
 #define DP_NETDEV_OFFLOAD_QUIESCE_INTERVAL_US (10 * 1000) /* 10 ms */
 
 static void *
@@ -2844,6 +2875,9 @@ dp_netdev_flow_offload_main(void *data OVS_UNUSED)
         case DP_OFFLOAD_FLOW:
             dp_offload_flow(offload);
             break;
+        case DP_OFFLOAD_FLUSH:
+            dp_offload_flush(offload);
+            break;
         default:
             OVS_NOT_REACHED();
         }
@@ -2881,7 +2915,7 @@ queue_netdev_flow_del(struct dp_netdev_pmd_thread *pmd,
     offload = dp_netdev_alloc_flow_offload(pmd, flow,
                                            DP_NETDEV_FLOW_OFFLOAD_OP_DEL);
     offload->timestamp = pmd->ctx.now;
-    dp_netdev_append_flow_offload(offload);
+    dp_netdev_append_offload(offload);
 }
 
 static void
@@ -2916,7 +2950,7 @@ queue_netdev_flow_put(struct dp_netdev_pmd_thread *pmd,
     flow_offload->actions_len = actions_len;
 
     item->timestamp = pmd->ctx.now;
-    dp_netdev_append_flow_offload(item);
+    dp_netdev_append_offload(item);
 }
 
 static void
@@ -2940,6 +2974,90 @@ dp_netdev_pmd_remove_flow(struct dp_netdev_pmd_thread *pmd,
     dp_netdev_flow_unref(flow);
 }
 
+static void
+dp_netdev_offload_flush_enqueue(struct dp_netdev *dp,
+                                struct netdev *netdev,
+                                struct ovs_barrier *barrier)
+{
+    struct dp_offload_thread_item *item;
+    struct dp_offload_flush_item *flush;
+
+    if (ovsthread_once_start(&offload_thread_once)) {
+        xpthread_cond_init(&dp_offload_thread.cond, NULL);
+        ovs_thread_create("hw_offload", dp_netdev_flow_offload_main, NULL);
+        ovsthread_once_done(&offload_thread_once);
+    }
+
+    item = xmalloc(sizeof *item + sizeof *flush);
+    item->type = DP_OFFLOAD_FLUSH;
+    item->timestamp = time_usec();
+
+    flush = &item->data->flush;
+    flush->dp = dp;
+    flush->netdev = netdev;
+    flush->barrier = barrier;
+
+    dp_netdev_append_offload(item);
+}
+
+/* Blocking call that will wait on the offload thread to
+ * complete its work.  As the flush order will only be
+ * enqueued after existing offload requests, those previous
+ * offload requests must be processed, which requires being
+ * able to lock the 'port_mutex' from the offload thread.
+ *
+ * Flow offload flush is done when a port is being deleted.
+ * Right after this call executes, the offload API is disabled
+ * for the port.  This call must be made blocking until the
+ * offload provider completed its job.
+ */
+static void
+dp_netdev_offload_flush(struct dp_netdev *dp,
+                        struct dp_netdev_port *port)
+    OVS_REQUIRES(dp->port_mutex)
+{
+    /* The flush mutex only serves to protect the static memory barrier.
+     * The memory barrier needs to go beyond the function scope as
+     * the other thread can resume from blocking after this function
+     * already finished.
+     * As the barrier is made static, then it will be shared by
+     * calls to this function, and it needs to be protected from
+     * concurrent use.
+     */
+    static struct ovs_mutex flush_mutex = OVS_MUTEX_INITIALIZER;
+    static struct ovs_barrier barrier OVS_GUARDED_BY(flush_mutex);
+    struct netdev *netdev;
+
+    if (!netdev_is_flow_api_enabled()) {
+        return;
+    }
+
+    ovs_mutex_unlock(&dp->port_mutex);
+    ovs_mutex_lock(&flush_mutex);
+
+    /* This thread and the offload thread. */
+    ovs_barrier_init(&barrier, 2);
+
+    netdev = netdev_ref(port->netdev);
+    dp_netdev_offload_flush_enqueue(dp, netdev, &barrier);
+    ovs_barrier_block(&barrier);
+    netdev_close(netdev);
+
+    /* Take back the datapath port lock before allowing the offload
+     * thread to proceed further. The port deletion must complete first,
+     * to ensure no further offloads are inserted after the flush.
+     *
+     * Some offload provider (e.g. DPDK) keeps a netdev reference with
+     * the offload data.  If this reference is not closed, the netdev is
+     * kept indefinitely. */
+    ovs_mutex_lock(&dp->port_mutex);
+
+    ovs_barrier_block(&barrier);
+    ovs_barrier_destroy(&barrier);
+
+    ovs_mutex_unlock(&flush_mutex);
+}
+
 static void
 dp_netdev_pmd_flow_flush(struct dp_netdev_pmd_thread *pmd)
 {
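
For reference, the synchronization between the deleting thread and the offload
thread crosses the same barrier twice: once when the flush has completed, and
once after the deleting thread has re-taken the port lock. Below is a minimal,
standalone sketch of that handshake, assuming POSIX threads; pthread_barrier_t
and pthread_mutex_t stand in for struct ovs_barrier and dp->port_mutex, and
the initial unlock of the port mutex before enqueueing is omitted for brevity.
It is an illustration of the pattern, not OVS code.

/* Standalone illustration of the double-barrier handshake used by
 * dp_netdev_offload_flush() and dp_offload_flush(). */
#include <pthread.h>
#include <stdio.h>

static pthread_barrier_t barrier;               /* Plays struct ovs_barrier. */
static pthread_mutex_t port_mutex = PTHREAD_MUTEX_INITIALIZER;
                                                /* Plays dp->port_mutex. */
static void *
offload_thread(void *arg)
{
    (void) arg;

    /* Flush the port offloads while holding the port lock, as
     * dp_offload_flush() does. */
    pthread_mutex_lock(&port_mutex);
    printf("offload thread: flushing port offloads\n");
    pthread_mutex_unlock(&port_mutex);

    pthread_barrier_wait(&barrier);  /* 1st crossing: flush is done. */
    pthread_barrier_wait(&barrier);  /* 2nd crossing: wait for the deleting
                                      * thread to re-take the port lock. */
    return NULL;
}

int
main(void)
{
    pthread_t tid;

    pthread_barrier_init(&barrier, NULL, 2);  /* Deleter + offload thread. */
    pthread_create(&tid, NULL, offload_thread, NULL);

    /* Deleting thread: wait until the offload thread has flushed. */
    pthread_barrier_wait(&barrier);

    /* Re-take the port lock before releasing the offload thread: the port
     * deletion must complete before further offload requests are handled. */
    pthread_mutex_lock(&port_mutex);
    pthread_barrier_wait(&barrier);

    pthread_join(tid, NULL);
    pthread_mutex_unlock(&port_mutex);
    pthread_barrier_destroy(&barrier);

    return 0;
}

Crossing the barrier a second time is what keeps the offload thread from
processing further requests until the port removal has finished under the
lock.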