From patchwork Wed Feb 10 15:34:06 2021
X-Patchwork-Submitter: Gaetan Rivet
X-Patchwork-Id: 1439077
From: Gaetan Rivet
To: dev@openvswitch.org
Cc: Eli Britstein
Date: Wed, 10 Feb 2021 16:34:06 +0100
Message-Id: <1bb4361ecaf090ddab368955eca248c291cb202e.1612968146.git.grive@u256.net>
Subject: [ovs-dev] [PATCH v1 20/23] dpif-netdev: Make megaflow and mark mappings thread objects

In later commits hardware offloads are managed in several threads. Each
offload is managed by a thread determined by its flow's 'mega_ufid'.
As the megaflow-to-mark and mark-to-flow mappings are 1:1 and 1:N
respectively, a single mark exists for a single 'mega_ufid', and multiple
flows use the same 'mega_ufid'. Because the managing thread is chosen from
the 'mega_ufid', each mapping does not need to be shared with the other
offload threads. The mappings are kept as cmaps, as upcalls will sometimes
query them before enqueuing orders to the offload threads.

To prepare for this change, move the mappings within the offload thread
structure.

Signed-off-by: Gaetan Rivet
Reviewed-by: Eli Britstein
---
 lib/dpif-netdev.c | 41 +++++++++++++++++++----------------------
 1 file changed, 19 insertions(+), 22 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 09d62a3d5..913edff27 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -441,12 +441,16 @@ struct dp_offload_thread_item {
 struct dp_offload_thread {
     struct mpsc_queue queue;
     atomic_uint64_t enqueued_item;
+    struct cmap megaflow_to_mark;
+    struct cmap mark_to_flow;
     struct mov_avg_cma cma;
     struct mov_avg_ema ema;
 };
 
 static struct dp_offload_thread dp_offload_thread = {
     .queue = MPSC_QUEUE_INITIALIZER(&dp_offload_thread.queue),
+    .megaflow_to_mark = CMAP_INITIALIZER,
+    .mark_to_flow = CMAP_INITIALIZER,
     .enqueued_item = ATOMIC_VAR_INIT(0),
     .cma = MOV_AVG_CMA_INITIALIZER,
     .ema = MOV_AVG_EMA_INITIALIZER(100),
@@ -2413,16 +2417,7 @@ struct megaflow_to_mark_data {
     uint32_t mark;
 };
 
-struct flow_mark {
-    struct cmap megaflow_to_mark;
-    struct cmap mark_to_flow;
-    struct seq_pool *pool;
-};
-
-static struct flow_mark flow_mark = {
-    .megaflow_to_mark = CMAP_INITIALIZER,
-    .mark_to_flow = CMAP_INITIALIZER,
-};
+static struct seq_pool *flow_mark_pool;
 
 static uint32_t
 flow_mark_alloc(void)
@@ -2433,12 +2428,12 @@ flow_mark_alloc(void)
 
     if (ovsthread_once_start(&pool_init)) {
         /* Haven't initiated yet, do it here */
-        flow_mark.pool = seq_pool_create(netdev_offload_thread_nb(),
+        flow_mark_pool = seq_pool_create(netdev_offload_thread_nb(),
                                          1, MAX_FLOW_MARK);
         ovsthread_once_done(&pool_init);
     }
 
-    if (seq_pool_new_id(flow_mark.pool, tid, &mark)) {
+    if (seq_pool_new_id(flow_mark_pool, tid, &mark)) {
         return mark;
     }
 
@@ -2450,7 +2445,7 @@ flow_mark_free(uint32_t mark)
 {
     unsigned int tid = netdev_offload_thread_id();
 
-    seq_pool_free_id(flow_mark.pool, tid, mark);
+    seq_pool_free_id(flow_mark_pool, tid, mark);
 }
 
 /* associate megaflow with a mark, which is a 1:1 mapping */
@@ -2463,7 +2458,7 @@ megaflow_to_mark_associate(const ovs_u128 *mega_ufid, uint32_t mark)
     data->mega_ufid = *mega_ufid;
     data->mark = mark;
 
-    cmap_insert(&flow_mark.megaflow_to_mark,
+    cmap_insert(&dp_offload_thread.megaflow_to_mark,
                 CONST_CAST(struct cmap_node *, &data->node), hash);
 }
 
@@ -2474,9 +2469,10 @@ megaflow_to_mark_disassociate(const ovs_u128 *mega_ufid)
     size_t hash = dp_netdev_flow_hash(mega_ufid);
     struct megaflow_to_mark_data *data;
 
-    CMAP_FOR_EACH_WITH_HASH (data, node, hash, &flow_mark.megaflow_to_mark) {
+    CMAP_FOR_EACH_WITH_HASH (data, node, hash,
+                             &dp_offload_thread.megaflow_to_mark) {
         if (ovs_u128_equals(*mega_ufid, data->mega_ufid)) {
-            cmap_remove(&flow_mark.megaflow_to_mark,
+            cmap_remove(&dp_offload_thread.megaflow_to_mark,
                         CONST_CAST(struct cmap_node *, &data->node), hash);
             ovsrcu_postpone(free, data);
             return;
@@ -2493,7 +2489,8 @@ megaflow_to_mark_find(const ovs_u128 *mega_ufid)
     size_t hash = dp_netdev_flow_hash(mega_ufid);
     struct megaflow_to_mark_data *data;
 
-    CMAP_FOR_EACH_WITH_HASH (data, node, hash, &flow_mark.megaflow_to_mark) {
+    CMAP_FOR_EACH_WITH_HASH (data, node, hash,
+                             &dp_offload_thread.megaflow_to_mark) {
         if (ovs_u128_equals(*mega_ufid, data->mega_ufid)) {
             return data->mark;
         }
@@ -2510,7 +2507,7 @@ mark_to_flow_associate(const uint32_t mark, struct dp_netdev_flow *flow)
 {
     dp_netdev_flow_ref(flow);
 
-    cmap_insert(&flow_mark.mark_to_flow,
+    cmap_insert(&dp_offload_thread.mark_to_flow,
                 CONST_CAST(struct cmap_node *, &flow->mark_node),
                 hash_int(mark, 0));
     flow->mark = mark;
@@ -2525,7 +2522,7 @@ flow_mark_has_no_ref(uint32_t mark)
     struct dp_netdev_flow *flow;
 
     CMAP_FOR_EACH_WITH_HASH (flow, mark_node, hash_int(mark, 0),
-                             &flow_mark.mark_to_flow) {
+                             &dp_offload_thread.mark_to_flow) {
         if (flow->mark == mark) {
             return false;
         }
@@ -2550,7 +2547,7 @@ mark_to_flow_disassociate(struct dp_netdev_pmd_thread *pmd,
         return EINVAL;
     }
 
-    cmap_remove(&flow_mark.mark_to_flow, mark_node, hash_int(mark, 0));
+    cmap_remove(&dp_offload_thread.mark_to_flow, mark_node, hash_int(mark, 0));
     flow->mark = INVALID_FLOW_MARK;
 
     /*
@@ -2587,7 +2584,7 @@ flow_mark_flush(struct dp_netdev_pmd_thread *pmd)
 {
     struct dp_netdev_flow *flow;
 
-    CMAP_FOR_EACH (flow, mark_node, &flow_mark.mark_to_flow) {
+    CMAP_FOR_EACH (flow, mark_node, &dp_offload_thread.mark_to_flow) {
         if (flow->pmd_id == pmd->core_id) {
             queue_netdev_flow_del(pmd, flow);
         }
@@ -2601,7 +2598,7 @@ mark_to_flow_find(const struct dp_netdev_pmd_thread *pmd,
     struct dp_netdev_flow *flow;
 
     CMAP_FOR_EACH_WITH_HASH (flow, mark_node, hash_int(mark, 0),
-                             &flow_mark.mark_to_flow) {
+                             &dp_offload_thread.mark_to_flow) {
         if (flow->mark == mark && flow->pmd_id == pmd->core_id &&
             flow->dead == false) {
             return flow;