From patchwork Mon Nov 27 07:43:00 2017
X-Patchwork-Submitter: Yuanhan Liu
X-Patchwork-Id: 841502
X-Patchwork-Delegate: ian.stokes@intel.com
From: Yuanhan Liu
To: dev@openvswitch.org
Date: Mon, 27 Nov 2017 15:43:00 +0800
Message-Id: <1511768584-19167-2-git-send-email-yliu@fridaylinux.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1511768584-19167-1-git-send-email-yliu@fridaylinux.org>
References: <1511768584-19167-1-git-send-email-yliu@fridaylinux.org>
Cc: Simon Horman
Subject: [ovs-dev] [PATCH v4 1/5] dpif-netdev: associate flow with a mark id

Most modern NICs are able to bind a flow to a mark, so that every packet
matching such a flow carries that mark in its packet descriptor.  The basic
idea is that when we later receive packets, we can get the flow directly
from the mark.  That avoids some very costly CPU operations, including
(but not limited to) miniflow_extract, EMC lookup, dpcls lookup, etc.
Performance can thus be greatly improved.

The major work of this patch is therefore to associate a flow with a mark
id (a uint32_t number).  In the netdev datapath the association is done
with CMAPs, while in hardware it is done by the rte_flow MARK action.

One tricky thing in OVS-DPDK is that the flow tables are per-PMD.  When
there is only one physical port but it has 2 queues, there can be 2 PMDs.
In other words, even a single megaflow (e.g. udp,tp_src=1000) can have 2
different dp_netdev flows, one for each PMD.  That can result in the same
megaflow being offloaded twice to the hardware; worse, we may get 2
different marks and only the last one will work.

To avoid that, a megaflow_to_mark CMAP is created.  An entry is added for
the first PMD that wants to offload a flow.  Later PMDs will see that the
megaflow is already offloaded and will not offload it to HW a second time.

Meanwhile, the mark-to-flow mapping becomes a 1:N mapping; that is what
the mark_to_flow CMAP is for.  The first PMD that wants to offload a flow
allocates a new mark and does the offload by reusing the ->flow_put
method.  When that succeeds, a "mark to flow" entry is added.  Later PMDs
get the corresponding mark from the megaflow_to_mark CMAP above and then
add another "mark to flow" entry.

Another thing worth mentioning is that the megaflow is created by masking
all the bytes from match->flow with match->wc.  It works well so far, but
I have a feeling that this is not the best way.

Co-authored-by: Finn Christensen
Signed-off-by: Yuanhan Liu
Signed-off-by: Finn Christensen
---
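As a hypothetical illustration (not part of this patch): a later patch in
this series is expected to consult mark_to_flow on the receive path, so
that a packet carrying a valid mark can skip miniflow_extract and the
EMC/dpcls lookups entirely.  A minimal sketch of such a lookup, assuming
the dp_netdev_flow fields and the flow_mark struct introduced below; the
helper name is illustrative only:

/* Sketch only: map a mark taken from the packet descriptor back to the
 * dp_netdev flow owned by this PMD.  The mapping is 1:N, so the owning
 * PMD id must be checked as well. */
static struct dp_netdev_flow *
mark_to_flow_find(const struct dp_netdev_pmd_thread *pmd,
                  const uint32_t mark)
{
    struct dp_netdev_flow *flow;

    CMAP_FOR_EACH_WITH_HASH (flow, mark_node, mark,
                             &flow_mark.mark_to_flow) {
        if (flow->mark == mark && flow->pmd_id == pmd->core_id
            && !flow->dead) {
            return flow;
        }
    }

    return NULL;
}
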
 lib/dpif-netdev.c | 272 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/netdev.h      |   6 ++
 2 files changed, 278 insertions(+)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 0a62630..8579474 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -77,6 +77,7 @@
 #include "tnl-ports.h"
 #include "unixctl.h"
 #include "util.h"
+#include "uuid.h"
 
 VLOG_DEFINE_THIS_MODULE(dpif_netdev);
 
@@ -442,7 +443,9 @@ struct dp_netdev_flow {
     /* Hash table index by unmasked flow. */
     const struct cmap_node node; /* In owning dp_netdev_pmd_thread's */
                                  /* 'flow_table'. */
+    const struct cmap_node mark_node; /* In owning flow_mark's mark_to_flow */
     const ovs_u128 ufid;         /* Unique flow identifier. */
+    const ovs_u128 mega_ufid;
     const unsigned pmd_id;       /* The 'core_id' of pmd thread owning this */
                                  /* flow. */
 
@@ -453,6 +456,7 @@ struct dp_netdev_flow {
     struct ovs_refcount ref_cnt;
 
     bool dead;
+    uint32_t mark;               /* Unique flow mark assigned to a flow */
 
     /* Statistics. */
     struct dp_netdev_flow_stats stats;
@@ -1854,6 +1858,175 @@ dp_netdev_pmd_find_dpcls(struct dp_netdev_pmd_thread *pmd,
     return cls;
 }
 
+#define MAX_FLOW_MARK (UINT32_MAX - 1)
+#define INVALID_FLOW_MARK (UINT32_MAX)
+
+struct megaflow_to_mark_data {
+    const struct cmap_node node;
+    ovs_u128 mega_ufid;
+    uint32_t mark;
+};
+
+struct flow_mark {
+    struct cmap megaflow_to_mark;
+    struct cmap mark_to_flow;
+    struct id_pool *pool;
+    struct ovs_mutex mutex;
+};
+
+struct flow_mark flow_mark = {
+    .megaflow_to_mark = CMAP_INITIALIZER,
+    .mark_to_flow = CMAP_INITIALIZER,
+    .mutex = OVS_MUTEX_INITIALIZER,
+};
+
+static uint32_t
+flow_mark_alloc(void)
+{
+    uint32_t mark;
+
+    if (!flow_mark.pool) {
+        /* Haven't initialized yet, do it here. */
+        flow_mark.pool = id_pool_create(0, MAX_FLOW_MARK);
+    }
+
+    if (id_pool_alloc_id(flow_mark.pool, &mark)) {
+        return mark;
+    }
+
+    return INVALID_FLOW_MARK;
+}
+
+static void
+flow_mark_free(uint32_t mark)
+{
+    id_pool_free_id(flow_mark.pool, mark);
+}
+
+/* associate flow with a mark, which is a 1:1 mapping */
+static void
+megaflow_to_mark_associate(const ovs_u128 *mega_ufid, uint32_t mark)
+{
+    size_t hash = dp_netdev_flow_hash(mega_ufid);
+    struct megaflow_to_mark_data *data = xzalloc(sizeof(*data));
+
+    data->mega_ufid = *mega_ufid;
+    data->mark = mark;
+
+    cmap_insert(&flow_mark.megaflow_to_mark,
+                CONST_CAST(struct cmap_node *, &data->node), hash);
+}
+
+/* disassociate flow from a mark */
+static void
+megaflow_to_mark_disassociate(const ovs_u128 *mega_ufid)
+{
+    size_t hash = dp_netdev_flow_hash(mega_ufid);
+    struct megaflow_to_mark_data *data;
+
+    CMAP_FOR_EACH_WITH_HASH (data, node, hash, &flow_mark.megaflow_to_mark) {
+        if (ovs_u128_equals(*mega_ufid, data->mega_ufid)) {
+            cmap_remove(&flow_mark.megaflow_to_mark,
+                        CONST_CAST(struct cmap_node *, &data->node), hash);
+            free(data);
+            return;
+        }
+    }
+
+    VLOG_WARN("masked ufid "UUID_FMT" is not associated with a mark?\n",
+              UUID_ARGS((struct uuid *)mega_ufid));
+}
+
+static inline uint32_t
+megaflow_to_mark_find(const ovs_u128 *mega_ufid)
+{
+    size_t hash = dp_netdev_flow_hash(mega_ufid);
+    struct megaflow_to_mark_data *data;
+
+    CMAP_FOR_EACH_WITH_HASH (data, node, hash, &flow_mark.megaflow_to_mark) {
+        if (ovs_u128_equals(*mega_ufid, data->mega_ufid)) {
+            return data->mark;
+        }
+    }
+
+    return INVALID_FLOW_MARK;
+}
+
+/* associate mark with a flow, which is a 1:N mapping */
+static void
+mark_to_flow_associate(const uint32_t mark, struct dp_netdev_flow *flow)
+{
+    dp_netdev_flow_ref(flow);
+
+    cmap_insert(&flow_mark.mark_to_flow,
+                CONST_CAST(struct cmap_node *, &flow->mark_node),
+                mark);
+    flow->mark = mark;
+
+    VLOG_INFO("associated dp_netdev flow %p with mark %u\n", flow, mark);
+}
+
+static bool
+is_last_flow_mark_reference(uint32_t mark)
+{
+    struct dp_netdev_flow *flow;
+
+    CMAP_FOR_EACH_WITH_HASH (flow, mark_node, mark,
+                             &flow_mark.mark_to_flow) {
+        return false;
+    }
+
+    return true;
+}
+
+static void
+mark_to_flow_disassociate(struct dp_netdev_pmd_thread *pmd,
+                          struct dp_netdev_flow *flow)
+{
+    uint32_t mark = flow->mark;
+    struct cmap_node *mark_node = CONST_CAST(struct cmap_node *,
+                                             &flow->mark_node);
+    VLOG_INFO(" ");
+    VLOG_INFO(":: about to REMOVE offload:\n");
+    VLOG_INFO("    ufid: "UUID_FMT"\n",
+              UUID_ARGS((struct uuid *)&flow->ufid));
VLOG_INFO(" mask: "UUID_FMT"\n", + UUID_ARGS((struct uuid *)&flow->mega_ufid)); + + cmap_remove(&flow_mark.mark_to_flow, mark_node, mark); + flow->mark = INVALID_FLOW_MARK; + + if (is_last_flow_mark_reference(mark)) { + struct dp_netdev_port *port; + odp_port_t in_port = flow->flow.in_port.odp_port; + + port = dp_netdev_lookup_port(pmd->dp, in_port); + if (port) { + netdev_flow_del(port->netdev, &flow->mega_ufid, NULL); + } + + ovs_mutex_lock(&flow_mark.mutex); + flow_mark_free(mark); + ovs_mutex_unlock(&flow_mark.mutex); + VLOG_INFO("freed flow mark %u\n", mark); + + megaflow_to_mark_disassociate(&flow->mega_ufid); + } + dp_netdev_flow_unref(flow); +} + +static void +flow_mark_flush(struct dp_netdev_pmd_thread *pmd) +{ + struct dp_netdev_flow *flow; + + CMAP_FOR_EACH (flow, mark_node, &flow_mark.mark_to_flow) { + if (flow->pmd_id == pmd->core_id) { + mark_to_flow_disassociate(pmd, flow); + } + } +} + static void dp_netdev_pmd_remove_flow(struct dp_netdev_pmd_thread *pmd, struct dp_netdev_flow *flow) @@ -1867,6 +2040,9 @@ dp_netdev_pmd_remove_flow(struct dp_netdev_pmd_thread *pmd, ovs_assert(cls != NULL); dpcls_remove(cls, &flow->cr); cmap_remove(&pmd->flow_table, node, dp_netdev_flow_hash(&flow->ufid)); + if (flow->mark != INVALID_FLOW_MARK) { + mark_to_flow_disassociate(pmd, flow); + } flow->dead = true; dp_netdev_flow_unref(flow); @@ -2446,6 +2622,91 @@ out: return error; } +static void +try_netdev_flow_put(struct dp_netdev_pmd_thread *pmd, odp_port_t in_port, + struct dp_netdev_flow *flow, struct match *match, + const ovs_u128 *ufid, const struct nlattr *actions, + size_t actions_len) +{ + struct offload_info info; + struct dp_netdev_port *port; + bool modification = flow->mark != INVALID_FLOW_MARK; + const char *op = modification ? "modify" : "add"; + uint32_t mark; + int ret; + + port = dp_netdev_lookup_port(pmd->dp, in_port); + if (!port) { + return; + } + + ovs_mutex_lock(&flow_mark.mutex); + + VLOG_INFO(" "); + VLOG_INFO(":: about to offload:\n"); + VLOG_INFO(" ufid: "UUID_FMT"\n", + UUID_ARGS((struct uuid *)ufid)); + VLOG_INFO(" mask: "UUID_FMT"\n", + UUID_ARGS((struct uuid *)&flow->mega_ufid)); + + if (modification) { + mark = flow->mark; + } else { + if (!netdev_is_flow_api_enabled()) { + goto out; + } + + /* + * If a mega flow has already been offloaded (from other PMD + * instances), do not offload it again. 
+        mark = megaflow_to_mark_find(&flow->mega_ufid);
+        if (mark != INVALID_FLOW_MARK) {
+            VLOG_INFO("## got a previously installed mark %u\n", mark);
+            mark_to_flow_associate(mark, flow);
+            goto out;
+        }
+
+        mark = flow_mark_alloc();
+        if (mark == INVALID_FLOW_MARK) {
+            VLOG_ERR("failed to allocate flow mark!\n");
+            goto out;
+        }
+    }
+
+    info.flow_mark = mark;
+    ret = netdev_flow_put(port->netdev, match,
+                          CONST_CAST(struct nlattr *, actions),
+                          actions_len, &flow->mega_ufid, &info, NULL);
+    if (ret) {
+        VLOG_ERR("failed to %s netdev flow with mark %u\n", op, mark);
+        flow_mark_free(mark);
+        goto out;
+    }
+
+    if (!modification) {
+        megaflow_to_mark_associate(&flow->mega_ufid, mark);
+        mark_to_flow_associate(mark, flow);
+    }
+    VLOG_INFO("succeeded to %s netdev flow with mark %u\n", op, mark);
+
+out:
+    ovs_mutex_unlock(&flow_mark.mutex);
+}
+
+static void
+dp_netdev_get_mega_ufid(const struct match *match, ovs_u128 *mega_ufid)
+{
+    struct flow masked_flow;
+    size_t i;
+
+    for (i = 0; i < sizeof(struct flow); i++) {
+        ((uint8_t *)&masked_flow)[i] = ((uint8_t *)&match->flow)[i] &
+                                       ((uint8_t *)&match->wc)[i];
+    }
+    dpif_flow_hash(NULL, &masked_flow, sizeof(struct flow), mega_ufid);
+}
+
 static struct dp_netdev_flow *
 dp_netdev_flow_add(struct dp_netdev_pmd_thread *pmd,
                    struct match *match, const ovs_u128 *ufid,
@@ -2481,12 +2742,15 @@ dp_netdev_flow_add(struct dp_netdev_pmd_thread *pmd,
     memset(&flow->stats, 0, sizeof flow->stats);
     flow->dead = false;
     flow->batch = NULL;
+    flow->mark = INVALID_FLOW_MARK;
     *CONST_CAST(unsigned *, &flow->pmd_id) = pmd->core_id;
     *CONST_CAST(struct flow *, &flow->flow) = match->flow;
     *CONST_CAST(ovs_u128 *, &flow->ufid) = *ufid;
     ovs_refcount_init(&flow->ref_cnt);
     ovsrcu_set(&flow->actions, dp_netdev_actions_create(actions, actions_len));
+    dp_netdev_get_mega_ufid(match, CONST_CAST(ovs_u128 *, &flow->mega_ufid));
+
     netdev_flow_key_init_masked(&flow->cr.flow, &match->flow, &mask);
 
     /* Select dpcls for in_port. Relies on in_port to be exact match. */
@@ -2496,6 +2760,9 @@ dp_netdev_flow_add(struct dp_netdev_pmd_thread *pmd,
     cmap_insert(&pmd->flow_table, CONST_CAST(struct cmap_node *, &flow->node),
                 dp_netdev_flow_hash(&flow->ufid));
 
+    try_netdev_flow_put(pmd, in_port, flow, match, ufid,
+                        actions, actions_len);
+
     if (OVS_UNLIKELY(!VLOG_DROP_DBG((&upcall_rl)))) {
         struct ds ds = DS_EMPTY_INITIALIZER;
         struct ofpbuf key_buf, mask_buf;
@@ -2576,6 +2843,7 @@ flow_put_on_pmd(struct dp_netdev_pmd_thread *pmd,
         if (put->flags & DPIF_FP_MODIFY) {
             struct dp_netdev_actions *new_actions;
             struct dp_netdev_actions *old_actions;
+            odp_port_t in_port = netdev_flow->flow.in_port.odp_port;
 
             new_actions = dp_netdev_actions_create(put->actions,
                                                    put->actions_len);
@@ -2583,6 +2851,9 @@ flow_put_on_pmd(struct dp_netdev_pmd_thread *pmd,
             old_actions = dp_netdev_flow_get_actions(netdev_flow);
             ovsrcu_set(&netdev_flow->actions, new_actions);
 
+            try_netdev_flow_put(pmd, in_port, netdev_flow, match, ufid,
+                                put->actions, put->actions_len);
+
             if (stats) {
                 get_dpif_flow_stats(netdev_flow, stats);
             }
@@ -3576,6 +3847,7 @@ reload_affected_pmds(struct dp_netdev *dp)
 
     CMAP_FOR_EACH (pmd, node, &dp->poll_threads) {
         if (pmd->need_reload) {
+            flow_mark_flush(pmd);
             dp_netdev_reload_pmd__(pmd);
             pmd->need_reload = false;
         }
diff --git a/lib/netdev.h b/lib/netdev.h
index 3a545fe..0c1946a 100644
--- a/lib/netdev.h
+++ b/lib/netdev.h
@@ -188,6 +188,12 @@ void netdev_send_wait(struct netdev *, int qid);
 struct offload_info {
     const struct dpif_class *dpif_class;
     ovs_be16 tp_dst_port; /* Destination port for tunnel in SET action */
+
+    /*
+     * The flow mark id assigned to the flow.  If any packets hit the flow,
+     * it will be in the packet metadata.
+     */
+    uint32_t flow_mark;
 };
 struct dpif_class;
 struct netdev_flow_dump;
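
For reference (not part of this patch): the new offload_info.flow_mark
field is what a netdev ->flow_put implementation (netdev-dpdk, in a later
patch of this series) is expected to translate into an rte_flow MARK
action, so that matching packets carry the mark in their descriptor.  A
rough sketch of that translation, assuming the DPDK rte_flow API and
eliding the match pattern and the RSS configuration (rss_conf below is a
placeholder):

/* Sketch only: build the action list for rte_flow_create() so that
 * packets hitting this flow are tagged with info->flow_mark.  The RSS
 * action configuration (rss_conf) is elided. */
struct rte_flow_action_mark mark_conf = { .id = info->flow_mark };
struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_conf },
    { .type = RTE_FLOW_ACTION_TYPE_RSS,  .conf = &rss_conf },
    { .type = RTE_FLOW_ACTION_TYPE_END,  .conf = NULL },
};

On receive, DPDK delivers the mark in the mbuf (hash.fdir.hi, with the
PKT_RX_FDIR_ID flag set), which is what the dp_netdev receive path could
read back to find the flow via mark_to_flow.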