From patchwork Mon May 24 06:51:40 2021
X-Patchwork-Submitter: Sriharsha Basavapatna
X-Patchwork-Id: 1482640
From: Sriharsha Basavapatna via dev
Reply-To: Sriharsha Basavapatna
To: dev@openvswitch.org
Cc: Ilya Maximets
Date: Mon, 24 May 2021 02:51:40 -0400
Message-Id: <20210524065140.31891-1-sriharsha.basavapatna@broadcom.com>
X-Mailer: git-send-email 2.30.0.349.g30b29f044a
Subject: [ovs-dev] [PATCH v2] dpif-netdev: Forwarding optimization for direct output flows.

From: Ilya Maximets

There are some cases where users want simple forwarding or drop rules
for all packets received from a particular port, e.g.:

  "in_port=1,actions=2"
  "in_port=1,actions=IN_PORT"
  "in_port=1,actions=drop"

There are also cases where complex OF flows could be simplified down to
simple forwarding/drop datapath flows.  In theory, we don't need to
parse packets at all to handle these flows.  The "direct output
forwarding" optimization is intended to speed up the above cases.

Design:

Due to various implementation restrictions, the userspace datapath
always keeps the following flow fields in exact match (i.e. it is
required to match at least these fields of a packet even if the OF rule
doesn't need that):

  - recirc_id
  - in_port
  - packet_type
  - dl_type
  - vlan_tci
  - nw_frag (for IP packets)

Not all of these fields are related to the packet itself.  We already
know the current 'recirc_id' and the 'in_port' before starting packet
processing.  It also seems safe to assume that we're working with
Ethernet packets.  dpif-netdev sets an exact match on 'vlan_tci' only to
avoid issues with flow format conversion, so we don't really need to
match on it unless the ofproto layer asks us to.

So, for a simple forwarding OF rule we only need to match on 'dl_type'
and 'nw_frag'.  'in_port', 'dl_type' and 'nw_frag' can be combined into
a single 64-bit integer that can be used as a key in a hash map (a short
illustrative sketch follows the constraints list below).

A new per-PMD flow table, 'direct_output_table', is introduced to store
direct output flows only.  'dp_netdev_flow_add' adds the flow to the
usual 'flow_table' and also to 'direct_output_table' if the flow meets
the following constraints:

  - 'recirc_id' in the flow match is 0.
  - 'packet_type' in the flow match is Ethernet.
  - The flow wildcards originally had 'vlan_tci' wildcarded.
  - The flow has no actions (drop) or exactly one action equal to
    OVS_ACTION_ATTR_OUTPUT.
  - The flow wildcards contain only the minimal set of non-wildcarded
    fields (listed above).
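As an illustration of the key construction (not part of the patch
itself; the helper name below is only for exposition, the real
implementation is dp_netdev_direct_output_mark() in the diff that
follows):

    /* Sketch: pack 'in_port', 'dl_type' and 'nw_frag' into one 64-bit key.
     * Bits 63..32: datapath port number, bits 31..16: Ethernet type in
     * host byte order, bits 7..0: IP fragment bits. */
    static uint64_t
    direct_output_key(odp_port_t in_port, ovs_be16 dl_type, uint8_t nw_frag)
    {
        return ((uint64_t) odp_to_u32(in_port) << 32)
               | ((uint32_t) ntohs(dl_type) << 16)
               | nw_frag;
    }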
If the number of flows for the current 'in_port' in the regular
'flow_table' equals the number of flows for the current 'in_port' in
'direct_output_table', we can use the direct output optimization,
because all the flows we have are direct output flows.  This means that
we only need to parse 'dl_type' and 'nw_frag' to perform packet
matching.  Now we make the unique flow mark from the 'in_port',
'dl_type' and 'nw_frag' and look it up in 'direct_output_table'.  On a
successful lookup we don't need to do a full miniflow_extract().

An unsuccessful lookup technically means that we have no suitable flow
in the datapath and an upcall will be required.  We may optimize this
path in the future by bypassing the EMC, SMC and dpcls lookups in this
case.

The performance improvement of this solution on 'direct output' flows
should be comparable to partial HW offloading, because it parses the
same packet fields and uses a similar flow lookup scheme.  However,
unlike partial HW offloading, it works for all port types, including
virtual ones.

Signed-off-by: Ilya Maximets
Signed-off-by: Sriharsha Basavapatna
---
v1->v2:
  Rebased to master branch.
  Added a coverage counter.
---
 lib/dpif-netdev.c | 263 +++++++++++++++++++++++++++++++++++++++++++---
 lib/flow.c        |  12 ++-
 lib/flow.h        |   4 +-
 3 files changed, 259 insertions(+), 20 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 650e67ab3..ec09c67cd 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -35,6 +35,7 @@
 
 #include "bitmap.h"
 #include "cmap.h"
+#include "ccmap.h"
 #include "conntrack.h"
 #include "conntrack-tp.h"
 #include "coverage.h"
@@ -114,6 +115,7 @@ COVERAGE_DEFINE(datapath_drop_invalid_port);
 COVERAGE_DEFINE(datapath_drop_invalid_bond);
 COVERAGE_DEFINE(datapath_drop_invalid_tnl_port);
 COVERAGE_DEFINE(datapath_drop_rx_invalid_packet);
+COVERAGE_DEFINE(datapath_direct_output_packet);
 
 /* Protects against changes to 'dp_netdevs'. */
 static struct ovs_mutex dp_netdev_mutex = OVS_MUTEX_INITIALIZER;
@@ -543,6 +545,8 @@ struct dp_netdev_flow {
     /* Hash table index by unmasked flow. */
     const struct cmap_node node; /* In owning dp_netdev_pmd_thread's */
                                  /* 'flow_table'. */
+    const struct cmap_node direct_output_node; /* In dp_netdev_pmd_thread's
+                                                  'direct_output_table'. */
     const struct cmap_node mark_node; /* In owning flow_mark's mark_to_flow */
     const ovs_u128 ufid;     /* Unique flow identifier. */
     const ovs_u128 mega_ufid;     /* Unique mega flow identifier. */
@@ -556,7 +560,8 @@ struct dp_netdev_flow {
     struct ovs_refcount ref_cnt;
 
     bool dead;
-    uint32_t mark;               /* Unique flow mark assigned to a flow */
+    uint32_t mark;               /* Unique flow mark for netdev offloading. */
+    uint64_t direct_output_mark; /* Unique flow mark for direct output. */
 
     /* Statistics. */
     struct dp_netdev_flow_stats stats;
@@ -690,12 +695,19 @@ struct dp_netdev_pmd_thread {
 
     /* Flow-Table and classifiers
      *
-     * Writers of 'flow_table' must take the 'flow_mutex'.  Corresponding
-     * changes to 'classifiers' must be made while still holding the
-     * 'flow_mutex'.
+     * Writers of 'flow_table'/'direct_output_table' and their n* ccmap's must
+     * take the 'flow_mutex'.  Corresponding changes to 'classifiers' must be
+     * made while still holding the 'flow_mutex'.
      */
     struct ovs_mutex flow_mutex;
     struct cmap flow_table OVS_GUARDED; /* Flow table. */
+    struct cmap direct_output_table OVS_GUARDED; /* Flow table with direct
+                                                    output flows only. */
+    struct ccmap n_flows OVS_GUARDED; /* Number of flows in 'flow_table'
+                                         per in_port. */
+    struct ccmap n_direct_flows OVS_GUARDED; /* Number of flows in
+                                                'direct_output_table'
+                                                per in_port. */
 
     /* One classifier per in_port polled by the pmd */
     struct cmap classifiers;
@@ -925,6 +937,24 @@ pmd_perf_metrics_enabled(const struct dp_netdev_pmd_thread *pmd);
 
 static void queue_netdev_flow_del(struct dp_netdev_pmd_thread *pmd,
                                   struct dp_netdev_flow *flow);
+static void dp_netdev_direct_output_insert(struct dp_netdev_pmd_thread *pmd,
+                                           struct dp_netdev_flow *flow)
+    OVS_REQUIRES(pmd->flow_mutex);
+static void dp_netdev_direct_output_remove(struct dp_netdev_pmd_thread *pmd,
+                                           struct dp_netdev_flow *flow)
+    OVS_REQUIRES(pmd->flow_mutex);
+
+static bool dp_netdev_flow_is_direct_output(const struct flow_wildcards *wc,
+                                            const struct nlattr *actions,
+                                            size_t actions_len);
+static bool
+dp_netdev_direct_output_enabled(const struct dp_netdev_pmd_thread *pmd,
+                                odp_port_t in_port);
+static struct dp_netdev_flow *
+dp_netdev_direct_output_lookup(const struct dp_netdev_pmd_thread *pmd,
+                               odp_port_t in_port,
+                               ovs_be16 dp_type, uint8_t nw_frag);
+
 static void
 emc_cache_init(struct emc_cache *flow_cache)
 {
@@ -2841,7 +2871,9 @@ dp_netdev_pmd_remove_flow(struct dp_netdev_pmd_thread *pmd,
     cls = dp_netdev_pmd_lookup_dpcls(pmd, in_port);
     ovs_assert(cls != NULL);
     dpcls_remove(cls, &flow->cr);
+    dp_netdev_direct_output_remove(pmd, flow);
     cmap_remove(&pmd->flow_table, node, dp_netdev_flow_hash(&flow->ufid));
+    ccmap_dec(&pmd->n_flows, odp_to_u32(in_port));
     if (flow->mark != INVALID_FLOW_MARK) {
         queue_netdev_flow_del(pmd, flow);
     }
@@ -3623,10 +3655,165 @@ dp_netdev_get_mega_ufid(const struct match *match, ovs_u128 *mega_ufid)
     odp_flow_key_hash(&masked_flow, sizeof masked_flow, mega_ufid);
 }
 
+static uint64_t
+dp_netdev_direct_output_mark(odp_port_t in_port,
+                             ovs_be16 dl_type, uint8_t nw_frag)
+{
+    return ((uint64_t) odp_to_u32(in_port) << 32)
+           | ((uint32_t) ntohs(dl_type) << 16) | nw_frag;
+}
+
+static struct dp_netdev_flow *
+dp_netdev_direct_output_lookup(const struct dp_netdev_pmd_thread *pmd,
+                               odp_port_t in_port,
+                               ovs_be16 dl_type, uint8_t nw_frag)
+{
+    uint32_t hash;
+    uint64_t mark;
+    struct dp_netdev_flow *flow;
+
+    mark = dp_netdev_direct_output_mark(in_port, dl_type, nw_frag);
+    hash = hash_uint64(mark);
+
+    CMAP_FOR_EACH_WITH_HASH (flow, direct_output_node,
+                             hash, &pmd->direct_output_table) {
+        if (flow->direct_output_mark == mark) {
+            VLOG_DBG("Direct output lookup: "
+                     "core_id(%d),in_port(%"PRIu32"),mark(0x%"PRIx64") -> %s.",
+                     pmd->core_id, in_port, mark, "success");
+            return flow;
+        }
+    }
+    VLOG_DBG("Direct output lookup: "
+             "core_id(%d),in_port(%"PRIu32"),mark(0x%"PRIx64") -> %s.",
+             pmd->core_id, in_port, mark, "fail");
+    return NULL;
+}
+
+static bool
+dp_netdev_direct_output_enabled(const struct dp_netdev_pmd_thread *pmd,
+                                odp_port_t in_port)
+{
+    return ccmap_find(&pmd->n_flows, odp_to_u32(in_port))
+           == ccmap_find(&pmd->n_direct_flows, odp_to_u32(in_port));
+}
+
+static void
+dp_netdev_direct_output_insert(struct dp_netdev_pmd_thread *pmd,
+                               struct dp_netdev_flow *dp_flow)
+    OVS_REQUIRES(pmd->flow_mutex)
+{
+    uint32_t hash;
+    uint64_t mark;
+    uint8_t nw_frag = dp_flow->flow.nw_frag;
+    ovs_be16 dl_type = dp_flow->flow.dl_type;
+    odp_port_t in_port = dp_flow->flow.in_port.odp_port;
+
+    if (!dp_netdev_flow_ref(dp_flow)) {
+        return;
+    }
+
+    /* Avoid double insertion.  Should not happen in practice. */
+    dp_netdev_direct_output_remove(pmd, dp_flow);
+
+    mark = dp_netdev_direct_output_mark(in_port, dl_type, nw_frag);
+    hash = hash_uint64(mark);
+
+    dp_flow->direct_output_mark = mark;
+    cmap_insert(&pmd->direct_output_table,
+                CONST_CAST(struct cmap_node *, &dp_flow->direct_output_node),
+                hash);
+    ccmap_inc(&pmd->n_direct_flows, odp_to_u32(in_port));
+
+    VLOG_DBG("Direct output insert: "
+             "core_id(%d),in_port(%"PRIu32"),mark(0x%"PRIx64").",
+             pmd->core_id, in_port, mark);
+}
+
+static void
+dp_netdev_direct_output_remove(struct dp_netdev_pmd_thread *pmd,
+                               struct dp_netdev_flow *dp_flow)
+    OVS_REQUIRES(pmd->flow_mutex)
+{
+    uint32_t hash;
+    uint64_t mark;
+    struct dp_netdev_flow *flow;
+    uint8_t nw_frag = dp_flow->flow.nw_frag;
+    ovs_be16 dl_type = dp_flow->flow.dl_type;
+    odp_port_t in_port = dp_flow->flow.in_port.odp_port;
+
+    mark = dp_netdev_direct_output_mark(in_port, dl_type, nw_frag);
+    hash = hash_uint64(mark);
+
+    flow = dp_netdev_direct_output_lookup(pmd, in_port, dl_type, nw_frag);
+    if (flow) {
+        ovs_assert(dp_flow == flow);
+        VLOG_DBG("Direct output remove: "
+                 "core_id(%d),in_port(%"PRIu32"),mark(0x%"PRIx64").",
+                 pmd->core_id, in_port, mark);
+        cmap_remove(&pmd->direct_output_table,
+                    CONST_CAST(struct cmap_node *, &flow->direct_output_node),
+                    hash);
+        ccmap_dec(&pmd->n_direct_flows, odp_to_u32(in_port));
+        dp_netdev_flow_unref(flow);
+    }
+}
+
+static bool
+dp_netdev_flow_is_direct_output(const struct flow_wildcards *wc,
+                                const struct nlattr *actions,
+                                size_t actions_len)
+{
+    /* Drop flows have no explicit actions.  Treat them as direct output. */
+    if (actions && actions_len) {
+        unsigned int left, n_actions = 0;
+        const struct nlattr *a;
+
+        /* Check that there is only one action and it's an OUTPUT action. */
+        NL_ATTR_FOR_EACH (a, left, actions, actions_len) {
+            enum ovs_action_attr type = nl_attr_type(a);
+
+            if (++n_actions > 1 || type != OVS_ACTION_ATTR_OUTPUT) {
+                return false;
+            }
+        }
+    }
+
+    /* Check that the flow matches only the minimal set of fields that are
+     * always set. */
+    if (wc) {
+        struct flow_wildcards *minimal = xmalloc(sizeof *minimal);
+
+        flow_wildcards_init_catchall(minimal);
+        /* 'dpif-netdev' always has the following in exact match:
+         * - recirc_id    <-- recirc_id == 0 checked on input.
+         * - in_port      <-- will be checked on input.
+         * - packet_type  <-- Assuming all packets are PT_ETH.
+         * - dl_type      <-- Need to match with.
+         * - vlan_tci     <-- No need to match if not asked.
+         * - and nw_frag for ip packets.  <-- Need to match for ip packets.
+         */
+        WC_MASK_FIELD(minimal, recirc_id);
+        WC_MASK_FIELD(minimal, in_port);
+        WC_MASK_FIELD(minimal, packet_type);
+        WC_MASK_FIELD(minimal, dl_type);
+        WC_MASK_FIELD(minimal, vlans[0].tci);
+        WC_MASK_FIELD_MASK(minimal, nw_frag, FLOW_NW_FRAG_MASK);
+
+        if (flow_wildcards_has_extra(minimal, wc)) {
+            free(minimal);
+            return false;
+        }
+        free(minimal);
+    }
+
+    return true;
+}
+
 static struct dp_netdev_flow *
 dp_netdev_flow_add(struct dp_netdev_pmd_thread *pmd,
                    struct match *match, const ovs_u128 *ufid,
-                   const struct nlattr *actions, size_t actions_len)
+                   const struct nlattr *actions, size_t actions_len,
+                   bool vlan_tci_wc_faked)
     OVS_REQUIRES(pmd->flow_mutex)
 {
     struct ds extra_info = DS_EMPTY_INITIALIZER;
@@ -3691,6 +3878,14 @@ dp_netdev_flow_add(struct dp_netdev_pmd_thread *pmd,
 
     cmap_insert(&pmd->flow_table, CONST_CAST(struct cmap_node *, &flow->node),
                 dp_netdev_flow_hash(&flow->ufid));
+    ccmap_inc(&pmd->n_flows, odp_to_u32(in_port));
+
+    if (vlan_tci_wc_faked
+        && match->flow.recirc_id == 0
+        && match->flow.packet_type == htonl(PT_ETH)
+        && dp_netdev_flow_is_direct_output(&match->wc, actions, actions_len)) {
+        dp_netdev_direct_output_insert(pmd, flow);
+    }
 
     queue_netdev_flow_put(pmd, flow, match, actions, actions_len);
 
@@ -3749,7 +3944,8 @@ flow_put_on_pmd(struct dp_netdev_pmd_thread *pmd,
                 struct match *match,
                 ovs_u128 *ufid,
                 const struct dpif_flow_put *put,
-                struct dpif_flow_stats *stats)
+                struct dpif_flow_stats *stats,
+                bool vlan_tci_wc_faked)
 {
     struct dp_netdev_flow *netdev_flow;
     int error = 0;
@@ -3763,7 +3959,7 @@ flow_put_on_pmd(struct dp_netdev_pmd_thread *pmd,
     if (!netdev_flow) {
         if (put->flags & DPIF_FP_CREATE) {
             dp_netdev_flow_add(pmd, match, ufid, put->actions,
-                               put->actions_len);
+                               put->actions_len, vlan_tci_wc_faked);
         } else {
             error = ENOENT;
         }
@@ -3778,6 +3974,12 @@ flow_put_on_pmd(struct dp_netdev_pmd_thread *pmd,
 
         old_actions = dp_netdev_flow_get_actions(netdev_flow);
 
         ovsrcu_set(&netdev_flow->actions, new_actions);
+        if (!dp_netdev_flow_is_direct_output(NULL, new_actions->actions,
+                                             new_actions->size)) {
+            /* New actions are not direct output. */
+            dp_netdev_direct_output_remove(pmd, netdev_flow);
+        }
+
         queue_netdev_flow_put(pmd, netdev_flow, match, put->actions,
                               put->actions_len);
@@ -3819,6 +4021,7 @@ dpif_netdev_flow_put(struct dpif *dpif, const struct dpif_flow_put *put)
     ovs_u128 ufid;
     int error;
     bool probe = put->flags & DPIF_FP_PROBE;
+    bool vlan_tci_wc_faked = false;
 
     if (put->stats) {
         memset(put->stats, 0, sizeof *put->stats);
@@ -3857,6 +4060,7 @@ dpif_netdev_flow_put(struct dpif *dpif, const struct dpif_flow_put *put)
      * Netlink and struct flow representations, we have to do the same
      * here. This must be in sync with 'match' in handle_packet_upcall(). */
     if (!match.wc.masks.vlans[0].tci) {
+        vlan_tci_wc_faked = true;
         match.wc.masks.vlans[0].tci = htons(0xffff);
     }
 
@@ -3875,7 +4079,7 @@ dpif_netdev_flow_put(struct dpif *dpif, const struct dpif_flow_put *put)
             int pmd_error;
 
             pmd_error = flow_put_on_pmd(pmd, &key, &match, &ufid, put,
-                                        &pmd_stats);
+                                        &pmd_stats, vlan_tci_wc_faked);
             if (pmd_error) {
                 error = pmd_error;
             } else if (put->stats) {
@@ -3890,7 +4094,8 @@ dpif_netdev_flow_put(struct dpif *dpif, const struct dpif_flow_put *put)
         if (!pmd) {
             return EINVAL;
         }
-        error = flow_put_on_pmd(pmd, &key, &match, &ufid, put, put->stats);
+        error = flow_put_on_pmd(pmd, &key, &match, &ufid, put, put->stats,
+                                vlan_tci_wc_faked);
         dp_netdev_pmd_unref(pmd);
     }
 
@@ -6552,6 +6757,9 @@ dp_netdev_configure_pmd(struct dp_netdev_pmd_thread *pmd, struct dp_netdev *dp,
     ovs_mutex_init(&pmd->bond_mutex);
     cmap_init(&pmd->flow_table);
     cmap_init(&pmd->classifiers);
+    cmap_init(&pmd->direct_output_table);
+    ccmap_init(&pmd->n_flows);
+    ccmap_init(&pmd->n_direct_flows);
     pmd->ctx.last_rxq = NULL;
     pmd_thread_ctx_time_update(pmd);
     pmd->next_optimization = pmd->ctx.now + DPCLS_OPTIMIZATION_INTERVAL;
@@ -6591,6 +6799,9 @@ dp_netdev_destroy_pmd(struct dp_netdev_pmd_thread *pmd)
     }
     cmap_destroy(&pmd->classifiers);
     cmap_destroy(&pmd->flow_table);
+    cmap_destroy(&pmd->direct_output_table);
+    ccmap_destroy(&pmd->n_flows);
+    ccmap_destroy(&pmd->n_direct_flows);
     ovs_mutex_destroy(&pmd->flow_mutex);
     seq_destroy(pmd->reload_seq);
     ovs_mutex_destroy(&pmd->port_mutex);
@@ -7099,6 +7310,7 @@ dfc_processing(struct dp_netdev_pmd_thread *pmd,
     bool smc_enable_db;
     size_t map_cnt = 0;
     bool batch_enable = true;
+    bool direct_output_enabled = dp_netdev_direct_output_enabled(pmd, port_no);
 
     atomic_read_relaxed(&pmd->dp->smc_enable_db, &smc_enable_db);
     pmd_perf_update_counter(&pmd->perf_stats,
@@ -7106,7 +7318,7 @@ dfc_processing(struct dp_netdev_pmd_thread *pmd,
                             cnt);
 
     DP_PACKET_BATCH_REFILL_FOR_EACH (i, cnt, packet, packets_) {
-        struct dp_netdev_flow *flow;
+        struct dp_netdev_flow *flow = NULL;
         uint32_t mark;
 
         if (OVS_UNLIKELY(dp_packet_size(packet) < ETH_HEADER_LEN)) {
@@ -7124,13 +7336,27 @@ dfc_processing(struct dp_netdev_pmd_thread *pmd,
 
         if (!md_is_valid) {
             pkt_metadata_init(&packet->md, port_no);
-        }
-
-        if ((*recirc_depth_get() == 0) &&
-            dp_packet_has_flow_mark(packet, &mark)) {
-            flow = mark_to_flow_find(pmd, mark);
-            if (OVS_LIKELY(flow)) {
-                tcp_flags = parse_tcp_flags(packet);
+
+            if (dp_packet_has_flow_mark(packet, &mark)) {
+                flow = mark_to_flow_find(pmd, mark);
+                if (OVS_LIKELY(flow)) {
+                    tcp_flags = parse_tcp_flags(packet, NULL, NULL);
+                }
+            }
+
+            if (!flow && direct_output_enabled) {
+                ovs_be16 dl_type = 0;
+                uint8_t nw_frag = 0;
+
+                tcp_flags = parse_tcp_flags(packet, &dl_type, &nw_frag);
+                flow = dp_netdev_direct_output_lookup(pmd, port_no,
+                                                      dl_type, nw_frag);
+                if (flow) {
+                    COVERAGE_INC(datapath_direct_output_packet);
+                }
+            }
+
+            if (flow) {
                 if (OVS_LIKELY(batch_enable)) {
                     dp_netdev_queue_batches(packet, flow, tcp_flags, batches,
                                             n_batches);
@@ -7218,6 +7444,7 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
     ovs_u128 ufid;
     int error;
     uint64_t cycles = cycles_counter_update(&pmd->perf_stats);
+    bool vlan_tci_wc_faked = false;
 
     match.tun_md.valid = false;
     miniflow_expand(&key->mf, &match.flow);
@@ -7244,6 +7471,7 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
      * here. This must be in sync with 'match' in dpif_netdev_flow_put(). */
     if (!match.wc.masks.vlans[0].tci) {
         match.wc.masks.vlans[0].tci = htons(0xffff);
+        vlan_tci_wc_faked = true;
     }
 
     /* We can't allow the packet batching in the next loop to execute
@@ -7267,7 +7495,8 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
     if (OVS_LIKELY(!netdev_flow)) {
         netdev_flow = dp_netdev_flow_add(pmd, &match, &ufid,
                                          add_actions->data,
-                                         add_actions->size);
+                                         add_actions->size,
+                                         vlan_tci_wc_faked);
     }
     ovs_mutex_unlock(&pmd->flow_mutex);
     uint32_t hash = dp_netdev_flow_hash(&netdev_flow->ufid);
diff --git a/lib/flow.c b/lib/flow.c
index 729d59b1b..846886db0 100644
--- a/lib/flow.c
+++ b/lib/flow.c
@@ -1085,11 +1085,14 @@ parse_dl_type(const void **datap, size_t *sizep)
 
 /* Parses and return the TCP flags in 'packet', converted to host byte order.
  * If 'packet' is not an Ethernet packet embedding TCP, returns 0.
+ * 'dl_type_p' will be set only if 'packet' is an Ethernet packet.
+ * 'nw_frag_p' will be set only if 'packet' is an IP packet.
  *
  * The caller must ensure that 'packet' is at least ETH_HEADER_LEN bytes
  * long. */
 uint16_t
-parse_tcp_flags(struct dp_packet *packet)
+parse_tcp_flags(struct dp_packet *packet,
+                ovs_be16 *dl_type_p, uint8_t *nw_frag_p)
 {
     const void *data = dp_packet_data(packet);
     const char *frame = (const char *)data;
@@ -1104,6 +1107,9 @@ parse_tcp_flags(struct dp_packet *packet)
     dp_packet_reset_offsets(packet);
 
     dl_type = parse_dl_type(&data, &size);
+    if (dl_type_p) {
+        *dl_type_p = dl_type;
+    }
     if (OVS_UNLIKELY(eth_type_mpls(dl_type))) {
         packet->l2_5_ofs = (char *)data - frame;
     }
@@ -1144,6 +1150,10 @@ parse_tcp_flags(struct dp_packet *packet)
         return 0;
     }
 
+    if (nw_frag_p) {
+        *nw_frag_p = nw_frag;
+    }
+
     packet->l4_ofs = (uint16_t)((char *)data - frame);
     if (!(nw_frag & FLOW_NW_FRAG_LATER) && nw_proto == IPPROTO_TCP
         && size >= TCP_HEADER_LEN) {
diff --git a/lib/flow.h b/lib/flow.h
index b32f0b277..a406a18cd 100644
--- a/lib/flow.h
+++ b/lib/flow.h
@@ -134,8 +134,8 @@ bool parse_ipv6_ext_hdrs(const void **datap, size_t *sizep, uint8_t *nw_proto,
                          uint8_t *nw_frag,
                          const struct ovs_16aligned_ip6_frag **frag_hdr);
 bool parse_nsh(const void **datap, size_t *sizep, struct ovs_key_nsh *key);
-uint16_t parse_tcp_flags(struct dp_packet *packet);
-
+uint16_t parse_tcp_flags(struct dp_packet *packet, ovs_be16 *dl_type_p,
+                         uint8_t *nw_frag_p);
 static inline uint64_t
 flow_get_xreg(const struct flow *flow, int idx)
 {