From patchwork Wed Feb 10 15:26:53 2021
X-Patchwork-Submitter: Eli Britstein
X-Patchwork-Id: 1439044
From: Eli Britstein <elibr@nvidia.com>
To: dev@openvswitch.org, Ilya Maximets
Cc: Eli Britstein, Ameer Mahagneh, Majd Dibbiny, Gaetan Rivet
Date: Wed, 10 Feb 2021 15:26:53 +0000
Message-Id: <20210210152702.4898-6-elibr@nvidia.com>
In-Reply-To: <20210210152702.4898-1-elibr@nvidia.com>
References: <20210210152702.4898-1-elibr@nvidia.com>
Subject: [ovs-dev] [PATCH V2 05/14] netdev-offload-dpdk: Implement HW miss packet recover for vport

A miss in virtual port offloads means that the flow with tnl_pop was
offloaded, but the following flow, matching on the virtual port, was not.
Recover the packet state and continue with SW processing.
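For illustration, the datapath receive path is expected to drive this
recovery roughly as sketched below. The real call site is added by a later
patch in this series; the helper name and surrounding logic here are
hypothetical, and error handling is simplified.

    /* Hypothetical sketch -- not part of this patch.  Let the offload
     * provider restore packet state left over from partial HW processing
     * before the SW pipeline classifies the packet. */
    static void
    rx_recover_sketch(struct netdev *netdev, struct dp_packet *packet)
    {
        if (netdev_hw_miss_packet_recover(netdev, packet)) {
            /* Common case: no restore info from the HW, so this is a
             * plain SW miss. */
            return;
        }
        /* Recovery succeeded: packet->md now reflects the virtual port,
         * so SW processing continues as if the packet had arrived on the
         * tunnel port. */
    }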
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Gaetan Rivet
---
 lib/netdev-offload-dpdk.c | 95 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/lib/netdev-offload-dpdk.c b/lib/netdev-offload-dpdk.c
index 8cc90d0f1..21aa26b42 100644
--- a/lib/netdev-offload-dpdk.c
+++ b/lib/netdev-offload-dpdk.c
@@ -1610,6 +1610,100 @@ netdev_offload_dpdk_flow_dump_destroy(struct netdev_flow_dump *dump)
     return 0;
 }
 
+static struct netdev *
+get_vport_netdev(const char *dpif_type,
+                 struct rte_flow_tunnel *tunnel,
+                 odp_port_t *odp_port)
+{
+    const struct netdev_tunnel_config *tnl_cfg;
+    struct netdev_flow_dump **netdev_dumps;
+    struct netdev *vport = NULL;
+    bool found = false;
+    int num_ports = 0;
+    int err;
+    int i;
+
+    netdev_dumps = netdev_ports_flow_dump_create(dpif_type, &num_ports, false);
+    for (i = 0; i < num_ports; i++) {
+        if (!found && tunnel->type == RTE_FLOW_ITEM_TYPE_VXLAN &&
+            !strcmp(netdev_get_type(netdev_dumps[i]->netdev), "vxlan")) {
+            tnl_cfg = netdev_get_tunnel_config(netdev_dumps[i]->netdev);
+            if (tnl_cfg && tnl_cfg->dst_port == tunnel->tp_dst) {
+                found = true;
+                vport = netdev_dumps[i]->netdev;
+                netdev_ref(vport);
+                *odp_port = netdev_dumps[i]->port;
+            }
+        }
+        err = netdev_flow_dump_destroy(netdev_dumps[i]);
+        if (err != 0 && err != EOPNOTSUPP) {
+            VLOG_ERR("failed dumping netdev: %s", ovs_strerror(err));
+        }
+    }
+    return vport;
+}
+
+static int
+netdev_offload_dpdk_hw_miss_packet_recover(struct netdev *netdev,
+                                           struct dp_packet *packet)
+{
+    struct rte_flow_restore_info rte_restore_info;
+    struct rte_flow_tunnel *rte_tnl;
+    struct rte_flow_error error;
+    struct netdev *vport_netdev;
+    struct pkt_metadata *md;
+    struct flow_tnl *md_tnl;
+    odp_port_t vport_odp;
+
+    if (netdev_dpdk_rte_flow_get_restore_info(netdev, packet,
+                                              &rte_restore_info, &error)) {
+        /* This function is called for every packet, and in most cases there
+         * will be no restore info from the HW, so an error is expected.
+         */
+        (void) error;
+        return -1;
+    }
+
+    rte_tnl = &rte_restore_info.tunnel;
+    if (rte_restore_info.flags & RTE_FLOW_RESTORE_INFO_TUNNEL) {
+        vport_netdev = get_vport_netdev(netdev->dpif_type, rte_tnl,
+                                        &vport_odp);
+        md = &packet->md;
+        if (rte_restore_info.flags & RTE_FLOW_RESTORE_INFO_ENCAPSULATED) {
+            if (!vport_netdev || !vport_netdev->netdev_class ||
+                !vport_netdev->netdev_class->pop_header) {
+                VLOG_ERR("vport netdev=%s with no pop_header method",
+                         vport_netdev ? netdev_get_name(vport_netdev) : "-");
+                return -1;
+            }
+            vport_netdev->netdev_class->pop_header(packet);
+            netdev_close(vport_netdev);
+        } else {
+            md_tnl = &md->tunnel;
+            if (rte_tnl->is_ipv6) {
+                memcpy(&md_tnl->ipv6_src, &rte_tnl->ipv6.src_addr,
+                       sizeof md_tnl->ipv6_src);
+                memcpy(&md_tnl->ipv6_dst, &rte_tnl->ipv6.dst_addr,
+                       sizeof md_tnl->ipv6_dst);
+            } else {
+                md_tnl->ip_src = rte_tnl->ipv4.src_addr;
+                md_tnl->ip_dst = rte_tnl->ipv4.dst_addr;
+            }
+            md_tnl->tun_id = htonll(rte_tnl->tun_id);
+            md_tnl->flags = rte_tnl->tun_flags;
+            md_tnl->ip_tos = rte_tnl->tos;
+            md_tnl->ip_ttl = rte_tnl->ttl;
+            md_tnl->tp_src = rte_tnl->tp_src;
+        }
+        if (vport_netdev) {
+            md->in_port.odp_port = vport_odp;
+        }
+    }
+    dp_packet_reset_offload(packet);
+
+    return 0;
+}
+
 const struct netdev_flow_api netdev_offload_dpdk = {
     .type = "dpdk_flow_api",
     .flow_put = netdev_offload_dpdk_flow_put,
@@ -1619,4 +1713,5 @@ const struct netdev_flow_api netdev_offload_dpdk = {
     .flow_flush = netdev_offload_dpdk_flow_flush,
     .flow_dump_create = netdev_offload_dpdk_flow_dump_create,
     .flow_dump_destroy = netdev_offload_dpdk_flow_dump_destroy,
+    .hw_miss_packet_recover = netdev_offload_dpdk_hw_miss_packet_recover,
 };
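
For reference, netdev_dpdk_rte_flow_get_restore_info() used above is expected
to be provided by the netdev-dpdk patch earlier in this series as a thin
wrapper around DPDK's experimental rte_flow_get_restore_info(). A minimal
sketch of its assumed shape, with locking and port validation elided and the
port-id accessor name hypothetical:

    int
    netdev_dpdk_rte_flow_get_restore_info(struct netdev *netdev,
                                          struct dp_packet *p,
                                          struct rte_flow_restore_info *info,
                                          struct rte_flow_error *error)
    {
        /* In DPDK builds, struct dp_packet embeds an rte_mbuf as its first
         * member, so this cast is valid. */
        struct rte_mbuf *m = (struct rte_mbuf *) p;
        uint16_t port_id = netdev_dpdk_port_id(netdev); /* hypothetical */

        return rte_flow_get_restore_info(port_id, m, info, error);
    }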