From patchwork Thu Jun  1 11:16:54 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chris Mi <cmi@nvidia.com>
X-Patchwork-Id: 1788988
From: Chris Mi <cmi@nvidia.com>
To: dev@openvswitch.org
Cc: elibr@nvidia.com, simon.horman@corigine.com, roniba@nvidia.com,
    i.maximets@ovn.org, konguyen@redhat.com, majd@nvidia.com, maord@nvidia.com
Date: Thu, 1 Jun 2023 14:16:54 +0300
Message-ID: <20230601111658.113144-5-cmi@nvidia.com>
In-Reply-To: <20230601111658.113144-1-cmi@nvidia.com>
References: <20230601111658.113144-1-cmi@nvidia.com>
X-Mailer: git-send-email 2.26.3
Subject: [ovs-dev] [PATCH v27 4/8] netdev-offload-tc: Add sample offload API
 for TC

Initialize the psample socket. Add a sample recv API to receive sampled
packets from the psample socket. Add a sample recv wait API to add the
psample socket fd to the poll list.
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan
Acked-by: Eelco Chaudron
---
 lib/dpif.h                    |   6 +-
 lib/flow.h                    |   2 +-
 lib/netdev-offload-provider.h |  30 ++++++
 lib/netdev-offload-tc.c       | 172 ++++++++++++++++++++++++++++++++++
 lib/netdev-offload.c          |   3 +-
 lib/packets.h                 |   2 +-
 6 files changed, 210 insertions(+), 5 deletions(-)

diff --git a/lib/dpif.h b/lib/dpif.h
index 129cbf6a1..f91295862 100644
--- a/lib/dpif.h
+++ b/lib/dpif.h
@@ -834,8 +834,10 @@ struct dpif_upcall {
 
     /* DPIF_UC_ACTION only. */
     struct nlattr *userdata;    /* Argument to OVS_ACTION_ATTR_USERSPACE. */
-    struct nlattr *out_tun_key;    /* Output tunnel key. */
-    struct nlattr *actions;    /* Argument to OVS_ACTION_ATTR_USERSPACE. */
+    struct nlattr *out_tun_key; /* Output tunnel key. */
+    struct nlattr *actions;     /* Argument to OVS_ACTION_ATTR_USERSPACE. */
+    struct flow flow;           /* Caller-provided 'flow' if the 'key' is not
+                                   available. */
 };
 
 /* A callback to notify higher layer of dpif about to be purged, so that
diff --git a/lib/flow.h b/lib/flow.h
index a9d026e1c..0974bfd42 100644
--- a/lib/flow.h
+++ b/lib/flow.h
@@ -970,7 +970,7 @@ pkt_metadata_from_flow(struct pkt_metadata *md, const struct flow *flow)
     md->recirc_id = flow->recirc_id;
     md->dp_hash = flow->dp_hash;
-    flow_tnl_copy__(&md->tunnel, &flow->tunnel);
+    flow_tnl_copy(&md->tunnel, &flow->tunnel);
     md->skb_priority = flow->skb_priority;
     md->pkt_mark = flow->pkt_mark;
     md->in_port = flow->in_port;
diff --git a/lib/netdev-offload-provider.h b/lib/netdev-offload-provider.h
index 9108856d1..a457556e5 100644
--- a/lib/netdev-offload-provider.h
+++ b/lib/netdev-offload-provider.h
@@ -28,6 +28,8 @@
 extern "C" {
 #endif
 
+struct dpif_upcall;
+
 struct netdev_flow_api {
     char *type;
     /* Flush all offloaded flows from a netdev.
@@ -121,6 +123,34 @@ struct netdev_flow_api {
     int (*meter_del)(ofproto_meter_id meter_id,
                      struct ofputil_meter_stats *stats);
 
+    /* Polls for upcall offload packets for an upcall handler. If successful,
+     * stores the upcall into '*upcall', using 'buf' for storage.
+     *
+     * The implementation should point 'upcall->flow' and 'upcall->userdata'
+     * (if any) into data in the caller-provided 'buf'. The implementation may
+     * also use 'buf' for storing the data of 'upcall->packet'. If necessary
+     * to make room, the implementation may reallocate the data in 'buf'.
+     *
+     * The caller owns the data of 'upcall->packet' and may modify it. If the
+     * packet's headroom is exhausted as it is manipulated, 'upcall->packet'
+     * will be reallocated. This requires the data of 'upcall->packet' to be
+     * released with ofpbuf_uninit() before 'upcall' is destroyed. However,
+     * when an error is returned, 'upcall->packet' may be uninitialized
+     * and should not be released.
+     *
+     * This function must not block. If no upcall is pending when it is
+     * called, it should return EAGAIN without blocking.
+     *
+     * Return 0 if successful, otherwise returns a positive errno value.
+     */
+    int (*recv)(struct dpif_upcall *upcall, struct ofpbuf *buf,
+                uint32_t handler_id);
+
+    /* Arranges for the poll loop for an upcall handler to wake up when the
+     * sample socket has a message queued to be received with the recv
+     * member function. */
+    void (*recv_wait)(uint32_t handler_id);
+
     /* Initializes the netdev flow api.
      * Return 0 if successful, otherwise returns a positive errno value. */
     int (*init_flow_api)(struct netdev *);
diff --git a/lib/netdev-offload-tc.c b/lib/netdev-offload-tc.c
index 79bc3225a..d2fe7489a 100644
--- a/lib/netdev-offload-tc.c
+++ b/lib/netdev-offload-tc.c
@@ -18,6 +18,8 @@
 
 #include <errno.h>
 #include <linux/if_ether.h>
+#include <linux/psample.h>
+#include <poll.h>
 
 #include "cmap.h"
 #include "dpif-provider.h"
@@ -127,6 +129,9 @@ struct sgid_node {
     struct offload_sample sample;
 };
 
+static struct nl_sock *psample_sock;
+static int psample_family;
+
 /* The sgid_map mutex protects the sample_group_ids and the sgid_map for
  * cmap_insert(), cmap_remove(), or cmap_replace() operations. */
 static struct ovs_mutex sgid_lock = OVS_MUTEX_INITIALIZER;
@@ -158,6 +163,14 @@ sgid_find(uint32_t id)
     return node ? CONTAINER_OF(node, struct sgid_node, id_node) : NULL;
 }
 
+static struct offload_sample *
+sample_find(uint32_t id)
+{
+    struct sgid_node *node = sgid_find(id);
+
+    return node ? &node->sample : NULL;
+}
+
 static void
 offload_sample_clone(struct offload_sample *dst,
                      const struct offload_sample *src,
@@ -2959,6 +2972,55 @@ tc_cleanup_policer_actions(struct id_pool *police_ids,
     hmap_destroy(&map);
 }
 
+static void
+psample_init(void)
+{
+    unsigned int psample_mcgroup;
+    int err;
+
+    if (!netdev_is_flow_api_enabled()) {
+        VLOG_DBG("Flow API is not enabled");
+        return;
+    }
+
+    if (psample_sock) {
+        VLOG_DBG("Psample socket is already initialized");
+        return;
+    }
+
+    err = nl_lookup_genl_family(PSAMPLE_GENL_NAME,
+                                &psample_family);
+    if (err) {
+        VLOG_INFO("Generic Netlink family '%s' does not exist: %s\n"
+                  "Please make sure the kernel module psample is loaded",
+                  PSAMPLE_GENL_NAME, ovs_strerror(err));
+        return;
+    }
+
+    err = nl_lookup_genl_mcgroup(PSAMPLE_GENL_NAME,
+                                 PSAMPLE_NL_MCGRP_SAMPLE_NAME,
+                                 &psample_mcgroup);
+    if (err) {
+        VLOG_INFO("Failed to look up Netlink multicast group '%s': %s",
+                  PSAMPLE_NL_MCGRP_SAMPLE_NAME, ovs_strerror(err));
+        return;
+    }
+
+    err = nl_sock_create(NETLINK_GENERIC, &psample_sock);
+    if (err) {
+        VLOG_INFO("Failed to create psample socket: %s", ovs_strerror(err));
+        return;
+    }
+
+    err = nl_sock_join_mcgroup(psample_sock, psample_mcgroup);
+    if (err) {
+        VLOG_INFO("Failed to join psample mcgroup: %s", ovs_strerror(err));
+        nl_sock_destroy(psample_sock);
+        psample_sock = NULL;
+        return;
+    }
+}
+
 static int
 netdev_tc_init_flow_api(struct netdev *netdev)
 {
@@ -3018,6 +3080,7 @@ netdev_tc_init_flow_api(struct netdev *netdev)
         ovs_mutex_lock(&sgid_lock);
         sample_group_ids = id_pool_create(1, UINT32_MAX - 1);
         ovs_mutex_unlock(&sgid_lock);
+        psample_init();
         ovsthread_once_done(&once);
     }
 
@@ -3235,6 +3298,113 @@ meter_tc_del_policer(ofproto_meter_id meter_id,
     return err;
 }
 
+struct offload_psample {
+    struct nlattr *packet;      /* Packet data. */
+    uint32_t group_id;          /* Mapping id for sample offload. */
+};
+
+static int
+nl_parse_psample(struct offload_psample *psample, struct ofpbuf *buf)
+{
+    static const struct nl_policy ovs_psample_policy[] = {
+        [PSAMPLE_ATTR_SAMPLE_GROUP] = { .type = NL_A_U32 },
+        [PSAMPLE_ATTR_DATA] = { .type = NL_A_UNSPEC },
+    };
+    struct nlattr *a[ARRAY_SIZE(ovs_psample_policy)];
+    struct genlmsghdr *genl;
+    struct nlmsghdr *nlmsg;
+    struct ofpbuf b;
+
+    b = ofpbuf_const_initializer(buf->data, buf->size);
+    nlmsg = ofpbuf_try_pull(&b, sizeof *nlmsg);
+    genl = ofpbuf_try_pull(&b, sizeof *genl);
+    if (!nlmsg || !genl || nlmsg->nlmsg_type != psample_family
+        || !nl_policy_parse(&b, 0, ovs_psample_policy, a,
+                            ARRAY_SIZE(ovs_psample_policy))) {
+        return EINVAL;
+    }
+
+    psample->group_id = nl_attr_get_u32(a[PSAMPLE_ATTR_SAMPLE_GROUP]);
+    psample->packet = a[PSAMPLE_ATTR_DATA];
+
+    return 0;
+}
+
+static int
+psample_parse_packet(struct offload_psample *psample,
+                     struct dpif_upcall *upcall)
+{
+    struct flow *flow = &upcall->flow;
+    struct offload_sample *sample;
+
+    memset(upcall, 0, sizeof *upcall);
+    dp_packet_use_const(&upcall->packet,
+                        nl_attr_get(psample->packet),
+                        nl_attr_get_size(psample->packet));
+
+    sample = sample_find(psample->group_id);
+    if (!sample) {
+        VLOG_ERR_RL(&error_rl, "Failed to get sample info via group id: %u",
+                    psample->group_id);
+        return ENOENT;
+    }
+
+    upcall->userdata = sample->userdata;
+    if (sample->tunnel) {
+        flow_tnl_copy(&flow->tunnel, sample->tunnel);
+    }
+    if (sample->userspace_actions) {
+        upcall->actions = sample->userspace_actions;
+    }
+    flow->in_port.odp_port = netdev_ifindex_to_odp_port(sample->ifindex);
+    upcall->type = DPIF_UC_ACTION;
+
+    return 0;
+}
+
+static int
+netdev_tc_recv(struct dpif_upcall *upcall, struct ofpbuf *buf,
+               uint32_t handler_id)
+{
+    int read_tries = 0;
+
+    if (handler_id || !psample_sock) {
+        return EAGAIN;
+    }
+
+    for (;;) {
+        struct offload_psample psample;
+        int error;
+
+        if (++read_tries > 50) {
+            return EAGAIN;
+        }
+
+        error = nl_sock_recv(psample_sock, buf, NULL, false);
+        if (error == ENOBUFS) {
+            continue;
+        }
+        if (error) {
+            return error;
+        }
+        error = nl_parse_psample(&psample, buf);
+
+        return error ? error : psample_parse_packet(&psample, upcall);
+    }
+
+    return EAGAIN;
+}
+
+static void
+netdev_tc_recv_wait(uint32_t handler_id)
+{
+    /* For simplicity, i.e., using a single Netlink socket, only the first
+     * handler thread will be used. */
+    if (!handler_id && psample_sock) {
+        nl_sock_wait(psample_sock, POLLIN);
+    }
+}
+
 const struct netdev_flow_api netdev_offload_tc = {
     .type = "linux_tc",
     .flow_flush = netdev_tc_flow_flush,
@@ -3248,5 +3418,7 @@
     .meter_set = meter_tc_set_policer,
     .meter_get = meter_tc_get_policer,
     .meter_del = meter_tc_del_policer,
+    .recv = netdev_tc_recv,
+    .recv_wait = netdev_tc_recv_wait,
     .init_flow_api = netdev_tc_init_flow_api,
 };
diff --git a/lib/netdev-offload.c b/lib/netdev-offload.c
index a5fa62487..403315deb 100644
--- a/lib/netdev-offload.c
+++ b/lib/netdev-offload.c
@@ -38,6 +38,7 @@
 #include "netdev-provider.h"
 #include "netdev-vport.h"
 #include "odp-netlink.h"
+#include "odp-util.h"
 #include "openflow/openflow.h"
 #include "packets.h"
 #include "openvswitch/ofp-print.h"
@@ -826,7 +827,7 @@ odp_port_t
 netdev_ifindex_to_odp_port(int ifindex)
 {
     struct port_to_netdev_data *data;
-    odp_port_t ret = 0;
+    odp_port_t ret = ODPP_NONE;
 
     ovs_rwlock_rdlock(&ifindex_to_port_rwlock);
     HMAP_FOR_EACH_WITH_HASH (data, ifindex_node, ifindex, &ifindex_to_port) {
diff --git a/lib/packets.h b/lib/packets.h
index ac4c28e47..f49c3822f 100644
--- a/lib/packets.h
+++ b/lib/packets.h
@@ -86,7 +86,7 @@ flow_tnl_size(const struct flow_tnl *src)
  * data in 'dst' is NOT cleared, so this must not be used in cases where the
  * uninitialized portion may be hashed over. */
 static inline void
-flow_tnl_copy__(struct flow_tnl *dst, const struct flow_tnl *src)
+flow_tnl_copy(struct flow_tnl *dst, const struct flow_tnl *src)
 {
     memcpy(dst, src, flow_tnl_size(src));
 }
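
For context, a minimal sketch (not part of the patch) of how an upcall
handler thread could drive the two new callbacks. The loop shape mirrors
how dpif users drive dpif_recv()/dpif_recv_wait(); the function name
offload_upcall_handler_loop and the way 'flow_api' is obtained are
hypothetical, while the 'recv'/'recv_wait' signatures are exactly the
members added to struct netdev_flow_api above.

#include <config.h>
#include <errno.h>

#include "dp-packet.h"
#include "dpif.h"
#include "netdev-offload-provider.h"
#include "openvswitch/ofpbuf.h"
#include "openvswitch/poll-loop.h"

/* Illustrative only: drives 'flow_api->recv()' without blocking and uses
 * 'flow_api->recv_wait()' + poll_block() to sleep until the psample
 * socket is readable.  'flow_api' stands for a resolved "linux_tc"
 * provider such as &netdev_offload_tc. */
static void
offload_upcall_handler_loop(const struct netdev_flow_api *flow_api,
                            uint32_t handler_id)
{
    for (;;) {
        uint64_t stub[4096 / 8];
        struct dpif_upcall upcall;
        struct ofpbuf buf;
        int error;

        ofpbuf_use_stub(&buf, stub, sizeof stub);

        /* 'recv' must not block; EAGAIN means no sampled packet is
         * pending.  With netdev_tc_recv(), only handler 0 ever
         * receives packets. */
        error = flow_api->recv(&upcall, &buf, handler_id);
        if (!error) {
            /* Consume upcall.packet, upcall.userdata and upcall.flow
             * here.  The packet data points into 'buf' (set up with
             * dp_packet_use_const()), so it lives until 'buf' is
             * released below. */
            dp_packet_uninit(&upcall.packet);
        }
        ofpbuf_uninit(&buf);

        if (error == EAGAIN) {
            /* Register a wakeup for the psample socket, then sleep. */
            flow_api->recv_wait(handler_id);
            poll_block();
        }
    }
}

Because netdev_tc_recv() services only handler 0 over a single psample
socket, every other handler thread gets EAGAIN immediately and simply
skips the offload path.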