From patchwork Fri May 10 14:01:24 2024
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 1933858
3s22Oe2bzCRmvdi0bFr5UM7T8NZKUphjJM4TNeQlagPz061a3vq8rRgA0IkVD0TcFJ 4/qDzHHAFkTGMBarHIWFPMqgzlxXEU9AsUg314tC56+e/0mHjA4/893qvLSqYix24+ HMTvlXfbXxezh3zDuJPZbwXcU/QRL/E577fBZ6d1LASnz/GkJsR/Ps5pEntahwjnmj 6V+JySc+MD5IQ== From: Lorenzo Bianconi To: bpf@vger.kernel.org Cc: pablo@netfilter.org, kadlec@netfilter.org, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, netfilter-devel@vger.kernel.org, netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, lorenzo.bianconi@redhat.com, toke@redhat.com, fw@strlen.de, hawk@kernel.org, horms@kernel.org, donhunte@redhat.com Subject: [RFC bpf-next v1 1/4] netfilter: nf_tables: add flowtable map for xdp offload Date: Fri, 10 May 2024 16:01:24 +0200 Message-ID: X-Mailer: git-send-email 2.45.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netfilter-devel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Florian Westphal This adds a small internal mapping table so that a new bpf (xdp) kfunc can perform lookups in a flowtable. As-is, xdp program has access to the device pointer, but no way to do a lookup in a flowtable -- there is no way to obtain the needed struct without questionable stunts. This allows to obtain an nf_flowtable pointer given a net_device structure. In order to keep backward compatibility, the infrastructure allows the user to add a given device to multiple flowtables, but it will always return the first added mapping performing the lookup since it assumes the right configuration is 1:1 mapping between flowtables and net_devices. Signed-off-by: Florian Westphal Co-developed-by: Lorenzo Bianconi Signed-off-by: Lorenzo Bianconi --- include/net/netfilter/nf_flow_table.h | 2 + net/netfilter/nf_flow_table_offload.c | 161 +++++++++++++++++++++++++- 2 files changed, 161 insertions(+), 2 deletions(-) diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h index 9abb7ee40d72f..0bbe6ea8e0651 100644 --- a/include/net/netfilter/nf_flow_table.h +++ b/include/net/netfilter/nf_flow_table.h @@ -305,6 +305,8 @@ struct flow_ports { __be16 source, dest; }; +struct nf_flowtable *nf_flowtable_by_dev(const struct net_device *dev); + unsigned int nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb, const struct nf_hook_state *state); unsigned int nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb, diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c index a010b25076ca0..1acfcdbee42e8 100644 --- a/net/netfilter/nf_flow_table_offload.c +++ b/net/netfilter/nf_flow_table_offload.c @@ -17,6 +17,129 @@ static struct workqueue_struct *nf_flow_offload_add_wq; static struct workqueue_struct *nf_flow_offload_del_wq; static struct workqueue_struct *nf_flow_offload_stats_wq; +struct flow_offload_xdp_ft { + struct list_head head; + struct nf_flowtable *ft; + struct rcu_head rcuhead; +}; + +struct flow_offload_xdp { + struct hlist_node hnode; + unsigned long net_device_addr; + struct list_head head; +}; + +#define NF_XDP_HT_BITS 4 +static DEFINE_HASHTABLE(nf_xdp_hashtable, NF_XDP_HT_BITS); +static DEFINE_MUTEX(nf_xdp_hashtable_lock); + +/* caller must hold rcu read lock */ +struct nf_flowtable *nf_flowtable_by_dev(const struct net_device *dev) +{ + unsigned long key = (unsigned long)dev; + struct flow_offload_xdp *iter; + + hash_for_each_possible_rcu(nf_xdp_hashtable, iter, hnode, key) { + if (key == iter->net_device_addr) { + struct flow_offload_xdp_ft *ft_elem; + + /* The user is supposed to insert 
a given net_device + * just into a single nf_flowtable so we always return + * the first element here. + */ + ft_elem = list_first_or_null_rcu(&iter->head, + struct flow_offload_xdp_ft, + head); + return ft_elem ? ft_elem->ft : NULL; + } + } + + return NULL; +} + +static int nf_flowtable_by_dev_insert(struct nf_flowtable *ft, + const struct net_device *dev) +{ + struct flow_offload_xdp *iter, *elem = NULL; + unsigned long key = (unsigned long)dev; + struct flow_offload_xdp_ft *ft_elem; + + ft_elem = kzalloc(sizeof(*ft_elem), GFP_KERNEL_ACCOUNT); + if (!ft_elem) + return -ENOMEM; + + ft_elem->ft = ft; + + mutex_lock(&nf_xdp_hashtable_lock); + + hash_for_each_possible(nf_xdp_hashtable, iter, hnode, key) { + if (key == iter->net_device_addr) { + elem = iter; + break; + } + } + + if (!elem) { + elem = kzalloc(sizeof(*elem), GFP_KERNEL_ACCOUNT); + if (!elem) + goto err_unlock; + + elem->net_device_addr = key; + INIT_LIST_HEAD(&elem->head); + hash_add_rcu(nf_xdp_hashtable, &elem->hnode, key); + } + list_add_tail_rcu(&ft_elem->head, &elem->head); + + mutex_unlock(&nf_xdp_hashtable_lock); + + return 0; + +err_unlock: + mutex_unlock(&nf_xdp_hashtable_lock); + kfree(ft_elem); + + return -ENOMEM; +} + +static void nf_flowtable_by_dev_remove(struct nf_flowtable *ft, + const struct net_device *dev) +{ + struct flow_offload_xdp *iter, *elem = NULL; + unsigned long key = (unsigned long)dev; + + mutex_lock(&nf_xdp_hashtable_lock); + + hash_for_each_possible(nf_xdp_hashtable, iter, hnode, key) { + if (key == iter->net_device_addr) { + elem = iter; + break; + } + } + + if (elem) { + struct flow_offload_xdp_ft *ft_elem, *ft_next; + + list_for_each_entry_safe(ft_elem, ft_next, &elem->head, head) { + if (ft_elem->ft == ft) { + list_del_rcu(&ft_elem->head); + kfree_rcu(ft_elem, rcuhead); + } + } + + if (list_empty(&elem->head)) + hash_del_rcu(&elem->hnode); + else + elem = NULL; + } + + mutex_unlock(&nf_xdp_hashtable_lock); + + if (elem) { + synchronize_rcu(); + kfree(elem); + } +} + struct flow_offload_work { struct list_head list; enum flow_cls_command cmd; @@ -1183,6 +1306,38 @@ static int nf_flow_table_offload_cmd(struct flow_block_offload *bo, return 0; } +static int nf_flow_offload_xdp_setup(struct nf_flowtable *flowtable, + struct net_device *dev, + enum flow_block_command cmd) +{ + switch (cmd) { + case FLOW_BLOCK_BIND: + return nf_flowtable_by_dev_insert(flowtable, dev); + case FLOW_BLOCK_UNBIND: + nf_flowtable_by_dev_remove(flowtable, dev); + return 0; + } + + WARN_ON_ONCE(1); + return 0; +} + +static void nf_flow_offload_xdp_cancel(struct nf_flowtable *flowtable, + struct net_device *dev, + enum flow_block_command cmd) +{ + switch (cmd) { + case FLOW_BLOCK_BIND: + nf_flowtable_by_dev_remove(flowtable, dev); + return; + case FLOW_BLOCK_UNBIND: + /* We do not re-bind in case hw offload would report error + * on *unregister*. 
+ */ + break; + } +} + int nf_flow_table_offload_setup(struct nf_flowtable *flowtable, struct net_device *dev, enum flow_block_command cmd) @@ -1192,7 +1347,7 @@ int nf_flow_table_offload_setup(struct nf_flowtable *flowtable, int err; if (!nf_flowtable_hw_offload(flowtable)) - return 0; + return nf_flow_offload_xdp_setup(flowtable, dev, cmd); if (dev->netdev_ops->ndo_setup_tc) err = nf_flow_table_offload_cmd(&bo, flowtable, dev, cmd, @@ -1200,8 +1355,10 @@ int nf_flow_table_offload_setup(struct nf_flowtable *flowtable, else err = nf_flow_table_indr_offload_cmd(&bo, flowtable, dev, cmd, &extack); - if (err < 0) + if (err < 0) { + nf_flow_offload_xdp_cancel(flowtable, dev, cmd); return err; + } return nf_flow_table_block_setup(flowtable, &bo, cmd); } From patchwork Fri May 10 14:01:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 1933859 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256 header.s=k20201202 header.b=UQL6e/93; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2604:1380:40f1:3f00::1; helo=sy.mirrors.kernel.org; envelope-from=netfilter-devel+bounces-2144-incoming=patchwork.ozlabs.org@vger.kernel.org; receiver=patchwork.ozlabs.org) Received: from sy.mirrors.kernel.org (sy.mirrors.kernel.org [IPv6:2604:1380:40f1:3f00::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4VbVtw1p9Zz1ymg for ; Sat, 11 May 2024 00:02:16 +1000 (AEST) Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sy.mirrors.kernel.org (Postfix) with ESMTPS id 32C0AB22979 for ; Fri, 10 May 2024 14:02:15 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 43EED1304BF; Fri, 10 May 2024 14:01:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="UQL6e/93" X-Original-To: netfilter-devel@vger.kernel.org Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B92D2127B73; Fri, 10 May 2024 14:01:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1715349713; cv=none; b=OtwmGaSV/elCKrxmZ+FoswO15HWL1mwluPcKu/VPxPMXw/RrWdOGIg5lem5IQ+UiG/YbXamqu48k6YswSwQEk0JmAxnPFJNOO01EfhlXekzgivo0rWCLMnb/JCjA0hGneR0ZVdEgk7EZTX6cq8zkZF1vjXoRoe9jDb9niBU/dVs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1715349713; c=relaxed/simple; bh=gigbIQ9TeUK6/dpHMO5F9oRq99aeFRnXGE3J80yrO5Y=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
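A minimal sketch (not part of the series) of the usage contract for the nf_flowtable_by_dev() helper added above: callers must hold the RCU read lock, and only the first flowtable bound to the device is returned. The dev_has_flowtable() wrapper below is a hypothetical caller, used purely for illustration.

/* Hypothetical caller: check whether a device has a flowtable bound to it.
 * nf_flowtable_by_dev() must be called under rcu_read_lock().
 */
static bool dev_has_flowtable(const struct net_device *dev)
{
	struct nf_flowtable *ft;
	bool bound;

	rcu_read_lock();
	/* first (ideally the only) flowtable the device was added to,
	 * or NULL if the device is not part of any flowtable
	 */
	ft = nf_flowtable_by_dev(dev);
	bound = ft != NULL;
	rcu_read_unlock();

	return bound;
}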
From: Lorenzo Bianconi
To: bpf@vger.kernel.org
Cc: pablo@netfilter.org, kadlec@netfilter.org, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, netfilter-devel@vger.kernel.org, netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, lorenzo.bianconi@redhat.com, toke@redhat.com, fw@strlen.de, hawk@kernel.org, horms@kernel.org, donhunte@redhat.com
Subject: [RFC bpf-next v1 2/4] netfilter: add bpf_xdp_flow_offload_lookup kfunc
Date: Fri, 10 May 2024 16:01:25 +0200
Message-ID: <6526cb192be96de940f2e281ca31843d74106ae0.1715348200.git.lorenzo@kernel.org>
X-Mailer: git-send-email 2.45.0
MIME-Version: 1.0

Introduce the bpf_xdp_flow_offload_lookup kfunc to look up a flowtable entry based on the fib tuple of incoming traffic. bpf_xdp_flow_offload_lookup can be used as a building block to offload sw flowtable processing to XDP when a hw flowtable is not available.
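As a rough illustration of how an XDP program consumes this kfunc, here is a sketch distilled from the sample added later in this series; the program name and the abbreviated tuple setup are illustrative only, and the ERR_PTR()-style check is open-coded (as in the sample) because the BPF side has no <linux/err.h>.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

#define MAX_ERRNO	4095
#define IS_ERR_VALUE(x)	((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)

/* kfunc prototype as exposed by this patch */
struct flow_offload_tuple_rhash *
bpf_xdp_flow_offload_lookup(struct xdp_md *ctx,
			    struct bpf_fib_lookup *fib_tuple,
			    __u32 fib_tuple__sz) __ksym;

SEC("xdp")
int xdp_flow_lookup_sketch(struct xdp_md *ctx)
{
	struct bpf_fib_lookup tuple = {
		.ifindex = ctx->ingress_ifindex,
		/* family, l4_protocol, addresses and ports are filled in
		 * from the parsed packet headers (omitted for brevity)
		 */
	};
	struct flow_offload_tuple_rhash *tuplehash;

	tuplehash = bpf_xdp_flow_offload_lookup(ctx, &tuple, sizeof(tuple));
	if (IS_ERR_VALUE(tuplehash))
		return XDP_PASS;	/* no flowtable on this device, or no entry */

	/* the packet belongs to an established flow tracked by the sw flowtable */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";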
Signed-off-by: Lorenzo Bianconi --- include/net/netfilter/nf_flow_table.h | 9 +++ net/netfilter/Makefile | 5 ++ net/netfilter/nf_flow_table_bpf.c | 95 +++++++++++++++++++++++++++ net/netfilter/nf_flow_table_inet.c | 2 + 4 files changed, 111 insertions(+) create mode 100644 net/netfilter/nf_flow_table_bpf.c diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h index 0bbe6ea8e0651..2f78bacb45d6a 100644 --- a/include/net/netfilter/nf_flow_table.h +++ b/include/net/netfilter/nf_flow_table.h @@ -312,6 +312,15 @@ unsigned int nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb, unsigned int nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb, const struct nf_hook_state *state); +#if IS_ENABLED(CONFIG_DEBUG_INFO_BTF) || IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES) +extern int nf_flow_offload_register_bpf(void); +#else +static inline int nf_flow_offload_register_bpf(void) +{ + return 0; +} +#endif + #define MODULE_ALIAS_NF_FLOWTABLE(family) \ MODULE_ALIAS("nf-flowtable-" __stringify(family)) diff --git a/net/netfilter/Makefile b/net/netfilter/Makefile index 614815a3ed738..eb91a54b9fa9a 100644 --- a/net/netfilter/Makefile +++ b/net/netfilter/Makefile @@ -143,6 +143,11 @@ obj-$(CONFIG_NFT_FWD_NETDEV) += nft_fwd_netdev.o obj-$(CONFIG_NF_FLOW_TABLE) += nf_flow_table.o nf_flow_table-objs := nf_flow_table_core.o nf_flow_table_ip.o \ nf_flow_table_offload.o +ifeq ($(CONFIG_NF_FLOW_TABLE),m) +nf_flow_table-objs += nf_flow_table_bpf.o +else ifeq ($(CONFIG_NF_FLOW_TABLE),y) +nf_flow_table-objs += nf_flow_table_bpf.o +endif nf_flow_table-$(CONFIG_NF_FLOW_TABLE_PROCFS) += nf_flow_table_procfs.o obj-$(CONFIG_NF_FLOW_TABLE_INET) += nf_flow_table_inet.o diff --git a/net/netfilter/nf_flow_table_bpf.c b/net/netfilter/nf_flow_table_bpf.c new file mode 100644 index 0000000000000..836a1127e4052 --- /dev/null +++ b/net/netfilter/nf_flow_table_bpf.c @@ -0,0 +1,95 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Unstable Flow Table Helpers for XDP hook + * + * These are called from the XDP programs. + * Note that it is allowed to break compatibility for these functions since + * the interface they are exposed through to BPF programs is explicitly + * unstable. 
+ */ + +#include +#include +#include +#include +#include +#include +#include + +__diag_push(); +__diag_ignore_all("-Wmissing-prototypes", + "Global functions as their definitions will be in nf_flow_table BTF"); + +static struct flow_offload_tuple_rhash * +bpf_xdp_flow_offload_tuple_lookup(struct net_device *dev, + struct flow_offload_tuple *tuple, + __be16 proto) +{ + struct flow_offload_tuple_rhash *tuplehash; + struct nf_flowtable *flow_table; + struct flow_offload *flow; + + flow_table = nf_flowtable_by_dev(dev); + if (!flow_table) + return ERR_PTR(-ENOENT); + + tuplehash = flow_offload_lookup(flow_table, tuple); + if (!tuplehash) + return ERR_PTR(-ENOENT); + + flow = container_of(tuplehash, struct flow_offload, + tuplehash[tuplehash->tuple.dir]); + flow_offload_refresh(flow_table, flow, false); + + return tuplehash; +} + +__bpf_kfunc struct flow_offload_tuple_rhash * +bpf_xdp_flow_offload_lookup(struct xdp_md *ctx, + struct bpf_fib_lookup *fib_tuple, + u32 fib_tuple__sz) +{ + struct xdp_buff *xdp = (struct xdp_buff *)ctx; + struct flow_offload_tuple tuple = { + .iifidx = fib_tuple->ifindex, + .l3proto = fib_tuple->family, + .l4proto = fib_tuple->l4_protocol, + .src_port = fib_tuple->sport, + .dst_port = fib_tuple->dport, + }; + __be16 proto; + + switch (fib_tuple->family) { + case AF_INET: + tuple.src_v4.s_addr = fib_tuple->ipv4_src; + tuple.dst_v4.s_addr = fib_tuple->ipv4_dst; + proto = htons(ETH_P_IP); + break; + case AF_INET6: + tuple.src_v6 = *(struct in6_addr *)&fib_tuple->ipv6_src; + tuple.dst_v6 = *(struct in6_addr *)&fib_tuple->ipv6_dst; + proto = htons(ETH_P_IPV6); + break; + default: + return ERR_PTR(-EINVAL); + } + + return bpf_xdp_flow_offload_tuple_lookup(xdp->rxq->dev, &tuple, proto); +} + +__diag_pop() + +BTF_KFUNCS_START(nf_ft_kfunc_set) +BTF_ID_FLAGS(func, bpf_xdp_flow_offload_lookup) +BTF_KFUNCS_END(nf_ft_kfunc_set) + +static const struct btf_kfunc_id_set nf_flow_offload_kfunc_set = { + .owner = THIS_MODULE, + .set = &nf_ft_kfunc_set, +}; + +int nf_flow_offload_register_bpf(void) +{ + return register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, + &nf_flow_offload_kfunc_set); +} +EXPORT_SYMBOL_GPL(nf_flow_offload_register_bpf); diff --git a/net/netfilter/nf_flow_table_inet.c b/net/netfilter/nf_flow_table_inet.c index 6eef15648b7b0..b13587238eceb 100644 --- a/net/netfilter/nf_flow_table_inet.c +++ b/net/netfilter/nf_flow_table_inet.c @@ -98,6 +98,8 @@ static int __init nf_flow_inet_module_init(void) nft_register_flowtable_type(&flowtable_ipv6); nft_register_flowtable_type(&flowtable_inet); + nf_flow_offload_register_bpf(); + return 0; } From patchwork Fri May 10 14:01:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 1933860 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256 header.s=k20201202 header.b=JTBqSdMU; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=147.75.80.249; helo=am.mirrors.kernel.org; envelope-from=netfilter-devel+bounces-2145-incoming=patchwork.ozlabs.org@vger.kernel.org; receiver=patchwork.ozlabs.org) Received: from am.mirrors.kernel.org (am.mirrors.kernel.org [147.75.80.249]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 
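For either the sample or the selftest later in this series to find anything, a software flowtable has to be bound to the ingress device from nftables first. A minimal ruleset along these lines should be enough (an assumption for illustration: the table/chain names and the device name eth0 are arbitrary, and no "flags offload" is set, so the hw offload path is not taken and the sw/XDP path stays in charge; older nft versions spell the last rule "flow offload @ft"):

nft -f /dev/stdin <<EOF
table inet filter {
	flowtable ft {
		hook ingress priority 0
		devices = { eth0 }
	}
	chain forward {
		type filter hook forward priority 0; policy accept;
		ip protocol { tcp, udp } flow add @ft
	}
}
EOF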
From: Lorenzo Bianconi
To: bpf@vger.kernel.org
Cc: pablo@netfilter.org, kadlec@netfilter.org, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, netfilter-devel@vger.kernel.org, netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, lorenzo.bianconi@redhat.com, toke@redhat.com, fw@strlen.de, hawk@kernel.org, horms@kernel.org, donhunte@redhat.com
Subject: [RFC bpf-next v1 3/4] samples/bpf: Add bpf sample to offload flowtable traffic to xdp
Date: Fri, 10 May 2024 16:01:26 +0200
Message-ID: <958a7051a2bd76026fc2a227c5f02cbee26c6a2e.1715348200.git.lorenzo@kernel.org>
X-Mailer: git-send-email 2.45.0
MIME-Version: 1.0 Introduce xdp_flowtable_offload bpf sample to offload sw flowtable logic in xdp layer if hw flowtable is not available or does not support a specific kind of traffic. Signed-off-by: Lorenzo Bianconi --- samples/bpf/Makefile | 7 +- samples/bpf/xdp_flowtable_offload.bpf.c | 592 +++++++++++++++++++++++ samples/bpf/xdp_flowtable_offload_user.c | 128 +++++ 3 files changed, 726 insertions(+), 1 deletion(-) create mode 100644 samples/bpf/xdp_flowtable_offload.bpf.c create mode 100644 samples/bpf/xdp_flowtable_offload_user.c diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index 9aa027b144df6..a3d089ca224d5 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -46,6 +46,7 @@ tprogs-y += xdp_fwd tprogs-y += task_fd_query tprogs-y += ibumad tprogs-y += hbm +tprogs-y += xdp_flowtable_offload # Libbpf dependencies LIBBPF_SRC = $(TOOLS_PATH)/lib/bpf @@ -98,6 +99,7 @@ ibumad-objs := ibumad_user.o hbm-objs := hbm.o $(CGROUP_HELPERS) xdp_router_ipv4-objs := xdp_router_ipv4_user.o $(XDP_SAMPLE) +xdp_flowtable_offload-objs := xdp_flowtable_offload_user.o $(XDP_SAMPLE) # Tell kbuild to always build the programs always-y := $(tprogs-y) @@ -306,6 +308,7 @@ $(obj)/$(TRACE_HELPERS) $(obj)/$(CGROUP_HELPERS) $(obj)/$(XDP_SAMPLE): | libbpf_ .PHONY: libbpf_hdrs $(obj)/xdp_router_ipv4_user.o: $(obj)/xdp_router_ipv4.skel.h +$(obj)/xdp_flowtable_offload_user.o: $(obj)/xdp_flowtable_offload.skel.h $(obj)/tracex5.bpf.o: $(obj)/syscall_nrs.h $(obj)/hbm_out_kern.o: $(src)/hbm.h $(src)/hbm_kern.h @@ -361,6 +364,7 @@ endef CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG)) $(obj)/xdp_router_ipv4.bpf.o: $(obj)/xdp_sample.bpf.o +$(obj)/xdp_flowtable_offload.bpf.o: $(obj)/xdp_sample.bpf.o $(obj)/%.bpf.o: $(src)/%.bpf.c $(obj)/vmlinux.h $(src)/xdp_sample.bpf.h $(src)/xdp_sample_shared.h @echo " CLANG-BPF " $@ @@ -370,10 +374,11 @@ $(obj)/%.bpf.o: $(src)/%.bpf.c $(obj)/vmlinux.h $(src)/xdp_sample.bpf.h $(src)/x -I$(LIBBPF_INCLUDE) $(CLANG_SYS_INCLUDES) \ -c $(filter %.bpf.c,$^) -o $@ -LINKED_SKELS := xdp_router_ipv4.skel.h +LINKED_SKELS := xdp_router_ipv4.skel.h xdp_flowtable_offload.skel.h clean-files += $(LINKED_SKELS) xdp_router_ipv4.skel.h-deps := xdp_router_ipv4.bpf.o xdp_sample.bpf.o +xdp_flowtable_offload.skel.h-deps := xdp_flowtable_offload.bpf.o xdp_sample.bpf.o LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.bpf.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps))) diff --git a/samples/bpf/xdp_flowtable_offload.bpf.c b/samples/bpf/xdp_flowtable_offload.bpf.c new file mode 100644 index 0000000000000..728c859feefae --- /dev/null +++ b/samples/bpf/xdp_flowtable_offload.bpf.c @@ -0,0 +1,592 @@ +/* Copyright (c) 2024 Lorenzo Bianconi + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of version 2 of the GNU General Public + * License as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but + * WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * General Public License for more details. 
+ */ + +#include "vmlinux.h" +#include "xdp_sample.bpf.h" +#include "xdp_sample_shared.h" + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) (unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO +#define BIT(x) (1 << (x)) + +#define ETH_P_IP 0x0800 +#define IP_MF 0x2000 /* "More Fragments" */ +#define IP_OFFSET 0x1fff /* "Fragment Offset" */ + +#define IPV6_FLOWINFO_MASK __cpu_to_be32(0x0fffffff) + +#define CSUM_MANGLED_0 ((__sum16)0xffff) + +struct flow_offload_tuple_rhash * +bpf_xdp_flow_offload_lookup(struct xdp_md *, + struct bpf_fib_lookup *, u32) __ksym; + +/* IP checksum utility routines */ + +static __always_inline __u32 csum_add(__u32 csum, __u32 addend) +{ + __u32 res = csum + addend; + + return res + (res < addend); +} + +static __always_inline __u16 csum_fold(__u32 csum) +{ + csum = (csum & 0xffff) + (csum >> 16); + csum = (csum & 0xffff) + (csum >> 16); + return ~csum; +} + +static __always_inline __u16 csum_replace4(__u32 csum, __u32 from, __u32 to) +{ + __u32 tmp = csum_add(~csum, ~from); + + return csum_fold(csum_add(tmp, to)); +} + +static __always_inline __u16 csum_replace16(__u32 csum, __u32 *from, __u32 *to) +{ + __u32 diff[] = { + ~from[0], ~from[1], ~from[2], ~from[3], + to[0], to[1], to[2], to[3], + }; + + csum = bpf_csum_diff(0, 0, diff, sizeof(diff), ~csum); + return csum_fold(csum); +} + +/* IP-TCP header utility routines */ + +static __always_inline void ip_decrease_ttl(struct iphdr *iph) +{ + __u32 check = (__u32)iph->check; + + check += (__u32)bpf_htons(0x0100); + iph->check = (__sum16)(check + (check >= 0xffff)); + iph->ttl--; +} + +static __always_inline bool +xdp_flowtable_offload_check_iphdr(struct iphdr *iph) +{ + /* ip fragmented traffic */ + if (iph->frag_off & bpf_htons(IP_MF | IP_OFFSET)) + return false; + + /* ip options */ + if (iph->ihl * 4 != sizeof(*iph)) + return false; + + if (iph->ttl <= 1) + return false; + + return true; +} + +static __always_inline bool +xdp_flowtable_offload_check_tcp_state(void *ports, void *data_end, u8 proto) +{ + if (proto == IPPROTO_TCP) { + struct tcphdr *tcph = ports; + + if (tcph + 1 > data_end) + return false; + + if (tcph->fin || tcph->rst) + return false; + } + + return true; +} + +/* IP nat utility routines */ + +static __always_inline void +xdp_flowtable_offload_nat_port(struct flow_ports *ports, void *data_end, + u8 proto, __be16 port, __be16 nat_port) +{ + switch (proto) { + case IPPROTO_TCP: { + struct tcphdr *tcph = (struct tcphdr *)ports; + + if (tcph + 1 > data_end) + break; + + tcph->check = csum_replace4((__u32)tcph->check, (__u32)port, + (__u32)nat_port); + break; + } + case IPPROTO_UDP: { + struct udphdr *udph = (struct udphdr *)ports; + + if (udph + 1 > data_end) + break; + + if (!udph->check) + break; + + udph->check = csum_replace4((__u32)udph->check, (__u32)port, + (__u32)nat_port); + if (!udph->check) + udph->check = CSUM_MANGLED_0; + break; + } + default: + break; + } +} + +static __always_inline void +xdp_flowtable_offload_snat_port(const struct flow_offload *flow, + struct flow_ports *ports, void *data_end, + u8 proto, enum flow_offload_tuple_dir dir) +{ + __be16 port, nat_port; + + if (ports + 1 > data_end) + return; + + switch (dir) { + case FLOW_OFFLOAD_DIR_ORIGINAL: + port = ports->source; + bpf_core_read(&nat_port, bpf_core_type_size(nat_port), + &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_port); + ports->source = nat_port; + break; + case FLOW_OFFLOAD_DIR_REPLY: + port = ports->dest; + bpf_core_read(&nat_port, bpf_core_type_size(nat_port), + 
&flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_port); + ports->dest = nat_port; + break; + default: + return; + } + + xdp_flowtable_offload_nat_port(ports, data_end, proto, port, nat_port); +} + +static __always_inline void +xdp_flowtable_offload_dnat_port(const struct flow_offload *flow, + struct flow_ports *ports, void *data_end, + u8 proto, enum flow_offload_tuple_dir dir) +{ + __be16 port, nat_port; + + if (ports + 1 > data_end) + return; + + switch (dir) { + case FLOW_OFFLOAD_DIR_ORIGINAL: + port = ports->dest; + bpf_core_read(&nat_port, bpf_core_type_size(nat_port), + &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_port); + ports->dest = nat_port; + break; + case FLOW_OFFLOAD_DIR_REPLY: + port = ports->source; + bpf_core_read(&nat_port, bpf_core_type_size(nat_port), + &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_port); + ports->source = nat_port; + break; + default: + return; + } + + xdp_flowtable_offload_nat_port(ports, data_end, proto, port, nat_port); +} + +static __always_inline void +xdp_flowtable_offload_ip_l4(struct iphdr *iph, void *data_end, + __be32 addr, __be32 nat_addr) +{ + switch (iph->protocol) { + case IPPROTO_TCP: { + struct tcphdr *tcph = (struct tcphdr *)(iph + 1); + + if (tcph + 1 > data_end) + break; + + tcph->check = csum_replace4((__u32)tcph->check, addr, + nat_addr); + break; + } + case IPPROTO_UDP: { + struct udphdr *udph = (struct udphdr *)(iph + 1); + + if (udph + 1 > data_end) + break; + + if (!udph->check) + break; + + udph->check = csum_replace4((__u32)udph->check, addr, + nat_addr); + if (!udph->check) + udph->check = CSUM_MANGLED_0; + break; + } + default: + break; + } +} + +static __always_inline void +xdp_flowtable_offload_snat_ip(const struct flow_offload *flow, + struct iphdr *iph, void *data_end, + enum flow_offload_tuple_dir dir) +{ + __be32 addr, nat_addr; + + switch (dir) { + case FLOW_OFFLOAD_DIR_ORIGINAL: + addr = iph->saddr; + bpf_core_read(&nat_addr, bpf_core_type_size(nat_addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_v4.s_addr); + iph->saddr = nat_addr; + break; + case FLOW_OFFLOAD_DIR_REPLY: + addr = iph->daddr; + bpf_core_read(&nat_addr, bpf_core_type_size(nat_addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_v4.s_addr); + iph->daddr = nat_addr; + break; + default: + return; + } + iph->check = csum_replace4((__u32)iph->check, addr, nat_addr); + + xdp_flowtable_offload_ip_l4(iph, data_end, addr, nat_addr); +} + +static __always_inline void +xdp_flowtable_offload_get_dnat_ip(const struct flow_offload *flow, + enum flow_offload_tuple_dir dir, + __be32 *addr) +{ + switch (dir) { + case FLOW_OFFLOAD_DIR_ORIGINAL: + bpf_core_read(addr, sizeof(*addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_v4.s_addr); + break; + case FLOW_OFFLOAD_DIR_REPLY: + bpf_core_read(addr, sizeof(*addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_v4.s_addr); + break; + } +} + +static __always_inline void +xdp_flowtable_offload_dnat_ip(const struct flow_offload *flow, + struct iphdr *iph, void *data_end, + enum flow_offload_tuple_dir dir) +{ + __be32 addr, nat_addr; + + switch (dir) { + case FLOW_OFFLOAD_DIR_ORIGINAL: + addr = iph->daddr; + bpf_core_read(&nat_addr, bpf_core_type_size(nat_addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_v4.s_addr); + iph->daddr = nat_addr; + break; + case FLOW_OFFLOAD_DIR_REPLY: + addr = iph->saddr; + bpf_core_read(&nat_addr, bpf_core_type_size(nat_addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_v4.s_addr); + iph->saddr = nat_addr; + break; + 
default: + return; + } + iph->check = csum_replace4((__u32)iph->check, addr, nat_addr); + + xdp_flowtable_offload_ip_l4(iph, data_end, addr, nat_addr); +} + +static __always_inline void +xdp_flowtable_offload_ipv6_l4(struct ipv6hdr *ip6h, void *data_end, + struct in6_addr *addr, struct in6_addr *nat_addr) +{ + switch (ip6h->nexthdr) { + case IPPROTO_TCP: { + struct tcphdr *tcph = (struct tcphdr *)(ip6h + 1); + + if (tcph + 1 > data_end) + break; + + tcph->check = csum_replace16((__u32)tcph->check, + addr->in6_u.u6_addr32, + nat_addr->in6_u.u6_addr32); + break; + } + case IPPROTO_UDP: { + struct udphdr *udph = (struct udphdr *)(ip6h + 1); + + if (udph + 1 > data_end) + break; + + if (!udph->check) + break; + + udph->check = csum_replace16((__u32)udph->check, + addr->in6_u.u6_addr32, + nat_addr->in6_u.u6_addr32); + if (!udph->check) + udph->check = CSUM_MANGLED_0; + break; + } + default: + break; + } +} + +static __always_inline void +xdp_flowtable_offload_snat_ipv6(const struct flow_offload *flow, + struct ipv6hdr *ip6h, void *data_end, + enum flow_offload_tuple_dir dir) +{ + struct in6_addr addr, nat_addr; + + switch (dir) { + case FLOW_OFFLOAD_DIR_ORIGINAL: + addr = ip6h->saddr; + bpf_core_read(&nat_addr, bpf_core_type_size(nat_addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_v6); + ip6h->saddr = nat_addr; + break; + case FLOW_OFFLOAD_DIR_REPLY: + addr = ip6h->daddr; + bpf_core_read(&nat_addr, bpf_core_type_size(nat_addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_v6); + ip6h->daddr = nat_addr; + break; + default: + return; + } + + xdp_flowtable_offload_ipv6_l4(ip6h, data_end, &addr, &nat_addr); +} + +static __always_inline void +xdp_flowtable_offload_get_dnat_ipv6(const struct flow_offload *flow, + enum flow_offload_tuple_dir dir, + struct in6_addr *addr) +{ + switch (dir) { + case FLOW_OFFLOAD_DIR_ORIGINAL: + bpf_core_read(addr, sizeof(*addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_v6); + break; + case FLOW_OFFLOAD_DIR_REPLY: + bpf_core_read(addr, sizeof(*addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_v6); + break; + } +} + +static __always_inline void +xdp_flowtable_offload_dnat_ipv6(const struct flow_offload *flow, + struct ipv6hdr *ip6h, void *data_end, + enum flow_offload_tuple_dir dir) +{ + struct in6_addr addr, nat_addr; + + switch (dir) { + case FLOW_OFFLOAD_DIR_ORIGINAL: + addr = ip6h->daddr; + bpf_core_read(&nat_addr, bpf_core_type_size(nat_addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_v6); + ip6h->daddr = nat_addr; + break; + case FLOW_OFFLOAD_DIR_REPLY: + addr = ip6h->saddr; + bpf_core_read(&nat_addr, bpf_core_type_size(nat_addr), + &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_v6); + ip6h->saddr = nat_addr; + break; + default: + return; + } + + xdp_flowtable_offload_ipv6_l4(ip6h, data_end, &addr, &nat_addr); +} + +static __always_inline void +xdp_flowtable_offload_forward_ip(const struct flow_offload *flow, + void *data, void *data_end, + struct flow_ports *ports, + enum flow_offload_tuple_dir dir, + unsigned long flags) +{ + struct iphdr *iph = data + sizeof(struct ethhdr); + + if (iph + 1 > data_end) + return; + + if (flags & BIT(NF_FLOW_SNAT)) { + xdp_flowtable_offload_snat_port(flow, ports, data_end, + iph->protocol, dir); + xdp_flowtable_offload_snat_ip(flow, iph, data_end, dir); + } + if (flags & BIT(NF_FLOW_DNAT)) { + xdp_flowtable_offload_dnat_port(flow, ports, data_end, + iph->protocol, dir); + xdp_flowtable_offload_dnat_ip(flow, iph, data_end, dir); + } + + ip_decrease_ttl(iph); +} + 
+static __always_inline void +xdp_flowtable_offload_forward_ipv6(const struct flow_offload *flow, + void *data, void *data_end, + struct flow_ports *ports, + enum flow_offload_tuple_dir dir, + unsigned long flags) +{ + struct ipv6hdr *ip6h = data + sizeof(struct ethhdr); + + if (ip6h + 1 > data_end) + return; + + if (flags & BIT(NF_FLOW_SNAT)) { + xdp_flowtable_offload_snat_port(flow, ports, data_end, + ip6h->nexthdr, dir); + xdp_flowtable_offload_snat_ipv6(flow, ip6h, data_end, dir); + } + if (flags & BIT(NF_FLOW_DNAT)) { + xdp_flowtable_offload_dnat_port(flow, ports, data_end, + ip6h->nexthdr, dir); + xdp_flowtable_offload_dnat_ipv6(flow, ip6h, data_end, dir); + } + + ip6h->hop_limit--; +} + +SEC("xdp") +int xdp_flowtable_offload(struct xdp_md *ctx) +{ + void *data_end = (void *)(long)ctx->data_end; + struct flow_offload_tuple_rhash *tuplehash; + struct bpf_fib_lookup tuple = { + .ifindex = ctx->ingress_ifindex, + }; + void *data = (void *)(long)ctx->data; + enum flow_offload_tuple_dir dir; + struct ethhdr *eth = data; + struct flow_offload *flow; + struct flow_ports *ports; + unsigned long flags; + int iifindex; + + if (eth + 1 > data_end) + return XDP_PASS; + + switch (eth->h_proto) { + case bpf_htons(ETH_P_IP): { + struct iphdr *iph = data + sizeof(*eth); + + ports = (struct flow_ports *)(iph + 1); + if (ports + 1 > data_end) + return XDP_PASS; + + /* sanity check on ip header */ + if (!xdp_flowtable_offload_check_iphdr(iph)) + return XDP_PASS; + + if (!xdp_flowtable_offload_check_tcp_state(ports, data_end, + iph->protocol)) + return XDP_PASS; + + tuple.family = AF_INET; + tuple.tos = iph->tos; + tuple.l4_protocol = iph->protocol; + tuple.tot_len = bpf_ntohs(iph->tot_len); + tuple.ipv4_src = iph->saddr; + tuple.ipv4_dst = iph->daddr; + tuple.sport = ports->source; + tuple.dport = ports->dest; + break; + } + case bpf_htons(ETH_P_IPV6): { + struct in6_addr *src = (struct in6_addr *)tuple.ipv6_src; + struct in6_addr *dst = (struct in6_addr *)tuple.ipv6_dst; + struct ipv6hdr *ip6h = data + sizeof(*eth); + + ports = (struct flow_ports *)(ip6h + 1); + if (ports + 1 > data_end) + return XDP_PASS; + + if (ip6h->hop_limit <= 1) + return XDP_PASS; + + if (!xdp_flowtable_offload_check_tcp_state(ports, data_end, + ip6h->nexthdr)) + return XDP_PASS; + + tuple.family = AF_INET6; + tuple.l4_protocol = ip6h->nexthdr; + tuple.tot_len = bpf_ntohs(ip6h->payload_len); + *src = ip6h->saddr; + *dst = ip6h->daddr; + tuple.sport = ports->source; + tuple.dport = ports->dest; + break; + } + default: + return XDP_PASS; + } + + tuplehash = bpf_xdp_flow_offload_lookup(ctx, &tuple, sizeof(tuple)); + if (IS_ERR_VALUE(tuplehash)) + return XDP_PASS; + + dir = tuplehash->tuple.dir; + flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]); + if (bpf_core_read(&flags, sizeof(flags), &flow->flags)) + return XDP_PASS; + + switch (tuplehash->tuple.xmit_type) { + case FLOW_OFFLOAD_XMIT_NEIGH: + /* update the destination address in case of dnatting before + * performing the route lookup + */ + if (tuple.family == AF_INET6) + xdp_flowtable_offload_get_dnat_ipv6(flow, dir, + (struct in6_addr *)&tuple.ipv6_dst); + else + xdp_flowtable_offload_get_dnat_ip(flow, dir, &tuple.ipv4_src); + + if (bpf_fib_lookup(ctx, &tuple, sizeof(tuple), 0)) + return XDP_PASS; + + if (tuple.family == AF_INET6) + xdp_flowtable_offload_forward_ipv6(flow, data, data_end, + ports, dir, flags); + else + xdp_flowtable_offload_forward_ip(flow, data, data_end, + ports, dir, flags); + + __builtin_memcpy(eth->h_dest, tuple.dmac, ETH_ALEN); + 
__builtin_memcpy(eth->h_source, tuple.smac, ETH_ALEN); + iifindex = tuple.ifindex; + break; + case FLOW_OFFLOAD_XMIT_DIRECT: + default: + return XDP_PASS; + } + + return bpf_redirect(iifindex, 0); +} + +char _license[] SEC("license") = "GPL"; diff --git a/samples/bpf/xdp_flowtable_offload_user.c b/samples/bpf/xdp_flowtable_offload_user.c new file mode 100644 index 0000000000000..179b1f34b48fd --- /dev/null +++ b/samples/bpf/xdp_flowtable_offload_user.c @@ -0,0 +1,128 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2024 Lorenzo Bianconi + */ +static const char *__doc__ = +"XDP flowtable integration example\n" +"Usage: xdp_flowtable_offload \n"; + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "bpf_util.h" +#include "xdp_sample_user.h" +#include "xdp_flowtable_offload.skel.h" + +static int mask = SAMPLE_RX_CNT | SAMPLE_EXCEPTION_CNT; + +DEFINE_SAMPLE_INIT(xdp_flowtable_offload); + +static const struct option long_options[] = { + { "help", no_argument, NULL, 'h' }, + { "generic", no_argument, NULL, 'g' }, + {} +}; + +int main(int argc, char **argv) +{ + struct xdp_flowtable_offload *skel; + int ret = EXIT_FAIL_OPTION; + char ifname[IF_NAMESIZE]; + bool generic = false; + int ifindex; + int opt; + + while ((opt = getopt_long(argc, argv, "hg", + long_options, NULL)) != -1) { + switch (opt) { + case 'g': + generic = true; + break; + case 'h': + default: + sample_usage(argv, long_options, __doc__, mask, false); + return ret; + } + } + + if (argc <= optind) { + sample_usage(argv, long_options, __doc__, mask, true); + goto end; + } + + ifindex = if_nametoindex(argv[optind]); + if (!ifindex) + ifindex = strtoul(argv[optind], NULL, 0); + + if (!ifindex) { + fprintf(stderr, "Bad interface index or name\n"); + sample_usage(argv, long_options, __doc__, mask, true); + goto end; + } + + skel = xdp_flowtable_offload__open(); + if (!skel) { + fprintf(stderr, "Failed to xdp_flowtable_offload__open: %s\n", + strerror(errno)); + ret = EXIT_FAIL_BPF; + goto end; + } + + ret = sample_init_pre_load(skel); + if (ret < 0) { + fprintf(stderr, "Failed to sample_init_pre_load: %s\n", strerror(-ret)); + ret = EXIT_FAIL_BPF; + goto end_destroy; + } + + ret = xdp_flowtable_offload__load(skel); + if (ret < 0) { + fprintf(stderr, "Failed to xdp_flowtable_offload__load: %s\n", + strerror(errno)); + ret = EXIT_FAIL_BPF; + goto end_destroy; + } + + ret = sample_init(skel, mask); + if (ret < 0) { + fprintf(stderr, "Failed to initialize sample: %s\n", strerror(-ret)); + ret = EXIT_FAIL; + goto end_destroy; + } + + if (sample_install_xdp(skel->progs.xdp_flowtable_offload, + ifindex, generic, false) < 0) { + ret = EXIT_FAIL_XDP; + goto end_destroy; + } + + ret = EXIT_FAIL; + if (!if_indextoname(ifindex, ifname)) { + fprintf(stderr, "Failed to if_indextoname for %d: %s\n", ifindex, + strerror(errno)); + goto end_destroy; + } + + ret = sample_run(2, NULL, NULL); + if (ret < 0) { + fprintf(stderr, "Failed during sample run: %s\n", strerror(-ret)); + ret = EXIT_FAIL; + goto end_destroy; + } + ret = EXIT_OK; +end_destroy: + xdp_flowtable_offload__destroy(skel); +end: + sample_exit(ret); +} From patchwork Fri May 10 14:01:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 1933861 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org 
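Usage sketch for the sample above, once a software flowtable is bound to the interface as in the earlier nft example (the interface name eth0 is an assumption; the -g option comes from the sample's getopt handling and forces generic/skb-mode XDP instead of native mode):

./xdp_flowtable_offload eth0
./xdp_flowtable_offload -g eth0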
From: Lorenzo Bianconi
To: bpf@vger.kernel.org
Cc: pablo@netfilter.org, kadlec@netfilter.org, davem@davemloft.net,
edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, netfilter-devel@vger.kernel.org, netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, lorenzo.bianconi@redhat.com, toke@redhat.com, fw@strlen.de, hawk@kernel.org, horms@kernel.org, donhunte@redhat.com Subject: [RFC bpf-next v1 4/4] selftests/bpf: Add selftest for bpf_xdp_flow_offload_lookup kfunc Date: Fri, 10 May 2024 16:01:27 +0200 Message-ID: <5bcb91023522707a079475c74befdde5572b6f34.1715348200.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.45.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netfilter-devel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Introduce e2e selftest for bpf_xdp_flow_offload_lookup kfunc through xdp_flowtable utility. Signed-off-by: Lorenzo Bianconi --- tools/testing/selftests/bpf/Makefile | 10 +- tools/testing/selftests/bpf/config | 4 + .../selftests/bpf/progs/xdp_flowtable.c | 142 ++++++++++++++++++ .../selftests/bpf/test_xdp_flowtable.sh | 112 ++++++++++++++ tools/testing/selftests/bpf/xdp_flowtable.c | 142 ++++++++++++++++++ 5 files changed, 408 insertions(+), 2 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/xdp_flowtable.c create mode 100755 tools/testing/selftests/bpf/test_xdp_flowtable.sh create mode 100644 tools/testing/selftests/bpf/xdp_flowtable.c diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 135023a357b3b..a5d4cd7d92c0a 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -133,7 +133,8 @@ TEST_PROGS := test_kmod.sh \ test_bpftool_metadata.sh \ test_doc_build.sh \ test_xsk.sh \ - test_xdp_features.sh + test_xdp_features.sh \ + test_xdp_flowtable.sh TEST_PROGS_EXTENDED := with_addr.sh \ with_tunnels.sh ima_setup.sh verify_sig_setup.sh \ @@ -144,7 +145,7 @@ TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \ flow_dissector_load test_flow_dissector test_tcp_check_syncookie_user \ test_lirc_mode2_user xdping test_cpp runqslower bench bpf_testmod.ko \ xskxceiver xdp_redirect_multi xdp_synproxy veristat xdp_hw_metadata \ - xdp_features bpf_test_no_cfi.ko + xdp_features bpf_test_no_cfi.ko xdp_flowtable TEST_GEN_FILES += liburandom_read.so urandom_read sign-file uprobe_multi @@ -477,6 +478,7 @@ test_usdt.skel.h-deps := test_usdt.bpf.o test_usdt_multispec.bpf.o xsk_xdp_progs.skel.h-deps := xsk_xdp_progs.bpf.o xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o xdp_features.skel.h-deps := xdp_features.bpf.o +xdp_flowtable.skel.h-deps := xdp_flowtable.bpf.o LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps))) @@ -711,6 +713,10 @@ $(OUTPUT)/xdp_features: xdp_features.c $(OUTPUT)/network_helpers.o $(OUTPUT)/xdp $(call msg,BINARY,,$@) $(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@ +$(OUTPUT)/xdp_flowtable: xdp_flowtable.c $(OUTPUT)/xdp_flowtable.skel.h | $(OUTPUT) + $(call msg,BINARY,,$@) + $(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@ + # Make sure we are able to include and link libbpf against c++. 
$(OUTPUT)/test_cpp: test_cpp.cpp $(OUTPUT)/test_core_extern.skel.h $(BPFOBJ) $(call msg,CXX,,$@) diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config index eeabd798bc3ae..1a9aea01145f7 100644 --- a/tools/testing/selftests/bpf/config +++ b/tools/testing/selftests/bpf/config @@ -82,6 +82,10 @@ CONFIG_NF_CONNTRACK=y CONFIG_NF_CONNTRACK_MARK=y CONFIG_NF_DEFRAG_IPV4=y CONFIG_NF_DEFRAG_IPV6=y +CONFIG_NF_TABLES=y +CONFIG_NETFILTER_INGRESS=y +CONFIG_NF_FLOW_TABLE=y +CONFIG_NF_FLOW_TABLE_INET=y CONFIG_NF_NAT=y CONFIG_RC_CORE=y CONFIG_SECURITY=y diff --git a/tools/testing/selftests/bpf/progs/xdp_flowtable.c b/tools/testing/selftests/bpf/progs/xdp_flowtable.c new file mode 100644 index 0000000000000..cf725699e21d4 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/xdp_flowtable.c @@ -0,0 +1,142 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) (unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO + +#define ETH_P_IP 0x0800 +#define ETH_P_IPV6 0x86dd +#define IP_MF 0x2000 /* "More Fragments" */ +#define IP_OFFSET 0x1fff /* "Fragment Offset" */ +#define AF_INET 2 +#define AF_INET6 10 + +struct flow_offload_tuple_rhash * +bpf_xdp_flow_offload_lookup(struct xdp_md *, + struct bpf_fib_lookup *, __u32) __ksym; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, __u32); + __type(value, __u32); + __uint(max_entries, 1); +} stats SEC(".maps"); + +static __always_inline bool +xdp_flowtable_offload_check_iphdr(struct iphdr *iph) +{ + /* ip fragmented traffic */ + if (iph->frag_off & bpf_htons(IP_MF | IP_OFFSET)) + return false; + + /* ip options */ + if (iph->ihl * 4 != sizeof(*iph)) + return false; + + if (iph->ttl <= 1) + return false; + + return true; +} + +static __always_inline bool +xdp_flowtable_offload_check_tcp_state(void *ports, void *data_end, u8 proto) +{ + if (proto == IPPROTO_TCP) { + struct tcphdr *tcph = ports; + + if (tcph + 1 > data_end) + return false; + + if (tcph->fin || tcph->rst) + return false; + } + + return true; +} + +SEC("xdp.frags") +int xdp_flowtable_do_lookup(struct xdp_md *ctx) +{ + void *data_end = (void *)(long)ctx->data_end; + struct flow_offload_tuple_rhash *tuplehash; + struct bpf_fib_lookup tuple = { + .ifindex = ctx->ingress_ifindex, + }; + void *data = (void *)(long)ctx->data; + struct ethhdr *eth = data; + struct flow_ports *ports; + __u32 *val, key = 0; + + if (eth + 1 > data_end) + return XDP_DROP; + + switch (eth->h_proto) { + case bpf_htons(ETH_P_IP): { + struct iphdr *iph = data + sizeof(*eth); + + ports = (struct flow_ports *)(iph + 1); + if (ports + 1 > data_end) + return XDP_PASS; + + /* sanity check on ip header */ + if (!xdp_flowtable_offload_check_iphdr(iph)) + return XDP_PASS; + + if (!xdp_flowtable_offload_check_tcp_state(ports, data_end, + iph->protocol)) + return XDP_PASS; + + tuple.family = AF_INET; + tuple.tos = iph->tos; + tuple.l4_protocol = iph->protocol; + tuple.tot_len = bpf_ntohs(iph->tot_len); + tuple.ipv4_src = iph->saddr; + tuple.ipv4_dst = iph->daddr; + tuple.sport = ports->source; + tuple.dport = ports->dest; + break; + } + case bpf_htons(ETH_P_IPV6): { + struct in6_addr *src = (struct in6_addr *)tuple.ipv6_src; + struct in6_addr *dst = (struct in6_addr *)tuple.ipv6_dst; + struct ipv6hdr *ip6h = data + sizeof(*eth); + + ports = (struct flow_ports *)(ip6h + 1); + if (ports + 1 > data_end) + return XDP_PASS; + + if (ip6h->hop_limit <= 1) + return XDP_PASS; + + if (!xdp_flowtable_offload_check_tcp_state(ports, data_end, + 
ip6h->nexthdr)) + return XDP_PASS; + + tuple.family = AF_INET6; + tuple.l4_protocol = ip6h->nexthdr; + tuple.tot_len = bpf_ntohs(ip6h->payload_len); + *src = ip6h->saddr; + *dst = ip6h->daddr; + tuple.sport = ports->source; + tuple.dport = ports->dest; + break; + } + default: + return XDP_PASS; + } + + tuplehash = bpf_xdp_flow_offload_lookup(ctx, &tuple, sizeof(tuple)); + if (IS_ERR_VALUE(tuplehash)) + return XDP_PASS; + + val = bpf_map_lookup_elem(&stats, &key); + if (val) + __sync_add_and_fetch(val, 1); + + return XDP_PASS; +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/test_xdp_flowtable.sh b/tools/testing/selftests/bpf/test_xdp_flowtable.sh new file mode 100755 index 0000000000000..1a8a40aebbdf1 --- /dev/null +++ b/tools/testing/selftests/bpf/test_xdp_flowtable.sh @@ -0,0 +1,112 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +readonly NS0="ns0-$(mktemp -u XXXXXX)" +readonly NS1="ns1-$(mktemp -u XXXXXX)" +readonly infile="$(mktemp)" +readonly outfile="$(mktemp)" + +xdp_flowtable_pid="" +ret=1 + +setup_flowtable() { +nft -f /dev/stdin </dev/null 2>/dev/null +} + +test_xdp_flowtable_lookup() { + ## Run IPv4 test + ip netns exec ${NS1} nc -4 --no-shutdown -l 8084 > ${outfile} & + wait_for_nc_server 8084 + ip netns exec ${NS0} timeout 2 nc -4 192.168.1.2 8084 < ${infile} + + ## Run IPv6 test + ip netns exec ${NS1} nc -6 --no-shutdown -l 8086 > ${outfile} & + wait_for_nc_server 8086 + ip netns exec ${NS0} timeout 2 nc -6 2001:db8:1::2 8086 < ${infile} + + wait $xdp_flowtable_pid && ret=0 +} + +trap cleanup 0 2 3 6 9 +setup + +test_xdp_flowtable_lookup + +exit $ret diff --git a/tools/testing/selftests/bpf/xdp_flowtable.c b/tools/testing/selftests/bpf/xdp_flowtable.c new file mode 100644 index 0000000000000..dea24deda7359 --- /dev/null +++ b/tools/testing/selftests/bpf/xdp_flowtable.c @@ -0,0 +1,142 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include +#include +#include +#include +#include + +#include "xdp_flowtable.skel.h" + +#define MAX_ITERATION 10 + +static volatile bool exiting, verbosity; +static char ifname[IF_NAMESIZE]; +static int ifindex = -ENODEV; +const char *argp_program_version = "xdp-flowtable 0.0"; +const char argp_program_doc[] = +"XDP flowtable application.\n" +"\n" +"USAGE: ./xdp-flowtable [-v] \n"; + +static const struct argp_option opts[] = { + { "verbose", 'v', NULL, 0, "Verbose debug output" }, + {}, +}; + +static void sig_handler(int sig) +{ + exiting = true; +} + +static int libbpf_print_fn(enum libbpf_print_level level, + const char *format, va_list args) +{ + if (level == LIBBPF_DEBUG && !verbosity) + return 0; + return vfprintf(stderr, format, args); +} + +static error_t parse_arg(int key, char *arg, struct argp_state *state) +{ + switch (key) { + case 'v': + verbosity = true; + break; + case ARGP_KEY_ARG: + errno = 0; + if (strlen(arg) >= IF_NAMESIZE) { + fprintf(stderr, "Invalid device name: %s\n", arg); + argp_usage(state); + return ARGP_ERR_UNKNOWN; + } + + ifindex = if_nametoindex(arg); + if (!ifindex) + ifindex = strtoul(arg, NULL, 0); + if (!ifindex || !if_indextoname(ifindex, ifname)) { + fprintf(stderr, + "Bad interface index or name (%d): %s\n", + errno, strerror(errno)); + argp_usage(state); + return ARGP_ERR_UNKNOWN; + } + break; + default: + return ARGP_ERR_UNKNOWN; + } + + return 0; +} + +static const struct argp argp = { + .options = opts, + .parser = parse_arg, + .doc = argp_program_doc, +}; + +int main(int argc, char **argv) +{ + unsigned int count = 0, key = 0; + struct 
xdp_flowtable *skel; + int i, err; + + libbpf_set_strict_mode(LIBBPF_STRICT_ALL); + libbpf_set_print(libbpf_print_fn); + + signal(SIGINT, sig_handler); + signal(SIGTERM, sig_handler); + + /* Parse command line arguments */ + err = argp_parse(&argp, argc, argv, 0, NULL, NULL); + if (err) + return err; + + /* Load and verify BPF application */ + skel = xdp_flowtable__open(); + if (!skel) { + fprintf(stderr, "Failed to open and load BPF skeleton\n"); + return -EINVAL; + } + + /* Load & verify BPF programs */ + err = xdp_flowtable__load(skel); + if (err) { + fprintf(stderr, "Failed to load and verify BPF skeleton\n"); + goto cleanup; + } + + /* Attach the XDP program */ + err = xdp_flowtable__attach(skel); + if (err) { + fprintf(stderr, "Failed to attach BPF skeleton\n"); + goto cleanup; + } + + err = bpf_xdp_attach(ifindex, + bpf_program__fd(skel->progs.xdp_flowtable_do_lookup), + XDP_FLAGS_DRV_MODE, NULL); + if (err) { + fprintf(stderr, "Failed attaching XDP program to device %s\n", + ifname); + goto cleanup; + } + + /* Collect stats */ + for (i = 0; i < MAX_ITERATION && !exiting; i++) + sleep(1); + + /* Check results */ + err = bpf_map__lookup_elem(skel->maps.stats, &key, sizeof(key), + &count, sizeof(count), 0); + if (!err && !count) + err = -EINVAL; + + bpf_xdp_detach(ifindex, XDP_FLAGS_DRV_MODE, NULL); +cleanup: + xdp_flowtable__destroy(skel); + + return err; +}
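A sketch of how the end-to-end test is expected to be driven (assumptions: the bpf selftests have already been built, and the kernel provides the CONFIG_NF_FLOW_TABLE* options added to the selftests config above):

cd tools/testing/selftests/bpf
./test_xdp_flowtable.sh
echo $?		# 0 on success, non-zero if no flowtable entry was ever hit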