Patch Detail
GET: Show a patch.
PATCH: Update a patch.
PUT: Update a patch.
GET /api/patches/925068/?format=api
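As a minimal sketch of issuing the request above from a script (assuming the standard Patchwork REST API behavior, where `?format=json` returns a plain JSON body instead of this browsable page):

```python
import json
from urllib.request import urlopen

# Endpoint shown above; swapping format=api for format=json requests
# a machine-readable body (a DRF/Patchwork convention).
url = "http://patchwork.ozlabs.org/api/patches/925068/?format=json"

def fetch_patch(api_url):
    """Fetch a single patch object and return it as a dict."""
    with urlopen(api_url) as resp:
        return json.load(resp)

# Usage (network access required):
# patch = fetch_patch(url)
# print(patch["name"], patch["state"])
```

The fetch call is left commented out since it needs network access; the response it returns is exactly the JSON object shown below.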
{ "id": 925068, "url": "http://patchwork.ozlabs.org/api/patches/925068/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20180604120601.18123-8-bjorn.topel@gmail.com/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20180604120601.18123-8-bjorn.topel@gmail.com>", "list_archive_url": null, "date": "2018-06-04T12:05:57", "name": "[bpf-next,07/11] xsk: wire upp Tx zero-copy functions", "commit_ref": null, "pull_url": null, "state": "awaiting-upstream", "archived": false, "hash": "ffbc8b5069d8a459ed117f2d582b4e99bdc34317", "submitter": { "id": 70569, "url": "http://patchwork.ozlabs.org/api/people/70569/?format=api", "name": "Björn Töpel", "email": "bjorn.topel@gmail.com" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20180604120601.18123-8-bjorn.topel@gmail.com/mbox/", "series": [ { "id": 48416, "url": "http://patchwork.ozlabs.org/api/series/48416/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=48416", "date": "2018-06-04T12:05:50", "name": "AF_XDP: introducing zero-copy support", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/48416/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/925068/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/925068/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<intel-wired-lan-bounces@osuosl.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Authentication-Results": [ 
"ozlabs.org;\n\tspf=pass (mailfrom) smtp.mailfrom=osuosl.org\n\t(client-ip=140.211.166.133; helo=hemlock.osuosl.org;\n\tenvelope-from=intel-wired-lan-bounces@osuosl.org;\n\treceiver=<UNKNOWN>)", "ozlabs.org;\n\tdmarc=fail (p=none dis=none) header.from=gmail.com" ], "Received": [ "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 40zyry26jtz9s0W\n\tfor <incoming@patchwork.ozlabs.org>;\n\tTue, 5 Jun 2018 01:04:34 +1000 (AEST)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id 8C0CE89C6C;\n\tMon, 4 Jun 2018 15:04:32 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id Qc-t9Nr3FpyS; Mon, 4 Jun 2018 15:04:29 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id 05A6589D5B;\n\tMon, 4 Jun 2018 15:04:29 +0000 (UTC)", "from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137])\n\tby ash.osuosl.org (Postfix) with ESMTP id B88631BFFD0\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tMon, 4 Jun 2018 12:06:51 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby fraxinus.osuosl.org (Postfix) with ESMTP id B50F987517\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tMon, 4 Jun 2018 12:06:51 +0000 (UTC)", "from fraxinus.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id 5DTM-FPgtvYL for <intel-wired-lan@lists.osuosl.org>;\n\tMon, 4 Jun 2018 12:06:50 +0000 (UTC)", "from mga06.intel.com (mga06.intel.com [134.134.136.31])\n\tby fraxinus.osuosl.org (Postfix) with ESMTPS id BCCA2874F1\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tMon, 4 Jun 2018 12:06:50 +0000 (UTC)", "from fmsmga004.fm.intel.com ([10.253.24.48])\n\tby orsmga104.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t04 
Jun 2018 05:06:50 -0700", "from btopel-mobl1.isw.intel.com (HELO\n\tbtopel-mobl1.hil-pdxphhh.sea.wayport.net) ([10.103.211.148])\n\tby fmsmga004.fm.intel.com with ESMTP; 04 Jun 2018 05:06:45 -0700" ], "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6", "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos;i=\"5.49,476,1520924400\"; d=\"scan'208\";a=\"60197213\"", "From": "=?utf-8?b?QmrDtnJuIFTDtnBlbA==?= <bjorn.topel@gmail.com>", "To": "bjorn.topel@gmail.com, magnus.karlsson@intel.com,\n\tmagnus.karlsson@gmail.com, alexander.h.duyck@intel.com,\n\talexander.duyck@gmail.com, ast@fb.com, brouer@redhat.com,\n\tdaniel@iogearbox.net, netdev@vger.kernel.org, mykyta.iziumtsev@linaro.org", "Date": "Mon, 4 Jun 2018 14:05:57 +0200", "Message-Id": "<20180604120601.18123-8-bjorn.topel@gmail.com>", "X-Mailer": "git-send-email 2.14.1", "In-Reply-To": "<20180604120601.18123-1-bjorn.topel@gmail.com>", "References": "<20180604120601.18123-1-bjorn.topel@gmail.com>", "X-Mailman-Approved-At": "Mon, 04 Jun 2018 15:04:25 +0000", "Subject": "[Intel-wired-lan] [PATCH bpf-next 07/11] xsk: wire upp Tx zero-copy\n\tfunctions", "X-BeenThere": "intel-wired-lan@osuosl.org", "X-Mailman-Version": "2.1.24", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.osuosl.org>", "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>", "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>", "Cc": 
"francois.ozog@linaro.org, willemdebruijn.kernel@gmail.com, mst@redhat.com,\n\tilias.apalodimas@linaro.org, michael.lundkvist@ericsson.com,\n\tbrian.brooks@linaro.org, intel-wired-lan@lists.osuosl.org,\n\tqi.z.zhang@intel.com, michael.chan@broadcom.com, andy@greyhouse.net", "MIME-Version": "1.0", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "intel-wired-lan-bounces@osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>" }, "content": "From: Magnus Karlsson <magnus.karlsson@intel.com>\n\nHere we add the functionality required to support zero-copy Tx, and\nalso exposes various zero-copy related functions for the netdevs.\n\nSigned-off-by: Magnus Karlsson <magnus.karlsson@intel.com>\n---\n include/net/xdp_sock.h | 9 +++++++\n net/xdp/xdp_umem.c | 29 +++++++++++++++++++--\n net/xdp/xdp_umem.h | 8 +++++-\n net/xdp/xsk.c | 70 +++++++++++++++++++++++++++++++++++++++++++++-----\n net/xdp/xsk_queue.h | 32 ++++++++++++++++++++++-\n 5 files changed, 137 insertions(+), 11 deletions(-)", "diff": "diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h\nindex d93d3aac3fc9..9fe472f2ac95 100644\n--- a/include/net/xdp_sock.h\n+++ b/include/net/xdp_sock.h\n@@ -9,6 +9,7 @@\n #include <linux/workqueue.h>\n #include <linux/if_xdp.h>\n #include <linux/mutex.h>\n+#include <linux/spinlock.h>\n #include <linux/mm.h>\n #include <net/sock.h>\n \n@@ -42,6 +43,8 @@ struct xdp_umem {\n \tstruct net_device *dev;\n \tu16 queue_id;\n \tbool zc;\n+\tspinlock_t xsk_list_lock;\n+\tstruct list_head xsk_list;\n };\n \n struct xdp_sock {\n@@ -53,6 +56,8 @@ struct xdp_sock {\n \tstruct list_head flush_node;\n \tu16 queue_id;\n \tstruct xsk_queue *tx ____cacheline_aligned_in_smp;\n+\tstruct list_head list;\n+\tbool zc;\n \t/* Protects multiple processes in the control path */\n \tstruct mutex mutex;\n \tu64 rx_dropped;\n@@ -64,8 +69,12 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp);\n int 
xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp);\n void xsk_flush(struct xdp_sock *xs);\n bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs);\n+/* Used from netdev driver */\n u64 *xsk_umem_peek_addr(struct xdp_umem *umem, u64 *addr);\n void xsk_umem_discard_addr(struct xdp_umem *umem);\n+void xsk_umem_complete_tx(struct xdp_umem *umem, u32 nb_entries);\n+bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma, u32 *len);\n+void xsk_umem_consume_tx_done(struct xdp_umem *umem);\n #else\n static inline int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)\n {\ndiff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c\nindex f729d79b8d91..7eb4948a38d2 100644\n--- a/net/xdp/xdp_umem.c\n+++ b/net/xdp/xdp_umem.c\n@@ -17,6 +17,29 @@\n \n #define XDP_UMEM_MIN_CHUNK_SIZE 2048\n \n+void xdp_add_sk_umem(struct xdp_umem *umem, struct xdp_sock *xs)\n+{\n+\tunsigned long flags;\n+\n+\tspin_lock_irqsave(&umem->xsk_list_lock, flags);\n+\tlist_add_rcu(&xs->list, &umem->xsk_list);\n+\tspin_unlock_irqrestore(&umem->xsk_list_lock, flags);\n+}\n+\n+void xdp_del_sk_umem(struct xdp_umem *umem, struct xdp_sock *xs)\n+{\n+\tunsigned long flags;\n+\n+\tif (xs->dev) {\n+\t\tspin_lock_irqsave(&umem->xsk_list_lock, flags);\n+\t\tlist_del_rcu(&xs->list);\n+\t\tspin_unlock_irqrestore(&umem->xsk_list_lock, flags);\n+\n+\t\tif (umem->zc)\n+\t\t\tsynchronize_net();\n+\t}\n+}\n+\n int xdp_umem_assign_dev(struct xdp_umem *umem, struct net_device *dev,\n \t\t\tu32 queue_id, u16 flags)\n {\n@@ -35,7 +58,7 @@ int xdp_umem_assign_dev(struct xdp_umem *umem, struct net_device *dev,\n \n \tdev_hold(dev);\n \n-\tif (dev->netdev_ops->ndo_bpf) {\n+\tif (dev->netdev_ops->ndo_bpf && dev->netdev_ops->ndo_xsk_async_xmit) {\n \t\tbpf.command = XDP_QUERY_XSK_UMEM;\n \n \t\trtnl_lock();\n@@ -70,7 +93,7 @@ int xdp_umem_assign_dev(struct xdp_umem *umem, struct net_device *dev,\n \treturn force_zc ? 
-ENOTSUPP : 0; /* fail or fallback */\n }\n \n-void xdp_umem_clear_dev(struct xdp_umem *umem)\n+static void xdp_umem_clear_dev(struct xdp_umem *umem)\n {\n \tstruct netdev_bpf bpf;\n \tint err;\n@@ -283,6 +306,8 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)\n \tumem->npgs = size / PAGE_SIZE;\n \tumem->pgs = NULL;\n \tumem->user = NULL;\n+\tINIT_LIST_HEAD(&umem->xsk_list);\n+\tspin_lock_init(&umem->xsk_list_lock);\n \n \trefcount_set(&umem->users, 1);\n \ndiff --git a/net/xdp/xdp_umem.h b/net/xdp/xdp_umem.h\nindex 674508a32a4d..f11560334f88 100644\n--- a/net/xdp/xdp_umem.h\n+++ b/net/xdp/xdp_umem.h\n@@ -13,12 +13,18 @@ static inline char *xdp_umem_get_data(struct xdp_umem *umem, u64 addr)\n \treturn umem->pages[addr >> PAGE_SHIFT].addr + (addr & (PAGE_SIZE - 1));\n }\n \n+static inline dma_addr_t xdp_umem_get_dma(struct xdp_umem *umem, u64 addr)\n+{\n+\treturn umem->pages[addr >> PAGE_SHIFT].dma + (addr & (PAGE_SIZE - 1));\n+}\n+\n int xdp_umem_assign_dev(struct xdp_umem *umem, struct net_device *dev,\n \t\t\tu32 queue_id, u16 flags);\n-void xdp_umem_clear_dev(struct xdp_umem *umem);\n bool xdp_umem_validate_queues(struct xdp_umem *umem);\n void xdp_get_umem(struct xdp_umem *umem);\n void xdp_put_umem(struct xdp_umem *umem);\n+void xdp_add_sk_umem(struct xdp_umem *umem, struct xdp_sock *xs);\n+void xdp_del_sk_umem(struct xdp_umem *umem, struct xdp_sock *xs);\n struct xdp_umem *xdp_umem_create(struct xdp_umem_reg *mr);\n \n #endif /* XDP_UMEM_H_ */\ndiff --git a/net/xdp/xsk.c b/net/xdp/xsk.c\nindex ab64bd8260ea..ddca4bf1cfc8 100644\n--- a/net/xdp/xsk.c\n+++ b/net/xdp/xsk.c\n@@ -21,6 +21,7 @@\n #include <linux/uaccess.h>\n #include <linux/net.h>\n #include <linux/netdevice.h>\n+#include <linux/rculist.h>\n #include <net/xdp_sock.h>\n #include <net/xdp.h>\n \n@@ -138,6 +139,59 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)\n \treturn err;\n }\n \n+void xsk_umem_complete_tx(struct xdp_umem *umem, u32 
nb_entries)\n+{\n+\txskq_produce_flush_addr_n(umem->cq, nb_entries);\n+}\n+EXPORT_SYMBOL(xsk_umem_complete_tx);\n+\n+void xsk_umem_consume_tx_done(struct xdp_umem *umem)\n+{\n+\tstruct xdp_sock *xs;\n+\n+\trcu_read_lock();\n+\tlist_for_each_entry_rcu(xs, &umem->xsk_list, list) {\n+\t\txs->sk.sk_write_space(&xs->sk);\n+\t}\n+\trcu_read_unlock();\n+}\n+EXPORT_SYMBOL(xsk_umem_consume_tx_done);\n+\n+bool xsk_umem_consume_tx(struct xdp_umem *umem, dma_addr_t *dma, u32 *len)\n+{\n+\tstruct xdp_desc desc;\n+\tstruct xdp_sock *xs;\n+\n+\trcu_read_lock();\n+\tlist_for_each_entry_rcu(xs, &umem->xsk_list, list) {\n+\t\tif (!xskq_peek_desc(xs->tx, &desc))\n+\t\t\tcontinue;\n+\n+\t\tif (xskq_produce_addr_lazy(umem->cq, desc.addr))\n+\t\t\tgoto out;\n+\n+\t\t*dma = xdp_umem_get_dma(umem, desc.addr);\n+\t\t*len = desc.len;\n+\n+\t\txskq_discard_desc(xs->tx);\n+\t\trcu_read_unlock();\n+\t\treturn true;\n+\t}\n+\n+out:\n+\trcu_read_unlock();\n+\treturn false;\n+}\n+EXPORT_SYMBOL(xsk_umem_consume_tx);\n+\n+static int xsk_zc_xmit(struct sock *sk)\n+{\n+\tstruct xdp_sock *xs = xdp_sk(sk);\n+\tstruct net_device *dev = xs->dev;\n+\n+\treturn dev->netdev_ops->ndo_xsk_async_xmit(dev, xs->queue_id);\n+}\n+\n static void xsk_destruct_skb(struct sk_buff *skb)\n {\n \tu64 addr = (u64)(long)skb_shinfo(skb)->destructor_arg;\n@@ -151,7 +205,6 @@ static void xsk_destruct_skb(struct sk_buff *skb)\n static int xsk_generic_xmit(struct sock *sk, struct msghdr *m,\n \t\t\t size_t total_len)\n {\n-\tbool need_wait = !(m->msg_flags & MSG_DONTWAIT);\n \tu32 max_batch = TX_BATCH_SIZE;\n \tstruct xdp_sock *xs = xdp_sk(sk);\n \tbool sent_frame = false;\n@@ -161,8 +214,6 @@ static int xsk_generic_xmit(struct sock *sk, struct msghdr *m,\n \n \tif (unlikely(!xs->tx))\n \t\treturn -ENOBUFS;\n-\tif (need_wait)\n-\t\treturn -EOPNOTSUPP;\n \n \tmutex_lock(&xs->mutex);\n \n@@ -192,7 +243,7 @@ static int xsk_generic_xmit(struct sock *sk, struct msghdr *m,\n \t\t\tgoto out;\n \t\t}\n \n-\t\tskb = 
sock_alloc_send_skb(sk, len, !need_wait, &err);\n+\t\tskb = sock_alloc_send_skb(sk, len, 1, &err);\n \t\tif (unlikely(!skb)) {\n \t\t\terr = -EAGAIN;\n \t\t\tgoto out;\n@@ -235,6 +286,7 @@ static int xsk_generic_xmit(struct sock *sk, struct msghdr *m,\n \n static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)\n {\n+\tbool need_wait = !(m->msg_flags & MSG_DONTWAIT);\n \tstruct sock *sk = sock->sk;\n \tstruct xdp_sock *xs = xdp_sk(sk);\n \n@@ -242,8 +294,10 @@ static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)\n \t\treturn -ENXIO;\n \tif (unlikely(!(xs->dev->flags & IFF_UP)))\n \t\treturn -ENETDOWN;\n+\tif (need_wait)\n+\t\treturn -EOPNOTSUPP;\n \n-\treturn xsk_generic_xmit(sk, m, total_len);\n+\treturn (xs->zc) ? xsk_zc_xmit(sk) : xsk_generic_xmit(sk, m, total_len);\n }\n \n static unsigned int xsk_poll(struct file *file, struct socket *sock,\n@@ -419,10 +473,11 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)\n \t}\n \n \txs->dev = dev;\n-\txs->queue_id = sxdp->sxdp_queue_id;\n-\n+\txs->zc = xs->umem->zc;\n+\txs->queue_id = qid;\n \txskq_set_umem(xs->rx, &xs->umem->props);\n \txskq_set_umem(xs->tx, &xs->umem->props);\n+\txdp_add_sk_umem(xs->umem, xs);\n \n out_unlock:\n \tif (err)\n@@ -660,6 +715,7 @@ static void xsk_destruct(struct sock *sk)\n \n \txskq_destroy(xs->rx);\n \txskq_destroy(xs->tx);\n+\txdp_del_sk_umem(xs->umem, xs);\n \txdp_put_umem(xs->umem);\n \n \tsk_refcnt_debug_dec(sk);\ndiff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h\nindex 5246ed420a16..ef6a6f0ec949 100644\n--- a/net/xdp/xsk_queue.h\n+++ b/net/xdp/xsk_queue.h\n@@ -11,6 +11,7 @@\n #include <net/xdp_sock.h>\n \n #define RX_BATCH_SIZE 16\n+#define LAZY_UPDATE_THRESHOLD 128\n \n struct xdp_ring {\n \tu32 producer ____cacheline_aligned_in_smp;\n@@ -61,9 +62,14 @@ static inline u32 xskq_nb_avail(struct xsk_queue *q, u32 dcnt)\n \treturn (entries > dcnt) ? 
dcnt : entries;\n }\n \n+static inline u32 xskq_nb_free_lazy(struct xsk_queue *q, u32 producer)\n+{\n+\treturn q->nentries - (producer - q->cons_tail);\n+}\n+\n static inline u32 xskq_nb_free(struct xsk_queue *q, u32 producer, u32 dcnt)\n {\n-\tu32 free_entries = q->nentries - (producer - q->cons_tail);\n+\tu32 free_entries = xskq_nb_free_lazy(q, producer);\n \n \tif (free_entries >= dcnt)\n \t\treturn free_entries;\n@@ -123,6 +129,9 @@ static inline int xskq_produce_addr(struct xsk_queue *q, u64 addr)\n {\n \tstruct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;\n \n+\tif (xskq_nb_free(q, q->prod_tail, LAZY_UPDATE_THRESHOLD) == 0)\n+\t\treturn -ENOSPC;\n+\n \tring->desc[q->prod_tail++ & q->ring_mask] = addr;\n \n \t/* Order producer and data */\n@@ -132,6 +141,27 @@ static inline int xskq_produce_addr(struct xsk_queue *q, u64 addr)\n \treturn 0;\n }\n \n+static inline int xskq_produce_addr_lazy(struct xsk_queue *q, u64 addr)\n+{\n+\tstruct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;\n+\n+\tif (xskq_nb_free(q, q->prod_head, LAZY_UPDATE_THRESHOLD) == 0)\n+\t\treturn -ENOSPC;\n+\n+\tring->desc[q->prod_head++ & q->ring_mask] = addr;\n+\treturn 0;\n+}\n+\n+static inline void xskq_produce_flush_addr_n(struct xsk_queue *q,\n+\t\t\t\t\t u32 nb_entries)\n+{\n+\t/* Order producer and data */\n+\tsmp_wmb();\n+\n+\tq->prod_tail += nb_entries;\n+\tWRITE_ONCE(q->ring->producer, q->prod_tail);\n+}\n+\n static inline int xskq_reserve_addr(struct xsk_queue *q)\n {\n \tif (xskq_nb_free(q, q->prod_head, 1) == 0)\n", "prefixes": [ "bpf-next", "07/11" ] }
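An offline sketch of pulling the commonly used fields out of a response like the one above (the excerpt below is hand-copied from the JSON; the field names match the response body, but the excerpt is not fetched live):

```python
import json

# Hand-copied excerpt of the patch object above.
response = json.loads("""
{
  "id": 925068,
  "name": "[bpf-next,07/11] xsk: wire upp Tx zero-copy functions",
  "state": "awaiting-upstream",
  "submitter": {"name": "Björn Töpel", "email": "bjorn.topel@gmail.com"},
  "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20180604120601.18123-8-bjorn.topel@gmail.com/mbox/"
}
""")

# Build a one-line summary from the id, name, state, and submitter fields.
summary = (f'{response["id"]}: {response["name"]} '
           f'[{response["state"]}] by {response["submitter"]["name"]}')
print(summary)

# The "mbox" URL points at the raw patch, suitable for `git am`.
mbox_url = response["mbox"]
```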