get:
Show a patch.

patch:
Partially update a patch (only the supplied fields are changed).

put:
Update a patch.
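As a minimal sketch of using the read-only endpoint above (the host and patch ID are taken from the sample response below; actually fetching requires network access, and PUT/PATCH would additionally require an authenticated API token, so only URL construction is exercised here):

```python
# Sketch: build the URL for the "get" endpoint shown above.
# BASE and the patch ID come from the sample response on this page.
from urllib.parse import urljoin

BASE = "http://patchwork.ozlabs.org"

def patch_url(patch_id: int) -> str:
    """Return the API URL for a single patch, as served by the GET handler."""
    return urljoin(BASE, f"/api/patches/{patch_id}/")

# Fetching would look like this (needs network, shown for illustration only):
#   import json, urllib.request
#   with urllib.request.urlopen(patch_url(2205089)) as resp:
#       patch = json.load(resp)

print(patch_url(2205089))
```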

GET /api/patches/2205089/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2205089,
    "url": "http://patchwork.ozlabs.org/api/patches/2205089/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20260304160345.1340940-7-larysa.zaremba@intel.com/",
    "project": {
        "id": 46,
        "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api",
        "name": "Intel Wired Ethernet development",
        "link_name": "intel-wired-lan",
        "list_id": "intel-wired-lan.osuosl.org",
        "list_email": "intel-wired-lan@osuosl.org",
        "web_url": "",
        "scm_url": "",
        "webscm_url": "",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20260304160345.1340940-7-larysa.zaremba@intel.com>",
    "list_archive_url": null,
    "date": "2026-03-04T16:03:38",
    "name": "[iwl-next,v3,06/10] ixgbevf: XDP_TX in multi-buffer through libeth",
    "commit_ref": null,
    "pull_url": null,
    "state": "changes-requested",
    "archived": false,
    "hash": "fb320344d48d03eff42051f99a4047ee1999dadc",
    "submitter": {
        "id": 84900,
        "url": "http://patchwork.ozlabs.org/api/people/84900/?format=api",
        "name": "Larysa Zaremba",
        "email": "larysa.zaremba@intel.com"
    },
    "delegate": {
        "id": 109701,
        "url": "http://patchwork.ozlabs.org/api/users/109701/?format=api",
        "username": "anguy11",
        "first_name": "Anthony",
        "last_name": "Nguyen",
        "email": "anthony.l.nguyen@intel.com"
    },
    "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20260304160345.1340940-7-larysa.zaremba@intel.com/mbox/",
    "series": [
        {
            "id": 494412,
            "url": "http://patchwork.ozlabs.org/api/series/494412/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=494412",
            "date": "2026-03-04T16:03:32",
            "name": "libeth and full XDP for ixgbevf",
            "version": 3,
            "mbox": "http://patchwork.ozlabs.org/series/494412/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/2205089/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/2205089/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<intel-wired-lan-bounces@osuosl.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@legolas.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Authentication-Results": [
            "legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256\n header.s=default header.b=WimcEukE;\n\tdkim-atps=neutral",
            "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org\n (client-ip=2605:bc80:3010::138; helo=smtp1.osuosl.org;\n envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=patchwork.ozlabs.org)"
        ],
        "Received": [
            "from smtp1.osuosl.org (smtp1.osuosl.org [IPv6:2605:bc80:3010::138])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fQywd2m8Rz1xws\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 05 Mar 2026 03:36:17 +1100 (AEDT)",
            "from localhost (localhost [127.0.0.1])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id C92108133E;\n\tWed,  4 Mar 2026 16:36:12 +0000 (UTC)",
            "from smtp1.osuosl.org ([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id tIWPOKxjqT-Y; Wed,  4 Mar 2026 16:36:11 +0000 (UTC)",
            "from lists1.osuosl.org (lists1.osuosl.org [140.211.166.142])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id 6D48A81317;\n\tWed,  4 Mar 2026 16:36:11 +0000 (UTC)",
            "from smtp3.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n by lists1.osuosl.org (Postfix) with ESMTP id 7DF031EB\n for <intel-wired-lan@lists.osuosl.org>; Wed,  4 Mar 2026 16:36:09 +0000 (UTC)",
            "from localhost (localhost [127.0.0.1])\n by smtp3.osuosl.org (Postfix) with ESMTP id 645BF6086D\n for <intel-wired-lan@lists.osuosl.org>; Wed,  4 Mar 2026 16:36:09 +0000 (UTC)",
            "from smtp3.osuosl.org ([127.0.0.1])\n by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id vvnVGl9Hqdgb for <intel-wired-lan@lists.osuosl.org>;\n Wed,  4 Mar 2026 16:36:08 +0000 (UTC)",
            "from mgamail.intel.com (mgamail.intel.com [192.198.163.18])\n by smtp3.osuosl.org (Postfix) with ESMTPS id 5884A6086F\n for <intel-wired-lan@lists.osuosl.org>; Wed,  4 Mar 2026 16:36:08 +0000 (UTC)",
            "from fmviesa002.fm.intel.com ([10.60.135.142])\n by fmvoesa112.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 04 Mar 2026 08:36:07 -0800",
            "from irvmail002.ir.intel.com ([10.43.11.120])\n by fmviesa002.fm.intel.com with ESMTP; 04 Mar 2026 08:36:03 -0800",
            "from lincoln.igk.intel.com (lincoln.igk.intel.com [10.102.21.235])\n by irvmail002.ir.intel.com (Postfix) with ESMTP id 6D7D1312CD;\n Wed,  4 Mar 2026 16:36:01 +0000 (GMT)"
        ],
        "X-Virus-Scanned": [
            "amavis at osuosl.org",
            "amavis at osuosl.org"
        ],
        "X-Comment": "SPF check N/A for local connections - client-ip=140.211.166.142;\n helo=lists1.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=<UNKNOWN> ",
        "DKIM-Filter": [
            "OpenDKIM Filter v2.11.0 smtp1.osuosl.org 6D48A81317",
            "OpenDKIM Filter v2.11.0 smtp3.osuosl.org 5884A6086F"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=osuosl.org;\n\ts=default; t=1772642171;\n\tbh=/pLpK0Ak3Mx5llt8s0jmA7fg/cM17W+H5nfAMx9JgRE=;\n\th=From:To:Cc:Date:In-Reply-To:References:Subject:List-Id:\n\t List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe:\n\t From;\n\tb=WimcEukEb0SHxG4m0t27M40Ns/nI19CUhQc0oeRNNo/r857dD+BN379em9bHTD/Zf\n\t REmqyVVid2iO+i3dsgXPbIOT2QFSJopssgGamxNqRsp4FEfuUkrxVpwMNVv7NW3CwA\n\t YJz9f/IWvsg/gg7RbAJsgjP0f/GCbmaljBpOK7ajJDEb8SqeFrKhiIUvW61Lqak3uh\n\t 1JohXKyULutxv0QxcfKd8iyOPfHA+cDOMoEVpU/XGz2qxvTY8KFRUZLTkUzPQPZGC9\n\t 0mphmnv48/dDlEtmefylFpiGIgZc390E9YwlDD6sqyo42iN584qouTna42r1vFtHpX\n\t S/9NEykPSP0Bw==",
        "Received-SPF": "Pass (mailfrom) identity=mailfrom; client-ip=192.198.163.18;\n helo=mgamail.intel.com; envelope-from=larysa.zaremba@intel.com;\n receiver=<UNKNOWN>",
        "DMARC-Filter": "OpenDMARC Filter v1.4.2 smtp3.osuosl.org 5884A6086F",
        "X-CSE-ConnectionGUID": [
            "HZzE+QN2RIiFk6Ore58I1w==",
            "3YZ5wlK2Tr+3bH7WilE6+g=="
        ],
        "X-CSE-MsgGUID": [
            "hJVvVyFLTK2WK3nMFHj6OA==",
            "niAmt55NSKa0Q4eKOpBI+Q=="
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6800,10657,11719\"; a=\"72906387\"",
            "E=Sophos;i=\"6.21,324,1763452800\"; d=\"scan'208\";a=\"72906387\"",
            "E=Sophos;i=\"6.21,324,1763452800\"; d=\"scan'208\";a=\"241404968\""
        ],
        "X-ExtLoop1": "1",
        "From": "Larysa Zaremba <larysa.zaremba@intel.com>",
        "To": "Tony Nguyen <anthony.l.nguyen@intel.com>, intel-wired-lan@lists.osuosl.org",
        "Cc": "Larysa Zaremba <larysa.zaremba@intel.com>,\n Przemek Kitszel <przemyslaw.kitszel@intel.com>,\n Andrew Lunn <andrew+netdev@lunn.ch>,\n \"David S. Miller\" <davem@davemloft.net>,\n Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>,\n Paolo Abeni <pabeni@redhat.com>,\n Alexander Lobakin <aleksander.lobakin@intel.com>,\n Simon Horman <horms@kernel.org>, Alexei Starovoitov <ast@kernel.org>,\n Daniel Borkmann <daniel@iogearbox.net>,\n Jesper Dangaard Brouer <hawk@kernel.org>,\n John Fastabend <john.fastabend@gmail.com>,\n Stanislav Fomichev <sdf@fomichev.me>,\n Aleksandr Loktionov <aleksandr.loktionov@intel.com>,\n Natalia Wochtman <natalia.wochtman@intel.com>, netdev@vger.kernel.org,\n linux-kernel@vger.kernel.org, bpf@vger.kernel.org",
        "Date": "Wed,  4 Mar 2026 17:03:38 +0100",
        "Message-ID": "<20260304160345.1340940-7-larysa.zaremba@intel.com>",
        "X-Mailer": "git-send-email 2.52.0",
        "In-Reply-To": "<20260304160345.1340940-1-larysa.zaremba@intel.com>",
        "References": "<20260304160345.1340940-1-larysa.zaremba@intel.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "X-Mailman-Original-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple;\n d=intel.com; i=@intel.com; q=dns/txt; s=Intel;\n t=1772642168; x=1804178168;\n h=from:to:cc:subject:date:message-id:in-reply-to:\n references:mime-version:content-transfer-encoding;\n bh=IOY8YI5FCKqZkUhckYEu/S/48sykVUGvU+Vw8bzqOU8=;\n b=SYNgRAFNeuAVHigSgNNzOXB5HOPWHo8O4ulDZ1iVN1gi4braI3Tb0qpR\n kPvH4tqDxUnfYwyLU81R5aV/KeLe5Llvm6VB1jRDNV/V+B6D+lBez5KEB\n 9CKZiJinfwTP5srbYcyc5Eo82yV53T9tEmZrFLcgGZQ1X9VpUlb320RAH\n fiE5Tc9528qLwJqZBADZCLPa1dfOlYu/h92UTA5hX3eSXNfU/MgASODDO\n NuVUL2JrWMeg1LFcOyPXceZnxCl+TN9M4j3Ek+iOhVEGCmdPSNBdtAaNi\n R4Ang7nVXPKhIKAmqY1BG0fJevWC+l+taULzj/t5jQA+07iJpLa7Gb8By\n w==;",
        "X-Mailman-Original-Authentication-Results": [
            "smtp3.osuosl.org;\n dmarc=pass (p=none dis=none)\n header.from=intel.com",
            "smtp3.osuosl.org;\n dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com\n header.a=rsa-sha256 header.s=Intel header.b=SYNgRAFN"
        ],
        "Subject": "[Intel-wired-lan] [PATCH iwl-next v3 06/10] ixgbevf: XDP_TX in\n multi-buffer through libeth",
        "X-BeenThere": "intel-wired-lan@osuosl.org",
        "X-Mailman-Version": "2.1.30",
        "Precedence": "list",
        "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n <intel-wired-lan.osuosl.org>",
        "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>",
        "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>",
        "List-Post": "<mailto:intel-wired-lan@osuosl.org>",
        "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>",
        "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>",
        "Errors-To": "intel-wired-lan-bounces@osuosl.org",
        "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"
    },
    "content": "Use libeth to support XDP_TX action for segmented packets.\n\nReviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>\nSigned-off-by: Larysa Zaremba <larysa.zaremba@intel.com>\n---\n drivers/net/ethernet/intel/ixgbevf/ixgbevf.h  |  14 +-\n .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 294 ++++++++++++------\n 2 files changed, 200 insertions(+), 108 deletions(-)",
    "diff": "diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h\nindex 2626af039361..a27081ee764b 100644\n--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h\n+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h\n@@ -81,20 +81,22 @@ struct ixgbevf_ring {\n \tstruct net_device *netdev;\n \tstruct bpf_prog __rcu *xdp_prog;\n \tunion {\n-\t\tstruct page_pool *pp;\t/* Rx ring */\n+\t\tstruct page_pool *pp;\t/* Rx and XDP rings */\n \t\tstruct device *dev;\t/* Tx ring */\n \t};\n \tvoid *desc;\t\t\t/* descriptor ring memory */\n-\tdma_addr_t dma;\t\t\t/* phys. address of descriptor ring */\n-\tunsigned int size;\t\t/* length in bytes */\n-\tu32 truesize;\t\t\t/* Rx buffer full size */\n+\tunion {\n+\t\tu32 truesize;\t\t/* Rx buffer full size */\n+\t\tu32 pending;\t\t/* Sent-not-completed descriptors */\n+\t};\n \tu16 count;\t\t\t/* amount of descriptors */\n-\tu16 next_to_use;\n \tu16 next_to_clean;\n+\tu32 next_to_use;\n \n \tunion {\n \t\tstruct libeth_fqe *rx_fqes;\n \t\tstruct ixgbevf_tx_buffer *tx_buffer_info;\n+\t\tstruct libeth_sqe *xdp_sqes;\n \t};\n \tunsigned long state;\n \tstruct ixgbevf_stats stats;\n@@ -114,6 +116,8 @@ struct ixgbevf_ring {\n \tint queue_index; /* needed for multiqueue queue management */\n \tu32 rx_buf_len;\n \tstruct libeth_xdp_buff_stash xdp_stash;\n+\tunsigned int dma_size;\t\t/* length in bytes */\n+\tdma_addr_t dma;\t\t\t/* phys. address of descriptor ring */\n } ____cacheline_internodealigned_in_smp;\n \n /* How many Rx Buffers do we bundle into one write to the hardware ? 
*/\ndiff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c\nindex 27cab542d3bb..177eb141e22d 100644\n--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c\n+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c\n@@ -306,10 +306,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,\n \t\t\ttotal_ipsec++;\n \n \t\t/* free the skb */\n-\t\tif (ring_is_xdp(tx_ring))\n-\t\t\tlibeth_xdp_return_va(tx_buffer->data, true);\n-\t\telse\n-\t\t\tnapi_consume_skb(tx_buffer->skb, napi_budget);\n+\t\tnapi_consume_skb(tx_buffer->skb, napi_budget);\n \n \t\t/* unmap skb header data */\n \t\tdma_unmap_single(tx_ring->dev,\n@@ -392,9 +389,8 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,\n \t\t       eop_desc, (eop_desc ? eop_desc->wb.status : 0),\n \t\t       tx_ring->tx_buffer_info[i].time_stamp, jiffies);\n \n-\t\tif (!ring_is_xdp(tx_ring))\n-\t\t\tnetif_stop_subqueue(tx_ring->netdev,\n-\t\t\t\t\t    tx_ring->queue_index);\n+\t\tnetif_stop_subqueue(tx_ring->netdev,\n+\t\t\t\t    tx_ring->queue_index);\n \n \t\t/* schedule immediate reset if we believe we hung */\n \t\tixgbevf_tx_timeout_reset(adapter);\n@@ -402,9 +398,6 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,\n \t\treturn true;\n \t}\n \n-\tif (ring_is_xdp(tx_ring))\n-\t\treturn !!budget;\n-\n #define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)\n \tif (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&\n \t\t     (ixgbevf_desc_unused(tx_ring) >= TX_WAKE_THRESHOLD))) {\n@@ -660,44 +653,83 @@ static inline void ixgbevf_irq_enable_queues(struct ixgbevf_adapter *adapter,\n #define IXGBEVF_XDP_CONSUMED 1\n #define IXGBEVF_XDP_TX 2\n \n-static int ixgbevf_xmit_xdp_ring(struct ixgbevf_ring *ring,\n-\t\t\t\t struct xdp_buff *xdp)\n+static void ixgbevf_clean_xdp_num(struct ixgbevf_ring *xdp_ring, bool in_napi,\n+\t\t\t\t  u16 to_clean)\n+{\n+\tstruct libeth_xdpsq_napi_stats stats = { };\n+\tu32 ntc = 
xdp_ring->next_to_clean;\n+\tstruct xdp_frame_bulk cbulk;\n+\tstruct libeth_cq_pp cp = {\n+\t\t.bq = &cbulk,\n+\t\t.dev = xdp_ring->dev,\n+\t\t.xss = &stats,\n+\t\t.napi = in_napi,\n+\t};\n+\n+\txdp_frame_bulk_init(&cbulk);\n+\txdp_ring->pending -= to_clean;\n+\n+\twhile (likely(to_clean--)) {\n+\t\tlibeth_xdp_complete_tx(&xdp_ring->xdp_sqes[ntc], &cp);\n+\t\tntc++;\n+\t\tntc = unlikely(ntc == xdp_ring->count) ? 0 : ntc;\n+\t}\n+\n+\txdp_ring->next_to_clean = ntc;\n+\txdp_flush_frame_bulk(&cbulk);\n+}\n+\n+static u16 ixgbevf_tx_get_num_sent(struct ixgbevf_ring *xdp_ring)\n {\n-\tstruct ixgbevf_tx_buffer *tx_buffer;\n-\tunion ixgbe_adv_tx_desc *tx_desc;\n-\tu32 len, cmd_type;\n-\tdma_addr_t dma;\n-\tu16 i;\n+\tu16 ntc = xdp_ring->next_to_clean;\n+\tu16 to_clean = 0;\n \n-\tlen = xdp->data_end - xdp->data;\n+\twhile (likely(to_clean < xdp_ring->pending)) {\n+\t\tu32 idx = xdp_ring->xdp_sqes[ntc].rs_idx;\n+\t\tunion ixgbe_adv_tx_desc *rs_desc;\n \n-\tif (unlikely(!ixgbevf_desc_unused(ring)))\n-\t\treturn IXGBEVF_XDP_CONSUMED;\n+\t\tif (!idx--)\n+\t\t\tbreak;\n \n-\tdma = dma_map_single(ring->dev, xdp->data, len, DMA_TO_DEVICE);\n-\tif (dma_mapping_error(ring->dev, dma))\n-\t\treturn IXGBEVF_XDP_CONSUMED;\n+\t\trs_desc = IXGBEVF_TX_DESC(xdp_ring, idx);\n \n-\t/* record the location of the first descriptor for this packet */\n-\ti = ring->next_to_use;\n-\ttx_buffer = &ring->tx_buffer_info[i];\n-\n-\tdma_unmap_len_set(tx_buffer, len, len);\n-\tdma_unmap_addr_set(tx_buffer, dma, dma);\n-\ttx_buffer->data = xdp->data;\n-\ttx_buffer->bytecount = len;\n-\ttx_buffer->gso_segs = 1;\n-\ttx_buffer->protocol = 0;\n-\n-\t/* Populate minimal context descriptor that will provide for the\n-\t * fact that we are expected to process Ethernet frames.\n-\t */\n-\tif (!test_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &ring->state)) {\n+\t\tif (!(rs_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)))\n+\t\t\tbreak;\n+\n+\t\txdp_ring->xdp_sqes[ntc].rs_idx = 0;\n+\n+\t\tto_clean +=\n+\t\t\t(idx >= ntc 
? idx : idx + xdp_ring->count) - ntc + 1;\n+\n+\t\tntc = (idx + 1 == xdp_ring->count) ? 0 : idx + 1;\n+\t}\n+\n+\treturn to_clean;\n+}\n+\n+static void ixgbevf_clean_xdp_ring(struct ixgbevf_ring *xdp_ring)\n+{\n+\tixgbevf_clean_xdp_num(xdp_ring, false, xdp_ring->pending);\n+}\n+\n+static u32 ixgbevf_prep_xdp_sq(void *xdpsq, struct libeth_xdpsq *sq)\n+{\n+\tstruct ixgbevf_ring *xdp_ring = xdpsq;\n+\n+\tif (unlikely(ixgbevf_desc_unused(xdp_ring) < LIBETH_XDP_TX_BULK)) {\n+\t\tu16 to_clean = ixgbevf_tx_get_num_sent(xdp_ring);\n+\n+\t\tif (likely(to_clean))\n+\t\t\tixgbevf_clean_xdp_num(xdp_ring, true, to_clean);\n+\t}\n+\n+\tif (unlikely(!test_bit(__IXGBEVF_TX_XDP_RING_PRIMED,\n+\t\t\t       &xdp_ring->state))) {\n \t\tstruct ixgbe_adv_tx_context_desc *context_desc;\n \n-\t\tset_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &ring->state);\n+\t\tset_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &xdp_ring->state);\n \n-\t\tcontext_desc = IXGBEVF_TX_CTXTDESC(ring, 0);\n+\t\tcontext_desc = IXGBEVF_TX_CTXTDESC(xdp_ring, 0);\n \t\tcontext_desc->vlan_macip_lens\t=\n \t\t\tcpu_to_le32(ETH_HLEN << IXGBE_ADVTXD_MACLEN_SHIFT);\n \t\tcontext_desc->fceof_saidx\t= 0;\n@@ -706,48 +738,98 @@ static int ixgbevf_xmit_xdp_ring(struct ixgbevf_ring *ring,\n \t\t\t\t    IXGBE_ADVTXD_DTYP_CTXT);\n \t\tcontext_desc->mss_l4len_idx\t= 0;\n \n-\t\ti = 1;\n+\t\txdp_ring->next_to_use = 1;\n+\t\txdp_ring->pending = 1;\n+\n+\t\t/* Finish descriptor writes before bumping tail */\n+\t\twmb();\n+\t\tixgbevf_write_tail(xdp_ring, 1);\n \t}\n \n-\t/* put descriptor type bits */\n-\tcmd_type = IXGBE_ADVTXD_DTYP_DATA |\n-\t\t   IXGBE_ADVTXD_DCMD_DEXT |\n-\t\t   IXGBE_ADVTXD_DCMD_IFCS;\n-\tcmd_type |= len | IXGBE_TXD_CMD;\n+\t*sq = (struct libeth_xdpsq) {\n+\t\t.count = xdp_ring->count,\n+\t\t.descs = xdp_ring->desc,\n+\t\t.lock = NULL,\n+\t\t.ntu = &xdp_ring->next_to_use,\n+\t\t.pending = &xdp_ring->pending,\n+\t\t.pool = NULL,\n+\t\t.sqes = xdp_ring->xdp_sqes,\n+\t};\n+\n+\treturn ixgbevf_desc_unused(xdp_ring);\n+}\n 
\n-\ttx_desc = IXGBEVF_TX_DESC(ring, i);\n-\ttx_desc->read.buffer_addr = cpu_to_le64(dma);\n+static void ixgbevf_xdp_xmit_desc(struct libeth_xdp_tx_desc desc, u32 i,\n+\t\t\t\t  const struct libeth_xdpsq *sq,\n+\t\t\t\t  u64 priv)\n+{\n+\tunion ixgbe_adv_tx_desc *tx_desc =\n+\t\t&((union ixgbe_adv_tx_desc *)sq->descs)[i];\n \n-\ttx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);\n-\ttx_desc->read.olinfo_status =\n-\t\t\tcpu_to_le32((len << IXGBE_ADVTXD_PAYLEN_SHIFT) |\n+\tu32 cmd_type = IXGBE_ADVTXD_DTYP_DATA |\n+\t\t       IXGBE_ADVTXD_DCMD_DEXT |\n+\t\t       IXGBE_ADVTXD_DCMD_IFCS |\n+\t\t       desc.len;\n+\n+\tif (desc.flags & LIBETH_XDP_TX_LAST)\n+\t\tcmd_type |= IXGBE_TXD_CMD_EOP;\n+\n+\tif (desc.flags & LIBETH_XDP_TX_FIRST) {\n+\t\tstruct skb_shared_info *sinfo = sq->sqes[i].sinfo;\n+\t\tu16 full_len = desc.len + sinfo->xdp_frags_size;\n+\n+\t\ttx_desc->read.olinfo_status =\n+\t\t\tcpu_to_le32((full_len << IXGBE_ADVTXD_PAYLEN_SHIFT) |\n \t\t\t\t    IXGBE_ADVTXD_CC);\n+\t}\n \n-\t/* Avoid any potential race with cleanup */\n-\tsmp_wmb();\n+\ttx_desc->read.buffer_addr = cpu_to_le64(desc.addr);\n+\ttx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);\n+}\n \n-\t/* set next_to_watch value indicating a packet is present */\n-\ti++;\n-\tif (i == ring->count)\n-\t\ti = 0;\n+LIBETH_XDP_DEFINE_START();\n+LIBETH_XDP_DEFINE_FLUSH_TX(static ixgbevf_xdp_flush_tx, ixgbevf_prep_xdp_sq,\n+\t\t\t   ixgbevf_xdp_xmit_desc);\n+LIBETH_XDP_DEFINE_END();\n \n-\ttx_buffer->next_to_watch = tx_desc;\n-\tring->next_to_use = i;\n+static void ixgbevf_xdp_set_rs(struct ixgbevf_ring *xdp_ring, u32 cached_ntu)\n+{\n+\tu32 ltu = (xdp_ring->next_to_use ? 
: xdp_ring->count) - 1;\n+\tunion ixgbe_adv_tx_desc *desc;\n \n-\treturn IXGBEVF_XDP_TX;\n+\tdesc = IXGBEVF_TX_DESC(xdp_ring, ltu);\n+\txdp_ring->xdp_sqes[cached_ntu].rs_idx = ltu + 1;\n+\tdesc->read.cmd_type_len |= cpu_to_le32(IXGBE_TXD_CMD);\n }\n \n-static int ixgbevf_run_xdp(struct ixgbevf_adapter *adapter,\n-\t\t\t   struct ixgbevf_ring *rx_ring,\n+static void ixgbevf_rx_finalize_xdp(struct libeth_xdp_tx_bulk *tx_bulk,\n+\t\t\t\t    bool xdp_xmit, u32 cached_ntu)\n+{\n+\tstruct ixgbevf_ring *xdp_ring = tx_bulk->xdpsq;\n+\n+\tif (!xdp_xmit)\n+\t\tgoto unlock;\n+\n+\tif (tx_bulk->count)\n+\t\tixgbevf_xdp_flush_tx(tx_bulk, LIBETH_XDP_TX_DROP);\n+\n+\tixgbevf_xdp_set_rs(xdp_ring, cached_ntu);\n+\n+\t/* Finish descriptor writes before bumping tail */\n+\twmb();\n+\tixgbevf_write_tail(xdp_ring, xdp_ring->next_to_use);\n+unlock:\n+\trcu_read_unlock();\n+}\n+\n+static int ixgbevf_run_xdp(struct libeth_xdp_tx_bulk *tx_bulk,\n \t\t\t   struct libeth_xdp_buff *xdp)\n {\n \tint result = IXGBEVF_XDP_PASS;\n-\tstruct ixgbevf_ring *xdp_ring;\n-\tstruct bpf_prog *xdp_prog;\n+\tconst struct bpf_prog *xdp_prog;\n \tu32 act;\n \n-\txdp_prog = READ_ONCE(rx_ring->xdp_prog);\n-\n+\txdp_prog = tx_bulk->prog;\n \tif (!xdp_prog)\n \t\tgoto xdp_out;\n \n@@ -756,17 +838,16 @@ static int ixgbevf_run_xdp(struct ixgbevf_adapter *adapter,\n \tcase XDP_PASS:\n \t\tbreak;\n \tcase XDP_TX:\n-\t\txdp_ring = adapter->xdp_ring[rx_ring->queue_index];\n-\t\tresult = ixgbevf_xmit_xdp_ring(xdp_ring, &xdp->base);\n-\t\tif (result == IXGBEVF_XDP_CONSUMED)\n-\t\t\tgoto out_failure;\n+\t\tresult = IXGBEVF_XDP_TX;\n+\t\tif (!libeth_xdp_tx_queue_bulk(tx_bulk, xdp,\n+\t\t\t\t\t      ixgbevf_xdp_flush_tx))\n+\t\t\tresult = IXGBEVF_XDP_CONSUMED;\n \t\tbreak;\n \tdefault:\n-\t\tbpf_warn_invalid_xdp_action(rx_ring->netdev, xdp_prog, act);\n+\t\tbpf_warn_invalid_xdp_action(tx_bulk->dev, xdp_prog, act);\n \t\tfallthrough;\n \tcase XDP_ABORTED:\n-out_failure:\n-\t\ttrace_xdp_exception(rx_ring->netdev, xdp_prog, 
act);\n+\t\ttrace_xdp_exception(tx_bulk->dev, xdp_prog, act);\n \t\tfallthrough; /* handle aborts by dropping packet */\n \tcase XDP_DROP:\n \t\tresult = IXGBEVF_XDP_CONSUMED;\n@@ -784,11 +865,19 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,\n \tunsigned int total_rx_bytes = 0, total_rx_packets = 0;\n \tstruct ixgbevf_adapter *adapter = q_vector->adapter;\n \tu16 cleaned_count = ixgbevf_desc_unused(rx_ring);\n+\tLIBETH_XDP_ONSTACK_BULK(xdp_tx_bulk);\n \tLIBETH_XDP_ONSTACK_BUFF(xdp);\n+\tu32 cached_ntu;\n \tbool xdp_xmit = false;\n \tint xdp_res = 0;\n \n \tlibeth_xdp_init_buff(xdp, &rx_ring->xdp_stash, &rx_ring->xdp_rxq);\n+\tlibeth_xdp_tx_init_bulk(&xdp_tx_bulk, rx_ring->xdp_prog,\n+\t\t\t\tadapter->netdev, adapter->xdp_ring,\n+\t\t\t\tadapter->num_xdp_queues);\n+\tif (xdp_tx_bulk.prog)\n+\t\tcached_ntu =\n+\t\t\t((struct ixgbevf_ring *)xdp_tx_bulk.xdpsq)->next_to_use;\n \n \twhile (likely(total_rx_packets < budget)) {\n \t\tunion ixgbe_adv_rx_desc *rx_desc;\n@@ -821,11 +910,12 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,\n \t\tif (ixgbevf_is_non_eop(rx_ring, rx_desc))\n \t\t\tcontinue;\n \n-\t\txdp_res = ixgbevf_run_xdp(adapter, rx_ring, xdp);\n+\t\txdp_res = ixgbevf_run_xdp(&xdp_tx_bulk, xdp);\n \t\tif (xdp_res) {\n \t\t\tif (xdp_res == IXGBEVF_XDP_TX)\n \t\t\t\txdp_xmit = true;\n \n+\t\t\txdp->data = NULL;\n \t\t\ttotal_rx_packets++;\n \t\t\ttotal_rx_bytes += xdp_get_buff_len(&xdp->base);\n \t\t\tcontinue;\n@@ -870,16 +960,7 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,\n \t/* place incomplete frames back on ring for completion */\n \tlibeth_xdp_save_buff(&rx_ring->xdp_stash, xdp);\n \n-\tif (xdp_xmit) {\n-\t\tstruct ixgbevf_ring *xdp_ring =\n-\t\t\tadapter->xdp_ring[rx_ring->queue_index];\n-\n-\t\t/* Force memory writes to complete before letting h/w\n-\t\t * know there are new descriptors to fetch.\n-\t\t */\n-\t\twmb();\n-\t\tixgbevf_write_tail(xdp_ring, 
xdp_ring->next_to_use);\n-\t}\n+\tixgbevf_rx_finalize_xdp(&xdp_tx_bulk, xdp_xmit, cached_ntu);\n \n \tu64_stats_update_begin(&rx_ring->syncp);\n \trx_ring->stats.packets += total_rx_packets;\n@@ -909,6 +990,8 @@ static int ixgbevf_poll(struct napi_struct *napi, int budget)\n \tbool clean_complete = true;\n \n \tixgbevf_for_each_ring(ring, q_vector->tx) {\n+\t\tif (ring_is_xdp(ring))\n+\t\t\tcontinue;\n \t\tif (!ixgbevf_clean_tx_irq(q_vector, ring, budget))\n \t\t\tclean_complete = false;\n \t}\n@@ -1348,6 +1431,7 @@ static void ixgbevf_configure_tx_ring(struct ixgbevf_adapter *adapter,\n \t/* reset ntu and ntc to place SW in sync with hardwdare */\n \tring->next_to_clean = 0;\n \tring->next_to_use = 0;\n+\tring->pending = 0;\n \n \t/* In order to avoid issues WTHRESH + PTHRESH should always be equal\n \t * to or less than the number of on chip descriptors, which is\n@@ -1360,8 +1444,12 @@ static void ixgbevf_configure_tx_ring(struct ixgbevf_adapter *adapter,\n \t\t   32;           /* PTHRESH = 32 */\n \n \t/* reinitialize tx_buffer_info */\n-\tmemset(ring->tx_buffer_info, 0,\n-\t       sizeof(struct ixgbevf_tx_buffer) * ring->count);\n+\tif (!ring_is_xdp(ring))\n+\t\tmemset(ring->tx_buffer_info, 0,\n+\t\t       sizeof(struct ixgbevf_tx_buffer) * ring->count);\n+\telse\n+\t\tmemset(ring->xdp_sqes, 0,\n+\t\t       sizeof(struct libeth_sqe) * ring->count);\n \n \tclear_bit(__IXGBEVF_HANG_CHECK_ARMED, &ring->state);\n \tclear_bit(__IXGBEVF_TX_XDP_RING_PRIMED, &ring->state);\n@@ -2016,10 +2104,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring)\n \t\tunion ixgbe_adv_tx_desc *eop_desc, *tx_desc;\n \n \t\t/* Free all the Tx ring sk_buffs */\n-\t\tif (ring_is_xdp(tx_ring))\n-\t\t\tlibeth_xdp_return_va(tx_buffer->data, false);\n-\t\telse\n-\t\t\tdev_kfree_skb_any(tx_buffer->skb);\n+\t\tdev_kfree_skb_any(tx_buffer->skb);\n \n \t\t/* unmap skb header data */\n \t\tdma_unmap_single(tx_ring->dev,\n@@ -2088,7 +2173,7 @@ static void 
ixgbevf_clean_all_tx_rings(struct ixgbevf_adapter *adapter)\n \tfor (i = 0; i < adapter->num_tx_queues; i++)\n \t\tixgbevf_clean_tx_ring(adapter->tx_ring[i]);\n \tfor (i = 0; i < adapter->num_xdp_queues; i++)\n-\t\tixgbevf_clean_tx_ring(adapter->xdp_ring[i]);\n+\t\tixgbevf_clean_xdp_ring(adapter->xdp_ring[i]);\n }\n \n void ixgbevf_down(struct ixgbevf_adapter *adapter)\n@@ -2834,8 +2919,6 @@ static void ixgbevf_check_hang_subtask(struct ixgbevf_adapter *adapter)\n \tif (netif_carrier_ok(adapter->netdev)) {\n \t\tfor (i = 0; i < adapter->num_tx_queues; i++)\n \t\t\tset_check_for_tx_hang(adapter->tx_ring[i]);\n-\t\tfor (i = 0; i < adapter->num_xdp_queues; i++)\n-\t\t\tset_check_for_tx_hang(adapter->xdp_ring[i]);\n \t}\n \n \t/* get one bit for every active Tx/Rx interrupt vector */\n@@ -2979,7 +3062,10 @@ static void ixgbevf_service_task(struct work_struct *work)\n  **/\n void ixgbevf_free_tx_resources(struct ixgbevf_ring *tx_ring)\n {\n-\tixgbevf_clean_tx_ring(tx_ring);\n+\tif (!ring_is_xdp(tx_ring))\n+\t\tixgbevf_clean_tx_ring(tx_ring);\n+\telse\n+\t\tixgbevf_clean_xdp_ring(tx_ring);\n \n \tvfree(tx_ring->tx_buffer_info);\n \ttx_ring->tx_buffer_info = NULL;\n@@ -2988,7 +3074,7 @@ void ixgbevf_free_tx_resources(struct ixgbevf_ring *tx_ring)\n \tif (!tx_ring->desc)\n \t\treturn;\n \n-\tdma_free_coherent(tx_ring->dev, tx_ring->size, tx_ring->desc,\n+\tdma_free_coherent(tx_ring->dev, tx_ring->dma_size, tx_ring->desc,\n \t\t\t  tx_ring->dma);\n \n \ttx_ring->desc = NULL;\n@@ -3023,7 +3109,9 @@ int ixgbevf_setup_tx_resources(struct ixgbevf_ring *tx_ring)\n \tstruct ixgbevf_adapter *adapter = netdev_priv(tx_ring->netdev);\n \tint size;\n \n-\tsize = sizeof(struct ixgbevf_tx_buffer) * tx_ring->count;\n+\tsize = (!ring_is_xdp(tx_ring) ? 
sizeof(struct ixgbevf_tx_buffer) :\n+\t\tsizeof(struct libeth_sqe)) * tx_ring->count;\n+\n \ttx_ring->tx_buffer_info = vmalloc(size);\n \tif (!tx_ring->tx_buffer_info)\n \t\tgoto err;\n@@ -3031,10 +3119,10 @@ int ixgbevf_setup_tx_resources(struct ixgbevf_ring *tx_ring)\n \tu64_stats_init(&tx_ring->syncp);\n \n \t/* round up to nearest 4K */\n-\ttx_ring->size = tx_ring->count * sizeof(union ixgbe_adv_tx_desc);\n-\ttx_ring->size = ALIGN(tx_ring->size, 4096);\n+\ttx_ring->dma_size = tx_ring->count * sizeof(union ixgbe_adv_tx_desc);\n+\ttx_ring->dma_size = ALIGN(tx_ring->dma_size, 4096);\n \n-\ttx_ring->desc = dma_alloc_coherent(tx_ring->dev, tx_ring->size,\n+\ttx_ring->desc = dma_alloc_coherent(tx_ring->dev, tx_ring->dma_size,\n \t\t\t\t\t   &tx_ring->dma, GFP_KERNEL);\n \tif (!tx_ring->desc)\n \t\tgoto err;\n@@ -3123,10 +3211,10 @@ int ixgbevf_setup_rx_resources(struct ixgbevf_adapter *adapter,\n \tu64_stats_init(&rx_ring->syncp);\n \n \t/* Round up to nearest 4K */\n-\trx_ring->size = rx_ring->count * sizeof(union ixgbe_adv_rx_desc);\n-\trx_ring->size = ALIGN(rx_ring->size, 4096);\n+\trx_ring->dma_size = rx_ring->count * sizeof(union ixgbe_adv_rx_desc);\n+\trx_ring->dma_size = ALIGN(rx_ring->dma_size, 4096);\n \n-\trx_ring->desc = dma_alloc_coherent(fq.pp->p.dev, rx_ring->size,\n+\trx_ring->desc = dma_alloc_coherent(fq.pp->p.dev, rx_ring->dma_size,\n \t\t\t\t\t   &rx_ring->dma, GFP_KERNEL);\n \n \tif (!rx_ring->desc) {\n@@ -3202,7 +3290,7 @@ void ixgbevf_free_rx_resources(struct ixgbevf_ring *rx_ring)\n \txdp_rxq_info_detach_mem_model(&rx_ring->xdp_rxq);\n \txdp_rxq_info_unreg(&rx_ring->xdp_rxq);\n \n-\tdma_free_coherent(fq.pp->p.dev, rx_ring->size, rx_ring->desc,\n+\tdma_free_coherent(fq.pp->p.dev, rx_ring->dma_size, rx_ring->desc,\n \t\t\t  rx_ring->dma);\n \trx_ring->desc = NULL;\n \n",
    "prefixes": [
        "iwl-next",
        "v3",
        "06/10"
    ]
}
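The response above is plain JSON, so the review-relevant fields can be picked out with the standard `json` module. A minimal sketch, using a trimmed inline copy of this response (so no network access is needed); the "needs action" rule is an illustrative assumption, not part of the Patchwork API:

```python
import json

# Trimmed copy of the response above; only the fields inspected below are kept.
body = """
{
    "id": 2205089,
    "state": "changes-requested",
    "check": "pending",
    "submitter": {"name": "Larysa Zaremba", "email": "larysa.zaremba@intel.com"},
    "delegate": {"username": "anguy11"},
    "series": [{"id": 494412, "version": 3}]
}
"""

patch = json.loads(body)

# Illustrative heuristic: the patch still needs attention when review
# changes were requested and CI checks have not yet succeeded.
needs_action = (patch["state"] == "changes-requested"
                and patch["check"] != "success")

print(patch["submitter"]["name"], patch["delegate"]["username"], needs_action)
```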