get:
Show a patch.

patch:
Partially update a patch (only the fields supplied in the request are changed).

put:
Update a patch (full replacement of its writable fields).
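The read-only GET shown below can be reproduced with a short stdlib-only Python sketch. The URL layout and field names are taken from the response in this document; the live call is left commented out so the helpers stand alone without network access:

```python
import json
import urllib.request

BASE = "http://patchwork.ozlabs.org/api"

def patch_url(patch_id: int) -> str:
    """Build the detail URL for a single patch, as queried below."""
    return f"{BASE}/patches/{patch_id}/"

def fetch_patch(patch_id: int) -> dict:
    """GET one patch object and decode its JSON body."""
    with urllib.request.urlopen(patch_url(patch_id)) as resp:
        return json.load(resp)

# Live call (needs network), returning the object shown below:
#   patch = fetch_patch(2205091)
#   print(patch["state"], patch["series"][0]["name"])
```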

GET /api/patches/2205091/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2205091,
    "url": "http://patchwork.ozlabs.org/api/patches/2205091/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20260304160345.1340940-9-larysa.zaremba@intel.com/",
    "project": {
        "id": 46,
        "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api",
        "name": "Intel Wired Ethernet development",
        "link_name": "intel-wired-lan",
        "list_id": "intel-wired-lan.osuosl.org",
        "list_email": "intel-wired-lan@osuosl.org",
        "web_url": "",
        "scm_url": "",
        "webscm_url": "",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20260304160345.1340940-9-larysa.zaremba@intel.com>",
    "list_archive_url": null,
    "date": "2026-03-04T16:03:40",
    "name": "[iwl-next,v3,08/10] ixgbevf: add pseudo header split",
    "commit_ref": null,
    "pull_url": null,
    "state": "changes-requested",
    "archived": false,
    "hash": "9aa354c7cb240d3888b21b447f8f9ab6ce1275d5",
    "submitter": {
        "id": 84900,
        "url": "http://patchwork.ozlabs.org/api/people/84900/?format=api",
        "name": "Larysa Zaremba",
        "email": "larysa.zaremba@intel.com"
    },
    "delegate": {
        "id": 109701,
        "url": "http://patchwork.ozlabs.org/api/users/109701/?format=api",
        "username": "anguy11",
        "first_name": "Anthony",
        "last_name": "Nguyen",
        "email": "anthony.l.nguyen@intel.com"
    },
    "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20260304160345.1340940-9-larysa.zaremba@intel.com/mbox/",
    "series": [
        {
            "id": 494412,
            "url": "http://patchwork.ozlabs.org/api/series/494412/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=494412",
            "date": "2026-03-04T16:03:32",
            "name": "libeth and full XDP for ixgbevf",
            "version": 3,
            "mbox": "http://patchwork.ozlabs.org/series/494412/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/2205091/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/2205091/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<intel-wired-lan-bounces@osuosl.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@legolas.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Authentication-Results": [
            "legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256\n header.s=default header.b=fU/PjlPO;\n\tdkim-atps=neutral",
            "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org\n (client-ip=140.211.166.138; helo=smtp1.osuosl.org;\n envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=patchwork.ozlabs.org)"
        ],
        "Received": [
            "from smtp1.osuosl.org (smtp1.osuosl.org [140.211.166.138])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fQywk0D8Cz1xws\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 05 Mar 2026 03:36:22 +1100 (AEDT)",
            "from localhost (localhost [127.0.0.1])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id C8B4981359;\n\tWed,  4 Mar 2026 16:36:15 +0000 (UTC)",
            "from smtp1.osuosl.org ([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id bHz1IKsyBlxl; Wed,  4 Mar 2026 16:36:15 +0000 (UTC)",
            "from lists1.osuosl.org (lists1.osuosl.org [140.211.166.142])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id E218081318;\n\tWed,  4 Mar 2026 16:36:14 +0000 (UTC)",
            "from smtp3.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n by lists1.osuosl.org (Postfix) with ESMTP id ED0DF1EB\n for <intel-wired-lan@lists.osuosl.org>; Wed,  4 Mar 2026 16:36:13 +0000 (UTC)",
            "from localhost (localhost [127.0.0.1])\n by smtp3.osuosl.org (Postfix) with ESMTP id DFB656086F\n for <intel-wired-lan@lists.osuosl.org>; Wed,  4 Mar 2026 16:36:13 +0000 (UTC)",
            "from smtp3.osuosl.org ([127.0.0.1])\n by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id ohEoQCBKv5UY for <intel-wired-lan@lists.osuosl.org>;\n Wed,  4 Mar 2026 16:36:12 +0000 (UTC)",
            "from mgamail.intel.com (mgamail.intel.com [192.198.163.18])\n by smtp3.osuosl.org (Postfix) with ESMTPS id AA3266086D\n for <intel-wired-lan@lists.osuosl.org>; Wed,  4 Mar 2026 16:36:12 +0000 (UTC)",
            "from fmviesa002.fm.intel.com ([10.60.135.142])\n by fmvoesa112.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 04 Mar 2026 08:36:12 -0800",
            "from irvmail002.ir.intel.com ([10.43.11.120])\n by fmviesa002.fm.intel.com with ESMTP; 04 Mar 2026 08:36:07 -0800",
            "from lincoln.igk.intel.com (lincoln.igk.intel.com [10.102.21.235])\n by irvmail002.ir.intel.com (Postfix) with ESMTP id F2953312C7;\n Wed,  4 Mar 2026 16:36:05 +0000 (GMT)"
        ],
        "X-Virus-Scanned": [
            "amavis at osuosl.org",
            "amavis at osuosl.org"
        ],
        "X-Comment": "SPF check N/A for local connections - client-ip=140.211.166.142;\n helo=lists1.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=<UNKNOWN> ",
        "DKIM-Filter": [
            "OpenDKIM Filter v2.11.0 smtp1.osuosl.org E218081318",
            "OpenDKIM Filter v2.11.0 smtp3.osuosl.org AA3266086D"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=osuosl.org;\n\ts=default; t=1772642174;\n\tbh=Up+L2yn+WEJMD6tLWkY5okmA7ymqi0Be/h2yfK8Wfjo=;\n\th=From:To:Cc:Date:In-Reply-To:References:Subject:List-Id:\n\t List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe:\n\t From;\n\tb=fU/PjlPOiKyPA2x//M7XLxvDgmQvPCuZy+rj/49Vah8La8fxlVS2kDBvy9jdB5iOY\n\t XTJJqjNSQb1WS1CoypCl0YjiNIVqDIDBKz4QYGFH8Ze42yulSXJ9PRCZbaWVE553zr\n\t zbCqvVUIu0crk6W/4RBuc4SZtYc0x8FlO/YC0yqTZC0a/cgG7IqCBPy6S+3oDPqzjN\n\t KN6xfLYRhq2fSvJd6E9h0RvDdxJaf7oWekWc9OasfKuuGaY2W9y9urA2pUfPHZRhkQ\n\t gLGDLioNDh/H2V+w/ctuokYQ0rBIhLjDW0z0VLRU0TI7yV5SSgNDEXBICFE5PTZMue\n\t 1VBt6KgId8DYA==",
        "Received-SPF": "Pass (mailfrom) identity=mailfrom; client-ip=192.198.163.18;\n helo=mgamail.intel.com; envelope-from=larysa.zaremba@intel.com;\n receiver=<UNKNOWN>",
        "DMARC-Filter": "OpenDMARC Filter v1.4.2 smtp3.osuosl.org AA3266086D",
        "X-CSE-ConnectionGUID": [
            "0W4xDAptSiiPw9kdrjAc2Q==",
            "NAVU0VL+T6yLJEwUh/etdw=="
        ],
        "X-CSE-MsgGUID": [
            "qBR8jIqdQ8yuB7CkA4rXzQ==",
            "ycIQVp/aQQ+9VJbj0Nev5g=="
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6800,10657,11719\"; a=\"72906426\"",
            "E=Sophos;i=\"6.21,324,1763452800\"; d=\"scan'208\";a=\"72906426\"",
            "E=Sophos;i=\"6.21,324,1763452800\"; d=\"scan'208\";a=\"241405012\""
        ],
        "X-ExtLoop1": "1",
        "From": "Larysa Zaremba <larysa.zaremba@intel.com>",
        "To": "Tony Nguyen <anthony.l.nguyen@intel.com>, intel-wired-lan@lists.osuosl.org",
        "Cc": "Larysa Zaremba <larysa.zaremba@intel.com>,\n Przemek Kitszel <przemyslaw.kitszel@intel.com>,\n Andrew Lunn <andrew+netdev@lunn.ch>,\n \"David S. Miller\" <davem@davemloft.net>,\n Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>,\n Paolo Abeni <pabeni@redhat.com>,\n Alexander Lobakin <aleksander.lobakin@intel.com>,\n Simon Horman <horms@kernel.org>, Alexei Starovoitov <ast@kernel.org>,\n Daniel Borkmann <daniel@iogearbox.net>,\n Jesper Dangaard Brouer <hawk@kernel.org>,\n John Fastabend <john.fastabend@gmail.com>,\n Stanislav Fomichev <sdf@fomichev.me>,\n Aleksandr Loktionov <aleksandr.loktionov@intel.com>,\n Natalia Wochtman <natalia.wochtman@intel.com>, netdev@vger.kernel.org,\n linux-kernel@vger.kernel.org, bpf@vger.kernel.org",
        "Date": "Wed,  4 Mar 2026 17:03:40 +0100",
        "Message-ID": "<20260304160345.1340940-9-larysa.zaremba@intel.com>",
        "X-Mailer": "git-send-email 2.52.0",
        "In-Reply-To": "<20260304160345.1340940-1-larysa.zaremba@intel.com>",
        "References": "<20260304160345.1340940-1-larysa.zaremba@intel.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "X-Mailman-Original-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple;\n d=intel.com; i=@intel.com; q=dns/txt; s=Intel;\n t=1772642172; x=1804178172;\n h=from:to:cc:subject:date:message-id:in-reply-to:\n references:mime-version:content-transfer-encoding;\n bh=3E37TVKi/ZdyMSzyvl9aONt6910ly+ZHDY6aXGreeo4=;\n b=AMh2R+s2LNdwMa6XAJolKtjd6VBgIschp/OvXye2pbfzqeyy8haO7r+t\n DhOtf6w1R9rZd6TD7plkPKKytpwoxzplAQ6V0pzVCoqH3UpjJxyXGf7O2\n IlkoTTnaPYcJSii5mN8giGGpZoUYuUFfrVpNtZaj7tawwHDYFLNnqxPsv\n sFt+KGhmbbMWwCMsG+PaxD8tOGnuQikwWgEtRPT4Nw3LPsJJitDb6quPQ\n pVtqSKK1mEQTH9PcnlUS+TbqDHkMENCWC2D7TyTgsUmJXo97M8aQnjKYM\n sXWQnjKL9bxrksBvBLpHetnCl3d/IyUQX4jJFDaPH9lqTHrkHXSZXLFqa\n w==;",
        "X-Mailman-Original-Authentication-Results": [
            "smtp3.osuosl.org;\n dmarc=pass (p=none dis=none)\n header.from=intel.com",
            "smtp3.osuosl.org;\n dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com\n header.a=rsa-sha256 header.s=Intel header.b=AMh2R+s2"
        ],
        "Subject": "[Intel-wired-lan] [PATCH iwl-next v3 08/10] ixgbevf: add pseudo\n header split",
        "X-BeenThere": "intel-wired-lan@osuosl.org",
        "X-Mailman-Version": "2.1.30",
        "Precedence": "list",
        "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n <intel-wired-lan.osuosl.org>",
        "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>",
        "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>",
        "List-Post": "<mailto:intel-wired-lan@osuosl.org>",
        "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>",
        "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>",
        "Errors-To": "intel-wired-lan-bounces@osuosl.org",
        "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"
    },
    "content": "From: Natalia Wochtman <natalia.wochtman@intel.com>\n\nIntroduce pseudo header split support in the ixgbevf driver, specifically\ntargeting ixgbe_mac_82599_vf.\n\nOn older hardware (e.g. ixgbe_mac_82599_vf), RX DMA write size can only be\nlimited in 1K increments. This causes issues when attempting to fit\nmultiple packets per page, as a DMA write may overwrite the\nheadroom of the next packet.\n\nTo address this, introduce pseudo header split support, where the hardware\ncopies the full L2 header into a dedicated header buffer. This avoids the\nneed for HR/TR alignment and allows safe skb construction from the header\nbuffer without risking overwrites.\n\nGiven that once packet is too big to fit into a single page, the behaviour\nis the same for all supported HW, use pseudo header split only for smaller\npackets.\n\nSigned-off-by: Natalia Wochtman <natalia.wochtman@intel.com>\nReviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>\nCo-developed-by: Larysa Zaremba <larysa.zaremba@intel.com>\nSigned-off-by: Larysa Zaremba <larysa.zaremba@intel.com>\n---\n drivers/net/ethernet/intel/ixgbevf/ixgbevf.h  |   8 +\n .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 180 +++++++++++++++---\n 2 files changed, 163 insertions(+), 25 deletions(-)",
    "diff": "diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h\nindex ea86679e4f81..438328b81855 100644\n--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h\n+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h\n@@ -89,6 +89,7 @@ struct ixgbevf_ring {\n \t\tu32 truesize;\t\t/* Rx buffer full size */\n \t\tu32 pending;\t\t/* Sent-not-completed descriptors */\n \t};\n+\tu32 hdr_truesize;\t\t/* Rx header buffer full size */\n \tu16 count;\t\t\t/* amount of descriptors */\n \tu16 next_to_clean;\n \tu32 next_to_use;\n@@ -107,6 +108,8 @@ struct ixgbevf_ring {\n \t\tstruct ixgbevf_tx_queue_stats tx_stats;\n \t\tstruct ixgbevf_rx_queue_stats rx_stats;\n \t};\n+\tstruct libeth_fqe *hdr_fqes;\n+\tstruct page_pool *hdr_pp;\n \tstruct xdp_rxq_info xdp_rxq;\n \tu64 hw_csum_rx_error;\n \tu8 __iomem *tail;\n@@ -116,6 +119,7 @@ struct ixgbevf_ring {\n \t */\n \tu16 reg_idx;\n \tint queue_index; /* needed for multiqueue queue management */\n+\tu32 hdr_buf_len;\n \tu32 rx_buf_len;\n \tstruct libeth_xdp_buff_stash xdp_stash;\n \tunsigned int dma_size;\t\t/* length in bytes */\n@@ -151,6 +155,8 @@ struct ixgbevf_ring {\n \n #define IXGBEVF_RX_PAGE_LEN(hr)\t\t(ALIGN_DOWN(LIBETH_RX_PAGE_LEN(hr), \\\n \t\t\t\t\t IXGBE_SRRCTL_BSIZEPKT_STEP))\n+#define IXGBEVF_RX_SRRCTL_BUF_SIZE(mtu)\t(ALIGN((mtu) + LIBETH_RX_LL_LEN, \\\n+\t\t\t\t\t       IXGBE_SRRCTL_BSIZEPKT_STEP))\n \n #define IXGBE_TX_FLAGS_CSUM\t\tBIT(0)\n #define IXGBE_TX_FLAGS_VLAN\t\tBIT(1)\n@@ -349,6 +355,8 @@ enum ixbgevf_state_t {\n \t__IXGBEVF_QUEUE_RESET_REQUESTED,\n };\n \n+#define IXGBEVF_FLAG_HSPLIT\tBIT(0)\n+\n enum ixgbevf_boards {\n \tboard_82599_vf,\n \tboard_82599_vf_hv,\ndiff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c\nindex 2f3b4954ded8..d00d3b307a8f 100644\n--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c\n+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c\n@@ -561,6 +561,12 @@ static void ixgbevf_alloc_rx_buffers(struct ixgbevf_ring *rx_ring,\n \t\t.truesize\t= rx_ring->truesize,\n \t\t.count\t\t= rx_ring->count,\n \t};\n+\tconst struct libeth_fq_fp hdr_fq = {\n+\t\t.pp\t\t= rx_ring->hdr_pp,\n+\t\t.fqes\t\t= rx_ring->hdr_fqes,\n+\t\t.truesize\t= rx_ring->hdr_truesize,\n+\t\t.count\t\t= rx_ring->count,\n+\t};\n \tu16 ntu = rx_ring->next_to_use;\n \n \t/* nothing to do or no valid netdev defined */\n@@ -578,6 +584,14 @@ static void ixgbevf_alloc_rx_buffers(struct ixgbevf_ring *rx_ring,\n \n \t\trx_desc->read.pkt_addr = cpu_to_le64(addr);\n \n+\t\tif (hdr_fq.pp) {\n+\t\t\taddr = libeth_rx_alloc(&hdr_fq, ntu);\n+\t\t\tif (addr == DMA_MAPPING_ERROR) {\n+\t\t\t\tlibeth_rx_recycle_slow(fq.fqes[ntu].netmem);\n+\t\t\t\tbreak;\n+\t\t\t}\n+\t\t}\n+\n \t\trx_desc++;\n \t\tntu++;\n \t\tif (unlikely(ntu == fq.count)) {\n@@ -820,6 +834,32 @@ LIBETH_XDP_DEFINE_FINALIZE(static ixgbevf_xdp_finalize_xdp_napi,\n \t\t\t   ixgbevf_xdp_flush_tx, ixgbevf_xdp_rs_and_bump);\n LIBETH_XDP_DEFINE_END();\n \n+static u32 ixgbevf_rx_hsplit_wa(const struct libeth_fqe *hdr,\n+\t\t\t\tstruct libeth_fqe *buf, u32 data_len)\n+{\n+\tu32 copy = data_len <= L1_CACHE_BYTES ? data_len : ETH_HLEN;\n+\tstruct page *hdr_page, *buf_page;\n+\tconst void *src;\n+\tvoid *dst;\n+\n+\tif (unlikely(netmem_is_net_iov(buf->netmem)) ||\n+\t    !libeth_rx_sync_for_cpu(buf, copy))\n+\t\treturn 0;\n+\n+\thdr_page = __netmem_to_page(hdr->netmem);\n+\tbuf_page = __netmem_to_page(buf->netmem);\n+\n+\tdst = page_address(hdr_page) + hdr->offset +\n+\t      pp_page_to_nmdesc(hdr_page)->pp->p.offset;\n+\tsrc = page_address(buf_page) + buf->offset +\n+\t      pp_page_to_nmdesc(buf_page)->pp->p.offset;\n+\n+\tmemcpy(dst, src, LARGEST_ALIGN(copy));\n+\tbuf->offset += copy;\n+\n+\treturn copy;\n+}\n+\n static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,\n \t\t\t\tstruct ixgbevf_ring *rx_ring,\n \t\t\t\tint budget)\n@@ -859,6 +899,23 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,\n \t\trmb();\n \n \t\trx_buffer = &rx_ring->rx_fqes[rx_ring->next_to_clean];\n+\n+\t\tif (unlikely(rx_ring->hdr_pp)) {\n+\t\t\tstruct libeth_fqe *hdr_buff;\n+\t\t\tunsigned int hdr_size = 0;\n+\n+\t\t\thdr_buff = &rx_ring->hdr_fqes[rx_ring->next_to_clean];\n+\n+\t\t\tif (!xdp->data) {\n+\t\t\t\thdr_size = ixgbevf_rx_hsplit_wa(hdr_buff,\n+\t\t\t\t\t\t\t\trx_buffer,\n+\t\t\t\t\t\t\t\tsize);\n+\t\t\t\tsize -= hdr_size ? : size;\n+\t\t\t}\n+\n+\t\t\tlibeth_xdp_process_buff(xdp, hdr_buff, hdr_size);\n+\t\t}\n+\n \t\tlibeth_xdp_process_buff(xdp, rx_buffer, size);\n \n \t\tcleaned_count++;\n@@ -1598,6 +1655,90 @@ static void ixgbevf_setup_vfmrqc(struct ixgbevf_adapter *adapter)\n \tIXGBE_WRITE_REG(hw, IXGBE_VFMRQC, vfmrqc);\n }\n \n+static void ixgbevf_rx_destroy_pp(struct ixgbevf_ring *rx_ring)\n+{\n+\tstruct libeth_fq fq = {\n+\t\t.pp\t= rx_ring->pp,\n+\t\t.fqes\t= rx_ring->rx_fqes,\n+\t};\n+\n+\tlibeth_rx_fq_destroy(&fq);\n+\trx_ring->rx_fqes = NULL;\n+\trx_ring->pp = NULL;\n+\n+\tif (!rx_ring->hdr_pp)\n+\t\treturn;\n+\n+\tfq = (struct libeth_fq) {\n+\t\t.pp\t= rx_ring->hdr_pp,\n+\t\t.fqes\t= rx_ring->hdr_fqes,\n+\t};\n+\n+\tlibeth_rx_fq_destroy(&fq);\n+\trx_ring->hdr_fqes = NULL;\n+\trx_ring->hdr_pp = NULL;\n+}\n+\n+static int ixgbevf_rx_create_pp(struct ixgbevf_ring *rx_ring)\n+{\n+\tu32 adapter_flags = rx_ring->q_vector->adapter->flags;\n+\tstruct libeth_fq fq = {\n+\t\t.count\t\t= rx_ring->count,\n+\t\t.nid\t\t= NUMA_NO_NODE,\n+\t\t.type\t\t= LIBETH_FQE_MTU,\n+\t\t.xdp\t\t= !!rx_ring->xdp_prog,\n+\t\t.idx\t\t= rx_ring->queue_index,\n+\t\t.buf_len\t= IXGBEVF_RX_PAGE_LEN(rx_ring->xdp_prog ?\n+\t\t\t\t\t\t      LIBETH_XDP_HEADROOM :\n+\t\t\t\t\t\t      LIBETH_SKB_HEADROOM),\n+\t};\n+\tu32 frame_size;\n+\tint ret;\n+\n+\t/* Some HW requires DMA write sizes to be aligned to 1K,\n+\t * which warrants fake header split usage, but this is\n+\t * not an issue if the frame size is at its maximum of 3K\n+\t */\n+\tframe_size =\n+\t\tIXGBEVF_RX_SRRCTL_BUF_SIZE(READ_ONCE(rx_ring->netdev->mtu));\n+\tfq.hsplit = (adapter_flags & IXGBEVF_FLAG_HSPLIT) &&\n+\t\t    frame_size < fq.buf_len;\n+\tret = libeth_rx_fq_create(&fq, &rx_ring->q_vector->napi);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\trx_ring->pp = fq.pp;\n+\trx_ring->rx_fqes = fq.fqes;\n+\trx_ring->truesize = fq.truesize;\n+\trx_ring->rx_buf_len = fq.buf_len;\n+\n+\tif (!fq.hsplit)\n+\t\treturn 0;\n+\n+\tfq = (struct libeth_fq) {\n+\t\t.count\t\t= rx_ring->count,\n+\t\t.nid\t\t= NUMA_NO_NODE,\n+\t\t.type\t\t= LIBETH_FQE_HDR,\n+\t\t.xdp\t\t= !!rx_ring->xdp_prog,\n+\t\t.idx\t\t= rx_ring->queue_index,\n+\t};\n+\n+\tret = libeth_rx_fq_create(&fq, &rx_ring->q_vector->napi);\n+\tif (ret)\n+\t\tgoto err;\n+\n+\trx_ring->hdr_pp = fq.pp;\n+\trx_ring->hdr_fqes = fq.fqes;\n+\trx_ring->hdr_truesize = fq.truesize;\n+\trx_ring->hdr_buf_len = fq.buf_len;\n+\n+\treturn 0;\n+\n+err:\n+\tixgbevf_rx_destroy_pp(rx_ring);\n+\treturn ret;\n+}\n+\n static void ixgbevf_configure_rx_ring(struct ixgbevf_adapter *adapter,\n \t\t\t\t      struct ixgbevf_ring *ring)\n {\n@@ -2718,6 +2859,9 @@ static int ixgbevf_sw_init(struct ixgbevf_adapter *adapter)\n \t\t\tgoto out;\n \t}\n \n+\tif (adapter->hw.mac.type == ixgbe_mac_82599_vf)\n+\t\tadapter->flags |= IXGBEVF_FLAG_HSPLIT;\n+\n \t/* assume legacy case in which PF would only give VF 2 queues */\n \thw->mac.max_tx_queues = 2;\n \thw->mac.max_rx_queues = 2;\n@@ -3152,43 +3296,29 @@ static int ixgbevf_setup_all_tx_resources(struct ixgbevf_adapter *adapter)\n }\n \n /**\n- * ixgbevf_setup_rx_resources - allocate Rx resources (Descriptors)\n+ * ixgbevf_setup_rx_resources - allocate Rx resources\n  * @adapter: board private structure\n  * @rx_ring: Rx descriptor ring (for a specific queue) to setup\n  *\n- * Returns 0 on success, negative on failure\n+ * Returns: 0 on success, negative on failure.\n  **/\n int ixgbevf_setup_rx_resources(struct ixgbevf_adapter *adapter,\n \t\t\t       struct ixgbevf_ring *rx_ring)\n {\n-\tstruct libeth_fq fq = {\n-\t\t.count\t\t= rx_ring->count,\n-\t\t.nid\t\t= NUMA_NO_NODE,\n-\t\t.type\t\t= LIBETH_FQE_MTU,\n-\t\t.xdp\t\t= !!rx_ring->xdp_prog,\n-\t\t.idx\t\t= rx_ring->queue_index,\n-\t\t.buf_len\t= IXGBEVF_RX_PAGE_LEN(rx_ring->xdp_prog ?\n-\t\t\t\t\t\t      LIBETH_XDP_HEADROOM :\n-\t\t\t\t\t\t      LIBETH_SKB_HEADROOM),\n-\t};\n \tint ret;\n \n-\tret = libeth_rx_fq_create(&fq, &rx_ring->q_vector->napi);\n+\tret = ixgbevf_rx_create_pp(rx_ring);\n \tif (ret)\n \t\treturn ret;\n \n-\trx_ring->pp = fq.pp;\n-\trx_ring->rx_fqes = fq.fqes;\n-\trx_ring->truesize = fq.truesize;\n-\trx_ring->rx_buf_len = fq.buf_len;\n-\n \tu64_stats_init(&rx_ring->syncp);\n \n \t/* Round up to nearest 4K */\n \trx_ring->dma_size = rx_ring->count * sizeof(union ixgbe_adv_rx_desc);\n \trx_ring->dma_size = ALIGN(rx_ring->dma_size, 4096);\n \n-\trx_ring->desc = dma_alloc_coherent(fq.pp->p.dev, rx_ring->dma_size,\n+\trx_ring->desc = dma_alloc_coherent(rx_ring->pp->p.dev,\n+\t\t\t\t\t   rx_ring->dma_size,\n \t\t\t\t\t   &rx_ring->dma, GFP_KERNEL);\n \n \tif (!rx_ring->desc) {\n@@ -3202,16 +3332,15 @@ int ixgbevf_setup_rx_resources(struct ixgbevf_adapter *adapter,\n \tif (ret)\n \t\tgoto err;\n \n-\txdp_rxq_info_attach_page_pool(&rx_ring->xdp_rxq, fq.pp);\n+\txdp_rxq_info_attach_page_pool(&rx_ring->xdp_rxq, rx_ring->pp);\n \n \trx_ring->xdp_prog = adapter->xdp_prog;\n \n \treturn 0;\n err:\n-\tlibeth_rx_fq_destroy(&fq);\n-\trx_ring->rx_fqes = NULL;\n-\trx_ring->pp = NULL;\n+\tixgbevf_rx_destroy_pp(rx_ring);\n \tdev_err(rx_ring->dev, \"Unable to allocate memory for the Rx descriptor ring\\n\");\n+\n \treturn ret;\n }\n \n@@ -4140,10 +4269,11 @@ static int ixgbevf_xdp_setup(struct net_device *dev, struct bpf_prog *prog,\n \tstruct bpf_prog *old_prog;\n \tbool requires_mbuf;\n \n-\trequires_mbuf = frame_size > IXGBEVF_RX_PAGE_LEN(LIBETH_XDP_HEADROOM);\n+\trequires_mbuf = frame_size > IXGBEVF_RX_PAGE_LEN(LIBETH_XDP_HEADROOM) ||\n+\t\t\tadapter->flags & IXGBEVF_FLAG_HSPLIT;\n \tif (prog && !prog->aux->xdp_has_frags && requires_mbuf) {\n \t\tNL_SET_ERR_MSG_MOD(extack,\n-\t\t\t\t   \"Configured MTU requires non-linear frames and XDP prog does not support frags\");\n+\t\t\t\t   \"Configured MTU or HW limitations require non-linear frames and XDP prog does not support frags\");\n \t\treturn -EOPNOTSUPP;\n \t}\n \n",
    "prefixes": [
        "iwl-next",
        "v3",
        "08/10"
    ]
}
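The PATCH method advertised in the Allow header above requires token authentication and maintainer rights on the project. A minimal sketch of preparing such a request with only the standard library follows; the token is a placeholder, and the `state` slug values (e.g. the "changes-requested" seen in the body above) follow the Patchwork REST API:

```python
import json
import urllib.request

API = "http://patchwork.ozlabs.org/api"

def build_state_update(patch_id: int, state: str, token: str) -> urllib.request.Request:
    """Prepare a PATCH request that changes only the 'state' field.

    Only the field being changed is sent; PATCH leaves the rest of
    the patch object untouched.
    """
    body = json.dumps({"state": state}).encode()
    return urllib.request.Request(
        f"{API}/patches/{patch_id}/",
        data=body,
        method="PATCH",
        headers={
            "Content-Type": "application/json",
            # Patchwork uses token-based auth for write access.
            "Authorization": f"Token {token}",
        },
    )

# Sending it (needs network and maintainer rights on the project):
#   req = build_state_update(2205091, "accepted", "<your-token>")
#   with urllib.request.urlopen(req) as resp:
#       print(resp.status)
```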