get:
Show a patch.

patch:
Partially update a patch (only the fields supplied in the request body are changed).

put:
Update a patch (full replacement of the writable fields).
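The response below was produced by a plain GET on the patch endpoint. A minimal Python sketch of the same request, using only the standard library (the public patchwork.ozlabs.org base URL is taken from the response itself; the endpoint needs no authentication for reads):

```python
import json
from urllib.request import Request, urlopen

API_BASE = "http://patchwork.ozlabs.org/api"

def patch_url(patch_id: int) -> str:
    """Build the REST URL for a single patch object."""
    return f"{API_BASE}/patches/{patch_id}/"

def get_patch(patch_id: int) -> dict:
    """Fetch one patch and decode its JSON body (network access required)."""
    req = Request(patch_url(patch_id), headers={"Accept": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    patch = get_patch(2205085)
    print(patch["name"], "->", patch["state"])
```

Fields such as `state`, `delegate`, and `mbox` in the dump below can be read straight off the returned dict.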

GET /api/patches/2205085/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2205085,
    "url": "http://patchwork.ozlabs.org/api/patches/2205085/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20260304160345.1340940-3-larysa.zaremba@intel.com/",
    "project": {
        "id": 46,
        "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api",
        "name": "Intel Wired Ethernet development",
        "link_name": "intel-wired-lan",
        "list_id": "intel-wired-lan.osuosl.org",
        "list_email": "intel-wired-lan@osuosl.org",
        "web_url": "",
        "scm_url": "",
        "webscm_url": "",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20260304160345.1340940-3-larysa.zaremba@intel.com>",
    "list_archive_url": null,
    "date": "2026-03-04T16:03:34",
    "name": "[iwl-next,v3,02/10] ixgbevf: do not share pages between packets",
    "commit_ref": null,
    "pull_url": null,
    "state": "changes-requested",
    "archived": false,
    "hash": "795ac70a8abb52267eacfa0ef5c499aab3e50050",
    "submitter": {
        "id": 84900,
        "url": "http://patchwork.ozlabs.org/api/people/84900/?format=api",
        "name": "Larysa Zaremba",
        "email": "larysa.zaremba@intel.com"
    },
    "delegate": {
        "id": 109701,
        "url": "http://patchwork.ozlabs.org/api/users/109701/?format=api",
        "username": "anguy11",
        "first_name": "Anthony",
        "last_name": "Nguyen",
        "email": "anthony.l.nguyen@intel.com"
    },
    "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20260304160345.1340940-3-larysa.zaremba@intel.com/mbox/",
    "series": [
        {
            "id": 494412,
            "url": "http://patchwork.ozlabs.org/api/series/494412/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=494412",
            "date": "2026-03-04T16:03:32",
            "name": "libeth and full XDP for ixgbevf",
            "version": 3,
            "mbox": "http://patchwork.ozlabs.org/series/494412/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/2205085/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/2205085/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<intel-wired-lan-bounces@osuosl.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@legolas.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Authentication-Results": [
            "legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256\n header.s=default header.b=Y0voPJwA;\n\tdkim-atps=neutral",
            "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org\n (client-ip=2605:bc80:3010::138; helo=smtp1.osuosl.org;\n envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=patchwork.ozlabs.org)"
        ],
        "Received": [
            "from smtp1.osuosl.org (smtp1.osuosl.org [IPv6:2605:bc80:3010::138])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fQywS32Kqz1xws\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 05 Mar 2026 03:36:08 +1100 (AEDT)",
            "from localhost (localhost [127.0.0.1])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id 83B118130D;\n\tWed,  4 Mar 2026 16:36:04 +0000 (UTC)",
            "from smtp1.osuosl.org ([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id X7ngWnz9jht3; Wed,  4 Mar 2026 16:36:03 +0000 (UTC)",
            "from lists1.osuosl.org (lists1.osuosl.org [140.211.166.142])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id 79C3781326;\n\tWed,  4 Mar 2026 16:36:03 +0000 (UTC)",
            "from smtp3.osuosl.org (smtp3.osuosl.org [IPv6:2605:bc80:3010::136])\n by lists1.osuosl.org (Postfix) with ESMTP id 3F89E1EB\n for <intel-wired-lan@lists.osuosl.org>; Wed,  4 Mar 2026 16:36:02 +0000 (UTC)",
            "from localhost (localhost [127.0.0.1])\n by smtp3.osuosl.org (Postfix) with ESMTP id 321C06086F\n for <intel-wired-lan@lists.osuosl.org>; Wed,  4 Mar 2026 16:36:02 +0000 (UTC)",
            "from smtp3.osuosl.org ([127.0.0.1])\n by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id bzPiVhV1l-Sx for <intel-wired-lan@lists.osuosl.org>;\n Wed,  4 Mar 2026 16:36:01 +0000 (UTC)",
            "from mgamail.intel.com (mgamail.intel.com [192.198.163.12])\n by smtp3.osuosl.org (Postfix) with ESMTPS id 1F2506086D\n for <intel-wired-lan@lists.osuosl.org>; Wed,  4 Mar 2026 16:36:01 +0000 (UTC)",
            "from orviesa004.jf.intel.com ([10.64.159.144])\n by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 04 Mar 2026 08:36:01 -0800",
            "from irvmail002.ir.intel.com ([10.43.11.120])\n by orviesa004.jf.intel.com with ESMTP; 04 Mar 2026 08:35:56 -0800",
            "from lincoln.igk.intel.com (lincoln.igk.intel.com [10.102.21.235])\n by irvmail002.ir.intel.com (Postfix) with ESMTP id D50DB312C8;\n Wed,  4 Mar 2026 16:35:53 +0000 (GMT)"
        ],
        "X-Virus-Scanned": [
            "amavis at osuosl.org",
            "amavis at osuosl.org"
        ],
        "X-Comment": "SPF check N/A for local connections - client-ip=140.211.166.142;\n helo=lists1.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=<UNKNOWN> ",
        "DKIM-Filter": [
            "OpenDKIM Filter v2.11.0 smtp1.osuosl.org 79C3781326",
            "OpenDKIM Filter v2.11.0 smtp3.osuosl.org 1F2506086D"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=osuosl.org;\n\ts=default; t=1772642163;\n\tbh=tsT99tuUZCE2kjCLA9lAikN2cSAWwEiaOEHnlLa9ols=;\n\th=From:To:Cc:Date:In-Reply-To:References:Subject:List-Id:\n\t List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe:\n\t From;\n\tb=Y0voPJwADIxL0pzkxPKoELK+Uu8fZ3S/IQBodz24ZsyamLsFRRS4vxtYgUA4hE/U+\n\t RO4kIpaKhiVMBXjBoCHopcq3Wnv4hB7Aq5fTojUFT7PhXN82bbFhhGj27SzHIUZO+n\n\t ndKXNDiPZ5Yha8s4BY8iWLO4sW39bsAUq4vRXhb16Qt/o7o9r4afIRbKa6gWOftf2e\n\t F7L7qqxIKoUGhowu1uaq2m4o3UMU2odcchakpN0atcxBmCtaT6NgWjl8CRL5xhyrx6\n\t n0qwOFSKwPSId22QD6pmA8Qm75HLGpZAfhSEsr2r6U38mxrTOGJhNhnGN22oxx0QyQ\n\t WSzF6JmZ2OHzQ==",
        "Received-SPF": "Pass (mailfrom) identity=mailfrom; client-ip=192.198.163.12;\n helo=mgamail.intel.com; envelope-from=larysa.zaremba@intel.com;\n receiver=<UNKNOWN>",
        "DMARC-Filter": "OpenDMARC Filter v1.4.2 smtp3.osuosl.org 1F2506086D",
        "X-CSE-ConnectionGUID": [
            "WwV9mKdtTmuadnvHfxyA7g==",
            "K4kdLLZDSkyAZqM+O+bnSQ=="
        ],
        "X-CSE-MsgGUID": [
            "RlDB1i+kTZKgOe2GAXa/Kw==",
            "SwyWluAdRiW8gVjgg7/7MQ=="
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6800,10657,11719\"; a=\"77580027\"",
            "E=Sophos;i=\"6.21,324,1763452800\"; d=\"scan'208\";a=\"77580027\"",
            "E=Sophos;i=\"6.21,324,1763452800\"; d=\"scan'208\";a=\"222895567\""
        ],
        "X-ExtLoop1": "1",
        "From": "Larysa Zaremba <larysa.zaremba@intel.com>",
        "To": "Tony Nguyen <anthony.l.nguyen@intel.com>, intel-wired-lan@lists.osuosl.org",
        "Cc": "Larysa Zaremba <larysa.zaremba@intel.com>,\n Przemek Kitszel <przemyslaw.kitszel@intel.com>,\n Andrew Lunn <andrew+netdev@lunn.ch>,\n \"David S. Miller\" <davem@davemloft.net>,\n Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>,\n Paolo Abeni <pabeni@redhat.com>,\n Alexander Lobakin <aleksander.lobakin@intel.com>,\n Simon Horman <horms@kernel.org>, Alexei Starovoitov <ast@kernel.org>,\n Daniel Borkmann <daniel@iogearbox.net>,\n Jesper Dangaard Brouer <hawk@kernel.org>,\n John Fastabend <john.fastabend@gmail.com>,\n Stanislav Fomichev <sdf@fomichev.me>,\n Aleksandr Loktionov <aleksandr.loktionov@intel.com>,\n Natalia Wochtman <natalia.wochtman@intel.com>, netdev@vger.kernel.org,\n linux-kernel@vger.kernel.org, bpf@vger.kernel.org",
        "Date": "Wed,  4 Mar 2026 17:03:34 +0100",
        "Message-ID": "<20260304160345.1340940-3-larysa.zaremba@intel.com>",
        "X-Mailer": "git-send-email 2.52.0",
        "In-Reply-To": "<20260304160345.1340940-1-larysa.zaremba@intel.com>",
        "References": "<20260304160345.1340940-1-larysa.zaremba@intel.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "X-Mailman-Original-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple;\n d=intel.com; i=@intel.com; q=dns/txt; s=Intel;\n t=1772642161; x=1804178161;\n h=from:to:cc:subject:date:message-id:in-reply-to:\n references:mime-version:content-transfer-encoding;\n bh=jUOV6Js93vk5q324qEhimUXDbEFSByRLcASTD2PJOEU=;\n b=RmJOn8kq+t99c1FGb8MKHi6vb6s3DnrA440cFicyALwasTXWOHiflw+P\n +uzRM+52aKznLg6vZ1scmR+JGS14FARqx60uoaHplLRal3pYrvSKZjFw6\n DJsZjWdSV2elXaa9fW0YJWG3KhtVxQ2wq62hG+3CSRIcI2/3k9l9Rn4D9\n D52/tGZMNVCIw4ZoO6IvMP9+h6Iwn4b3+YY5KlI4oGf2s6oxhG/4hs7l+\n 7GHIILvdAVbLN81nOjofBIpaDIhNhR0ZFuHC92eyy+ST2dCyBkVaN5rvl\n hOps6uCfBE0F6WhjvY1xQHD2nMlocYXrhNcA3txcPHJc0UkxwVQwO+QU1\n Q==;",
        "X-Mailman-Original-Authentication-Results": [
            "smtp3.osuosl.org;\n dmarc=pass (p=none dis=none)\n header.from=intel.com",
            "smtp3.osuosl.org;\n dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com\n header.a=rsa-sha256 header.s=Intel header.b=RmJOn8kq"
        ],
        "Subject": "[Intel-wired-lan] [PATCH iwl-next v3 02/10] ixgbevf: do not share\n pages between packets",
        "X-BeenThere": "intel-wired-lan@osuosl.org",
        "X-Mailman-Version": "2.1.30",
        "Precedence": "list",
        "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n <intel-wired-lan.osuosl.org>",
        "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>",
        "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>",
        "List-Post": "<mailto:intel-wired-lan@osuosl.org>",
        "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>",
        "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>",
        "Errors-To": "intel-wired-lan-bounces@osuosl.org",
        "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"
    },
    "content": "Again, same as in the related iavf commit 920d86f3c552 (\"iavf: drop page\nsplitting and recycling\"), as an intermediate step, drop the page sharing\nand recycling logic in a preparation to offload it to page_pool.\n\nInstead of the previous sharing and recycling, just allocate a new page\nevery time.\n\nSuggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>\nReviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>\nReviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>\nSigned-off-by: Larysa Zaremba <larysa.zaremba@intel.com>\n---\n drivers/net/ethernet/intel/ixgbevf/ixgbevf.h  |  44 +---\n .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 231 ++----------------\n 2 files changed, 23 insertions(+), 252 deletions(-)",
    "diff": "diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h\nindex ae2763fea2be..2d7ca3f86868 100644\n--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h\n+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h\n@@ -45,12 +45,7 @@ struct ixgbevf_tx_buffer {\n struct ixgbevf_rx_buffer {\n \tdma_addr_t dma;\n \tstruct page *page;\n-#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536)\n \t__u32 page_offset;\n-#else\n-\t__u16 page_offset;\n-#endif\n-\t__u16 pagecnt_bias;\n };\n \n struct ixgbevf_stats {\n@@ -72,7 +67,6 @@ struct ixgbevf_rx_queue_stats {\n };\n \n enum ixgbevf_ring_state_t {\n-\t__IXGBEVF_RX_3K_BUFFER,\n \t__IXGBEVF_TX_DETECT_HANG,\n \t__IXGBEVF_HANG_CHECK_ARMED,\n \t__IXGBEVF_TX_XDP_RING,\n@@ -143,8 +137,7 @@ struct ixgbevf_ring {\n #define IXGBEVF_MIN_RXD\t\t64\n \n /* Supported Rx Buffer Sizes */\n-#define IXGBEVF_RXBUFFER_256\t256    /* Used for packet split */\n-#define IXGBEVF_RXBUFFER_2048\t2048\n+#define IXGBEVF_RXBUFFER_256\t256\n #define IXGBEVF_RXBUFFER_3072\t3072\n \n #define IXGBEVF_RX_HDR_SIZE\tIXGBEVF_RXBUFFER_256\n@@ -152,12 +145,6 @@ struct ixgbevf_ring {\n #define MAXIMUM_ETHERNET_VLAN_SIZE (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)\n \n #define IXGBEVF_SKB_PAD\t\t(NET_SKB_PAD + NET_IP_ALIGN)\n-#if (PAGE_SIZE < 8192)\n-#define IXGBEVF_MAX_FRAME_BUILD_SKB \\\n-\t(SKB_WITH_OVERHEAD(IXGBEVF_RXBUFFER_2048) - IXGBEVF_SKB_PAD)\n-#else\n-#define IXGBEVF_MAX_FRAME_BUILD_SKB\tIXGBEVF_RXBUFFER_2048\n-#endif\n \n #define IXGBE_TX_FLAGS_CSUM\t\tBIT(0)\n #define IXGBE_TX_FLAGS_VLAN\t\tBIT(1)\n@@ -168,35 +155,6 @@ struct ixgbevf_ring {\n #define IXGBE_TX_FLAGS_VLAN_PRIO_MASK\t0x0000e000\n #define IXGBE_TX_FLAGS_VLAN_SHIFT\t16\n \n-#define ring_uses_large_buffer(ring) \\\n-\ttest_bit(__IXGBEVF_RX_3K_BUFFER, &(ring)->state)\n-#define set_ring_uses_large_buffer(ring) \\\n-\tset_bit(__IXGBEVF_RX_3K_BUFFER, &(ring)->state)\n-#define clear_ring_uses_large_buffer(ring) \\\n-\tclear_bit(__IXGBEVF_RX_3K_BUFFER, 
&(ring)->state)\n-\n-static inline unsigned int ixgbevf_rx_bufsz(struct ixgbevf_ring *ring)\n-{\n-#if (PAGE_SIZE < 8192)\n-\tif (ring_uses_large_buffer(ring))\n-\t\treturn IXGBEVF_RXBUFFER_3072;\n-\n-\treturn IXGBEVF_MAX_FRAME_BUILD_SKB;\n-#endif\n-\treturn IXGBEVF_RXBUFFER_2048;\n-}\n-\n-static inline unsigned int ixgbevf_rx_pg_order(struct ixgbevf_ring *ring)\n-{\n-#if (PAGE_SIZE < 8192)\n-\tif (ring_uses_large_buffer(ring))\n-\t\treturn 1;\n-#endif\n-\treturn 0;\n-}\n-\n-#define ixgbevf_rx_pg_size(_ring) (PAGE_SIZE << ixgbevf_rx_pg_order(_ring))\n-\n #define check_for_tx_hang(ring) \\\n \ttest_bit(__IXGBEVF_TX_DETECT_HANG, &(ring)->state)\n #define set_check_for_tx_hang(ring) \\\ndiff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c\nindex fc48c89c7bb8..f5a7dd37084f 100644\n--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c\n+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c\n@@ -112,9 +112,6 @@ static void ixgbevf_service_event_complete(struct ixgbevf_adapter *adapter)\n static void ixgbevf_queue_reset_subtask(struct ixgbevf_adapter *adapter);\n static void ixgbevf_set_itr(struct ixgbevf_q_vector *q_vector);\n static void ixgbevf_free_all_rx_resources(struct ixgbevf_adapter *adapter);\n-static bool ixgbevf_can_reuse_rx_page(struct ixgbevf_rx_buffer *rx_buffer);\n-static void ixgbevf_reuse_rx_page(struct ixgbevf_ring *rx_ring,\n-\t\t\t\t  struct ixgbevf_rx_buffer *old_buff);\n \n static void ixgbevf_remove_adapter(struct ixgbe_hw *hw)\n {\n@@ -544,32 +541,14 @@ struct ixgbevf_rx_buffer *ixgbevf_get_rx_buffer(struct ixgbevf_ring *rx_ring,\n \t\t\t\t      size,\n \t\t\t\t      DMA_FROM_DEVICE);\n \n-\trx_buffer->pagecnt_bias--;\n-\n \treturn rx_buffer;\n }\n \n static void ixgbevf_put_rx_buffer(struct ixgbevf_ring *rx_ring,\n-\t\t\t\t  struct ixgbevf_rx_buffer *rx_buffer,\n-\t\t\t\t  struct sk_buff *skb)\n+\t\t\t\t  struct ixgbevf_rx_buffer *rx_buffer)\n {\n-\tif 
(ixgbevf_can_reuse_rx_page(rx_buffer)) {\n-\t\t/* hand second half of page back to the ring */\n-\t\tixgbevf_reuse_rx_page(rx_ring, rx_buffer);\n-\t} else {\n-\t\tif (IS_ERR(skb))\n-\t\t\t/* We are not reusing the buffer so unmap it and free\n-\t\t\t * any references we are holding to it\n-\t\t\t */\n-\t\t\tdma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,\n-\t\t\t\t\t     ixgbevf_rx_pg_size(rx_ring),\n-\t\t\t\t\t     DMA_FROM_DEVICE,\n-\t\t\t\t\t     IXGBEVF_RX_DMA_ATTR);\n-\t\t__page_frag_cache_drain(rx_buffer->page,\n-\t\t\t\t\trx_buffer->pagecnt_bias);\n-\t}\n-\n-\t/* clear contents of rx_buffer */\n+\tdma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma, PAGE_SIZE,\n+\t\t\t     DMA_FROM_DEVICE, IXGBEVF_RX_DMA_ATTR);\n \trx_buffer->page = NULL;\n }\n \n@@ -600,38 +579,28 @@ static bool ixgbevf_is_non_eop(struct ixgbevf_ring *rx_ring,\n \treturn true;\n }\n \n-static inline unsigned int ixgbevf_rx_offset(struct ixgbevf_ring *rx_ring)\n-{\n-\treturn IXGBEVF_SKB_PAD;\n-}\n-\n static bool ixgbevf_alloc_mapped_page(struct ixgbevf_ring *rx_ring,\n \t\t\t\t      struct ixgbevf_rx_buffer *bi)\n {\n \tstruct page *page = bi->page;\n \tdma_addr_t dma;\n \n-\t/* since we are recycling buffers we should seldom need to alloc */\n-\tif (likely(page))\n-\t\treturn true;\n-\n \t/* alloc new page for storage */\n-\tpage = dev_alloc_pages(ixgbevf_rx_pg_order(rx_ring));\n+\tpage = dev_alloc_page();\n \tif (unlikely(!page)) {\n \t\trx_ring->rx_stats.alloc_rx_page_failed++;\n \t\treturn false;\n \t}\n \n \t/* map page for use */\n-\tdma = dma_map_page_attrs(rx_ring->dev, page, 0,\n-\t\t\t\t ixgbevf_rx_pg_size(rx_ring),\n+\tdma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,\n \t\t\t\t DMA_FROM_DEVICE, IXGBEVF_RX_DMA_ATTR);\n \n \t/* if mapping failed free memory back to system since\n \t * there isn't much point in holding memory we can't use\n \t */\n \tif (dma_mapping_error(rx_ring->dev, dma)) {\n-\t\t__free_pages(page, 
ixgbevf_rx_pg_order(rx_ring));\n+\t\t__free_page(page);\n \n \t\trx_ring->rx_stats.alloc_rx_page_failed++;\n \t\treturn false;\n@@ -639,8 +608,7 @@ static bool ixgbevf_alloc_mapped_page(struct ixgbevf_ring *rx_ring,\n \n \tbi->dma = dma;\n \tbi->page = page;\n-\tbi->page_offset = ixgbevf_rx_offset(rx_ring);\n-\tbi->pagecnt_bias = 1;\n+\tbi->page_offset = IXGBEVF_SKB_PAD;\n \trx_ring->rx_stats.alloc_rx_page++;\n \n \treturn true;\n@@ -673,7 +641,7 @@ static void ixgbevf_alloc_rx_buffers(struct ixgbevf_ring *rx_ring,\n \t\t/* sync the buffer for use by the device */\n \t\tdma_sync_single_range_for_device(rx_ring->dev, bi->dma,\n \t\t\t\t\t\t bi->page_offset,\n-\t\t\t\t\t\t ixgbevf_rx_bufsz(rx_ring),\n+\t\t\t\t\t\t IXGBEVF_RXBUFFER_3072,\n \t\t\t\t\t\t DMA_FROM_DEVICE);\n \n \t\t/* Refresh the desc even if pkt_addr didn't change\n@@ -755,66 +723,6 @@ static bool ixgbevf_cleanup_headers(struct ixgbevf_ring *rx_ring,\n \treturn false;\n }\n \n-/**\n- * ixgbevf_reuse_rx_page - page flip buffer and store it back on the ring\n- * @rx_ring: rx descriptor ring to store buffers on\n- * @old_buff: donor buffer to have page reused\n- *\n- * Synchronizes page for reuse by the adapter\n- **/\n-static void ixgbevf_reuse_rx_page(struct ixgbevf_ring *rx_ring,\n-\t\t\t\t  struct ixgbevf_rx_buffer *old_buff)\n-{\n-\tstruct ixgbevf_rx_buffer *new_buff;\n-\tu16 nta = rx_ring->next_to_alloc;\n-\n-\tnew_buff = &rx_ring->rx_buffer_info[nta];\n-\n-\t/* update, and store next to alloc */\n-\tnta++;\n-\trx_ring->next_to_alloc = (nta < rx_ring->count) ? 
nta : 0;\n-\n-\t/* transfer page from old buffer to new buffer */\n-\tnew_buff->page = old_buff->page;\n-\tnew_buff->dma = old_buff->dma;\n-\tnew_buff->page_offset = old_buff->page_offset;\n-\tnew_buff->pagecnt_bias = old_buff->pagecnt_bias;\n-}\n-\n-static bool ixgbevf_can_reuse_rx_page(struct ixgbevf_rx_buffer *rx_buffer)\n-{\n-\tunsigned int pagecnt_bias = rx_buffer->pagecnt_bias;\n-\tstruct page *page = rx_buffer->page;\n-\n-\t/* avoid re-using remote and pfmemalloc pages */\n-\tif (!dev_page_is_reusable(page))\n-\t\treturn false;\n-\n-#if (PAGE_SIZE < 8192)\n-\t/* if we are only owner of page we can reuse it */\n-\tif (unlikely((page_ref_count(page) - pagecnt_bias) > 1))\n-\t\treturn false;\n-#else\n-#define IXGBEVF_LAST_OFFSET \\\n-\t(SKB_WITH_OVERHEAD(PAGE_SIZE) - IXGBEVF_RXBUFFER_2048)\n-\n-\tif (rx_buffer->page_offset > IXGBEVF_LAST_OFFSET)\n-\t\treturn false;\n-\n-#endif\n-\n-\t/* If we have drained the page fragment pool we need to update\n-\t * the pagecnt_bias and page count so that we fully restock the\n-\t * number of references the driver holds.\n-\t */\n-\tif (unlikely(!pagecnt_bias)) {\n-\t\tpage_ref_add(page, USHRT_MAX);\n-\t\trx_buffer->pagecnt_bias = USHRT_MAX;\n-\t}\n-\n-\treturn true;\n-}\n-\n /**\n  * ixgbevf_add_rx_frag - Add contents of Rx buffer to sk_buff\n  * @rx_ring: rx descriptor ring to transact packets on\n@@ -829,18 +737,10 @@ static void ixgbevf_add_rx_frag(struct ixgbevf_ring *rx_ring,\n \t\t\t\tstruct sk_buff *skb,\n \t\t\t\tunsigned int size)\n {\n-#if (PAGE_SIZE < 8192)\n-\tunsigned int truesize = ixgbevf_rx_pg_size(rx_ring) / 2;\n-#else\n \tunsigned int truesize = SKB_DATA_ALIGN(IXGBEVF_SKB_PAD + size);\n-#endif\n+\n \tskb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,\n \t\t\trx_buffer->page_offset, size, truesize);\n-#if (PAGE_SIZE < 8192)\n-\trx_buffer->page_offset ^= truesize;\n-#else\n-\trx_buffer->page_offset += truesize;\n-#endif\n }\n \n static inline void ixgbevf_irq_enable_queues(struct 
ixgbevf_adapter *adapter,\n@@ -857,13 +757,9 @@ static struct sk_buff *ixgbevf_build_skb(struct ixgbevf_ring *rx_ring,\n \t\t\t\t\t union ixgbe_adv_rx_desc *rx_desc)\n {\n \tunsigned int metasize = xdp->data - xdp->data_meta;\n-#if (PAGE_SIZE < 8192)\n-\tunsigned int truesize = ixgbevf_rx_pg_size(rx_ring) / 2;\n-#else\n \tunsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +\n \t\t\t\tSKB_DATA_ALIGN(xdp->data_end -\n \t\t\t\t\t       xdp->data_hard_start);\n-#endif\n \tstruct sk_buff *skb;\n \n \t/* Prefetch first cache line of first page. If xdp->data_meta\n@@ -884,13 +780,6 @@ static struct sk_buff *ixgbevf_build_skb(struct ixgbevf_ring *rx_ring,\n \tif (metasize)\n \t\tskb_metadata_set(skb, metasize);\n \n-\t/* update buffer offset */\n-#if (PAGE_SIZE < 8192)\n-\trx_buffer->page_offset ^= truesize;\n-#else\n-\trx_buffer->page_offset += truesize;\n-#endif\n-\n \treturn skb;\n }\n \n@@ -1014,38 +903,11 @@ static int ixgbevf_run_xdp(struct ixgbevf_adapter *adapter,\n \treturn result;\n }\n \n-static unsigned int ixgbevf_rx_frame_truesize(struct ixgbevf_ring *rx_ring,\n-\t\t\t\t\t      unsigned int size)\n-{\n-\tunsigned int truesize;\n-\n-#if (PAGE_SIZE < 8192)\n-\ttruesize = ixgbevf_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */\n-#else\n-\ttruesize = SKB_DATA_ALIGN(IXGBEVF_SKB_PAD + size) +\n-\t\t   SKB_DATA_ALIGN(sizeof(struct skb_shared_info));\n-#endif\n-\treturn truesize;\n-}\n-\n-static void ixgbevf_rx_buffer_flip(struct ixgbevf_ring *rx_ring,\n-\t\t\t\t   struct ixgbevf_rx_buffer *rx_buffer,\n-\t\t\t\t   unsigned int size)\n-{\n-\tunsigned int truesize = ixgbevf_rx_frame_truesize(rx_ring, size);\n-\n-#if (PAGE_SIZE < 8192)\n-\trx_buffer->page_offset ^= truesize;\n-#else\n-\trx_buffer->page_offset += truesize;\n-#endif\n-}\n-\n static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,\n \t\t\t\tstruct ixgbevf_ring *rx_ring,\n \t\t\t\tint budget)\n {\n-\tunsigned int total_rx_bytes = 0, total_rx_packets = 0, frame_sz = 
0;\n+\tunsigned int total_rx_bytes = 0, total_rx_packets = 0;\n \tstruct ixgbevf_adapter *adapter = q_vector->adapter;\n \tu16 cleaned_count = ixgbevf_desc_unused(rx_ring);\n \tstruct sk_buff *skb = rx_ring->skb;\n@@ -1054,10 +916,7 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,\n \tint xdp_res = 0;\n \n \t/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */\n-#if (PAGE_SIZE < 8192)\n-\tframe_sz = ixgbevf_rx_frame_truesize(rx_ring, 0);\n-#endif\n-\txdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);\n+\txdp_init_buff(&xdp, IXGBEVF_RXBUFFER_3072, &rx_ring->xdp_rxq);\n \n \twhile (likely(total_rx_packets < budget)) {\n \t\tstruct ixgbevf_rx_buffer *rx_buffer;\n@@ -1081,31 +940,24 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,\n \t\t */\n \t\trmb();\n \n-\t\trx_buffer = ixgbevf_get_rx_buffer(rx_ring, size);\n+\t\trx_buffer =\n+\t\t\tixgbevf_get_rx_buffer(rx_ring, IXGBEVF_RXBUFFER_3072);\n \n \t\t/* retrieve a buffer from the ring */\n \t\tif (!skb) {\n-\t\t\tunsigned int offset = ixgbevf_rx_offset(rx_ring);\n+\t\t\tunsigned int offset = rx_buffer->page_offset;\n \t\t\tunsigned char *hard_start;\n \n \t\t\thard_start = page_address(rx_buffer->page) +\n \t\t\t\t     rx_buffer->page_offset - offset;\n \t\t\txdp_prepare_buff(&xdp, hard_start, offset, size, true);\n-#if (PAGE_SIZE > 4096)\n-\t\t\t/* At larger PAGE_SIZE, frame_sz depend on len size */\n-\t\t\txdp.frame_sz = ixgbevf_rx_frame_truesize(rx_ring, size);\n-#endif\n \t\t\txdp_res = ixgbevf_run_xdp(adapter, rx_ring, &xdp);\n \t\t}\n \n \t\tif (xdp_res) {\n-\t\t\tif (xdp_res == IXGBEVF_XDP_TX) {\n+\t\t\tif (xdp_res == IXGBEVF_XDP_TX)\n \t\t\t\txdp_xmit = true;\n-\t\t\t\tixgbevf_rx_buffer_flip(rx_ring, rx_buffer,\n-\t\t\t\t\t\t       size);\n-\t\t\t} else {\n-\t\t\t\trx_buffer->pagecnt_bias++;\n-\t\t\t}\n+\n \t\t\ttotal_rx_packets++;\n \t\t\ttotal_rx_bytes += size;\n \t\t} else if (skb) {\n@@ -1118,11 +970,10 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector 
*q_vector,\n \t\t/* exit if we failed to retrieve a buffer */\n \t\tif (!xdp_res && !skb) {\n \t\t\trx_ring->rx_stats.alloc_rx_buff_failed++;\n-\t\t\trx_buffer->pagecnt_bias++;\n \t\t\tbreak;\n \t\t}\n \n-\t\tixgbevf_put_rx_buffer(rx_ring, rx_buffer, skb);\n+\t\tixgbevf_put_rx_buffer(rx_ring, rx_buffer);\n \t\tcleaned_count++;\n \n \t\t/* fetch next buffer in frame if non-eop */\n@@ -1699,10 +1550,7 @@ static void ixgbevf_configure_srrctl(struct ixgbevf_adapter *adapter,\n \tsrrctl = IXGBE_SRRCTL_DROP_EN;\n \n \tsrrctl |= IXGBEVF_RX_HDR_SIZE << IXGBE_SRRCTL_BSIZEHDRSIZE_SHIFT;\n-\tif (ring_uses_large_buffer(ring))\n-\t\tsrrctl |= IXGBEVF_RXBUFFER_3072 >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;\n-\telse\n-\t\tsrrctl |= IXGBEVF_RXBUFFER_2048 >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;\n+\tsrrctl |= IXGBEVF_RXBUFFER_3072 >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;\n \tsrrctl |= IXGBE_SRRCTL_DESCTYPE_ADV_ONEBUF;\n \n \tIXGBE_WRITE_REG(hw, IXGBE_VFSRRCTL(index), srrctl);\n@@ -1880,13 +1728,6 @@ static void ixgbevf_configure_rx_ring(struct ixgbevf_adapter *adapter,\n \tif (adapter->hw.mac.type != ixgbe_mac_82599_vf) {\n \t\trxdctl &= ~(IXGBE_RXDCTL_RLPMLMASK |\n \t\t\t    IXGBE_RXDCTL_RLPML_EN);\n-\n-#if (PAGE_SIZE < 8192)\n-\t\t/* Limit the maximum frame size so we don't overrun the skb */\n-\t\tif (!ring_uses_large_buffer(ring))\n-\t\t\trxdctl |= IXGBEVF_MAX_FRAME_BUILD_SKB |\n-\t\t\t\t  IXGBE_RXDCTL_RLPML_EN;\n-#endif\n \t}\n \n \trxdctl |= IXGBE_RXDCTL_ENABLE | IXGBE_RXDCTL_VME;\n@@ -1896,24 +1737,6 @@ static void ixgbevf_configure_rx_ring(struct ixgbevf_adapter *adapter,\n \tixgbevf_alloc_rx_buffers(ring, ixgbevf_desc_unused(ring));\n }\n \n-static void ixgbevf_set_rx_buffer_len(struct ixgbevf_adapter *adapter,\n-\t\t\t\t      struct ixgbevf_ring *rx_ring)\n-{\n-\tstruct net_device *netdev = adapter->netdev;\n-\tunsigned int max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN;\n-\n-\t/* set buffer size flags */\n-\tclear_ring_uses_large_buffer(rx_ring);\n-\n-\tif (PAGE_SIZE < 8192)\n-\t\t/* 82599 can't 
rely on RXDCTL.RLPML to restrict\n-\t\t * the size of the frame\n-\t\t */\n-\t\tif (max_frame > IXGBEVF_MAX_FRAME_BUILD_SKB ||\n-\t\t    adapter->hw.mac.type == ixgbe_mac_82599_vf)\n-\t\t\tset_ring_uses_large_buffer(rx_ring);\n-}\n-\n /**\n  * ixgbevf_configure_rx - Configure 82599 VF Receive Unit after Reset\n  * @adapter: board private structure\n@@ -1944,7 +1767,6 @@ static void ixgbevf_configure_rx(struct ixgbevf_adapter *adapter)\n \tfor (i = 0; i < adapter->num_rx_queues; i++) {\n \t\tstruct ixgbevf_ring *rx_ring = adapter->rx_ring[i];\n \n-\t\tixgbevf_set_rx_buffer_len(adapter, rx_ring);\n \t\tixgbevf_configure_rx_ring(adapter, rx_ring);\n \t}\n }\n@@ -2323,19 +2145,12 @@ static void ixgbevf_clean_rx_ring(struct ixgbevf_ring *rx_ring)\n \t\tdma_sync_single_range_for_cpu(rx_ring->dev,\n \t\t\t\t\t      rx_buffer->dma,\n \t\t\t\t\t      rx_buffer->page_offset,\n-\t\t\t\t\t      ixgbevf_rx_bufsz(rx_ring),\n+\t\t\t\t\t      IXGBEVF_RXBUFFER_3072,\n \t\t\t\t\t      DMA_FROM_DEVICE);\n \n \t\t/* free resources associated with mapping */\n-\t\tdma_unmap_page_attrs(rx_ring->dev,\n-\t\t\t\t     rx_buffer->dma,\n-\t\t\t\t     ixgbevf_rx_pg_size(rx_ring),\n-\t\t\t\t     DMA_FROM_DEVICE,\n-\t\t\t\t     IXGBEVF_RX_DMA_ATTR);\n-\n-\t\t__page_frag_cache_drain(rx_buffer->page,\n-\t\t\t\t\trx_buffer->pagecnt_bias);\n-\n+\t\tixgbevf_put_rx_buffer(rx_ring, rx_buffer);\n+\t\t__free_page(rx_buffer->page);\n \t\ti++;\n \t\tif (i == rx_ring->count)\n \t\t\ti = 0;\n@@ -4394,9 +4209,7 @@ static int ixgbevf_xdp_setup(struct net_device *dev, struct bpf_prog *prog)\n \n \t/* verify ixgbevf ring attributes are sufficient for XDP */\n \tfor (i = 0; i < adapter->num_rx_queues; i++) {\n-\t\tstruct ixgbevf_ring *ring = adapter->rx_ring[i];\n-\n-\t\tif (frame_size > ixgbevf_rx_bufsz(ring))\n+\t\tif (frame_size > IXGBEVF_RXBUFFER_3072)\n \t\t\treturn -EINVAL;\n \t}\n \n",
    "prefixes": [
        "iwl-next",
        "v3",
        "02/10"
    ]
}
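Writes use the same URL with PATCH (partial) or PUT (full replacement) and require an authenticated user with maintainer rights on the project. A hedged sketch of a partial update, assuming a valid API token is exported in the hypothetical `PW_TOKEN` environment variable; the `Authorization: Token …` header is the standard Django REST Framework token scheme that Patchwork uses:

```python
import json
import os
from urllib.request import Request, urlopen

API_BASE = "http://patchwork.ozlabs.org/api"

def build_update(patch_id: int, token: str, fields: dict) -> Request:
    """Assemble a PATCH request that changes only the given fields."""
    body = json.dumps(fields).encode("utf-8")
    headers = {
        "Authorization": f"Token {token}",
        "Content-Type": "application/json",
    }
    return Request(f"{API_BASE}/patches/{patch_id}/", data=body,
                   headers=headers, method="PATCH")

if __name__ == "__main__":
    # Example: move the patch back to "under-review" once the requested
    # changes have been addressed (requires maintainer rights).
    req = build_update(2205085, os.environ["PW_TOKEN"],
                       {"state": "under-review"})
    with urlopen(req) as resp:
        print(resp.status, json.load(resp)["state"])
```

Only the fields present in the body are modified; a PUT would instead expect every writable field to be supplied.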