[{"id":3688456,"web_url":"http://patchwork.ozlabs.org/comment/3688456/","msgid":"<20260508140758.1200568-3-horms@kernel.org>","list_archive_url":null,"date":"2026-05-08T14:08:00","subject":"Re: [Intel-wired-lan] [PATCH iwl-next v5 4/5] ice: implement Rx\n queue management ops","submitter":{"id":82748,"url":"http://patchwork.ozlabs.org/api/people/82748/","name":"Simon Horman","email":"horms@kernel.org"},"content":"From: 'Simon Horman' <horms@kernel.org>\n\nThis is an AI-generated review of your patch. The human sending this\nemail has considered the AI review valid, or at least plausible.\nFull review at: https://sashiko.dev\n---\nice: implement Rx queue management ops\n\nThis commit wires the ice driver into the netdev queue management ops\nframework, reusing the existing per-queue disable/reconfig/enable paths\noriginally added for XSk. With it, ice reportedly passes the io_uring\nzcrx selftests.\n\n> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c\n> index 1add82d894bb..4e0b8895c303 100644\n> --- a/drivers/net/ethernet/intel/ice/ice_base.c\n> +++ b/drivers/net/ethernet/intel/ice/ice_base.c\n\n[ ... 
]\n\n> @@ -1577,3 +1593,81 @@ int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)\n>\n>  \treturn fail;\n>  }\n> +\n> +/**\n> + * ice_qp_ena - Enables a queue pair\n> + * @vsi: VSI of interest\n> + * @q_idx: ring index in array\n> + *\n> + * Returns 0 on success, negative on failure.\n> + */\n> +int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)\n> +{\n> +\treturn __ice_qp_ena(vsi, q_idx, ice_vsi_cfg_single_rxq(vsi, q_idx));\n> +}\n> +\n> +static int ice_queue_start(struct net_device *dev,\n> +\t\t\t   struct netdev_queue_config *qcfg,\n> +\t\t\t   void *per_queue_mem, int idx)\n> +{\n> +\tconst struct ice_netdev_priv *priv = netdev_priv(dev);\n> +\tstruct ice_rx_ring *real = priv->vsi->rx_rings[idx];\n> +\tstruct ice_rx_ring *new = per_queue_mem;\n> +\tstruct napi_struct *napi;\n> +\tint ret;\n> +\n> +\treal->pp = new->pp;\n> +\treal->rx_fqes = new->rx_fqes;\n> +\treal->hdr_fqes = new->hdr_fqes;\n> +\treal->hdr_pp = new->hdr_pp;\n> +\n> +\treal->hdr_truesize = new->hdr_truesize;\n> +\treal->truesize = new->truesize;\n> +\treal->rx_hdr_len = new->rx_hdr_len;\n> +\treal->rx_buf_len = new->rx_buf_len;\n> +\n> +\tmemcpy(&real->xdp_rxq, &new->xdp_rxq, sizeof(new->xdp_rxq));\n> +\n> +\tret = ice_setup_rx_ctx(real);\n> +\tif (ret)\n> +\t\treturn ret;\n\nSimon says: I believe that the issue flagged immediately below is\n            a false positive; the cleanup logic unwinds things correctly.\n\n\t    However, the two other issues flagged further down do seem to be\n\t    of concern.\n\nice_queue_start() eagerly copies new->pp, new->rx_fqes, new->hdr_pp,\nnew->hdr_fqes and the entire new->xdp_rxq into real before ice_setup_rx_ctx()\nhas succeeded.\n\nIf ice_setup_rx_ctx() returns an error here, the function bails out with\nreal already pointing at new_mem's page pools and fqes arrays.\n\nThe core netdev_rx_queue_reconfig() rollback then invokes\nndo_queue_start(dev, qcfg_old, old_mem, rxq_idx), which repeats the same\neager overwrite using old_mem and calls ice_setup_rx_ctx() 
again.  If the\nsecond ice_setup_rx_ctx() also fails, the core proceeds to\nndo_queue_mem_free(dev, old_mem), and ice_queue_mem_free() will tear down\nthe page pools whose pointers real->pp / real->hdr_pp are still aliasing.\n\nWould it be safer to stash the existing real->pp / real->rx_fqes /\nreal->hdr_pp / real->hdr_fqes / real->xdp_rxq on entry and restore them if\nice_setup_rx_ctx() fails, so that a failed rollback cannot leave real with\ndangling pointers to a pool that ice_queue_mem_free() is about to destroy?\n\n> +\n> +\tnapi = &real->q_vector->napi;\n> +\n> +\tpage_pool_enable_direct_recycling(real->pp, napi);\n> +\tif (real->hdr_pp)\n> +\t\tpage_pool_enable_direct_recycling(real->hdr_pp, napi);\n> +\n> +\tret = ice_alloc_rx_bufs(real, ICE_DESC_UNUSED(real));\n> +\n> +\treturn __ice_qp_ena(priv->vsi, idx, ret);\n\nice_alloc_rx_bufs() is declared bool and returns !!cleaned_count, so it\nyields 0 on full success and 1 on partial/failed allocation:\n\nbool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, unsigned int cleaned_count)\n{\n    ...\n    return !!cleaned_count;\n}\n\nThat bool is then passed straight into __ice_qp_ena()'s fail parameter,\nwhich only overwrites fail via 'if (!fail) fail = err;', so a value of 1\npropagates unchanged to ice_queue_start()'s return value.\n\nnetdev_rx_queue_reconfig() treats any non-zero return as failure and\npropagates it back to userspace (netlink, ethtool, io_uring zcrx install),\nwhere the callers expect a negative errno.\n\nBefore this patch, ice_qp_ena() obtained its fail value from\nice_vsi_cfg_single_rxq(), which returns a signed errno.\n\nShould this be converted to a proper negative errno before being handed\nto __ice_qp_ena(), for example:\n\n\tret = ice_alloc_rx_bufs(real, ICE_DESC_UNUSED(real)) ? 
-ENOMEM : 0;\n\n> +}\n> +\n> +static int ice_queue_stop(struct net_device *dev, void *per_queue_mem,\n> +\t\t\t  int idx)\n> +{\n> +\tconst struct ice_netdev_priv *priv = netdev_priv(dev);\n> +\tstruct ice_rx_ring *real = priv->vsi->rx_rings[idx];\n> +\tint ret;\n> +\n> +\tret = __ice_qp_dis(priv->vsi, idx);\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tpage_pool_disable_direct_recycling(real->pp);\n> +\tif (real->hdr_pp)\n> +\t\tpage_pool_disable_direct_recycling(real->hdr_pp);\n> +\n> +\tice_zero_rx_ring(real);\n> +\tmemcpy(per_queue_mem, real, sizeof(*real));\n> +\n> +\treturn 0;\n> +}\n\nCan this leak inflight page pool buffers?\n\nice_zero_rx_ring(real) is called before the memcpy to per_queue_mem, and\nice_zero_rx_ring() resets both indices:\n\nvoid ice_zero_rx_ring(struct ice_rx_ring *rx_ring)\n{\n    ...\n    rx_ring->next_to_clean = 0;\n    rx_ring->next_to_use = 0;\n}\n\nSo per_queue_mem captures a ring where next_to_clean == next_to_use == 0.\n\nThe core then invokes ndo_queue_mem_free(dev, old_mem), and the recycle\nloop in ice_queue_mem_free() is guarded by exactly those two indices:\n\nvoid ice_queue_mem_free(struct net_device *dev, void *per_queue_mem)\n{\n    ...\n    for (u32 i = rx_ring->next_to_clean; i != rx_ring->next_to_use; ) {\n        libeth_rx_recycle_slow(rx_ring->rx_fqes[i].netmem);\n\n        if (rx_ring->hdr_pp)\n            libeth_rx_recycle_slow(rx_ring->hdr_fqes[i].netmem);\n\n        if (unlikely(++i == rx_ring->count))\n            i = 0;\n    }\n    ...\n    ice_rxq_pp_destroy(rx_ring);\n}\n\nWith 0 != 0 false on entry, the loop never runs, so any buffers that were\nin rx_fqes[old_ntc..old_ntu) (and hdr_fqes[] when header split is on) are\nnever returned via libeth_rx_recycle_slow().  
ice_rxq_pp_destroy() then\nkvfree()s the fqes arrays and calls page_pool_destroy() with outstanding\ninflight references that are no longer tracked anywhere.\n\nThe new ice_clean_rx_ring() wrapper keeps the original ordering:\n\nvoid ice_clean_rx_ring(struct ice_rx_ring *rx_ring)\n{\n    ice_queue_mem_free(rx_ring->netdev, rx_ring);\n    ice_zero_rx_ring(rx_ring);\n}\n\nShould ice_queue_stop() follow the same ordering and perform the memcpy\nbefore ice_zero_rx_ring(real), so the recycle loop in ice_queue_mem_free()\nsees the real ntc/ntu values?\n\n> +\n> +const struct netdev_queue_mgmt_ops ice_queue_mgmt_ops = {\n> +\t.ndo_queue_mem_alloc\t= ice_queue_mem_alloc,\n> +\t.ndo_queue_mem_free\t= ice_queue_mem_free,\n> +\t.ndo_queue_mem_size\t= sizeof(struct ice_rx_ring),\n> +\t.ndo_queue_start\t= ice_queue_start,\n> +\t.ndo_queue_stop\t\t= ice_queue_stop,\n> +};\n\n[ ... ]","headers":{"Return-Path":"<intel-wired-lan-bounces@osuosl.org>","X-Original-To":["incoming@patchwork.ozlabs.org","intel-wired-lan@lists.osuosl.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","intel-wired-lan@lists.osuosl.org"],"Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256\n header.s=default header.b=IcfIqRKz;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org\n (client-ip=2605:bc80:3010::136; helo=smtp3.osuosl.org;\n envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=patchwork.ozlabs.org)"],"Received":["from smtp3.osuosl.org (smtp3.osuosl.org [IPv6:2605:bc80:3010::136])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4gBrpf1tGwz1yJq\n\tfor <incoming@patchwork.ozlabs.org>; Sat, 09 May 2026 00:19:22 +1000 (AEST)","from localhost (localhost 
[127.0.0.1])\n\tby smtp3.osuosl.org (Postfix) with ESMTP id 73B8D61247;\n\tFri,  8 May 2026 14:19:20 +0000 (UTC)","from smtp3.osuosl.org ([127.0.0.1])\n by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id TCehAE8Snvwg; Fri,  8 May 2026 14:19:19 +0000 (UTC)","from lists1.osuosl.org (lists1.osuosl.org [140.211.166.142])\n\tby smtp3.osuosl.org (Postfix) with ESMTP id 87B0561621;\n\tFri,  8 May 2026 14:19:19 +0000 (UTC)","from smtp2.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n by lists1.osuosl.org (Postfix) with ESMTP id 008CA358\n for <intel-wired-lan@lists.osuosl.org>; Fri,  8 May 2026 14:19:18 +0000 (UTC)","from localhost (localhost [127.0.0.1])\n by smtp2.osuosl.org (Postfix) with ESMTP id E615F4073D\n for <intel-wired-lan@lists.osuosl.org>; Fri,  8 May 2026 14:19:18 +0000 (UTC)","from smtp2.osuosl.org ([127.0.0.1])\n by localhost (smtp2.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id DIm4sr-algOE for <intel-wired-lan@lists.osuosl.org>;\n Fri,  8 May 2026 14:19:18 +0000 (UTC)","from sea.source.kernel.org (sea.source.kernel.org [172.234.252.31])\n by smtp2.osuosl.org (Postfix) with ESMTPS id E7F3E406AE\n for <intel-wired-lan@lists.osuosl.org>; Fri,  8 May 2026 14:19:17 +0000 (UTC)","from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])\n by sea.source.kernel.org (Postfix) with ESMTP id 35385434EE;\n Fri,  8 May 2026 14:19:17 +0000 (UTC)","by smtp.kernel.org (Postfix) with ESMTPSA id 1DD2DC2BCB0;\n Fri,  8 May 2026 14:19:13 +0000 (UTC)"],"X-Virus-Scanned":["amavis at osuosl.org","amavis at osuosl.org"],"X-Comment":"SPF check N/A for local connections - client-ip=140.211.166.142;\n helo=lists1.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=<UNKNOWN> ","DKIM-Filter":["OpenDKIM Filter v2.11.0 smtp3.osuosl.org 87B0561621","OpenDKIM Filter v2.11.0 smtp2.osuosl.org E7F3E406AE"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=osuosl.org;\n\ts=default; 
t=1778249959;\n\tbh=Sr8Q4o/bIRjQ5al37Kam1jIuO1SKkTEVm1ZzQwLpy0Q=;\n\th=From:To:Cc:Date:In-Reply-To:References:Subject:List-Id:\n\t List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe:\n\t From;\n\tb=IcfIqRKzVAijlUYLeBBe6gYsLXy2Bw48NfWheouHy92ktZ2fcIYmVLuigs1rqXUwd\n\t jZeqjL7URmyJopBK3GIsWRvcenFu0N2wg+CO/xtAnn9LiZQRxsgP0m0Lx78SkwR167\n\t UZgyBy4NDp2CRTTdq5A/uHypGqjE7rx50QZVRMFm4spLPk+0u3oJr+6h34i4ZmdiD0\n\t jYVwgiSgUEyMXaRmVfdB6oVtHbKJLGuT21IrBEIfypT4zp1adl3FIefBPhrw8Kca8R\n\t JbIHBfT8lr4AoJZSdM84trFvosB2w4/qkA+x/nAtb57dfoAMYqxlQyiNM2PmKdC5Ty\n\t xdBViU5hapnWA==","Received-SPF":"Pass (mailfrom) identity=mailfrom; client-ip=172.234.252.31;\n helo=sea.source.kernel.org; envelope-from=horms@kernel.org;\n receiver=<UNKNOWN>","DMARC-Filter":"OpenDMARC Filter v1.4.2 smtp2.osuosl.org E7F3E406AE","From":"Simon Horman <horms@kernel.org>","To":"aleksander.lobakin@intel.com","Cc":"'Simon Horman' <horms@kernel.org>, intel-wired-lan@lists.osuosl.org,\n anthony.l.nguyen@intel.com, przemyslaw.kitszel@intel.com,\n andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,\n kuba@kernel.org, pabeni@redhat.com, kohei@enjuk.jp,\n jacob.e.keller@intel.com, aleksandr.loktionov@intel.com,\n nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,\n linux-kernel@vger.kernel.org","Date":"Fri,  8 May 2026 15:08:00 +0100","Message-ID":"<20260508140758.1200568-3-horms@kernel.org>","X-Mailer":"git-send-email 2.54.0","In-Reply-To":"<20260505152923.1040589-5-aleksander.lobakin@intel.com>","References":"<20260505152923.1040589-5-aleksander.lobakin@intel.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","X-Mailman-Original-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple;\n d=kernel.org; s=k20201202; t=1778249957;\n bh=U43IRTUeEFC+y+FoudadK4oeMVSSy37ArbZ6hs+eIIg=;\n h=From:To:Cc:Subject:Date:In-Reply-To:References:From;\n b=WbMfbNRlW4bsPoDerwDHrarui7U8wGbbxQ/K0zs9bmR8aVX/jqJd8oEO1ysnxjkEy\n 
5hd3J2tsn4+H0lzT3g28Xh2RQiiBgjHZf5ZnZ3h5W8dlipS/D/aGdZhHnQPJxVJWyl\n mU5T/XlfoWpwzkpkykOGwiXL80H5MZtCN3TLIZ15qtZFyH2kkS4QMoAfFkrInx55wY\n LXFRNG924FoItItZDkqfvp5MH6K6KiOHks0OlZYIh/lHzq4v0wyHKGnICDAdIRzUqK\n k4an71xacgInraN530QG29cnc3Q08c0iWCdVCgBFRhM2LWZAQAQ2hgEKwO4L1E29vD\n i0EVNe5UfkRnw==","X-Mailman-Original-Authentication-Results":["smtp2.osuosl.org;\n dmarc=pass (p=quarantine dis=none)\n header.from=kernel.org","smtp2.osuosl.org;\n dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org\n header.a=rsa-sha256 header.s=k20201202 header.b=WbMfbNRl"],"Subject":"Re: [Intel-wired-lan] [PATCH iwl-next v5 4/5] ice: implement Rx\n queue management ops","X-BeenThere":"intel-wired-lan@osuosl.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Intel Wired Ethernet Linux Kernel Driver Development\n <intel-wired-lan.osuosl.org>","List-Unsubscribe":"<https://lists.osuosl.org/mailman/options/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>","List-Archive":"<http://lists.osuosl.org/pipermail/intel-wired-lan/>","List-Post":"<mailto:intel-wired-lan@osuosl.org>","List-Help":"<mailto:intel-wired-lan-request@osuosl.org?subject=help>","List-Subscribe":"<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>","Errors-To":"intel-wired-lan-bounces@osuosl.org","Sender":"\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"}},{"id":3688461,"web_url":"http://patchwork.ozlabs.org/comment/3688461/","msgid":"<20260508142639.GO15617@horms.kernel.org>","list_archive_url":null,"date":"2026-05-08T14:26:39","subject":"Re: [Intel-wired-lan] [PATCH iwl-next v5 4/5] ice: implement Rx\n queue management ops","submitter":{"id":82748,"url":"http://patchwork.ozlabs.org/api/people/82748/","name":"Simon Horman","email":"horms@kernel.org"},"content":"On Fri, May 08, 2026 at 03:08:00PM +0100, Simon Horman wrote:\n> From: 'Simon Horman' <horms@kernel.org>\n> \n> This is an 
AI-generated review of your patch. The human sending this\n> email has considered the AI review valid, or at least plausible.\n> Full review at: https://sashiko.dev\n\nSorry, the line above should have referenced\nhttps://netdev-ai.bots.linux.dev/sashiko/\n\nThere is also a review of this patch available on https://sashiko.dev\nWhich I plan to forward separately.\n\n...","headers":{"Return-Path":"<intel-wired-lan-bounces@osuosl.org>","X-Original-To":["incoming@patchwork.ozlabs.org","intel-wired-lan@lists.osuosl.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","intel-wired-lan@lists.osuosl.org"],"Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256\n header.s=default header.b=uqZwbWnD;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org\n (client-ip=2605:bc80:3010::136; helo=smtp3.osuosl.org;\n envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=patchwork.ozlabs.org)"],"Received":["from smtp3.osuosl.org (smtp3.osuosl.org [IPv6:2605:bc80:3010::136])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4gBrzJ1d6xz1yKd\n\tfor <incoming@patchwork.ozlabs.org>; Sat, 09 May 2026 00:26:52 +1000 (AEST)","from localhost (localhost [127.0.0.1])\n\tby smtp3.osuosl.org (Postfix) with ESMTP id C684261631;\n\tFri,  8 May 2026 14:26:50 +0000 (UTC)","from smtp3.osuosl.org ([127.0.0.1])\n by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id hEHo705e5R5M; Fri,  8 May 2026 14:26:48 +0000 (UTC)","from lists1.osuosl.org (lists1.osuosl.org [140.211.166.142])\n\tby smtp3.osuosl.org (Postfix) with ESMTP id 23BE36162E;\n\tFri,  8 May 2026 14:26:48 +0000 (UTC)","from smtp2.osuosl.org (smtp2.osuosl.org 
[IPv6:2605:bc80:3010::133])\n by lists1.osuosl.org (Postfix) with ESMTP id D93AE358\n for <intel-wired-lan@lists.osuosl.org>; Fri,  8 May 2026 14:26:46 +0000 (UTC)","from localhost (localhost [127.0.0.1])\n by smtp2.osuosl.org (Postfix) with ESMTP id BEF5040E75\n for <intel-wired-lan@lists.osuosl.org>; Fri,  8 May 2026 14:26:46 +0000 (UTC)","from smtp2.osuosl.org ([127.0.0.1])\n by localhost (smtp2.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id iTiGxxmKwv3R for <intel-wired-lan@lists.osuosl.org>;\n Fri,  8 May 2026 14:26:46 +0000 (UTC)","from tor.source.kernel.org (tor.source.kernel.org\n [IPv6:2600:3c04:e001:324:0:1991:8:25])\n by smtp2.osuosl.org (Postfix) with ESMTPS id E80F840E63\n for <intel-wired-lan@lists.osuosl.org>; Fri,  8 May 2026 14:26:45 +0000 (UTC)","from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])\n by tor.source.kernel.org (Postfix) with ESMTP id 671B9600AE;\n Fri,  8 May 2026 14:26:44 +0000 (UTC)","by smtp.kernel.org (Postfix) with ESMTPSA id 62C65C2BCB0;\n Fri,  8 May 2026 14:26:41 +0000 (UTC)"],"X-Virus-Scanned":["amavis at osuosl.org","amavis at osuosl.org"],"X-Comment":"SPF check N/A for local connections - client-ip=140.211.166.142;\n helo=lists1.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=<UNKNOWN> ","DKIM-Filter":["OpenDKIM Filter v2.11.0 smtp3.osuosl.org 23BE36162E","OpenDKIM Filter v2.11.0 smtp2.osuosl.org E80F840E63"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=osuosl.org;\n\ts=default; t=1778250408;\n\tbh=3wNpiS4ugDaedwdUXfX9wOa0RNBzWD4K4bSjiChbX8Y=;\n\th=Date:From:To:Cc:References:In-Reply-To:Subject:List-Id:\n\t List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe:\n\t From;\n\tb=uqZwbWnDASozNpI0ffvsTG8XnkvL7CYnZDRIK+VDcmI8Jp0a0cj/pdYyAK+M4t/F5\n\t rRHUkjnkfZj2TGNRmOpwHYrFMJDWYRHXy4u+yZGPlZxNSmyIU3XjN+p9guVzIqf6X9\n\t VfJ8WubxB3YZJ6tq+NujftYSOYNj9/P+Zh+cQgK29sG08sWVwNlWmGm2uvy2RwpreX\n\t 
tseTWDgo45nK+JGzDYA82OxxB275qi5Axb2RfQceTn2hs6uuyG1wVSRCUyxfMXwoMS\n\t 8nlRJWiJAo8qQu9xgJ8jSN+Of4Q2zkqyZ8L5hjnMt9esaKzne6CngdBm6erWpO24V3\n\t RIcWB19Ghazeg==","Received-SPF":"Pass (mailfrom) identity=mailfrom;\n client-ip=2600:3c04:e001:324:0:1991:8:25; helo=tor.source.kernel.org;\n envelope-from=horms@kernel.org; receiver=<UNKNOWN>","DMARC-Filter":"OpenDMARC Filter v1.4.2 smtp2.osuosl.org E80F840E63","Date":"Fri, 8 May 2026 15:26:39 +0100","From":"Simon Horman <horms@kernel.org>","To":"aleksander.lobakin@intel.com","Cc":"intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com,\n przemyslaw.kitszel@intel.com, andrew+netdev@lunn.ch,\n davem@davemloft.net, edumazet@google.com, kuba@kernel.org,\n pabeni@redhat.com, kohei@enjuk.jp, jacob.e.keller@intel.com,\n aleksandr.loktionov@intel.com,\n nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,\n linux-kernel@vger.kernel.org","Message-ID":"<20260508142639.GO15617@horms.kernel.org>","References":"<20260505152923.1040589-5-aleksander.lobakin@intel.com>\n <20260508140758.1200568-3-horms@kernel.org>","MIME-Version":"1.0","Content-Type":"text/plain; charset=us-ascii","Content-Disposition":"inline","In-Reply-To":"<20260508140758.1200568-3-horms@kernel.org>","X-Mailman-Original-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple;\n d=kernel.org; s=k20201202; t=1778250404;\n bh=ZiN/EMgSDxuZs/BBhHBaRIIV4dNVOUbLJZU0Kmmg47k=;\n h=Date:From:To:Cc:Subject:References:In-Reply-To:From;\n b=rQpSxrCRYG35baUcJLL8n3SXZsTYcmz30I4/Mfc+xnMtmR9TIV40Hx3P5VvxdFvgX\n ZeQwZ97nz+Yn6DIv31JZRFEQ3pQxajGDvQ9aHZemb2Ef4vLdp0kB22jq0+0uAA8dM/\n ysGOttieaAiiwvLH0KRVNWcE5RCrhONbKQoRN9XHPHj0m7r08IMsxdLBncoCYh2qe4\n hGAC/kOebTq3VQRyjo5bVynAjummKKFQF809hhwy+FBBvkY/w+esf/1zx5wml749q8\n o+i0REg1U5fKNaKxp9LJ6Mn1JZ1NGq1D9wdFF4pBNWWtZX/eAe/Xn434Y2Nw04lwdU\n vJhnOgW+bmCZA==","X-Mailman-Original-Authentication-Results":["smtp2.osuosl.org;\n dmarc=pass (p=quarantine dis=none)\n header.from=kernel.org","smtp2.osuosl.org;\n dkim=pass (2048-bit key) 
header.d=kernel.org header.i=@kernel.org\n header.a=rsa-sha256 header.s=k20201202 header.b=rQpSxrCR"],"Subject":"Re: [Intel-wired-lan] [PATCH iwl-next v5 4/5] ice: implement Rx\n queue management ops","X-BeenThere":"intel-wired-lan@osuosl.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Intel Wired Ethernet Linux Kernel Driver Development\n <intel-wired-lan.osuosl.org>","List-Unsubscribe":"<https://lists.osuosl.org/mailman/options/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>","List-Archive":"<http://lists.osuosl.org/pipermail/intel-wired-lan/>","List-Post":"<mailto:intel-wired-lan@osuosl.org>","List-Help":"<mailto:intel-wired-lan-request@osuosl.org?subject=help>","List-Subscribe":"<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>","Errors-To":"intel-wired-lan-bounces@osuosl.org","Sender":"\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"}},{"id":3688498,"web_url":"http://patchwork.ozlabs.org/comment/3688498/","msgid":"<20260508142505.1210977-2-horms@kernel.org>","list_archive_url":null,"date":"2026-05-08T14:25:06","subject":"Re: [Intel-wired-lan] [PATCH iwl-next v5 4/5] ice: implement Rx\n queue management ops","submitter":{"id":82748,"url":"http://patchwork.ozlabs.org/api/people/82748/","name":"Simon Horman","email":"horms@kernel.org"},"content":"From: 'Simon Horman' <horms@kernel.org>\n\nThis is an AI-generated review of your patch. The human sending this\nemail has considered the AI review valid, or at least plausible.\nFull review at: https://sashiko.dev\n---\n    ice: implement Rx queue management ops\n    \n    Now ice is ready to get queue_mgmt_ops support. It already has API\n    to disable/reconfig/enable one particular queue (for XSk). 
Reuse as\n    much of its code as possible to implement Rx queue management\n    callbacks and vice versa -- ice_queue_mem_{alloc,free}() can be\n    reused during ifup/ifdown to elide code duplication.\n    With this, ice passes the io_uring zcrx selftests, meaning the Rx\n    part of netmem/MP support is done.\n\n> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c\n> index 1add82d894bbe..4e0b8895c303a 100644\n> --- a/drivers/net/ethernet/intel/ice/ice_base.c\n> +++ b/drivers/net/ethernet/intel/ice/ice_base.c\n> @@ -653,6 +653,43 @@ static int ice_rxq_pp_create(struct ice_rx_ring *rq)\n>  \treturn err;\n>  }\n>  \n> +static int ice_queue_mem_alloc(struct net_device *dev,\n> +\t\t\t       struct netdev_queue_config *qcfg,\n> +\t\t\t       void *per_queue_mem, int idx)\n> +{\n> +\tconst struct ice_netdev_priv *priv = netdev_priv(dev);\n> +\tconst struct ice_rx_ring *real = priv->vsi->rx_rings[idx];\n> +\tstruct ice_rx_ring *new = per_queue_mem;\n> +\tint ret;\n> +\n> +\tnew->count = real->count;\n> +\tnew->netdev = real->netdev;\n> +\tnew->q_index = real->q_index;\n> +\tnew->q_vector = real->q_vector;\n> +\tnew->vsi = real->vsi;\n\nIs it safe to assume netdev_priv(dev) returns the correct VSI here? For a\nloopback VSI (ICE_VSI_LB), it shares the PF's netdev, so priv->vsi resolves\nto the PF VSI.\n\nCould this copy the PF VSI pointers to the loopback ring and cause hardware\ncontext corruption later?\n\n[ ... ]\n> -/**\n> - * ice_qp_dis - Disables a queue pair\n> - * @vsi: VSI of interest\n> - * @q_idx: ring index in array\n> - *\n> - * Returns 0 on success, negative on failure.\n> - */\n> -int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)\n> +static int __ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)\n>  {\n\nCould this cause TX data corruption? 
This function appears to call\nsynchronize_net() before netif_tx_stop_queue(), meaning the RCU barrier\ncompletes before the queue actually stops.\n\nConcurrent transmissions might then race with ring modifications.\n\n[ ... ]\n> +static int ice_queue_start(struct net_device *dev,\n> +\t\t\t   struct netdev_queue_config *qcfg,\n> +\t\t\t   void *per_queue_mem, int idx)\n> +{\n> +\tconst struct ice_netdev_priv *priv = netdev_priv(dev);\n> +\tstruct ice_rx_ring *real = priv->vsi->rx_rings[idx];\n> +\tstruct ice_rx_ring *new = per_queue_mem;\n> +\tstruct napi_struct *napi;\n> +\tint ret;\n[ ... ]\n> +\tret = ice_alloc_rx_bufs(real, ICE_DESC_UNUSED(real));\n> +\n> +\treturn __ice_qp_ena(priv->vsi, idx, ret);\n> +}\n\nDoes this cause a use-after-free via hardware DMA?\n\nice_alloc_rx_bufs() returns a boolean true (1) on success. This is passed to\n__ice_qp_ena() as the fail argument, which returns it back. Since\nice_queue_start() returns a non-zero value, the core networking stack assumes\nfailure and frees the queue memory, while the hardware queue remains active and\nDMAs into freed memory.\n\n> +\n> +static int ice_queue_stop(struct net_device *dev, void *per_queue_mem,\n> +\t\t\t  int idx)\n> +{\n> +\tconst struct ice_netdev_priv *priv = netdev_priv(dev);\n> +\tstruct ice_rx_ring *real = priv->vsi->rx_rings[idx];\n> +\tint ret;\n> +\n> +\tret = __ice_qp_dis(priv->vsi, idx);\n\nDoes calling __ice_qp_dis() from ice_queue_stop() disrupt the global interface\ncarrier state and TX queues?\n\n__ice_qp_dis() calls netif_carrier_off() and cleans TX rings, but\nice_queue_stop() is intended to be a granular, per-RX-queue operation.\n\n> +\tif (ret)\n> +\t\treturn ret;\n> +\n> +\tpage_pool_disable_direct_recycling(real->pp);\n\nCan this dereference a NULL pointer if the queue is configured with an AF_XDP\nzero-copy socket?\n\nIn that case, a standard page pool is not created and real->pp might be NULL,\ncausing a panic in page_pool_disable_direct_recycling().\n\n> +\tif 
(real->hdr_pp)\n> +\t\tpage_pool_disable_direct_recycling(real->hdr_pp);\n> +\n> +\tice_zero_rx_ring(real);\n> +\tmemcpy(per_queue_mem, real, sizeof(*real));\n\nCould this lead to a memory leak of RX ring buffers?\n\nice_zero_rx_ring() clears the next_to_clean and next_to_use indices to 0\nbefore the memcpy(). When the stack calls ice_queue_mem_free(per_queue_mem),\nit skips cleaning because next_to_clean == next_to_use, leaking all active\nskbs and page pool buffers.\n\nAlso, does this memcpy() leave dangling pointers in real that can lead to a\ndouble-free?\n\nThe dynamically allocated pointers like rx_fqes and pp are not set to NULL\nin real. When the interface is brought down later, ice_clean_rx_ring(real)\nmay attempt to destroy the already-freed page pools again.\n\n> +\n> +\treturn 0;\n> +}","headers":{"Return-Path":"<intel-wired-lan-bounces@osuosl.org>","X-Original-To":["incoming@patchwork.ozlabs.org","intel-wired-lan@lists.osuosl.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","intel-wired-lan@lists.osuosl.org"],"Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256\n header.s=default header.b=1afcxOvt;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org\n (client-ip=2605:bc80:3010::136; helo=smtp3.osuosl.org;\n envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=patchwork.ozlabs.org)"],"Received":["from smtp3.osuosl.org (smtp3.osuosl.org [IPv6:2605:bc80:3010::136])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4gBstj6NQ7z1yCg\n\tfor <incoming@patchwork.ozlabs.org>; Sat, 09 May 2026 01:07:57 +1000 (AEST)","from localhost (localhost [127.0.0.1])\n\tby smtp3.osuosl.org (Postfix) with ESMTP id 
From: Alexander Lobakin <aleksander.lobakin@intel.com>
Date: Fri, 8 May 2026 17:07:03 +0200
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v5 4/5] ice: implement Rx queue management ops
Message-ID: <4c0cacc0-b5f4-47fa-8865-72bd157c1cb1@intel.com>
In-Reply-To: <20260508140758.1200568-3-horms@kernel.org>

From: Simon Horman <horms@kernel.org>
Date: Fri,  8 May 2026 15:08:00 +0100

> From: 'Simon Horman' <horms@kernel.org>
>
> This is an AI-generated review of your patch. The human sending this
> email has considered the AI review valid, or at least plausible.
> Full review at: https://sashiko.dev
> ---
> ice: implement Rx queue management ops
>
> This commit wires the ice driver into the netdev queue management ops
> framework, reusing the existing per-queue disable/reconfig/enable paths
> originally added for XSk. With it, ice reportedly passes the io_uring
> zcrx selftests.
>
>> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
>> index 1add82d894bb..4e0b8895c303 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_base.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_base.c
>
> [ ... ]
>
>> @@ -1577,3 +1593,81 @@ int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
>>
>>          return fail;
>>  }
>> +
>> +/**
>> + * ice_qp_ena - Enables a queue pair
>> + * @vsi: VSI of interest
>> + * @q_idx: ring index in array
>> + *
>> + * Returns 0 on success, negative on failure.
>> + */
>> +int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
>> +{
>> +        return __ice_qp_ena(vsi, q_idx, ice_vsi_cfg_single_rxq(vsi, q_idx));
>> +}
>> +
>> +static int ice_queue_start(struct net_device *dev,
>> +                           struct netdev_queue_config *qcfg,
>> +                           void *per_queue_mem, int idx)
>> +{
>> +        const struct ice_netdev_priv *priv = netdev_priv(dev);
>> +        struct ice_rx_ring *real = priv->vsi->rx_rings[idx];
>> +        struct ice_rx_ring *new = per_queue_mem;
>> +        struct napi_struct *napi;
>> +        int ret;
>> +
>> +        real->pp = new->pp;
>> +        real->rx_fqes = new->rx_fqes;
>> +        real->hdr_fqes = new->hdr_fqes;
>> +        real->hdr_pp = new->hdr_pp;
>> +
>> +        real->hdr_truesize = new->hdr_truesize;
>> +        real->truesize = new->truesize;
>> +        real->rx_hdr_len = new->rx_hdr_len;
>> +        real->rx_buf_len = new->rx_buf_len;
>> +
>> +        memcpy(&real->xdp_rxq, &new->xdp_rxq, sizeof(new->xdp_rxq));
>> +
>> +        ret = ice_setup_rx_ctx(real);
>> +        if (ret)
>> +                return ret;
>
> Simon says: I believe that the issue flagged immediately below is
>             a false positive; the cleanup logic unwinds things correctly.

Yup, our Sashiko also had concerns about this piece, but I rechecked and
it seems to be a FP.

>
>             However, the two other issues flagged further down do seem to be
>             of concern.
>
> ice_queue_start() eagerly copies new->pp, new->rx_fqes, new->hdr_pp,
> new->hdr_fqes and the entire new->xdp_rxq into real before ice_setup_rx_ctx()
> has succeeded.

[...]

>
>> +
>> +        napi = &real->q_vector->napi;
>> +
>> +        page_pool_enable_direct_recycling(real->pp, napi);
>> +        if (real->hdr_pp)
>> +                page_pool_enable_direct_recycling(real->hdr_pp, napi);
>> +
>> +        ret = ice_alloc_rx_bufs(real, ICE_DESC_UNUSED(real));
>> +
>> +        return __ice_qp_ena(priv->vsi, idx, ret);
>
> ice_alloc_rx_bufs() is declared bool and returns !!cleaned_count, so it
> yields 0 on full success and 1 on partial/failed allocation:
>
> bool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, unsigned int cleaned_count)
> {
>     ...
>     return !!cleaned_count;
> }
>
> That bool is then passed straight into __ice_qp_ena()'s fail parameter,
> which only overwrites fail via 'if (!fail) fail = err;', so a value of 1
> propagates unchanged to ice_queue_start()'s return value.
>
> netdev_rx_queue_reconfig() treats any non-zero return as failure and
> propagates it back to userspace (netlink, ethtool, io_uring zcrx install),
> where the callers expect a negative errno.
>
> Before this patch, ice_qp_ena() obtained its fail value from
> ice_vsi_cfg_single_rxq(), which returns a signed errno.
>
> Should this be converted to a proper negative errno before being handed
> to __ice_qp_ena(), for example:
>
>         ret = ice_alloc_rx_bufs(real, ICE_DESC_UNUSED(real)) ? -ENOMEM : 0;
>
>> +}
>> +
>> +static int ice_queue_stop(struct net_device *dev, void *per_queue_mem,
>> +                          int idx)
>> +{
>> +        const struct ice_netdev_priv *priv = netdev_priv(dev);
>> +        struct ice_rx_ring *real = priv->vsi->rx_rings[idx];
>> +        int ret;
>> +
>> +        ret = __ice_qp_dis(priv->vsi, idx);
>> +        if (ret)
>> +                return ret;
>> +
>> +        page_pool_disable_direct_recycling(real->pp);
>> +        if (real->hdr_pp)
>> +                page_pool_disable_direct_recycling(real->hdr_pp);
>> +
>> +        ice_zero_rx_ring(real);
>> +        memcpy(per_queue_mem, real, sizeof(*real));
>> +
>> +        return 0;
>> +}
>
> Can this leak inflight page pool buffers?
>
> ice_zero_rx_ring(real) is called before the memcpy to per_queue_mem, and
> ice_zero_rx_ring() resets both indices:
>
> void ice_zero_rx_ring(struct ice_rx_ring *rx_ring)
> {
>     ...
>     rx_ring->next_to_clean = 0;
>     rx_ring->next_to_use = 0;
> }
>
> So per_queue_mem captures a ring where next_to_clean == next_to_use == 0.
>
> The core then invokes ndo_queue_mem_free(dev, old_mem), and the recycle
> loop in ice_queue_mem_free() is guarded by exactly those two indices:
>
> void ice_queue_mem_free(struct net_device *dev, void *per_queue_mem)
> {
>     ...
>     for (u32 i = rx_ring->next_to_clean; i != rx_ring->next_to_use; ) {
>         libeth_rx_recycle_slow(rx_ring->rx_fqes[i].netmem);
>
>         if (rx_ring->hdr_pp)
>             libeth_rx_recycle_slow(rx_ring->hdr_fqes[i].netmem);
>
>         if (unlikely(++i == rx_ring->count))
>             i = 0;
>     }
>     ...
>     ice_rxq_pp_destroy(rx_ring);
> }
>
> With 0 != 0 false on entry, the loop never runs, so any buffers that were
> in rx_fqes[old_ntc..old_ntu) (and hdr_fqes[] when header split is on) are
> never returned via libeth_rx_recycle_slow().  ice_rxq_pp_destroy() then
> kvfree()s the fqes arrays and calls page_pool_destroy() with outstanding
> inflight references that are no longer tracked anywhere.
>
> The new ice_clean_rx_ring() wrapper keeps the original ordering:
>
> void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
> {
>     ice_queue_mem_free(rx_ring->netdev, rx_ring);
>     ice_zero_rx_ring(rx_ring);
> }
>
> Should ice_queue_stop() follow the same ordering and perform the memcpy
> before ice_zero_rx_ring(real), so the recycle loop in ice_queue_mem_free()
> sees the real ntc/ntu values?
>
>> +
>> +const struct netdev_queue_mgmt_ops ice_queue_mgmt_ops = {
>> +        .ndo_queue_mem_alloc    = ice_queue_mem_alloc,
>> +        .ndo_queue_mem_free     = ice_queue_mem_free,
>> +        .ndo_queue_mem_size     = sizeof(struct ice_rx_ring),
>> +        .ndo_queue_start        = ice_queue_start,
>> +        .ndo_queue_stop         = ice_queue_stop,
>> +};

Those two are new to me, I'll double check.

Thanks,
Olek