get:
Show a patch.

patch:
Partially update a patch (only the submitted fields are changed).

put:
Update a patch (full update of the writable fields).
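As a sketch of how a client might drive these endpoints, the following builds (but does not send) the corresponding requests with Python's standard library. The token value and the field names used for the write path are placeholders, not taken from this page; check your Patchwork instance's authentication setup before use.

```python
# Sketch only: construct GET/PATCH requests for the endpoints above.
# Nothing here touches the network; sending is a request.urlopen(req) call.
import json
from urllib import request

BASE = "http://patchwork.ozlabs.org/api/1.0/patches/"

def build_patch_request(patch_id, fields=None, token="REPLACE-ME"):
    """Return a GET request for a patch, or a PATCH request when
    `fields` (a dict of writable fields) is supplied."""
    url = f"{BASE}{patch_id}/"
    if fields is None:
        return request.Request(url, method="GET")
    # Write operations need authentication; many Patchwork deployments use
    # token auth ("Authorization: Token ..."), but this is an assumption here.
    return request.Request(url,
                           data=json.dumps(fields).encode(),
                           method="PATCH",
                           headers={"Authorization": f"Token {token}",
                                    "Content-Type": "application/json"})

req = build_patch_request(2223073)
```

A PUT would be built the same way, with `method="PUT"` and the full resource representation in `fields`.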

GET /api/1.0/patches/2223073/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2223073,
    "url": "http://patchwork.ozlabs.org/api/1.0/patches/2223073/?format=api",
    "project": {
        "id": 46,
        "url": "http://patchwork.ozlabs.org/api/1.0/projects/46/?format=api",
        "name": "Intel Wired Ethernet development",
        "link_name": "intel-wired-lan",
        "list_id": "intel-wired-lan.osuosl.org",
        "list_email": "intel-wired-lan@osuosl.org",
        "web_url": "",
        "scm_url": "",
        "webscm_url": ""
    },
    "msgid": "<20260414110006.124286-6-jtornosm@redhat.com>",
    "date": "2026-04-14T11:00:06",
    "name": "[net,v3,5/5] iavf: refactor virtchnl polling into single function",
    "commit_ref": null,
    "pull_url": null,
    "state": "new",
    "archived": false,
    "hash": "8970f4db0195f7c311bdc549f99d7f66b62cfdd2",
    "submitter": {
        "id": 93070,
        "url": "http://patchwork.ozlabs.org/api/1.0/people/93070/?format=api",
        "name": "Jose Ignacio Tornos Martinez",
        "email": "jtornosm@redhat.com"
    },
    "delegate": null,
    "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20260414110006.124286-6-jtornosm@redhat.com/mbox/",
    "series": [
        {
            "id": 499816,
            "url": "http://patchwork.ozlabs.org/api/1.0/series/499816/?format=api",
            "date": "2026-04-14T11:00:01",
            "name": "Fix i40e/ice/iavf VF bonding after netdev lock changes",
            "version": 3,
            "mbox": "http://patchwork.ozlabs.org/series/499816/mbox/"
        }
    ],
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/2223073/checks/",
    "tags": {},
    "headers": {
        "Return-Path": "<intel-wired-lan-bounces@osuosl.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@legolas.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Authentication-Results": [
            "legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256\n header.s=default header.b=HsiTGsTG;\n\tdkim-atps=neutral",
            "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org\n (client-ip=2605:bc80:3010::138; helo=smtp1.osuosl.org;\n envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=patchwork.ozlabs.org)"
        ],
        "Received": [
            "from smtp1.osuosl.org (smtp1.osuosl.org [IPv6:2605:bc80:3010::138])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fw1Xk688Hz1yDF\n\tfor <incoming@patchwork.ozlabs.org>; Tue, 14 Apr 2026 21:00:54 +1000 (AEST)",
            "from localhost (localhost [127.0.0.1])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id 54BC084C12;\n\tTue, 14 Apr 2026 11:00:53 +0000 (UTC)",
            "from smtp1.osuosl.org ([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id CWbIXIah9qPk; Tue, 14 Apr 2026 11:00:52 +0000 (UTC)",
            "from lists1.osuosl.org (lists1.osuosl.org [140.211.166.142])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id 7B7AF84C14;\n\tTue, 14 Apr 2026 11:00:52 +0000 (UTC)",
            "from smtp1.osuosl.org (smtp1.osuosl.org [140.211.166.138])\n by lists1.osuosl.org (Postfix) with ESMTP id BF085237\n for <intel-wired-lan@lists.osuosl.org>; Tue, 14 Apr 2026 11:00:50 +0000 (UTC)",
            "from localhost (localhost [127.0.0.1])\n by smtp1.osuosl.org (Postfix) with ESMTP id A52E884C19\n for <intel-wired-lan@lists.osuosl.org>; Tue, 14 Apr 2026 11:00:50 +0000 (UTC)",
            "from smtp1.osuosl.org ([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id JInKWPZM3RNI for <intel-wired-lan@lists.osuosl.org>;\n Tue, 14 Apr 2026 11:00:49 +0000 (UTC)",
            "from us-smtp-delivery-124.mimecast.com\n (us-smtp-delivery-124.mimecast.com [170.10.133.124])\n by smtp1.osuosl.org (Postfix) with ESMTPS id 74FDB84C14\n for <intel-wired-lan@lists.osuosl.org>; Tue, 14 Apr 2026 11:00:49 +0000 (UTC)",
            "from mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com\n (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by\n relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,\n cipher=TLS_AES_256_GCM_SHA384) id us-mta-278-j5qJb-hUPtepsz5YqfYV-A-1; Tue,\n 14 Apr 2026 07:00:46 -0400",
            "from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com\n (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4])\n (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n (No client certificate requested)\n by mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS\n id 6342118002F5; Tue, 14 Apr 2026 11:00:45 +0000 (UTC)",
            "from fedora.redhat.com (unknown [10.44.48.43])\n by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP\n id 902B13000C1D; Tue, 14 Apr 2026 11:00:41 +0000 (UTC)"
        ],
        "X-Virus-Scanned": [
            "amavis at osuosl.org",
            "amavis at osuosl.org"
        ],
        "X-Comment": "SPF check N/A for local connections - client-ip=140.211.166.142;\n helo=lists1.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=<UNKNOWN> ",
        "DKIM-Filter": [
            "OpenDKIM Filter v2.11.0 smtp1.osuosl.org 7B7AF84C14",
            "OpenDKIM Filter v2.11.0 smtp1.osuosl.org 74FDB84C14"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=osuosl.org;\n\ts=default; t=1776164452;\n\tbh=ZUuW6S8vq+qFO9JpVN9QFMuGPGZumM43B/zb/rVii/U=;\n\th=From:To:Cc:Date:In-Reply-To:References:Subject:List-Id:\n\t List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe:\n\t From;\n\tb=HsiTGsTGnoXVSyqC2vz4Wjwzi9i/vdEovpb7VJDzpbQUJoPC/Z1V8aPHwFCxwfKMq\n\t qnByRtgCR535yEgbIztC305FRWWiPSGd/L5Vk5zoRVe4hAM82sCAKuCXIS2H+btdV6\n\t 7qf361abFeKek5/IEgox3466L0DcaAe/9y5jx5Z9uQcy/qZN8kJi6s41kJmfq+BG2d\n\t qGNbHP5ipzxmaaVcDqack3y5ZAq3Uk1ZBHj+ruxiHlOnNVnA33i5r5sSKyBhDv7zQT\n\t +ResSSQbaLWTc1ElXm8LWvKLnNiS4g492OdiRHLLUB9sFgZWf8GDqnywUw1wOA006F\n\t SU+9G/HiVQhpg==",
        "Received-SPF": "Pass (mailfrom) identity=mailfrom; client-ip=170.10.133.124;\n helo=us-smtp-delivery-124.mimecast.com; envelope-from=jtornosm@redhat.com;\n receiver=<UNKNOWN>",
        "DMARC-Filter": "OpenDMARC Filter v1.4.2 smtp1.osuosl.org 74FDB84C14",
        "X-MC-Unique": "j5qJb-hUPtepsz5YqfYV-A-1",
        "X-Mimecast-MFC-AGG-ID": "j5qJb-hUPtepsz5YqfYV-A_1776164445",
        "From": "Jose Ignacio Tornos Martinez <jtornosm@redhat.com>",
        "To": "netdev@vger.kernel.org",
        "Cc": "intel-wired-lan@lists.osuosl.org, jesse.brandeburg@intel.com,\n anthony.l.nguyen@intel.com, davem@davemloft.net, edumazet@google.com,\n kuba@kernel.org, pabeni@redhat.com,\n Jose Ignacio Tornos Martinez <jtornosm@redhat.com>,\n Przemek Kitszel <przemyslaw.kitszel@intel.com>",
        "Date": "Tue, 14 Apr 2026 13:00:06 +0200",
        "Message-ID": "<20260414110006.124286-6-jtornosm@redhat.com>",
        "In-Reply-To": "<20260414110006.124286-1-jtornosm@redhat.com>",
        "References": "<20260414110006.124286-1-jtornosm@redhat.com>",
        "MIME-Version": "1.0",
        "X-Scanned-By": "MIMEDefang 3.4.1 on 10.30.177.4",
        "X-Mimecast-MFC-PROC-ID": "MdF-DC5iS90-V84CeOiKMkRUKqUlUY0HWUnDUzSCXUQ_1776164445",
        "X-Mimecast-Originator": "redhat.com",
        "Content-Transfer-Encoding": "8bit",
        "content-type": "text/plain; charset=\"US-ASCII\"; x-default=true",
        "X-Mailman-Original-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=redhat.com;\n s=mimecast20190719; t=1776164448;\n h=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n to:to:cc:cc:mime-version:mime-version:content-type:content-type:\n content-transfer-encoding:content-transfer-encoding:\n in-reply-to:in-reply-to:references:references;\n bh=ZUuW6S8vq+qFO9JpVN9QFMuGPGZumM43B/zb/rVii/U=;\n b=NyIO+A6ITXlhaWmGMM1D2WUoBaLr1+UEWpHs09DY0hEUREskWlJtRmESKtle0KTW31oJoe\n +VR3/vDEcM7D0CSLln1YZkdia3PKRVOlE/hnCOVNkCTw0/M38xrqEDhoNpasXC1jhJUxEQ\n nGD5GEy50OTC4JnHzcJIB2LgPsLfSXE=",
        "X-Mailman-Original-Authentication-Results": [
            "smtp1.osuosl.org;\n dmarc=pass (p=quarantine dis=none)\n header.from=redhat.com",
            "smtp1.osuosl.org;\n dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com\n header.a=rsa-sha256 header.s=mimecast20190719 header.b=NyIO+A6I"
        ],
        "Subject": "[Intel-wired-lan] [PATCH net v3 5/5] iavf: refactor virtchnl\n polling into single function",
        "X-BeenThere": "intel-wired-lan@osuosl.org",
        "X-Mailman-Version": "2.1.30",
        "Precedence": "list",
        "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n <intel-wired-lan.osuosl.org>",
        "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>",
        "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>",
        "List-Post": "<mailto:intel-wired-lan@osuosl.org>",
        "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>",
        "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>",
        "Errors-To": "intel-wired-lan-bounces@osuosl.org",
        "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"
    },
    "content": "At this moment, the driver has two separate functions for polling virtchnl\nmessages from the admin queue:\n- iavf_poll_virtchnl_msg() for init-time (no timeout, no completion\n  handler)\n- iavf_poll_virtchnl_response() for runtime (with timeout, calls\n  completion)\n\nRefactor by enhancing iavf_poll_virtchnl_msg() to handle both use cases:\n1. Init-time mode (timeout_ms=0):\n  - Polls until matching opcode found or queue empty\n  - Returns raw message data without processing through completion handler\n  - Exits immediately on empty queue (no sleep/retry)\n2. Runtime mode (timeout_ms>0):\n  - Polls with timeout using condition callback or opcode check\n  - Processes all messages through iavf_virtchnl_completion()\n  - Supports custom completion callback (takes priority) or falls back\n    to checking adapter->current_op against expected opcode\n  - Uses pending parameter to skip sleep when more messages queued\n  - Uses 50-75 usec sleep (due to commit 9e3f23f44f32 (\"i40e: reduce wait\n    time for adminq command completion\"))\n\nBy unifying message handling, both init-time and runtime messages can be\nprocessed through the completion handler when appropriate, ensuring\nconsistent state updates and maintaining backward compatibility with all\nexisting call sites.\n\nSuggested-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>\nSigned-off-by: Jose Ignacio Tornos Martinez <jtornosm@redhat.com>\n---\n drivers/net/ethernet/intel/iavf/iavf.h        |   9 +-\n drivers/net/ethernet/intel/iavf/iavf_main.c   |  13 +-\n .../net/ethernet/intel/iavf/iavf_virtchnl.c   | 247 ++++++++----------\n 3 files changed, 125 insertions(+), 144 deletions(-)",
    "diff": "diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h\nindex b012a91b0252..9b25c5a65d2a 100644\n--- a/drivers/net/ethernet/intel/iavf/iavf.h\n+++ b/drivers/net/ethernet/intel/iavf/iavf.h\n@@ -607,11 +607,10 @@ void iavf_disable_vlan_stripping(struct iavf_adapter *adapter);\n void iavf_virtchnl_completion(struct iavf_adapter *adapter,\n \t\t\t      enum virtchnl_ops v_opcode,\n \t\t\t      enum iavf_status v_retval, u8 *msg, u16 msglen);\n-int iavf_poll_virtchnl_response(struct iavf_adapter *adapter,\n-\t\t\t\tbool (*condition)(struct iavf_adapter *, const void *),\n-\t\t\t\tconst void *cond_data,\n-\t\t\t\tenum virtchnl_ops v_opcode,\n-\t\t\t\tunsigned int timeout_ms);\n+int iavf_poll_virtchnl_msg(struct iavf_hw *hw, struct iavf_arq_event_info *event,\n+\t\t\t   enum virtchnl_ops op_to_poll, unsigned int timeout_ms,\n+\t\t\t   bool (*condition)(struct iavf_adapter *, const void *),\n+\t\t\t   const void *cond_data);\n int iavf_config_rss(struct iavf_adapter *adapter);\n void iavf_cfg_queues_bw(struct iavf_adapter *adapter);\n void iavf_cfg_queues_quanta_size(struct iavf_adapter *adapter);\ndiff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c\nindex 80277d495a8d..b0db15fd8ddb 100644\n--- a/drivers/net/ethernet/intel/iavf/iavf_main.c\n+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c\n@@ -1075,6 +1075,7 @@ static bool iavf_mac_change_done(struct iavf_adapter *adapter, const void *data)\n  */\n static int iavf_set_mac_sync(struct iavf_adapter *adapter, const u8 *addr)\n {\n+\tstruct iavf_arq_event_info event;\n \tint ret;\n \n \tnetdev_assert_locked(adapter->netdev);\n@@ -1083,8 +1084,16 @@ static int iavf_set_mac_sync(struct iavf_adapter *adapter, const u8 *addr)\n \tif (ret)\n \t\treturn ret;\n \n-\treturn iavf_poll_virtchnl_response(adapter, iavf_mac_change_done, addr,\n-\t\t\t\t\t   VIRTCHNL_OP_UNKNOWN, 2500);\n+\tevent.buf_len = 
IAVF_MAX_AQ_BUF_SIZE;\n+\tevent.msg_buf = kzalloc(event.buf_len, GFP_KERNEL);\n+\tif (!event.msg_buf)\n+\t\treturn -ENOMEM;\n+\n+\tret = iavf_poll_virtchnl_msg(&adapter->hw, &event, VIRTCHNL_OP_UNKNOWN,\n+\t\t\t\t     2500, iavf_mac_change_done, addr);\n+\n+\tkfree(event.msg_buf);\n+\treturn ret;\n }\n \n /**\ndiff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c\nindex df124f840ddb..ef9a251060d9 100644\n--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c\n+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c\n@@ -54,55 +54,121 @@ int iavf_send_api_ver(struct iavf_adapter *adapter)\n }\n \n /**\n- * iavf_poll_virtchnl_msg\n+ * iavf_virtchnl_completion_done - Check if virtchnl operation completed\n+ * @adapter: adapter structure\n+ * @condition: optional callback for custom completion check\n+ * @cond_data: context data for callback\n+ * @op_to_poll: opcode to check against current_op (if no callback)\n+ *\n+ * Checks if operation is complete. 
Callback takes priority if provided,\n+ * otherwise checks if current_op matches op_to_poll.\n+ *\n+ * Return: true if operation completed\n+ */\n+static inline bool\n+iavf_virtchnl_completion_done(struct iavf_adapter *adapter,\n+\t\t\t      bool (*condition)(struct iavf_adapter *, const void *),\n+\t\t\t      const void *cond_data,\n+\t\t\t      enum virtchnl_ops op_to_poll)\n+{\n+\tif (condition)\n+\t\treturn condition(adapter, cond_data);\n+\n+\treturn adapter->current_op == op_to_poll;\n+}\n+\n+/**\n+ * iavf_poll_virtchnl_msg - Poll admin queue for virtchnl message\n  * @hw: HW configuration structure\n  * @event: event to populate on success\n- * @op_to_poll: requested virtchnl op to poll for\n+ * @op_to_poll: virtchnl opcode to poll for (used for init-time and runtime\n+ *              without callback)\n+ * @timeout_ms: timeout in milliseconds (0 = no timeout, exit on empty queue)\n+ * @condition: optional callback to check custom completion (runtime use,\n+ *             takes priority over op_to_poll check)\n+ * @cond_data: context data for condition callback\n+ *\n+ * Enhanced polling function that handles both init-time and runtime use cases:\n+ * - Init-time: Set op_to_poll, timeout_ms=0, condition=NULL\n+ *   Polls until matching opcode found or queue empty\n+ * - Runtime with callback: Set timeout_ms>0, condition callback, cond_data\n+ *   Polls with timeout until condition returns true (op_to_poll not used)\n+ * - Runtime without callback: Set op_to_poll, timeout_ms>0, condition=NULL\n+ *   Polls with timeout until adapter->current_op == op_to_poll\n+ *\n+ * Runtime messages are processed through iavf_virtchnl_completion().\n+ * For init-time use, returns 0 with raw message data in event buffer.\n+ * For runtime use, returns 0 when completion condition is met.\n  *\n- * Initialize poll for virtchnl msg matching the requested_op. 
Returns 0\n- * if a message of the correct opcode is in the queue or an error code\n- * if no message matching the op code is waiting and other failures.\n+ * Return: 0 on success, -EAGAIN on timeout, or error code\n  */\n-static int\n-iavf_poll_virtchnl_msg(struct iavf_hw *hw, struct iavf_arq_event_info *event,\n-\t\t       enum virtchnl_ops op_to_poll)\n+int iavf_poll_virtchnl_msg(struct iavf_hw *hw, struct iavf_arq_event_info *event,\n+\t\t\t   enum virtchnl_ops op_to_poll, unsigned int timeout_ms,\n+\t\t\t   bool (*condition)(struct iavf_adapter *, const void *),\n+\t\t\t   const void *cond_data)\n {\n+\tstruct iavf_adapter *adapter = hw->back;\n+\tunsigned long timeout = timeout_ms ? jiffies + msecs_to_jiffies(timeout_ms) : 0;\n \tenum virtchnl_ops received_op;\n \tenum iavf_status status;\n-\tu32 v_retval;\n+\tu32 v_retval = 0;\n+\tu16 pending;\n \n-\twhile (1) {\n-\t\t/* When the AQ is empty, iavf_clean_arq_element will return\n-\t\t * nonzero and this loop will terminate.\n-\t\t */\n-\t\tstatus = iavf_clean_arq_element(hw, event, NULL);\n-\t\tif (status != IAVF_SUCCESS)\n-\t\t\treturn iavf_status_to_errno(status);\n-\t\treceived_op =\n-\t\t    (enum virtchnl_ops)le32_to_cpu(event->desc.cookie_high);\n+\tdo {\n+\t\tif (timeout_ms && iavf_virtchnl_completion_done(adapter, condition,\n+\t\t\t\t\t\t\t\tcond_data, op_to_poll))\n+\t\t\treturn 0;\n \n-\t\tif (received_op == VIRTCHNL_OP_EVENT) {\n-\t\t\tstruct iavf_adapter *adapter = hw->back;\n-\t\t\tstruct virtchnl_pf_event *vpe =\n-\t\t\t\t(struct virtchnl_pf_event *)event->msg_buf;\n+\t\tstatus = iavf_clean_arq_element(hw, event, &pending);\n+\t\tif (status == IAVF_SUCCESS) {\n+\t\t\treceived_op = (enum virtchnl_ops)le32_to_cpu(event->desc.cookie_high);\n \n-\t\t\tif (vpe->event != VIRTCHNL_EVENT_RESET_IMPENDING)\n-\t\t\t\tcontinue;\n+\t\t\t/* Handle reset events specially */\n+\t\t\tif (received_op == VIRTCHNL_OP_EVENT) {\n+\t\t\t\tstruct virtchnl_pf_event *vpe =\n+\t\t\t\t\t(struct virtchnl_pf_event 
*)event->msg_buf;\n \n-\t\t\tdev_info(&adapter->pdev->dev, \"Reset indication received from the PF\\n\");\n-\t\t\tif (!(adapter->flags & IAVF_FLAG_RESET_PENDING))\n-\t\t\t\tiavf_schedule_reset(adapter,\n-\t\t\t\t\t\t    IAVF_FLAG_RESET_PENDING);\n+\t\t\t\tif (vpe->event != VIRTCHNL_EVENT_RESET_IMPENDING)\n+\t\t\t\t\tcontinue;\n+\n+\t\t\t\tdev_info(&adapter->pdev->dev,\n+\t\t\t\t\t \"Reset indication received from the PF\\n\");\n+\t\t\t\tif (!(adapter->flags & IAVF_FLAG_RESET_PENDING))\n+\t\t\t\t\tiavf_schedule_reset(adapter,\n+\t\t\t\t\t\t\t    IAVF_FLAG_RESET_PENDING);\n+\n+\t\t\t\treturn -EIO;\n+\t\t\t}\n+\n+\t\t\tv_retval = le32_to_cpu(event->desc.cookie_low);\n+\n+\t\t\tif (!timeout_ms) {\n+\t\t\t\tif (received_op == op_to_poll)\n+\t\t\t\t\treturn virtchnl_status_to_errno((enum virtchnl_status_code)\n+\t\t\t\t\t\t\tv_retval);\n+\t\t\t} else {\n+\t\t\t\tiavf_virtchnl_completion(adapter, received_op,\n+\t\t\t\t\t\t\t (enum iavf_status)v_retval,\n+\t\t\t\t\t\t\t event->msg_buf, event->msg_len);\n+\t\t\t}\n+\n+\t\t\tif (pending)\n+\t\t\t\tcontinue;\n+\t\t} else if (!timeout_ms) {\n+\t\t\treturn iavf_status_to_errno(status);\n+\t\t}\n \n-\t\t\treturn -EIO;\n+\t\tif (timeout_ms) {\n+\t\t\tmemset(event->msg_buf, 0, IAVF_MAX_AQ_BUF_SIZE);\n+\t\t\tusleep_range(50, 75);\n \t\t}\n \n-\t\tif (op_to_poll == received_op)\n-\t\t\tbreak;\n-\t}\n+\t} while (!timeout_ms || time_before(jiffies, timeout));\n+\n+\tif (iavf_virtchnl_completion_done(adapter, condition, cond_data, op_to_poll))\n+\t\treturn 0;\n \n-\tv_retval = le32_to_cpu(event->desc.cookie_low);\n-\treturn virtchnl_status_to_errno((enum virtchnl_status_code)v_retval);\n+\treturn -EAGAIN;\n }\n \n /**\n@@ -124,7 +190,8 @@ int iavf_verify_api_ver(struct iavf_adapter *adapter)\n \tif (!event.msg_buf)\n \t\treturn -ENOMEM;\n \n-\terr = iavf_poll_virtchnl_msg(&adapter->hw, &event, VIRTCHNL_OP_VERSION);\n+\terr = iavf_poll_virtchnl_msg(&adapter->hw, &event, VIRTCHNL_OP_VERSION,\n+\t\t\t\t     0, NULL, NULL);\n \tif (!err) 
{\n \t\tstruct virtchnl_version_info *pf_vvi =\n \t\t\t(struct virtchnl_version_info *)event.msg_buf;\n@@ -294,7 +361,8 @@ int iavf_get_vf_config(struct iavf_adapter *adapter)\n \tif (!event.msg_buf)\n \t\treturn -ENOMEM;\n \n-\terr = iavf_poll_virtchnl_msg(hw, &event, VIRTCHNL_OP_GET_VF_RESOURCES);\n+\terr = iavf_poll_virtchnl_msg(hw, &event, VIRTCHNL_OP_GET_VF_RESOURCES,\n+\t\t\t\t     0, NULL, NULL);\n \tmemcpy(adapter->vf_res, event.msg_buf, min(event.msg_len, len));\n \n \t/* some PFs send more queues than we should have so validate that\n@@ -322,7 +390,8 @@ int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter)\n \t\treturn -ENOMEM;\n \n \terr = iavf_poll_virtchnl_msg(&adapter->hw, &event,\n-\t\t\t\t     VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS);\n+\t\t\t\t     VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS,\n+\t\t\t\t     0, NULL, NULL);\n \tif (!err)\n \t\tmemcpy(&adapter->vlan_v2_caps, event.msg_buf,\n \t\t       min(event.msg_len, len));\n@@ -342,7 +411,8 @@ int iavf_get_vf_supported_rxdids(struct iavf_adapter *adapter)\n \tevent.buf_len = sizeof(rxdids);\n \n \terr = iavf_poll_virtchnl_msg(&adapter->hw, &event,\n-\t\t\t\t     VIRTCHNL_OP_GET_SUPPORTED_RXDIDS);\n+\t\t\t\t     VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,\n+\t\t\t\t     0, NULL, NULL);\n \tif (!err)\n \t\tadapter->supp_rxdids = rxdids;\n \n@@ -359,7 +429,8 @@ int iavf_get_vf_ptp_caps(struct iavf_adapter *adapter)\n \tevent.buf_len = sizeof(caps);\n \n \terr = iavf_poll_virtchnl_msg(&adapter->hw, &event,\n-\t\t\t\t     VIRTCHNL_OP_1588_PTP_GET_CAPS);\n+\t\t\t\t     VIRTCHNL_OP_1588_PTP_GET_CAPS,\n+\t\t\t\t     0, NULL, NULL);\n \tif (!err)\n \t\tadapter->ptp.hw_caps = caps;\n \n@@ -2961,101 +3032,3 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,\n \tadapter->current_op = VIRTCHNL_OP_UNKNOWN;\n }\n \n-/**\n- * iavf_virtchnl_done - Check if virtchnl operation completed\n- * @adapter: board private structure\n- * @condition: optional callback for custom completion check\n- *   (takes priority)\n- * 
@cond_data: context data for callback\n- * @v_opcode: virtchnl opcode value we're waiting for if no condition\n- *   configured (typically VIRTCHNL_OP_UNKNOWN), if condition not used\n- *\n- * Checks completion status. Callback takes priority if provided. Otherwise\n- * waits for current_op to reach v_opcode (typically VIRTCHNL_OP_UNKNOWN\n- * after completion).\n- *\n- * Return: true if operation completed\n- */\n-static inline bool iavf_virtchnl_done(struct iavf_adapter *adapter,\n-\t\t\t\t      bool (*condition)(struct iavf_adapter *, const void *),\n-\t\t\t\t      const void *cond_data,\n-\t\t\t\t      enum virtchnl_ops v_opcode)\n-{\n-\tif (condition)\n-\t\treturn condition(adapter, cond_data);\n-\n-\treturn adapter->current_op == v_opcode;\n-}\n-\n-/**\n- * iavf_poll_virtchnl_response - Poll admin queue for virtchnl response\n- * @adapter: board private structure\n- * @condition: optional callback to check if desired response received\n- *   (takes priority)\n- * @cond_data: context data passed to condition callback\n- * @v_opcode: virtchnl opcode value to wait for if no condition configured\n- *   (typically VIRTCHNL_OP_UNKNOWN), if condition, not used\n- * @timeout_ms: maximum time to wait in milliseconds\n- *\n- * Polls admin queue and processes all messages until condition returns true\n- * or timeout expires. If condition is NULL, waits for current_op to become\n- * v_opcode (typically VIRTCHNL_OP_UNKNOWN after operation completes).\n- * Caller must hold netdev_lock. 
This can sleep for up to timeout_ms while\n- * polling hardware.\n- *\n- * Return: 0 on success (condition met), -EAGAIN on timeout or error\n- */\n-int iavf_poll_virtchnl_response(struct iavf_adapter *adapter,\n-\t\t\t\tbool (*condition)(struct iavf_adapter *, const void *),\n-\t\t\t\tconst void *cond_data,\n-\t\t\t\tenum virtchnl_ops v_opcode,\n-\t\t\t\tunsigned int timeout_ms)\n-{\n-\tstruct iavf_hw *hw = &adapter->hw;\n-\tstruct iavf_arq_event_info event;\n-\tenum virtchnl_ops v_op;\n-\tenum iavf_status v_ret;\n-\tunsigned long timeout;\n-\tu16 pending;\n-\tint ret;\n-\n-\tnetdev_assert_locked(adapter->netdev);\n-\n-\tevent.buf_len = IAVF_MAX_AQ_BUF_SIZE;\n-\tevent.msg_buf = kzalloc(event.buf_len, GFP_KERNEL);\n-\tif (!event.msg_buf)\n-\t\treturn -ENOMEM;\n-\n-\ttimeout = jiffies + msecs_to_jiffies(timeout_ms);\n-\tdo {\n-\t\tif (iavf_virtchnl_done(adapter, condition, cond_data, v_opcode)) {\n-\t\t\tret = 0;\n-\t\t\tgoto out;\n-\t\t}\n-\n-\t\tret = iavf_clean_arq_element(hw, &event, &pending);\n-\t\tif (!ret) {\n-\t\t\tv_op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high);\n-\t\t\tv_ret = (enum iavf_status)le32_to_cpu(event.desc.cookie_low);\n-\n-\t\t\tiavf_virtchnl_completion(adapter, v_op, v_ret,\n-\t\t\t\t\t\t event.msg_buf, event.msg_len);\n-\n-\t\t\tmemset(event.msg_buf, 0, IAVF_MAX_AQ_BUF_SIZE);\n-\n-\t\t\tif (pending)\n-\t\t\t\tcontinue;\n-\t\t}\n-\n-\t\tusleep_range(50, 75);\n-\t} while (time_before(jiffies, timeout));\n-\n-\tif (iavf_virtchnl_done(adapter, condition, cond_data, v_opcode))\n-\t\tret = 0;\n-\telse\n-\t\tret = -EAGAIN;\n-\n-out:\n-\tkfree(event.msg_buf);\n-\treturn ret;\n-}\n",
    "prefixes": [
        "net",
        "v3",
        "5/5"
    ]
}
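The body above is ordinary JSON, so any JSON-capable client can consume it. A minimal sketch, parsing an abbreviated copy of the fields shown (a real client would read the HTTP response body instead of a string literal):

```python
import json

# Abbreviated copy of the response body above, trimmed to a few fields.
body = '''{
    "id": 2223073,
    "name": "[net,v3,5/5] iavf: refactor virtchnl polling into single function",
    "state": "new",
    "archived": false,
    "series": [{"id": 499816, "version": 3}]
}'''

patch = json.loads(body)
# Pull out the fields a dashboard or CI script typically cares about.
summary = (patch["id"], patch["state"], patch["series"][0]["version"])
# summary == (2223073, 'new', 3)
```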