{"id":2235029,"url":"http://patchwork.ozlabs.org/api/1.2/patches/2235029/?format=json","web_url":"http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20260508124208.11622-13-przemyslaw.kitszel@intel.com/","project":{"id":46,"url":"http://patchwork.ozlabs.org/api/1.2/projects/46/?format=json","name":"Intel Wired Ethernet development","link_name":"intel-wired-lan","list_id":"intel-wired-lan.osuosl.org","list_email":"intel-wired-lan@osuosl.org","web_url":"","scm_url":"","webscm_url":"","list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<20260508124208.11622-13-przemyslaw.kitszel@intel.com>","list_archive_url":null,"date":"2026-05-08T12:42:05","name":"[iwl-next,v1,12/15] ice: introduce handling of virtchnl LARGE VF opcodes","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"2d52b29545c5c2febe3cb27c4a2a2da743212ee4","submitter":{"id":85252,"url":"http://patchwork.ozlabs.org/api/1.2/people/85252/?format=json","name":"Przemek Kitszel","email":"przemyslaw.kitszel@intel.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20260508124208.11622-13-przemyslaw.kitszel@intel.com/mbox/","series":[{"id":503388,"url":"http://patchwork.ozlabs.org/api/1.2/series/503388/?format=json","web_url":"http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=503388","date":"2026-05-08T12:41:53","name":"devlink, mlx5, iavf, ice: XLVF for iavf","version":1,"mbox":"http://patchwork.ozlabs.org/series/503388/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2235029/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2235029/checks/","tags":{},"related":[],"headers":{"Return-Path":"<intel-wired-lan-bounces@osuosl.org>","X-Original-To":["incoming@patchwork.ozlabs.org","intel-wired-lan@lists.osuosl.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","intel-wired-lan@lists.osuosl.org"],"Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass 
(2048-bit key;\n unprotected) header.d=osuosl.org header.i=@osuosl.org header.a=rsa-sha256\n header.s=default header.b=FhvMM193;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=osuosl.org\n (client-ip=2605:bc80:3010::138; helo=smtp1.osuosl.org;\n envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=patchwork.ozlabs.org)"],"Received":["from smtp1.osuosl.org (smtp1.osuosl.org [IPv6:2605:bc80:3010::138])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4gBq3B4s79z1yCg\n\tfor <incoming@patchwork.ozlabs.org>; Fri, 08 May 2026 23:00:06 +1000 (AEST)","from localhost (localhost [127.0.0.1])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id 5105E8237C;\n\tFri,  8 May 2026 13:00:05 +0000 (UTC)","from smtp1.osuosl.org ([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id FQ8JHj-IHdmp; Fri,  8 May 2026 13:00:03 +0000 (UTC)","from lists1.osuosl.org (lists1.osuosl.org [140.211.166.142])\n\tby smtp1.osuosl.org (Postfix) with ESMTP id 725C4826EF;\n\tFri,  8 May 2026 13:00:03 +0000 (UTC)","from smtp4.osuosl.org (smtp4.osuosl.org [140.211.166.137])\n by lists1.osuosl.org (Postfix) with ESMTP id 0A3E7317\n for <intel-wired-lan@lists.osuosl.org>; Fri,  8 May 2026 13:00:02 +0000 (UTC)","from localhost (localhost [127.0.0.1])\n by smtp4.osuosl.org (Postfix) with ESMTP id E4AAF410D0\n for <intel-wired-lan@lists.osuosl.org>; Fri,  8 May 2026 13:00:01 +0000 (UTC)","from smtp4.osuosl.org ([127.0.0.1])\n by localhost (smtp4.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id AN9hzipqiFP1 for <intel-wired-lan@lists.osuosl.org>;\n Fri,  8 May 2026 13:00:01 +0000 (UTC)","from mgamail.intel.com (mgamail.intel.com [198.175.65.17])\n by smtp4.osuosl.org (Postfix) with ESMTPS id E615D410AA\n for 
<intel-wired-lan@lists.osuosl.org>; Fri,  8 May 2026 13:00:00 +0000 (UTC)","from fmviesa005.fm.intel.com ([10.60.135.145])\n by orvoesa109.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 08 May 2026 06:00:01 -0700","from irvmail002.ir.intel.com ([10.43.11.120])\n by fmviesa005.fm.intel.com with ESMTP; 08 May 2026 05:59:55 -0700","from vecna.igk.intel.com (vecna.igk.intel.com [10.123.220.17])\n by irvmail002.ir.intel.com (Postfix) with ESMTP id E5C902FC43;\n Fri,  8 May 2026 13:59:53 +0100 (IST)"],"X-Virus-Scanned":["amavis at osuosl.org","amavis at osuosl.org"],"X-Comment":"SPF check N/A for local connections - client-ip=140.211.166.142;\n helo=lists1.osuosl.org; envelope-from=intel-wired-lan-bounces@osuosl.org;\n receiver=<UNKNOWN> ","DKIM-Filter":["OpenDKIM Filter v2.11.0 smtp1.osuosl.org 725C4826EF","OpenDKIM Filter v2.11.0 smtp4.osuosl.org E615D410AA"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=osuosl.org;\n\ts=default; t=1778245203;\n\tbh=+5gn+OnENhflxhj73zQ0DRQ7MpYrioV2fy6396N14wQ=;\n\th=From:To:Cc:Date:In-Reply-To:References:Subject:List-Id:\n\t List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe:\n\t From;\n\tb=FhvMM193EHPhVP11XbTgwEehMuvEmDKvF6aYrC/Kly8yoQdwundnXjaD/qe5mvo3Y\n\t 48szHXVUTZCIZ9XI9y2dWQW//IVkZ3ved5JNYNEu2mjnCcxh1ZhhcTjjl/4FootR97\n\t D+b5A8hS/1SfKD+uyRAtXUPy8V3/P1b6EMq03Xv8+cotuj8O4Ev4TC3c5klNHFRd2y\n\t YVJ1ZT9vWTHMXZ+BUt2sSeSYkFdrafeikA6ZjDlL3ZS/IlJCBaqKmeiVgOjcmnxuHs\n\t EaRdnR1ikhIXFeYwmYB3SpOUD5SAnKpUXR4oD5e2+bDNpR2EhV1uB/IWQDQHZ1mfBL\n\t wOmW7QjhqC1/w==","Received-SPF":"Pass (mailfrom) identity=mailfrom; client-ip=198.175.65.17;\n helo=mgamail.intel.com; envelope-from=przemyslaw.kitszel@intel.com;\n receiver=<UNKNOWN>","DMARC-Filter":"OpenDMARC Filter v1.4.2 smtp4.osuosl.org E615D410AA","X-CSE-ConnectionGUID":["odQF55JMQ4iK9uhBd65jyQ==","S+prxU8KRyq5cKKUH4opnw=="],"X-CSE-MsgGUID":["/je1zVz/SN2kgG9Kjw59ew==","pGlF8a4LQgaASAXSQaQduw=="],"X-IronPort-AV":["E=McAfee;i=\"6800,10657,11779\"; 
a=\"79199971\"","E=Sophos;i=\"6.23,223,1770624000\"; d=\"scan'208\";a=\"79199971\"","E=Sophos;i=\"6.23,223,1770624000\"; d=\"scan'208\";a=\"241730161\""],"X-ExtLoop1":"1","From":"Przemek Kitszel <przemyslaw.kitszel@intel.com>","To":"intel-wired-lan@lists.osuosl.org, Michal Schmidt <mschmidt@redhat.com>,\n Jakub Kicinski <kuba@kernel.org>, Jiri Pirko <jiri@resnulli.us>","Cc":"netdev@vger.kernel.org, Simon Horman <horms@kernel.org>,\n Tony Nguyen <anthony.l.nguyen@intel.com>,\n Michal Swiatkowski <michal.swiatkowski@linux.intel.com>,\n bruce.richardson@intel.com,\n Vladimir Medvedkin <vladimir.medvedkin@intel.com>,\n padraig.j.connolly@intel.com, ananth.s@intel.com,\n timothy.miskell@intel.com, Jacob Keller <jacob.e.keller@intel.com>,\n Lukasz Czapnik <lukasz.czapnik@intel.com>,\n Aleksandr Loktionov <aleksandr.loktionov@intel.com>,\n Andrew Lunn <andrew+netdev@lunn.ch>,\n \"David S. Miller\" <davem@davemloft.net>,\n Eric Dumazet <edumazet@google.com>, Paolo Abeni <pabeni@redhat.com>,\n Saeed Mahameed <saeedm@nvidia.com>, Leon Romanovsky <leon@kernel.org>,\n Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,\n Przemek Kitszel <przemyslaw.kitszel@intel.com>","Date":"Fri,  8 May 2026 14:42:05 +0200","Message-Id":"<20260508124208.11622-13-przemyslaw.kitszel@intel.com>","X-Mailer":"git-send-email 2.39.3","In-Reply-To":"<20260508124208.11622-1-przemyslaw.kitszel@intel.com>","References":"<20260508124208.11622-1-przemyslaw.kitszel@intel.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","X-Mailman-Original-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/simple;\n d=intel.com; i=@intel.com; q=dns/txt; s=Intel;\n t=1778245202; x=1809781202;\n h=from:to:cc:subject:date:message-id:in-reply-to:\n references:mime-version:content-transfer-encoding;\n bh=aOY+1PAmf+sRv0Rgt9OoKUU8GhDT+msBIr+iZE9nWWQ=;\n b=I79QE8hFEE1TO0z0BvWbRZOEImruZks7zhG450OP1Tmu1OycwbSY1jtF\n kL7pLFyTeoEkXzZaC6PpsD75EBq1cjxw9cYbgG+Wa/YI3co5oLlOMArRL\n 
LL4+3osgjiixIlRVhyS++SLihvPRBRnbsf2eOWsKn25MTkrORJvxFipcJ\n OsPNPq6r9fssOPNCAat+WTzKTGQEGgKvzmzW2b/BhPpHT+KjcHGUbEg5Q\n lwsIa9eI7kMXFUcikrMeMBXid/boaYV875Ig30Nw72fKGtYvQXu1M5PyI\n ok7xkqkt6W8mWyZLlAvGUITP/M+UJoscLFfw9MuGbg+FZyLDoqacfSrwA\n Q==;","X-Mailman-Original-Authentication-Results":["smtp4.osuosl.org;\n dmarc=pass (p=none dis=none)\n header.from=intel.com","smtp4.osuosl.org;\n dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com\n header.a=rsa-sha256 header.s=Intel header.b=I79QE8hF"],"Subject":"[Intel-wired-lan] [PATCH iwl-next v1 12/15] ice: introduce handling\n of virtchnl LARGE VF opcodes","X-BeenThere":"intel-wired-lan@osuosl.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Intel Wired Ethernet Linux Kernel Driver Development\n <intel-wired-lan.osuosl.org>","List-Unsubscribe":"<https://lists.osuosl.org/mailman/options/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>","List-Archive":"<http://lists.osuosl.org/pipermail/intel-wired-lan/>","List-Post":"<mailto:intel-wired-lan@osuosl.org>","List-Help":"<mailto:intel-wired-lan-request@osuosl.org?subject=help>","List-Subscribe":"<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>,\n <mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>","Errors-To":"intel-wired-lan-bounces@osuosl.org","Sender":"\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"},"content":"From: Brett Creeley <brett.creeley@intel.com>\n\nWith the new virtchnl offload/capability, VFs are able to make use of more\nthan 16 queues. 
But the old opcodes were designed with a max of 16 queues, so\nnew ones were added (by the iavf/virtchnl commit of this series):\nVIRTCHNL_OP_GET_MAX_RSS_QREGION, VIRTCHNL_OP_ENABLE_QUEUES_V2,\nVIRTCHNL_OP_DISABLE_QUEUES_V2, VIRTCHNL_OP_MAP_QUEUE_VECTOR.\n\nIf a VF wishes to request >16 queues, it should first make sure that the\nPF supports the VIRTCHNL_VF_LARGE_NUM_QPAIRS capability.\n\nCo-developed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>\nSigned-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>\nCo-developed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> # msglen val\nSigned-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>\nSigned-off-by: Brett Creeley <brett.creeley@intel.com>\n---\n drivers/net/ethernet/intel/ice/ice_vf_lib.h   |   1 +\n drivers/net/ethernet/intel/ice/virt/queues.h  |   3 +\n .../net/ethernet/intel/ice/virt/allowlist.c   |   8 +\n drivers/net/ethernet/intel/ice/virt/queues.c  | 324 ++++++++++++++++++\n 4 files changed, 336 insertions(+)","diff":"diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h\nindex 1b56f7150eb7..5411eaa1761c 100644\n--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h\n+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h\n@@ -125,6 +125,7 @@ struct ice_vf_ops {\n \tvoid (*clear_reset_trigger)(struct ice_vf *vf);\n \tvoid (*irq_close)(struct ice_vf *vf);\n \tvoid (*post_vsi_rebuild)(struct ice_vf *vf);\n+\tstruct ice_q_vector *(*get_q_vector)(struct ice_vsi *vsi, u16 vec_id);\n };\n \n /* Virtchnl/SR-IOV config info */\ndiff --git a/drivers/net/ethernet/intel/ice/virt/queues.h b/drivers/net/ethernet/intel/ice/virt/queues.h\nindex c4a792cecea1..223f609dd4f3 100644\n--- a/drivers/net/ethernet/intel/ice/virt/queues.h\n+++ b/drivers/net/ethernet/intel/ice/virt/queues.h\n@@ -16,5 +16,8 @@ int ice_vc_cfg_q_bw(struct ice_vf *vf, u8 *msg);\n int ice_vc_cfg_q_quanta(struct ice_vf *vf, u8 *msg);\n int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg);\n int 
ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg);\n+int ice_vc_ena_qs_v2_msg(struct ice_vf *vf, u8 *msg, u16 msglen);\n+int ice_vc_dis_qs_v2_msg(struct ice_vf *vf, u8 *msg, u16 msglen);\n+int ice_vc_map_q_vector_msg(struct ice_vf *vf, u8 *msg, u16 msglen);\n \n #endif /* _ICE_VIRT_QUEUES_H_ */\ndiff --git a/drivers/net/ethernet/intel/ice/virt/allowlist.c b/drivers/net/ethernet/intel/ice/virt/allowlist.c\nindex a07efec19c45..ef769b843c6f 100644\n--- a/drivers/net/ethernet/intel/ice/virt/allowlist.c\n+++ b/drivers/net/ethernet/intel/ice/virt/allowlist.c\n@@ -95,6 +95,13 @@ static const u32 tc_allowlist_opcodes[] = {\n \tVIRTCHNL_OP_CONFIG_QUANTA,\n };\n \n+static const u32 large_num_qpairs_allowlist_opcodes[] = {\n+\tVIRTCHNL_OP_GET_MAX_RSS_QREGION,\n+\tVIRTCHNL_OP_ENABLE_QUEUES_V2,\n+\tVIRTCHNL_OP_DISABLE_QUEUES_V2,\n+\tVIRTCHNL_OP_MAP_QUEUE_VECTOR,\n+};\n+\n struct allowlist_opcode_info {\n \tconst u32 *opcodes;\n \tsize_t size;\n@@ -117,6 +124,7 @@ static const struct allowlist_opcode_info allowlist_opcodes[] = {\n \tALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_VLAN_V2, vlan_v2_allowlist_opcodes),\n \tALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_QOS, tc_allowlist_opcodes),\n \tALLOW_ITEM(VIRTCHNL_VF_CAP_PTP, ptp_allowlist_opcodes),\n+\tALLOW_ITEM(VIRTCHNL_VF_LARGE_NUM_QPAIRS, large_num_qpairs_allowlist_opcodes),\n };\n \n /**\ndiff --git a/drivers/net/ethernet/intel/ice/virt/queues.c b/drivers/net/ethernet/intel/ice/virt/queues.c\nindex 1d9f69026d1b..b99f18a25024 100644\n--- a/drivers/net/ethernet/intel/ice/virt/queues.c\n+++ b/drivers/net/ethernet/intel/ice/virt/queues.c\n@@ -1021,3 +1021,327 @@ int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg)\n \t\t\t\t     v_ret, (u8 *)vfres, sizeof(*vfres));\n }\n \n+static bool ice_vc_supported_queue_type(s32 queue_type)\n+{\n+\treturn queue_type == VIRTCHNL_QUEUE_TYPE_RX ||\n+\t       queue_type == VIRTCHNL_QUEUE_TYPE_TX;\n+}\n+\n+/**\n+ * ice_vc_validate_qs_v2_msg - validate all qs_msg parameters\n+ * @vf: VF the message was received from\n+ * 
@qs_msg: contents of the message from the VF\n+ * @msglen: length of @qs_msg\n+ *\n+ * Used to validate both the VIRTCHNL_OP_ENABLE_QUEUES_V2 and\n+ * VIRTCHNL_OP_DISABLE_QUEUES_V2 messages. This should always be called before\n+ * attempting to enable and/or disable queues on behalf of a VF in response to\n+ * the previously mentioned opcodes.\n+ *\n+ * Return: If all checks succeed, then return true. Otherwise return\n+ *         false, indicating to the caller that the qs_msg is invalid.\n+ */\n+static bool ice_vc_validate_qs_v2_msg(struct ice_vf *vf,\n+\t\t\t\t      struct virtchnl_del_ena_dis_queues *qs_msg,\n+\t\t\t\t      u16 msglen)\n+{\n+\tif (msglen < virtchnl_struct_size(qs_msg, chunks, 0))\n+\t\treturn false;\n+\n+\tif (msglen < virtchnl_struct_size(qs_msg, chunks, qs_msg->num_chunks))\n+\t\treturn false;\n+\n+\tif (!qs_msg->num_chunks)\n+\t\treturn false;\n+\n+\tif (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))\n+\t\treturn false;\n+\n+\tif (!ice_vc_isvalid_vsi_id(vf, qs_msg->vport_id))\n+\t\treturn false;\n+\n+\tfor (int i = 0; i < qs_msg->num_chunks; i++) {\n+\t\tu32 max_queue_in_chunk;\n+\n+\t\tif (!ice_vc_supported_queue_type(qs_msg->chunks[i].type))\n+\t\t\treturn false;\n+\n+\t\tif (!qs_msg->chunks[i].num_queues)\n+\t\t\treturn false;\n+\n+\t\tmax_queue_in_chunk = qs_msg->chunks[i].start_queue_id +\n+\t\t\t\t     qs_msg->chunks[i].num_queues;\n+\t\tif (max_queue_in_chunk > vf->num_vf_qs)\n+\t\t\treturn false;\n+\t}\n+\n+\treturn true;\n+}\n+\n+#define ice_for_each_q_in_chunk(chunk, q_id) \\\n+\tfor ((q_id) = (chunk)->start_queue_id; \\\n+\t     (q_id) < (chunk)->start_queue_id + (chunk)->num_queues; \\\n+\t     (q_id)++)\n+\n+static int\n+ice_vc_ena_rxq_chunk(struct ice_vf *vf, struct virtchnl_queue_chunk *chunk)\n+{\n+\tstruct ice_vsi *vsi;\n+\tu32 vf_qid;\n+\n+\tice_for_each_q_in_chunk(chunk, vf_qid) {\n+\t\tint err;\n+\n+\t\tvsi = ice_get_vf_vsi(vf);\n+\t\terr = ice_vf_vsi_ena_single_rxq(vf, vsi, vf_qid);\n+\t\tif (err)\n+\t\t\treturn 
err;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int\n+ice_vc_ena_txq_chunk(struct ice_vf *vf, struct virtchnl_queue_chunk *chunk)\n+{\n+\tstruct ice_vsi *vsi;\n+\tu32 vf_qid;\n+\n+\tice_for_each_q_in_chunk(chunk, vf_qid) {\n+\t\tvsi = ice_get_vf_vsi(vf);\n+\t\tice_vf_vsi_ena_single_txq(vf, vsi, vf_qid);\n+\t}\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * ice_vc_ena_qs_v2_msg - message handling for VIRTCHNL_OP_ENABLE_QUEUES_V2\n+ * @vf: source of the request\n+ * @msg: message to handle\n+ * @msglen: length of @msg\n+ *\n+ * Return: 0 on success or negative on error.\n+ */\n+int ice_vc_ena_qs_v2_msg(struct ice_vf *vf, u8 *msg, u16 msglen)\n+{\n+\tstruct virtchnl_del_ena_dis_queues *ena_qs_msg =\n+\t\t\t(struct virtchnl_del_ena_dis_queues *)msg;\n+\tenum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;\n+\n+\tif (!ice_vc_validate_qs_v2_msg(vf, ena_qs_msg, msglen)) {\n+\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\tgoto error_param;\n+\t}\n+\n+\tfor (int i = 0; i < ena_qs_msg->num_chunks; i++) {\n+\t\tstruct virtchnl_queue_chunk *chunk = &ena_qs_msg->chunks[i];\n+\n+\t\tif (chunk->type == VIRTCHNL_QUEUE_TYPE_RX &&\n+\t\t    ice_vc_ena_rxq_chunk(vf, chunk))\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\telse if (chunk->type == VIRTCHNL_QUEUE_TYPE_TX &&\n+\t\t\t ice_vc_ena_txq_chunk(vf, chunk))\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\n+\t\tif (v_ret != VIRTCHNL_STATUS_SUCCESS)\n+\t\t\tgoto error_param;\n+\t}\n+\n+\tset_bit(ICE_VF_STATE_QS_ENA, vf->vf_states);\n+\n+error_param:\n+\treturn ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_QUEUES_V2,\n+\t\t\t\t     v_ret, NULL, 0);\n+}\n+\n+static int\n+ice_vc_dis_rxq_chunk(struct ice_vf *vf, struct virtchnl_queue_chunk *chunk)\n+{\n+\tstruct ice_vsi *vsi;\n+\tu32 vf_qid;\n+\n+\tice_for_each_q_in_chunk(chunk, vf_qid) {\n+\t\tint err;\n+\n+\t\tvsi = ice_get_vf_vsi(vf);\n+\t\terr = ice_vf_vsi_dis_single_rxq(vf, vsi, vf_qid);\n+\t\tif (err)\n+\t\t\treturn err;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int\n+ice_vc_dis_txq_chunk(struct ice_vf *vf, struct virtchnl_queue_chunk 
*chunk)\n+{\n+\tstruct ice_vsi *vsi;\n+\tu32 vf_qid;\n+\n+\tice_for_each_q_in_chunk(chunk, vf_qid) {\n+\t\tint err;\n+\n+\t\tvsi = ice_get_vf_vsi(vf);\n+\t\terr = ice_vf_vsi_dis_single_txq(vf, vsi, vf_qid);\n+\t\tif (err)\n+\t\t\treturn err;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * ice_vc_dis_qs_v2_msg - message handling for VIRTCHNL_OP_DISABLE_QUEUES_V2\n+ * @vf: source of the request\n+ * @msg: message to handle\n+ * @msglen: length of @msg\n+ *\n+ * Return: 0 on success or negative on error.\n+ */\n+int ice_vc_dis_qs_v2_msg(struct ice_vf *vf, u8 *msg, u16 msglen)\n+{\n+\tstruct virtchnl_del_ena_dis_queues *dis_qs_msg =\n+\t\t\t(struct virtchnl_del_ena_dis_queues *)msg;\n+\tenum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;\n+\n+\tif (!ice_vc_validate_qs_v2_msg(vf, dis_qs_msg, msglen)) {\n+\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\tgoto error_param;\n+\t}\n+\n+\tfor (int i = 0; i < dis_qs_msg->num_chunks; i++) {\n+\t\tstruct virtchnl_queue_chunk *chunk = &dis_qs_msg->chunks[i];\n+\n+\t\tif (chunk->type == VIRTCHNL_QUEUE_TYPE_RX &&\n+\t\t    ice_vc_dis_rxq_chunk(vf, chunk))\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\telse if (chunk->type == VIRTCHNL_QUEUE_TYPE_TX &&\n+\t\t\t ice_vc_dis_txq_chunk(vf, chunk))\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\n+\t\tif (v_ret != VIRTCHNL_STATUS_SUCCESS)\n+\t\t\tgoto error_param;\n+\t}\n+\n+\tif (ice_vf_has_no_qs_ena(vf))\n+\t\tclear_bit(ICE_VF_STATE_QS_ENA, vf->vf_states);\n+\n+error_param:\n+\treturn ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_QUEUES_V2,\n+\t\t\t\t     v_ret, NULL, 0);\n+}\n+\n+/**\n+ * ice_vc_validate_qv_maps - validate parameters sent in the qs_msg structure\n+ * @vf: VF the message was received from\n+ * @qv_maps: contents of the message from the VF\n+ * @msglen: length of @qv_maps\n+ *\n+ * Used to validate VIRTCHNL_OP_MAP_QUEUE_VECTOR messages. This should always\n+ * be called before attempting to map interrupts to queues. 
If all checks succeed,\n+ * then return true, indicating to the caller that the qv_maps are valid.\n+ * Otherwise return false, indicating to the caller that the qv_maps are\n+ * invalid.\n+ *\n+ * Return: true if parameters are valid, false otherwise.\n+ */\n+static bool ice_vc_validate_qv_maps(struct ice_vf *vf,\n+\t\t\t\t    struct virtchnl_queue_vector_maps *qv_maps,\n+\t\t\t\t    u16 msglen)\n+{\n+\tstruct ice_vsi *vsi;\n+\tint total_vectors;\n+\n+\tvsi = vf->pf->vsi[vf->lan_vsi_idx];\n+\tif (!vsi)\n+\t\treturn false;\n+\n+\tif (msglen < virtchnl_struct_size(qv_maps, qv_maps, 0))\n+\t\treturn false;\n+\n+\tif (msglen < virtchnl_struct_size(qv_maps, qv_maps, qv_maps->num_qv_maps))\n+\t\treturn false;\n+\n+\tif (!qv_maps->num_qv_maps)\n+\t\treturn false;\n+\n+\tif (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))\n+\t\treturn false;\n+\n+\tif (!ice_vc_isvalid_vsi_id(vf, qv_maps->vport_id))\n+\t\treturn false;\n+\n+\ttotal_vectors = vsi->num_q_vectors + ICE_NONQ_VECS_VF;\n+\n+\tfor (int i = 0; i < qv_maps->num_qv_maps; i++) {\n+\t\tif (!ice_vc_supported_queue_type(qv_maps->qv_maps[i].queue_type))\n+\t\t\treturn false;\n+\n+\t\tif (qv_maps->qv_maps[i].queue_id >= vf->num_vf_qs)\n+\t\t\treturn false;\n+\n+\t\tif (qv_maps->qv_maps[i].vector_id >= total_vectors ||\n+\t\t    qv_maps->qv_maps[i].vector_id < ICE_NONQ_VECS_VF)\n+\t\t\treturn false;\n+\t}\n+\n+\treturn true;\n+}\n+\n+/**\n+ * ice_vc_map_q_vector_msg - message handling for VIRTCHNL_OP_MAP_QUEUE_VECTOR\n+ * @vf: source of the request\n+ * @msg: message to handle\n+ * @msglen: length of @msg\n+ *\n+ * Return: 0 on success or negative on error\n+ */\n+int ice_vc_map_q_vector_msg(struct ice_vf *vf, u8 *msg, u16 msglen)\n+{\n+\tenum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;\n+\tstruct virtchnl_queue_vector_maps *qv_maps;\n+\tstruct ice_vsi *vsi;\n+\n+\tqv_maps = (struct virtchnl_queue_vector_maps *)msg;\n+\n+\tif (!ice_vc_validate_qv_maps(vf, qv_maps, msglen)) {\n+\t\tv_ret = 
VIRTCHNL_STATUS_ERR_PARAM;\n+\t\tgoto error_param;\n+\t}\n+\n+\tfor (int i = 0; i < qv_maps->num_qv_maps; i++) {\n+\t\tstruct virtchnl_queue_vector *qv_map = &qv_maps->qv_maps[i];\n+\t\tstruct ice_q_vector *q_vector;\n+\t\tu16 vector_id;\n+\t\tint vsi_q_id;\n+\n+\t\tvsi = ice_get_vf_vsi(vf);\n+\t\tvsi_q_id = qv_map->queue_id;\n+\t\tvector_id = qv_map->vector_id;\n+\n+\t\tif (!vsi) {\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\tgoto error_param;\n+\t\t}\n+\n+\t\tq_vector = vf->vf_ops->get_q_vector(vsi, vector_id);\n+\n+\t\tif (!q_vector) {\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\tgoto error_param;\n+\t\t}\n+\n+\t\tif (!ice_vc_isvalid_q_id(vsi, vsi_q_id)) {\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\tgoto error_param;\n+\t\t}\n+\n+\t\tif (qv_map->queue_type == VIRTCHNL_QUEUE_TYPE_RX)\n+\t\t\tice_cfg_rxq_interrupt(vsi, vsi_q_id,\n+\t\t\t\t\t      q_vector->vf_reg_idx,\n+\t\t\t\t\t      qv_map->itr_idx);\n+\t\telse if (qv_map->queue_type == VIRTCHNL_QUEUE_TYPE_TX)\n+\t\t\tice_cfg_txq_interrupt(vsi, vsi_q_id,\n+\t\t\t\t\t      q_vector->vf_reg_idx,\n+\t\t\t\t\t      qv_map->itr_idx);\n+\t}\n+\n+error_param:\n+\treturn ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_MAP_QUEUE_VECTOR,\n+\t\t\t\t     v_ret, NULL, 0);\n+}\n","prefixes":["iwl-next","v1","12/15"]}