Patch Detail
get:
Show a patch.
patch:
Partially update a patch.
put:
Update a patch.
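The `get` operation returns the JSON document shown below. As a minimal, hedged sketch of client-side handling (offline, so it parses a trimmed sample of that response rather than hitting the network), the commonly used fields can be extracted like this:

```python
import json

# Trimmed sample of the JSON returned by GET /api/patches/<id>/,
# copied from the response shown below.
sample = """
{
  "id": 1141317,
  "state": "accepted",
  "submitter": {"name": "Tony Nguyen", "email": "anthony.l.nguyen@intel.com"},
  "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190802082533.23083-2-anthony.l.nguyen@intel.com/mbox/"
}
"""

patch = json.loads(sample)

# The mbox URL is what tooling typically consumes (e.g. piped to git am).
print(patch["id"], patch["state"], patch["submitter"]["name"])
```

The real response carries many more fields (headers, diff, series, checks), but they are plain JSON and parse the same way.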
GET /api/patches/1141317/?format=api
{ "id": 1141317, "url": "http://patchwork.ozlabs.org/api/patches/1141317/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190802082533.23083-2-anthony.l.nguyen@intel.com/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20190802082533.23083-2-anthony.l.nguyen@intel.com>", "list_archive_url": null, "date": "2019-08-02T08:25:20", "name": "[S26,02/15] ice: add support for virtchnl_queue_select.[tx|rx]_queues bitmap", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "03d697d23e0cab36c0e8bdcc453c51e617baadcf", "submitter": { "id": 68875, "url": "http://patchwork.ozlabs.org/api/people/68875/?format=api", "name": "Tony Nguyen", "email": "anthony.l.nguyen@intel.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190802082533.23083-2-anthony.l.nguyen@intel.com/mbox/", "series": [ { "id": 123025, "url": "http://patchwork.ozlabs.org/api/series/123025/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=123025", "date": "2019-08-02T08:25:29", "name": "[S26,01/15] ice: add support for enabling/disabling single queues", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/123025/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/1141317/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/1141317/checks/", "tags": {}, "related": [], "headers": { "Return-Path": 
"<intel-wired-lan-bounces@osuosl.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Authentication-Results": [ "ozlabs.org;\n\tspf=pass (mailfrom) smtp.mailfrom=osuosl.org\n\t(client-ip=140.211.166.136; helo=silver.osuosl.org;\n\tenvelope-from=intel-wired-lan-bounces@osuosl.org;\n\treceiver=<UNKNOWN>)", "ozlabs.org;\n\tdmarc=fail (p=none dis=none) header.from=intel.com" ], "Received": [ "from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 460YD151qCz9s7T\n\tfor <incoming@patchwork.ozlabs.org>;\n\tSat, 3 Aug 2019 02:54:25 +1000 (AEST)", "from localhost (localhost [127.0.0.1])\n\tby silver.osuosl.org (Postfix) with ESMTP id 0B3132051D;\n\tFri, 2 Aug 2019 16:54:24 +0000 (UTC)", "from silver.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id MEGyD3HZvlay; Fri, 2 Aug 2019 16:54:13 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby silver.osuosl.org (Postfix) with ESMTP id 466D8241F9;\n\tFri, 2 Aug 2019 16:54:09 +0000 (UTC)", "from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138])\n\tby ash.osuosl.org (Postfix) with ESMTP id AAE3C1BF296\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tFri, 2 Aug 2019 16:54:03 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id A6825831F5\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tFri, 2 Aug 2019 16:54:03 +0000 (UTC)", "from whitealder.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id IlACti6JzgNA for <intel-wired-lan@lists.osuosl.org>;\n\tFri, 2 Aug 2019 16:54:01 +0000 (UTC)", "from mga07.intel.com (mga07.intel.com 
[134.134.136.100])\n\tby whitealder.osuosl.org (Postfix) with ESMTPS id 39BBC87729\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tFri, 2 Aug 2019 16:54:01 +0000 (UTC)", "from orsmga003.jf.intel.com ([10.7.209.27])\n\tby orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t02 Aug 2019 09:53:58 -0700", "from unknown (HELO localhost.jf.intel.com) ([10.166.244.174])\n\tby orsmga003.jf.intel.com with ESMTP; 02 Aug 2019 09:53:58 -0700" ], "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6", "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos;i=\"5.64,338,1559545200\"; d=\"scan'208\";a=\"175640939\"", "From": "Tony Nguyen <anthony.l.nguyen@intel.com>", "To": "intel-wired-lan@lists.osuosl.org", "Date": "Fri, 2 Aug 2019 01:25:20 -0700", "Message-Id": "<20190802082533.23083-2-anthony.l.nguyen@intel.com>", "X-Mailer": "git-send-email 2.20.1", "In-Reply-To": "<20190802082533.23083-1-anthony.l.nguyen@intel.com>", "References": "<20190802082533.23083-1-anthony.l.nguyen@intel.com>", "MIME-Version": "1.0", "Subject": "[Intel-wired-lan] [PATCH S26 02/15] ice: add support for\n\tvirtchnl_queue_select.[tx|rx]_queues bitmap", "X-BeenThere": "intel-wired-lan@osuosl.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.osuosl.org>", "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>", "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>", 
"Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "intel-wired-lan-bounces@osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>" }, "content": "From: Paul Greenwalt <paul.greenwalt@intel.com>\n\nThe VF driver can call VIRTCHNL_OP_[ENABLE|DISABLE]_QUEUES separately\nfor each queue. Add support for virtchnl_queue_select.[tx|rx]_queues\nbitmap which is used to indicate which queues to enable and disable.\n\nAdd tracing of VF Tx/Rx per queue enable state to avoid enabling enabled\nqueues and disabling disabled queues. Add total queues enabled count and\nclear ICE_VF_STATE_QS_ENA when count is zero.\n\nSigned-off-by: Paul Greenwalt <paul.greenwalt@intel.com>\nSigned-off-by: Peng Huang <peng.huang@intel.com>\nSigned-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>\n---\n drivers/net/ethernet/intel/ice/ice_lib.c | 15 +-\n drivers/net/ethernet/intel/ice/ice_lib.h | 10 +\n drivers/net/ethernet/intel/ice/ice_main.c | 2 +-\n .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 243 +++++++++++++-----\n .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 12 +-\n 5 files changed, 207 insertions(+), 75 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c\nindex 913467e5c1c1..23124159c8bf 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.c\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.c\n@@ -196,7 +196,10 @@ static int ice_pf_rxq_wait(struct ice_pf *pf, int pf_q, bool ena)\n * @ena: start or stop the Rx rings\n * @rxq_idx: Rx queue index\n */\n-static int ice_vsi_ctrl_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx)\n+#ifndef CONFIG_PCI_IOV\n+static\n+#endif /* !CONFIG_PCI_IOV */\n+int ice_vsi_ctrl_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx)\n {\n \tint pf_q = vsi->rxq_map[rxq_idx];\n \tstruct ice_pf *pf = vsi->back;\n@@ -2125,7 +2128,10 @@ void ice_trigger_sw_intr(struct ice_hw *hw, struct ice_q_vector *q_vector)\n * @ring: Tx ring 
to be stopped\n * @txq_meta: Meta data of Tx ring to be stopped\n */\n-static int\n+#ifndef CONFIG_PCI_IOV\n+static\n+#endif /* !CONFIG_PCI_IOV */\n+int\n ice_vsi_stop_tx_ring(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,\n \t\t u16 rel_vmvf_num, struct ice_ring *ring,\n \t\t struct ice_txq_meta *txq_meta)\n@@ -2185,7 +2191,10 @@ ice_vsi_stop_tx_ring(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,\n * Set up a helper struct that will contain all the necessary fields that\n * are needed for stopping Tx queue\n */\n-static void\n+#ifndef CONFIG_PCI_IOV\n+static\n+#endif /* !CONFIG_PCI_IOV */\n+void\n ice_fill_txq_meta(struct ice_vsi *vsi, struct ice_ring *ring,\n \t\t struct ice_txq_meta *txq_meta)\n {\ndiff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h\nindex c2d0040afb3a..97196e971fab 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.h\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.h\n@@ -39,6 +39,16 @@ ice_cfg_txq_interrupt(struct ice_vsi *vsi, u16 txq, u16 msix_idx, u16 itr_idx);\n \n void\n ice_cfg_rxq_interrupt(struct ice_vsi *vsi, u16 rxq, u16 msix_idx, u16 itr_idx);\n+\n+int\n+ice_vsi_stop_tx_ring(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,\n+\t\t u16 rel_vmvf_num, struct ice_ring *ring,\n+\t\t struct ice_txq_meta *txq_meta);\n+\n+void ice_fill_txq_meta(struct ice_vsi *vsi, struct ice_ring *ring,\n+\t\t struct ice_txq_meta *txq_meta);\n+\n+int ice_vsi_ctrl_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx);\n #endif /* CONFIG_PCI_IOV */\n \n int ice_vsi_add_vlan(struct ice_vsi *vsi, u16 vid);\ndiff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c\nindex d34da2e7c253..655034e85d73 100644\n--- a/drivers/net/ethernet/intel/ice/ice_main.c\n+++ b/drivers/net/ethernet/intel/ice/ice_main.c\n@@ -489,7 +489,7 @@ ice_prepare_for_reset(struct ice_pf *pf)\n \n \t/* Disable VFs until reset is completed */\n \tfor (i = 0; i < pf->num_alloc_vfs; 
i++)\n-\t\tclear_bit(ICE_VF_STATE_ENA, pf->vf[i].vf_states);\n+\t\tice_set_vf_state_qs_dis(&pf->vf[i]);\n \n \t/* disable the VSIs and their queues that are not already DOWN */\n \tice_pf_dis_all_vsi(pf, false);\ndiff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\nindex e6578d2f0876..78fd3fa8ac8b 100644\n--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n@@ -251,6 +251,35 @@ static int ice_sriov_free_msix_res(struct ice_pf *pf)\n \treturn 0;\n }\n \n+/**\n+ * ice_set_vf_state_qs_dis - Set VF queues state to disabled\n+ * @vf: pointer to the VF structure\n+ */\n+void ice_set_vf_state_qs_dis(struct ice_vf *vf)\n+{\n+\t/* Clear Rx/Tx enabled queues flag */\n+\tbitmap_zero(vf->txq_ena, ICE_MAX_BASE_QS_PER_VF);\n+\tbitmap_zero(vf->rxq_ena, ICE_MAX_BASE_QS_PER_VF);\n+\tvf->num_qs_ena = 0;\n+\tclear_bit(ICE_VF_STATE_QS_ENA, vf->vf_states);\n+}\n+\n+/**\n+ * ice_dis_vf_qs - Disable the VF queues\n+ * @vf: pointer to the VF structure\n+ */\n+static void ice_dis_vf_qs(struct ice_vf *vf)\n+{\n+\tstruct ice_pf *pf = vf->pf;\n+\tstruct ice_vsi *vsi;\n+\n+\tvsi = pf->vsi[vf->lan_vsi_idx];\n+\n+\tice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id);\n+\tice_vsi_stop_rx_rings(vsi);\n+\tice_set_vf_state_qs_dis(vf);\n+}\n+\n /**\n * ice_free_vfs - Free all VFs\n * @pf: pointer to the PF structure\n@@ -267,19 +296,9 @@ void ice_free_vfs(struct ice_pf *pf)\n \t\tusleep_range(1000, 2000);\n \n \t/* Avoid wait time by stopping all VFs at the same time */\n-\tfor (i = 0; i < pf->num_alloc_vfs; i++) {\n-\t\tstruct ice_vsi *vsi;\n-\n-\t\tif (!test_bit(ICE_VF_STATE_ENA, pf->vf[i].vf_states))\n-\t\t\tcontinue;\n-\n-\t\tvsi = pf->vsi[pf->vf[i].lan_vsi_idx];\n-\t\t/* stop rings without wait time */\n-\t\tice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, i);\n-\t\tice_vsi_stop_rx_rings(vsi);\n-\n-\t\tclear_bit(ICE_VF_STATE_ENA, pf->vf[i].vf_states);\n-\t}\n+\tfor (i = 0; i < 
pf->num_alloc_vfs; i++)\n+\t\tif (test_bit(ICE_VF_STATE_QS_ENA, pf->vf[i].vf_states))\n+\t\t\tice_dis_vf_qs(&pf->vf[i]);\n \n \t/* Disable IOV before freeing resources. This lets any VF drivers\n \t * running in the host get themselves cleaned up before we yank\n@@ -1055,17 +1074,9 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)\n \tfor (v = 0; v < pf->num_alloc_vfs; v++)\n \t\tice_trigger_vf_reset(&pf->vf[v], is_vflr);\n \n-\tfor (v = 0; v < pf->num_alloc_vfs; v++) {\n-\t\tstruct ice_vsi *vsi;\n-\n-\t\tvf = &pf->vf[v];\n-\t\tvsi = pf->vsi[vf->lan_vsi_idx];\n-\t\tif (test_bit(ICE_VF_STATE_ENA, vf->vf_states)) {\n-\t\t\tice_vsi_stop_lan_tx_rings(vsi, ICE_VF_RESET, vf->vf_id);\n-\t\t\tice_vsi_stop_rx_rings(vsi);\n-\t\t\tclear_bit(ICE_VF_STATE_ENA, vf->vf_states);\n-\t\t}\n-\t}\n+\tfor (v = 0; v < pf->num_alloc_vfs; v++)\n+\t\tif (test_bit(ICE_VF_STATE_QS_ENA, pf->vf[v].vf_states))\n+\t\t\tice_dis_vf_qs(&pf->vf[v]);\n \n \t/* HW requires some time to make sure it can flush the FIFO for a VF\n \t * when it resets it. Poll the VPGEN_VFRSTAT register for each VF in\n@@ -1144,24 +1155,21 @@ static bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)\n \t/* If the VFs have been disabled, this means something else is\n \t * resetting the VF, so we shouldn't continue.\n \t */\n-\tif (test_and_set_bit(__ICE_VF_DIS, pf->state))\n+\tif (test_bit(__ICE_VF_DIS, pf->state))\n \t\treturn false;\n \n \tice_trigger_vf_reset(vf, is_vflr);\n \n \tvsi = pf->vsi[vf->lan_vsi_idx];\n \n-\tif (test_bit(ICE_VF_STATE_ENA, vf->vf_states)) {\n-\t\tice_vsi_stop_lan_tx_rings(vsi, ICE_VF_RESET, vf->vf_id);\n-\t\tice_vsi_stop_rx_rings(vsi);\n-\t\tclear_bit(ICE_VF_STATE_ENA, vf->vf_states);\n-\t} else {\n+\tif (test_bit(ICE_VF_STATE_QS_ENA, vf->vf_states))\n+\t\tice_dis_vf_qs(vf);\n+\telse\n \t\t/* Call Disable LAN Tx queue AQ call even when queues are not\n-\t\t * enabled. This is needed for successful completiom of VFR\n+\t\t * enabled. 
This is needed for successful completion of VFR\n \t\t */\n \t\tice_dis_vsi_txq(vsi->port_info, vsi->idx, 0, 0, NULL, NULL,\n \t\t\t\tNULL, ICE_VF_RESET, vf->vf_id, NULL);\n-\t}\n \n \thw = &pf->hw;\n \t/* poll VPGEN_VFRSTAT reg to make sure\n@@ -1210,7 +1218,6 @@ static bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)\n \tice_cleanup_and_realloc_vf(vf);\n \n \tice_flush(hw);\n-\tclear_bit(__ICE_VF_DIS, pf->state);\n \n \treturn true;\n }\n@@ -1717,10 +1724,12 @@ static bool ice_vc_isvalid_q_id(struct ice_vf *vf, u16 vsi_id, u8 qid)\n * @ring_len: length of ring\n *\n * check for the valid ring count, should be multiple of ICE_REQ_DESC_MULTIPLE\n+ * or zero\n */\n static bool ice_vc_isvalid_ring_len(u16 ring_len)\n {\n-\treturn (ring_len >= ICE_MIN_NUM_DESC &&\n+\treturn ring_len == 0 ||\n+\t (ring_len >= ICE_MIN_NUM_DESC &&\n \t\tring_len <= ICE_MAX_NUM_DESC &&\n \t\t!(ring_len % ICE_REQ_DESC_MULTIPLE));\n }\n@@ -1877,6 +1886,8 @@ static int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)\n \t (struct virtchnl_queue_select *)msg;\n \tstruct ice_pf *pf = vf->pf;\n \tstruct ice_vsi *vsi;\n+\tunsigned long q_map;\n+\tu16 vf_q_id;\n \n \tif (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {\n \t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n@@ -1909,12 +1920,48 @@ static int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)\n \t * Tx queue group list was configured and the context bits were\n \t * programmed using ice_vsi_cfg_txqs\n \t */\n-\tif (ice_vsi_start_rx_rings(vsi))\n-\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\tq_map = vqs->rx_queues;\n+\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_BASE_QS_PER_VF) {\n+\t\tif (!ice_vc_isvalid_q_id(vf, vqs->vsi_id, vf_q_id)) {\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\tgoto error_param;\n+\t\t}\n+\n+\t\t/* Skip queue if enabled */\n+\t\tif (test_bit(vf_q_id, vf->rxq_ena))\n+\t\t\tcontinue;\n+\n+\t\tif (ice_vsi_ctrl_rx_ring(vsi, true, vf_q_id)) {\n+\t\t\tdev_err(&vsi->back->pdev->dev,\n+\t\t\t\t\"Failed to enable Rx ring %d on VSI 
%d\\n\",\n+\t\t\t\tvf_q_id, vsi->vsi_num);\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\tgoto error_param;\n+\t\t}\n+\n+\t\tset_bit(vf_q_id, vf->rxq_ena);\n+\t\tvf->num_qs_ena++;\n+\t}\n+\n+\tvsi = pf->vsi[vf->lan_vsi_idx];\n+\tq_map = vqs->tx_queues;\n+\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_BASE_QS_PER_VF) {\n+\t\tif (!ice_vc_isvalid_q_id(vf, vqs->vsi_id, vf_q_id)) {\n+\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\tgoto error_param;\n+\t\t}\n+\n+\t\t/* Skip queue if enabled */\n+\t\tif (test_bit(vf_q_id, vf->txq_ena))\n+\t\t\tcontinue;\n+\n+\t\tset_bit(vf_q_id, vf->txq_ena);\n+\t\tvf->num_qs_ena++;\n+\t}\n \n \t/* Set flag to indicate that queues are enabled */\n \tif (v_ret == VIRTCHNL_STATUS_SUCCESS)\n-\t\tset_bit(ICE_VF_STATE_ENA, vf->vf_states);\n+\t\tset_bit(ICE_VF_STATE_QS_ENA, vf->vf_states);\n \n error_param:\n \t/* send the response to the VF */\n@@ -1937,9 +1984,11 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)\n \t (struct virtchnl_queue_select *)msg;\n \tstruct ice_pf *pf = vf->pf;\n \tstruct ice_vsi *vsi;\n+\tunsigned long q_map;\n+\tu16 vf_q_id;\n \n \tif (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states) &&\n-\t !test_bit(ICE_VF_STATE_ENA, vf->vf_states)) {\n+\t !test_bit(ICE_VF_STATE_QS_ENA, vf->vf_states)) {\n \t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n \t\tgoto error_param;\n \t}\n@@ -1966,23 +2015,69 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)\n \t\tgoto error_param;\n \t}\n \n-\tif (ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id)) {\n-\t\tdev_err(&vsi->back->pdev->dev,\n-\t\t\t\"Failed to stop tx rings on VSI %d\\n\",\n-\t\t\tvsi->vsi_num);\n-\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\tif (vqs->tx_queues) {\n+\t\tq_map = vqs->tx_queues;\n+\n+\t\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_BASE_QS_PER_VF) {\n+\t\t\tstruct ice_ring *ring = vsi->tx_rings[vf_q_id];\n+\t\t\tstruct ice_txq_meta txq_meta = { 0 };\n+\n+\t\t\tif (!ice_vc_isvalid_q_id(vf, vqs->vsi_id, vf_q_id)) {\n+\t\t\t\tv_ret = 
VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\t\tgoto error_param;\n+\t\t\t}\n+\n+\t\t\t/* Skip queue if not enabled */\n+\t\t\tif (!test_bit(vf_q_id, vf->txq_ena))\n+\t\t\t\tcontinue;\n+\n+\t\t\tice_fill_txq_meta(vsi, ring, &txq_meta);\n+\n+\t\t\tif (ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, vf->vf_id,\n+\t\t\t\t\t\t ring, &txq_meta)) {\n+\t\t\t\tdev_err(&vsi->back->pdev->dev,\n+\t\t\t\t\t\"Failed to stop Tx ring %d on VSI %d\\n\",\n+\t\t\t\t\tvf_q_id, vsi->vsi_num);\n+\t\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\t\tgoto error_param;\n+\t\t\t}\n+\n+\t\t\t/* Clear enabled queues flag */\n+\t\t\tclear_bit(vf_q_id, vf->txq_ena);\n+\t\t\tvf->num_qs_ena--;\n+\t\t}\n \t}\n \n-\tif (ice_vsi_stop_rx_rings(vsi)) {\n-\t\tdev_err(&vsi->back->pdev->dev,\n-\t\t\t\"Failed to stop rx rings on VSI %d\\n\",\n-\t\t\tvsi->vsi_num);\n-\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\tif (vqs->rx_queues) {\n+\t\tq_map = vqs->rx_queues;\n+\n+\t\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_BASE_QS_PER_VF) {\n+\t\t\tif (!ice_vc_isvalid_q_id(vf, vqs->vsi_id, vf_q_id)) {\n+\t\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\t\tgoto error_param;\n+\t\t\t}\n+\n+\t\t\t/* Skip queue if not enabled */\n+\t\t\tif (!test_bit(vf_q_id, vf->rxq_ena))\n+\t\t\t\tcontinue;\n+\n+\t\t\tif (ice_vsi_ctrl_rx_ring(vsi, false, vf_q_id)) {\n+\t\t\t\tdev_err(&vsi->back->pdev->dev,\n+\t\t\t\t\t\"Failed to stop Rx ring %d on VSI %d\\n\",\n+\t\t\t\t\tvf_q_id, vsi->vsi_num);\n+\t\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\t\tgoto error_param;\n+\t\t\t}\n+\n+\t\t\t/* Clear enabled queues flag */\n+\t\t\tclear_bit(vf_q_id, vf->rxq_ena);\n+\t\t\tvf->num_qs_ena--;\n+\t\t}\n \t}\n \n \t/* Clear enabled queues flag */\n-\tif (v_ret == VIRTCHNL_STATUS_SUCCESS)\n-\t\tclear_bit(ICE_VF_STATE_ENA, vf->vf_states);\n+\tif (v_ret == VIRTCHNL_STATUS_SUCCESS && !vf->num_qs_ena)\n+\t\tclear_bit(ICE_VF_STATE_QS_ENA, vf->vf_states);\n \n error_param:\n \t/* send the response to the VF */\n@@ -2106,6 +2201,7 @@ static int ice_vc_cfg_qs_msg(struct 
ice_vf *vf, u8 *msg)\n \tstruct virtchnl_vsi_queue_config_info *qci =\n \t (struct virtchnl_vsi_queue_config_info *)msg;\n \tstruct virtchnl_queue_pair_info *qpi;\n+\tu16 num_rxq = 0, num_txq = 0;\n \tstruct ice_pf *pf = vf->pf;\n \tstruct ice_vsi *vsi;\n \tint i;\n@@ -2148,33 +2244,44 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)\n \t\t\tgoto error_param;\n \t\t}\n \t\t/* copy Tx queue info from VF into VSI */\n-\t\tvsi->tx_rings[i]->dma = qpi->txq.dma_ring_addr;\n-\t\tvsi->tx_rings[i]->count = qpi->txq.ring_len;\n-\t\t/* copy Rx queue info from VF into VSI */\n-\t\tvsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr;\n-\t\tvsi->rx_rings[i]->count = qpi->rxq.ring_len;\n-\t\tif (qpi->rxq.databuffer_size > ((16 * 1024) - 128) ||\n-\t\t qpi->rxq.databuffer_size < 1024) {\n-\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n-\t\t\tgoto error_param;\n+\t\tif (qpi->txq.ring_len > 0) {\n+\t\t\tnum_txq++;\n+\t\t\tvsi->tx_rings[i]->dma = qpi->txq.dma_ring_addr;\n+\t\t\tvsi->tx_rings[i]->count = qpi->txq.ring_len;\n \t\t}\n-\t\tvsi->rx_buf_len = qpi->rxq.databuffer_size;\n-\t\tif (qpi->rxq.max_pkt_size >= (16 * 1024) ||\n-\t\t qpi->rxq.max_pkt_size < 64) {\n-\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n-\t\t\tgoto error_param;\n+\n+\t\t/* copy Rx queue info from VF into VSI */\n+\t\tif (qpi->rxq.ring_len > 0) {\n+\t\t\tnum_rxq++;\n+\t\t\tvsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr;\n+\t\t\tvsi->rx_rings[i]->count = qpi->rxq.ring_len;\n+\n+\t\t\tif (qpi->rxq.databuffer_size != 0 &&\n+\t\t\t (qpi->rxq.databuffer_size > ((16 * 1024) - 128) ||\n+\t\t\t qpi->rxq.databuffer_size < 1024)) {\n+\t\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\t\tgoto error_param;\n+\t\t\t}\n+\t\t\tvsi->rx_buf_len = qpi->rxq.databuffer_size;\n+\t\t\tvsi->rx_rings[i]->rx_buf_len = vsi->rx_buf_len;\n+\t\t\tif (qpi->rxq.max_pkt_size >= (16 * 1024) ||\n+\t\t\t qpi->rxq.max_pkt_size < 64) {\n+\t\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n+\t\t\t\tgoto error_param;\n+\t\t\t}\n \t\t}\n+\n \t\tvsi->max_frame 
= qpi->rxq.max_pkt_size;\n \t}\n \n \t/* VF can request to configure less than allocated queues\n \t * or default allocated queues. So update the VSI with new number\n \t */\n-\tvsi->num_txq = qci->num_queue_pairs;\n-\tvsi->num_rxq = qci->num_queue_pairs;\n+\tvsi->num_txq = num_txq;\n+\tvsi->num_rxq = num_rxq;\n \t/* All queues of VF VSI are in TC 0 */\n-\tvsi->tc_cfg.tc_info[0].qcount_tx = qci->num_queue_pairs;\n-\tvsi->tc_cfg.tc_info[0].qcount_rx = qci->num_queue_pairs;\n+\tvsi->tc_cfg.tc_info[0].qcount_tx = num_txq;\n+\tvsi->tc_cfg.tc_info[0].qcount_rx = num_rxq;\n \n \tif (ice_vsi_cfg_lan_txqs(vsi) || ice_vsi_cfg_rxqs(vsi))\n \t\tv_ret = VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR;\ndiff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h\nindex 13f45f37d75e..0d9880c8bba3 100644\n--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h\n+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h\n@@ -41,9 +41,9 @@\n \n /* Specific VF states */\n enum ice_vf_states {\n-\tICE_VF_STATE_INIT = 0,\n-\tICE_VF_STATE_ACTIVE,\n-\tICE_VF_STATE_ENA,\n+\tICE_VF_STATE_INIT = 0,\t\t/* PF is initializing VF */\n+\tICE_VF_STATE_ACTIVE,\t\t/* VF resources are allocated for use */\n+\tICE_VF_STATE_QS_ENA,\t\t/* VF queue(s) enabled */\n \tICE_VF_STATE_DIS,\n \tICE_VF_STATE_MC_PROMISC,\n \tICE_VF_STATE_UC_PROMISC,\n@@ -68,6 +68,8 @@ struct ice_vf {\n \tstruct virtchnl_version_info vf_ver;\n \tu32 driver_caps;\t\t/* reported by VF driver */\n \tstruct virtchnl_ether_addr dflt_lan_addr;\n+\tDECLARE_BITMAP(txq_ena, ICE_MAX_BASE_QS_PER_VF);\n+\tDECLARE_BITMAP(rxq_ena, ICE_MAX_BASE_QS_PER_VF);\n \tu16 port_vlan_id;\n \tu8 pf_set_mac:1;\t\t/* VF MAC address set by VMM admin */\n \tu8 trusted:1;\n@@ -90,6 +92,7 @@ struct ice_vf {\n \tu16 num_mac;\n \tu16 num_vlan;\n \tu16 num_vf_qs;\t\t\t/* num of queue configured per VF */\n+\tu16 num_qs_ena;\t\t\t/* total num of Tx/Rx queue enabled */\n };\n \n #ifdef CONFIG_PCI_IOV\n@@ -116,12 +119,15 @@ int 
ice_set_vf_link_state(struct net_device *netdev, int vf_id, int link_state);\n int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena);\n \n int ice_calc_vf_reg_idx(struct ice_vf *vf, struct ice_q_vector *q_vector);\n+\n+void ice_set_vf_state_qs_dis(struct ice_vf *vf);\n #else /* CONFIG_PCI_IOV */\n #define ice_process_vflr_event(pf) do {} while (0)\n #define ice_free_vfs(pf) do {} while (0)\n #define ice_vc_process_vf_msg(pf, event) do {} while (0)\n #define ice_vc_notify_link_state(pf) do {} while (0)\n #define ice_vc_notify_reset(pf) do {} while (0)\n+#define ice_set_vf_state_qs_dis(vf) do {} while (0)\n \n static inline bool\n ice_reset_all_vfs(struct ice_pf __always_unused *pf,\n", "prefixes": [ "S26", "02/15" ] }
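The `patch`/`put` operations require an authenticated request; Patchwork accepts an API token in the `Authorization` header. A hedged sketch that builds (but deliberately does not send) such a request to flip the `state` field — the token value is a placeholder, not a real credential:

```python
import json
import urllib.request

# Placeholder token; a real one comes from the user's Patchwork profile.
token = "0000000000000000000000000000000000000000"

body = json.dumps({"state": "accepted"}).encode()
req = urllib.request.Request(
    "http://patchwork.ozlabs.org/api/patches/1141317/",
    data=body,
    method="PATCH",  # partial update; PUT would replace the writable fields
    headers={
        "Authorization": f"Token {token}",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(req) would submit the update; omitted here
# since only project maintainers are authorized to change patch state.
print(req.method, req.get_header("Content-type"))
```

Only fields such as `state`, `delegate`, and `archived` are writable; the message content and headers above are immutable once the patch is ingested from the mailing list.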