Patch Detail

get: Show a patch.
patch: Update a patch.
put: Update a patch.
GET /api/patches/1144205/?format=api
{ "id": 1144205, "url": "http://patchwork.ozlabs.org/api/patches/1144205/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190808143938.4968-9-anthony.l.nguyen@intel.com/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20190808143938.4968-9-anthony.l.nguyen@intel.com>", "list_archive_url": null, "date": "2019-08-08T14:39:32", "name": "[S27,09/15] ice: Minor refactor in queue management", "commit_ref": null, "pull_url": null, "state": "changes-requested", "archived": false, "hash": "fcc9b0b6536ed87fab03464e3f4d9a06be638670", "submitter": { "id": 68875, "url": "http://patchwork.ozlabs.org/api/people/68875/?format=api", "name": "Tony Nguyen", "email": "anthony.l.nguyen@intel.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190808143938.4968-9-anthony.l.nguyen@intel.com/mbox/", "series": [ { "id": 124091, "url": "http://patchwork.ozlabs.org/api/series/124091/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=124091", "date": "2019-08-08T14:39:28", "name": "[S27,01/15] ice: Limit Max TCs on devices with more than 4 ports", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/124091/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/1144205/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/1144205/checks/", "tags": {}, "related": [], "headers": { "Return-Path": 
"<intel-wired-lan-bounces@osuosl.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Authentication-Results": [ "ozlabs.org;\n\tspf=pass (mailfrom) smtp.mailfrom=osuosl.org\n\t(client-ip=140.211.166.133; helo=hemlock.osuosl.org;\n\tenvelope-from=intel-wired-lan-bounces@osuosl.org;\n\treceiver=<UNKNOWN>)", "ozlabs.org;\n\tdmarc=fail (p=none dis=none) header.from=intel.com" ], "Received": [ "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 464PDr5MF0z9sMr\n\tfor <incoming@patchwork.ozlabs.org>;\n\tFri, 9 Aug 2019 09:08:28 +1000 (AEST)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id 1D3948835D;\n\tThu, 8 Aug 2019 23:08:27 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id y6kHgopYUr7L; Thu, 8 Aug 2019 23:08:25 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id 6CBF488320;\n\tThu, 8 Aug 2019 23:08:25 +0000 (UTC)", "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\tby ash.osuosl.org (Postfix) with ESMTP id BD6511BF383\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 8 Aug 2019 23:08:20 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id B7A42882FD\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 8 Aug 2019 23:08:20 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id 17kdjeI5BRpQ for <intel-wired-lan@lists.osuosl.org>;\n\tThu, 8 Aug 2019 23:08:18 +0000 (UTC)", "from mga14.intel.com (mga14.intel.com [192.55.52.115])\n\tby 
hemlock.osuosl.org (Postfix) with ESMTPS id EA57E8831F\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 8 Aug 2019 23:08:17 +0000 (UTC)", "from orsmga008.jf.intel.com ([10.7.209.65])\n\tby fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t08 Aug 2019 16:08:17 -0700", "from unknown (HELO localhost.jf.intel.com) ([10.166.244.174])\n\tby orsmga008.jf.intel.com with ESMTP; 08 Aug 2019 16:08:16 -0700" ], "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6", "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos;i=\"5.64,363,1559545200\"; d=\"scan'208\";a=\"169141834\"", "From": "Tony Nguyen <anthony.l.nguyen@intel.com>", "To": "intel-wired-lan@lists.osuosl.org", "Date": "Thu, 8 Aug 2019 07:39:32 -0700", "Message-Id": "<20190808143938.4968-9-anthony.l.nguyen@intel.com>", "X-Mailer": "git-send-email 2.20.1", "In-Reply-To": "<20190808143938.4968-1-anthony.l.nguyen@intel.com>", "References": "<20190808143938.4968-1-anthony.l.nguyen@intel.com>", "MIME-Version": "1.0", "Subject": "[Intel-wired-lan] [PATCH S27 09/15] ice: Minor refactor in queue\n\tmanagement", "X-BeenThere": "intel-wired-lan@osuosl.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.osuosl.org>", "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>", "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>", "Content-Type": "text/plain; charset=\"us-ascii\"", 
"Content-Transfer-Encoding": "7bit", "Errors-To": "intel-wired-lan-bounces@osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>" }, "content": "From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>\n\nRemove q_left_tx and q_left_rx from the PF struct as these can be\nobtained by calling ice_get_avail_txq_count and ice_get_avail_rxq_count\nrespectively.\n\nThe function ice_determine_q_usage is only setting num_lan_tx and\nnum_lan_rx in the PF structure, and these are later assigned to\nvsi->alloc_txq and vsi->alloc_rxq respectively. This is an unnecessary\nindirection, so remove ice_determine_q_usage and just assign values\nfor vsi->alloc_txq and vsi->alloc_rxq in ice_vsi_set_num_qs and use\nthese to set num_lan_tx and num_lan_rx respectively.\n\nSigned-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>\nSigned-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>\n---\n drivers/net/ethernet/intel/ice/ice.h | 2 -\n drivers/net/ethernet/intel/ice/ice_lib.c | 82 +++++++++++++++----\n drivers/net/ethernet/intel/ice/ice_lib.h | 6 ++\n drivers/net/ethernet/intel/ice/ice_main.c | 29 -------\n .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 14 ++--\n 5 files changed, 82 insertions(+), 51 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h\nindex 1a64900e7030..dae79bb635f7 100644\n--- a/drivers/net/ethernet/intel/ice/ice.h\n+++ b/drivers/net/ethernet/intel/ice/ice.h\n@@ -373,8 +373,6 @@ struct ice_pf {\n \tu32 num_lan_msix;\t/* Total MSIX vectors for base driver */\n \tu16 num_lan_tx;\t\t/* num LAN Tx queues setup */\n \tu16 num_lan_rx;\t\t/* num LAN Rx queues setup */\n-\tu16 q_left_tx;\t\t/* remaining num Tx queues left unclaimed */\n-\tu16 q_left_rx;\t\t/* remaining num Rx queues left unclaimed */\n \tu16 next_vsi;\t\t/* Next free slot in pf->vsi[] - 0-based! 
*/\n \tu16 num_alloc_vsi;\n \tu16 corer_count;\t/* Core reset count */\ndiff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c\nindex 3c5a86881a6f..46cca842eea9 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.c\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.c\n@@ -5,6 +5,11 @@\n #include \"ice_lib.h\"\n #include \"ice_dcb_lib.h\"\n \n+#ifndef CONFIG_PCI_IOV\n+static u16 ice_get_avail_txq_count(struct ice_pf *pf);\n+static u16 ice_get_avail_rxq_count(struct ice_pf *pf);\n+#endif /* !CONFIG_PCI_IOV */\n+\n /**\n * ice_setup_rx_ctx - Configure a receive ring context\n * @ring: The Rx ring to configure\n@@ -343,16 +348,29 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)\n \n \tswitch (vsi->type) {\n \tcase ICE_VSI_PF:\n-\t\tvsi->alloc_txq = pf->num_lan_tx;\n-\t\tvsi->alloc_rxq = pf->num_lan_rx;\n+\t\tvsi->alloc_txq = min_t(int, ice_get_avail_txq_count(pf),\n+\t\t\t\t num_online_cpus());\n \t\tif (vsi->req_txq) {\n \t\t\tvsi->alloc_txq = vsi->req_txq;\n \t\t\tvsi->num_txq = vsi->req_txq;\n \t\t}\n-\t\tif (vsi->req_rxq) {\n-\t\t\tvsi->alloc_rxq = vsi->req_rxq;\n-\t\t\tvsi->num_rxq = vsi->req_rxq;\n+\n+\t\tpf->num_lan_tx = vsi->alloc_txq;\n+\n+\t\t/* only 1 Rx queue unless RSS is enabled */\n+\t\tif (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) {\n+\t\t\tvsi->alloc_rxq = 1;\n+\t\t} else {\n+\t\t\tvsi->alloc_rxq = min_t(int, ice_get_avail_rxq_count(pf),\n+\t\t\t\t\t num_online_cpus());\n+\t\t\tif (vsi->req_rxq) {\n+\t\t\t\tvsi->alloc_rxq = vsi->req_rxq;\n+\t\t\t\tvsi->num_rxq = vsi->req_rxq;\n+\t\t\t}\n \t\t}\n+\n+\t\tpf->num_lan_rx = vsi->alloc_rxq;\n+\n \t\tvsi->num_q_vectors = max_t(int, vsi->alloc_rxq, vsi->alloc_txq);\n \t\tbreak;\n \tcase ICE_VSI_VF:\n@@ -748,6 +766,51 @@ void ice_vsi_put_qs(struct ice_vsi *vsi)\n \tmutex_unlock(&pf->avail_q_mutex);\n }\n \n+/**\n+ * ice_get_avail_q_count - Get count of queues in use\n+ * @pf_qmap: bitmap to get queue use count from\n+ * @lock: pointer to a mutex that protects 
access to pf_qmap\n+ * @size: size of the bitmap\n+ */\n+static u16\n+ice_get_avail_q_count(unsigned long *pf_qmap, struct mutex *lock, u16 size)\n+{\n+\tu16 count = 0, bit;\n+\n+\tmutex_lock(lock);\n+\tfor_each_clear_bit(bit, pf_qmap, size)\n+\t\tcount++;\n+\tmutex_unlock(lock);\n+\n+\treturn count;\n+}\n+\n+/**\n+ * ice_get_avail_txq_count - Get count of Tx queues in use\n+ * @pf: pointer to an ice_pf instance\n+ */\n+#ifndef CONFIG_PCI_IOV\n+static\n+#endif /* !CONFIG_PCI_IOV */\n+u16 ice_get_avail_txq_count(struct ice_pf *pf)\n+{\n+\treturn ice_get_avail_q_count(pf->avail_txqs, &pf->avail_q_mutex,\n+\t\t\t\t pf->max_pf_txqs);\n+}\n+\n+/**\n+ * ice_get_avail_rxq_count - Get count of Rx queues in use\n+ * @pf: pointer to an ice_pf instance\n+ */\n+#ifndef CONFIG_PCI_IOV\n+static\n+#endif /* !CONFIG_PCI_IOV */\n+u16 ice_get_avail_rxq_count(struct ice_pf *pf)\n+{\n+\treturn ice_get_avail_q_count(pf->avail_rxqs, &pf->avail_q_mutex,\n+\t\t\t\t pf->max_pf_rxqs);\n+}\n+\n /**\n * ice_rss_clean - Delete RSS related VSI structures that hold user inputs\n * @vsi: the VSI being removed\n@@ -2597,9 +2660,6 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,\n \t\tif (ret)\n \t\t\tgoto unroll_vector_base;\n \n-\t\tpf->q_left_tx -= vsi->alloc_txq;\n-\t\tpf->q_left_rx -= vsi->alloc_rxq;\n-\n \t\t/* Do not exit if configuring RSS had an issue, at least\n \t\t * receive traffic on first queue. 
Hence no need to capture\n \t\t * return value\n@@ -2663,8 +2723,6 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,\n \tice_vsi_delete(vsi);\n unroll_get_qs:\n \tice_vsi_put_qs(vsi);\n-\tpf->q_left_tx += vsi->alloc_txq;\n-\tpf->q_left_rx += vsi->alloc_rxq;\n \tice_vsi_clear(vsi);\n \n \treturn NULL;\n@@ -3012,8 +3070,6 @@ int ice_vsi_release(struct ice_vsi *vsi)\n \tice_vsi_clear_rings(vsi);\n \n \tice_vsi_put_qs(vsi);\n-\tpf->q_left_tx += vsi->alloc_txq;\n-\tpf->q_left_rx += vsi->alloc_rxq;\n \n \t/* retain SW VSI data structure since it is needed to unregister and\n \t * free VSI netdev when PF is not in reset recovery pending state,\\\n@@ -3122,8 +3178,6 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)\n \t\tif (ret)\n \t\t\tgoto err_vectors;\n \n-\t\tpf->q_left_tx -= vsi->alloc_txq;\n-\t\tpf->q_left_rx -= vsi->alloc_rxq;\n \t\tbreak;\n \tdefault:\n \t\tbreak;\ndiff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h\nindex 97196e971fab..b6cd595b9acd 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.h\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.h\n@@ -104,6 +104,12 @@ void ice_trigger_sw_intr(struct ice_hw *hw, struct ice_q_vector *q_vector);\n \n void ice_vsi_put_qs(struct ice_vsi *vsi);\n \n+#ifdef CONFIG_PCI_IOV\n+u16 ice_get_avail_txq_count(struct ice_pf *pf);\n+\n+u16 ice_get_avail_rxq_count(struct ice_pf *pf);\n+#endif /* CONFIG_PCI_IOV */\n+\n #ifdef CONFIG_DCB\n void ice_vsi_map_rings_to_vectors(struct ice_vsi *vsi);\n #endif /* CONFIG_DCB */\ndiff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c\nindex 7514a3d2cd34..cfa06a69f029 100644\n--- a/drivers/net/ethernet/intel/ice/ice_main.c\n+++ b/drivers/net/ethernet/intel/ice/ice_main.c\n@@ -2228,38 +2228,11 @@ static int ice_setup_pf_sw(struct ice_pf *pf)\n \t\tice_vsi_free_q_vectors(vsi);\n \t\tice_vsi_delete(vsi);\n \t\tice_vsi_put_qs(vsi);\n-\t\tpf->q_left_tx += 
vsi->alloc_txq;\n-\t\tpf->q_left_rx += vsi->alloc_rxq;\n \t\tice_vsi_clear(vsi);\n \t}\n \treturn status;\n }\n \n-/**\n- * ice_determine_q_usage - Calculate queue distribution\n- * @pf: board private structure\n- *\n- * Return -ENOMEM if we don't get enough queues for all ports\n- */\n-static void ice_determine_q_usage(struct ice_pf *pf)\n-{\n-\tu16 q_left_tx, q_left_rx;\n-\n-\tq_left_tx = pf->hw.func_caps.common_cap.num_txq;\n-\tq_left_rx = pf->hw.func_caps.common_cap.num_rxq;\n-\n-\tpf->num_lan_tx = min_t(int, q_left_tx, num_online_cpus());\n-\n-\t/* only 1 Rx queue unless RSS is enabled */\n-\tif (!test_bit(ICE_FLAG_RSS_ENA, pf->flags))\n-\t\tpf->num_lan_rx = 1;\n-\telse\n-\t\tpf->num_lan_rx = min_t(int, q_left_rx, num_online_cpus());\n-\n-\tpf->q_left_tx = q_left_tx - pf->num_lan_tx;\n-\tpf->q_left_rx = q_left_rx - pf->num_lan_rx;\n-}\n-\n /**\n * ice_deinit_pf - Unrolls initialziations done by ice_init_pf\n * @pf: board private structure to initialize\n@@ -2574,8 +2547,6 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)\n \t\terr = 0;\n \t}\n \n-\tice_determine_q_usage(pf);\n-\n \tpf->num_alloc_vsi = hw->func_caps.guar_num_vsi;\n \tif (!pf->num_alloc_vsi) {\n \t\terr = -EIO;\ndiff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\nindex c38939b1d496..2795d85622c5 100644\n--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n@@ -588,7 +588,8 @@ static int ice_alloc_vf_res(struct ice_vf *vf)\n \t/* Update number of VF queues, in case VF had requested for queue\n \t * changes\n \t */\n-\ttx_rx_queue_left = min_t(int, pf->q_left_tx, pf->q_left_rx);\n+\ttx_rx_queue_left = min_t(int, ice_get_avail_txq_count(pf),\n+\t\t\t\t ice_get_avail_rxq_count(pf));\n \ttx_rx_queue_left += ICE_DFLT_QS_PER_VF;\n \tif (vf->num_req_qs && vf->num_req_qs <= tx_rx_queue_left &&\n \t vf->num_req_qs != vf->num_vf_qs)\n@@ -891,11 +892,11 @@ 
static int ice_check_avail_res(struct ice_pf *pf)\n \t * at runtime through Virtchnl, that is the reason we start by reserving\n \t * few queues.\n \t */\n-\tnum_txq = ice_determine_res(pf, pf->q_left_tx, ICE_DFLT_QS_PER_VF,\n-\t\t\t\t ICE_MIN_QS_PER_VF);\n+\tnum_txq = ice_determine_res(pf, ice_get_avail_txq_count(pf),\n+\t\t\t\t ICE_DFLT_QS_PER_VF, ICE_MIN_QS_PER_VF);\n \n-\tnum_rxq = ice_determine_res(pf, pf->q_left_rx, ICE_DFLT_QS_PER_VF,\n-\t\t\t\t ICE_MIN_QS_PER_VF);\n+\tnum_rxq = ice_determine_res(pf, ice_get_avail_rxq_count(pf),\n+\t\t\t\t ICE_DFLT_QS_PER_VF, ICE_MIN_QS_PER_VF);\n \n \tif (!num_txq || !num_rxq)\n \t\treturn -EIO;\n@@ -2504,7 +2505,8 @@ static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg)\n \t}\n \n \tcur_queues = vf->num_vf_qs;\n-\ttx_rx_queue_left = min_t(u16, pf->q_left_tx, pf->q_left_rx);\n+\ttx_rx_queue_left = min_t(u16, ice_get_avail_txq_count(pf),\n+\t\t\t\t ice_get_avail_rxq_count(pf));\n \tmax_allowed_vf_queues = tx_rx_queue_left + cur_queues;\n \tif (!req_queues) {\n \t\tdev_err(&pf->pdev->dev,\n", "prefixes": [ "S27", "09/15" ] }