Patch Detail
get:
Show a patch.
patch:
Update a patch.
put:
Update a patch.
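The `patch` and `put` operations above update fields such as `state` on an existing patch. A minimal sketch using only Python's standard library, assuming Patchwork's token authentication (the token value and the target state are placeholders for illustration — obtain a real token from your Patchwork user profile):

```python
import json
import urllib.request

# Hypothetical token; a real one comes from your Patchwork account settings.
TOKEN = "0123456789abcdef"

# Build (but do not send) a PATCH request that would move the patch
# shown below to the "accepted" state.
body = json.dumps({"state": "accepted"}).encode()
req = urllib.request.Request(
    "http://patchwork.ozlabs.org/api/patches/1157159/",
    data=body,
    method="PATCH",
    headers={
        "Authorization": f"Token {TOKEN}",
        "Content-Type": "application/json",
    },
)

print(req.get_method())                # PATCH
print(req.get_header("Content-type"))  # application/json
```

Sending the request (`urllib.request.urlopen(req)`) requires a maintainer account on the project; without valid credentials the server responds with 403.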
GET /api/patches/1157159/?format=api
{ "id": 1157159, "url": "http://patchwork.ozlabs.org/api/patches/1157159/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190903083108.19593-7-anthony.l.nguyen@intel.com/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20190903083108.19593-7-anthony.l.nguyen@intel.com>", "list_archive_url": null, "date": "2019-09-03T08:31:06", "name": "[S28,v2,7/9] ice: Minor refactor in queue management", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "23e673f19367ded02f780470945e9f8afe6ed2df", "submitter": { "id": 68875, "url": "http://patchwork.ozlabs.org/api/people/68875/?format=api", "name": "Tony Nguyen", "email": "anthony.l.nguyen@intel.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190903083108.19593-7-anthony.l.nguyen@intel.com/mbox/", "series": [ { "id": 128806, "url": "http://patchwork.ozlabs.org/api/series/128806/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=128806", "date": "2019-09-03T08:31:06", "name": "[S28,v2,1/9] ice: Reliably reset VFs", "version": 2, "mbox": "http://patchwork.ozlabs.org/series/128806/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/1157159/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/1157159/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<intel-wired-lan-bounces@osuosl.org>", "X-Original-To": 
[ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Authentication-Results": [ "ozlabs.org;\n\tspf=pass (mailfrom) smtp.mailfrom=osuosl.org\n\t(client-ip=140.211.166.137; helo=fraxinus.osuosl.org;\n\tenvelope-from=intel-wired-lan-bounces@osuosl.org;\n\treceiver=<UNKNOWN>)", "ozlabs.org;\n\tdmarc=fail (p=none dis=none) header.from=intel.com" ], "Received": [ "from fraxinus.osuosl.org (smtp4.osuosl.org [140.211.166.137])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 46NCr71gn7z9s00\n\tfor <incoming@patchwork.ozlabs.org>;\n\tWed, 4 Sep 2019 03:00:22 +1000 (AEST)", "from localhost (localhost [127.0.0.1])\n\tby fraxinus.osuosl.org (Postfix) with ESMTP id 160E685EF1;\n\tTue, 3 Sep 2019 17:00:21 +0000 (UTC)", "from fraxinus.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id FrYZFeInt6tu; Tue, 3 Sep 2019 17:00:19 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby fraxinus.osuosl.org (Postfix) with ESMTP id E70C885F9D;\n\tTue, 3 Sep 2019 17:00:18 +0000 (UTC)", "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\tby ash.osuosl.org (Postfix) with ESMTP id 9D7371BF27F\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tTue, 3 Sep 2019 17:00:17 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id 97A9687E3C\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tTue, 3 Sep 2019 17:00:17 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id s4+zUR-dHHt4 for <intel-wired-lan@lists.osuosl.org>;\n\tTue, 3 Sep 2019 17:00:15 +0000 (UTC)", "from mga03.intel.com (mga03.intel.com [134.134.136.65])\n\tby hemlock.osuosl.org (Postfix) with ESMTPS id 
9D8C687E76\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tTue, 3 Sep 2019 17:00:15 +0000 (UTC)", "from orsmga006.jf.intel.com ([10.7.209.51])\n\tby orsmga103.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t03 Sep 2019 10:00:13 -0700", "from unknown (HELO localhost.jf.intel.com) ([10.166.244.174])\n\tby orsmga006.jf.intel.com with ESMTP; 03 Sep 2019 10:00:13 -0700" ], "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6", "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos;i=\"5.64,463,1559545200\"; d=\"scan'208\";a=\"187320663\"", "From": "Tony Nguyen <anthony.l.nguyen@intel.com>", "To": "intel-wired-lan@lists.osuosl.org", "Date": "Tue, 3 Sep 2019 01:31:06 -0700", "Message-Id": "<20190903083108.19593-7-anthony.l.nguyen@intel.com>", "X-Mailer": "git-send-email 2.20.1", "In-Reply-To": "<20190903083108.19593-1-anthony.l.nguyen@intel.com>", "References": "<20190903083108.19593-1-anthony.l.nguyen@intel.com>", "MIME-Version": "1.0", "Subject": "[Intel-wired-lan] [PATCH S28 v2 7/9] ice: Minor refactor in queue\n\tmanagement", "X-BeenThere": "intel-wired-lan@osuosl.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.osuosl.org>", "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>", "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", 
"Errors-To": "intel-wired-lan-bounces@osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>" }, "content": "From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>\n\nRemove q_left_tx and q_left_rx from the PF struct as these can be\nobtained by calling ice_get_avail_txq_count and ice_get_avail_rxq_count\nrespectively.\n\nThe function ice_determine_q_usage is only setting num_lan_tx and\nnum_lan_rx in the PF structure, and these are later assigned to\nvsi->alloc_txq and vsi->alloc_rxq respectively. This is an unnecessary\nindirection, so remove ice_determine_q_usage and just assign values\nfor vsi->alloc_txq and vsi->alloc_rxq in ice_vsi_set_num_qs and use\nthese to set num_lan_tx and num_lan_rx respectively.\n\nSigned-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>\nSigned-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>\n---\nv2:\nMove ice_get_avail_txq_count() and ice_get_avail_rxq_count() to ice_main.c\nto avoid static namespace issues\n---\n drivers/net/ethernet/intel/ice/ice.h | 4 +-\n drivers/net/ethernet/intel/ice/ice_lib.c | 25 ++++++----\n drivers/net/ethernet/intel/ice/ice_main.c | 50 +++++++++++--------\n .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 14 +++---\n 4 files changed, 54 insertions(+), 39 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h\nindex c7f234688499..6c4faf7551f6 100644\n--- a/drivers/net/ethernet/intel/ice/ice.h\n+++ b/drivers/net/ethernet/intel/ice/ice.h\n@@ -368,8 +368,6 @@ struct ice_pf {\n \tu32 num_lan_msix;\t/* Total MSIX vectors for base driver */\n \tu16 num_lan_tx;\t\t/* num LAN Tx queues setup */\n \tu16 num_lan_rx;\t\t/* num LAN Rx queues setup */\n-\tu16 q_left_tx;\t\t/* remaining num Tx queues left unclaimed */\n-\tu16 q_left_rx;\t\t/* remaining num Rx queues left unclaimed */\n \tu16 next_vsi;\t\t/* Next free slot in pf->vsi[] - 0-based! 
*/\n \tu16 num_alloc_vsi;\n \tu16 corer_count;\t/* Core reset count */\n@@ -438,6 +436,8 @@ static inline struct ice_vsi *ice_get_main_vsi(struct ice_pf *pf)\n int ice_vsi_setup_tx_rings(struct ice_vsi *vsi);\n int ice_vsi_setup_rx_rings(struct ice_vsi *vsi);\n void ice_set_ethtool_ops(struct net_device *netdev);\n+u16 ice_get_avail_txq_count(struct ice_pf *pf);\n+u16 ice_get_avail_rxq_count(struct ice_pf *pf);\n void ice_update_vsi_stats(struct ice_vsi *vsi);\n void ice_update_pf_stats(struct ice_pf *pf);\n int ice_up(struct ice_vsi *vsi);\ndiff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c\nindex 5f7c75c3b24b..7cd8c5d13bcc 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.c\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.c\n@@ -343,8 +343,20 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)\n \n \tswitch (vsi->type) {\n \tcase ICE_VSI_PF:\n-\t\tvsi->alloc_txq = pf->num_lan_tx;\n-\t\tvsi->alloc_rxq = pf->num_lan_rx;\n+\t\tvsi->alloc_txq = min_t(int, ice_get_avail_txq_count(pf),\n+\t\t\t\t num_online_cpus());\n+\n+\t\tpf->num_lan_tx = vsi->alloc_txq;\n+\n+\t\t/* only 1 Rx queue unless RSS is enabled */\n+\t\tif (!test_bit(ICE_FLAG_RSS_ENA, pf->flags))\n+\t\t\tvsi->alloc_rxq = 1;\n+\t\telse\n+\t\t\tvsi->alloc_rxq = min_t(int, ice_get_avail_rxq_count(pf),\n+\t\t\t\t\t num_online_cpus());\n+\n+\t\tpf->num_lan_rx = vsi->alloc_rxq;\n+\n \t\tvsi->num_q_vectors = max_t(int, vsi->alloc_rxq, vsi->alloc_txq);\n \t\tbreak;\n \tcase ICE_VSI_VF:\n@@ -2577,9 +2589,6 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,\n \t\tif (ret)\n \t\t\tgoto unroll_vector_base;\n \n-\t\tpf->q_left_tx -= vsi->alloc_txq;\n-\t\tpf->q_left_rx -= vsi->alloc_rxq;\n-\n \t\t/* Do not exit if configuring RSS had an issue, at least\n \t\t * receive traffic on first queue. 
Hence no need to capture\n \t\t * return value\n@@ -2643,8 +2652,6 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,\n \tice_vsi_delete(vsi);\n unroll_get_qs:\n \tice_vsi_put_qs(vsi);\n-\tpf->q_left_tx += vsi->alloc_txq;\n-\tpf->q_left_rx += vsi->alloc_rxq;\n \tice_vsi_clear(vsi);\n \n \treturn NULL;\n@@ -2992,8 +2999,6 @@ int ice_vsi_release(struct ice_vsi *vsi)\n \tice_vsi_clear_rings(vsi);\n \n \tice_vsi_put_qs(vsi);\n-\tpf->q_left_tx += vsi->alloc_txq;\n-\tpf->q_left_rx += vsi->alloc_rxq;\n \n \t/* retain SW VSI data structure since it is needed to unregister and\n \t * free VSI netdev when PF is not in reset recovery pending state,\\\n@@ -3102,8 +3107,6 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)\n \t\tif (ret)\n \t\t\tgoto err_vectors;\n \n-\t\tpf->q_left_tx -= vsi->alloc_txq;\n-\t\tpf->q_left_rx -= vsi->alloc_rxq;\n \t\tbreak;\n \tdefault:\n \t\tbreak;\ndiff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c\nindex 2d92d8591a8a..f8be9ada2447 100644\n--- a/drivers/net/ethernet/intel/ice/ice_main.c\n+++ b/drivers/net/ethernet/intel/ice/ice_main.c\n@@ -2192,36 +2192,48 @@ static int ice_setup_pf_sw(struct ice_pf *pf)\n \t\tice_vsi_free_q_vectors(vsi);\n \t\tice_vsi_delete(vsi);\n \t\tice_vsi_put_qs(vsi);\n-\t\tpf->q_left_tx += vsi->alloc_txq;\n-\t\tpf->q_left_rx += vsi->alloc_rxq;\n \t\tice_vsi_clear(vsi);\n \t}\n \treturn status;\n }\n \n /**\n- * ice_determine_q_usage - Calculate queue distribution\n- * @pf: board private structure\n- *\n- * Return -ENOMEM if we don't get enough queues for all ports\n+ * ice_get_avail_q_count - Get count of queues in use\n+ * @pf_qmap: bitmap to get queue use count from\n+ * @lock: pointer to a mutex that protects access to pf_qmap\n+ * @size: size of the bitmap\n */\n-static void ice_determine_q_usage(struct ice_pf *pf)\n+static u16\n+ice_get_avail_q_count(unsigned long *pf_qmap, struct mutex *lock, u16 size)\n {\n-\tu16 q_left_tx, q_left_rx;\n+\tu16 count = 0, bit;\n 
\n-\tq_left_tx = pf->hw.func_caps.common_cap.num_txq;\n-\tq_left_rx = pf->hw.func_caps.common_cap.num_rxq;\n+\tmutex_lock(lock);\n+\tfor_each_clear_bit(bit, pf_qmap, size)\n+\t\tcount++;\n+\tmutex_unlock(lock);\n \n-\tpf->num_lan_tx = min_t(int, q_left_tx, num_online_cpus());\n+\treturn count;\n+}\n \n-\t/* only 1 Rx queue unless RSS is enabled */\n-\tif (!test_bit(ICE_FLAG_RSS_ENA, pf->flags))\n-\t\tpf->num_lan_rx = 1;\n-\telse\n-\t\tpf->num_lan_rx = min_t(int, q_left_rx, num_online_cpus());\n+/**\n+ * ice_get_avail_txq_count - Get count of Tx queues in use\n+ * @pf: pointer to an ice_pf instance\n+ */\n+u16 ice_get_avail_txq_count(struct ice_pf *pf)\n+{\n+\treturn ice_get_avail_q_count(pf->avail_txqs, &pf->avail_q_mutex,\n+\t\t\t\t pf->max_pf_txqs);\n+}\n \n-\tpf->q_left_tx = q_left_tx - pf->num_lan_tx;\n-\tpf->q_left_rx = q_left_rx - pf->num_lan_rx;\n+/**\n+ * ice_get_avail_rxq_count - Get count of Rx queues in use\n+ * @pf: pointer to an ice_pf instance\n+ */\n+u16 ice_get_avail_rxq_count(struct ice_pf *pf)\n+{\n+\treturn ice_get_avail_q_count(pf->avail_rxqs, &pf->avail_q_mutex,\n+\t\t\t\t pf->max_pf_rxqs);\n }\n \n /**\n@@ -2541,8 +2553,6 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)\n \t\t}\n \t}\n \n-\tice_determine_q_usage(pf);\n-\n \tpf->num_alloc_vsi = hw->func_caps.guar_num_vsi;\n \tif (!pf->num_alloc_vsi) {\n \t\terr = -EIO;\ndiff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\nindex 30e8e6166a59..64de05ccbc47 100644\n--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n@@ -595,7 +595,8 @@ static int ice_alloc_vf_res(struct ice_vf *vf)\n \t/* Update number of VF queues, in case VF had requested for queue\n \t * changes\n \t */\n-\ttx_rx_queue_left = min_t(int, pf->q_left_tx, pf->q_left_rx);\n+\ttx_rx_queue_left = min_t(int, ice_get_avail_txq_count(pf),\n+\t\t\t\t ice_get_avail_rxq_count(pf));\n 
\ttx_rx_queue_left += ICE_DFLT_QS_PER_VF;\n \tif (vf->num_req_qs && vf->num_req_qs <= tx_rx_queue_left &&\n \t vf->num_req_qs != vf->num_vf_qs)\n@@ -898,11 +899,11 @@ static int ice_check_avail_res(struct ice_pf *pf)\n \t * at runtime through Virtchnl, that is the reason we start by reserving\n \t * few queues.\n \t */\n-\tnum_txq = ice_determine_res(pf, pf->q_left_tx, ICE_DFLT_QS_PER_VF,\n-\t\t\t\t ICE_MIN_QS_PER_VF);\n+\tnum_txq = ice_determine_res(pf, ice_get_avail_txq_count(pf),\n+\t\t\t\t ICE_DFLT_QS_PER_VF, ICE_MIN_QS_PER_VF);\n \n-\tnum_rxq = ice_determine_res(pf, pf->q_left_rx, ICE_DFLT_QS_PER_VF,\n-\t\t\t\t ICE_MIN_QS_PER_VF);\n+\tnum_rxq = ice_determine_res(pf, ice_get_avail_rxq_count(pf),\n+\t\t\t\t ICE_DFLT_QS_PER_VF, ICE_MIN_QS_PER_VF);\n \n \tif (!num_txq || !num_rxq)\n \t\treturn -EIO;\n@@ -2511,7 +2512,8 @@ static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg)\n \t}\n \n \tcur_queues = vf->num_vf_qs;\n-\ttx_rx_queue_left = min_t(u16, pf->q_left_tx, pf->q_left_rx);\n+\ttx_rx_queue_left = min_t(u16, ice_get_avail_txq_count(pf),\n+\t\t\t\t ice_get_avail_rxq_count(pf));\n \tmax_allowed_vf_queues = tx_rx_queue_left + cur_queues;\n \tif (!req_queues) {\n \t\tdev_err(&pf->pdev->dev,\n", "prefixes": [ "S28", "v2", "7/9" ] }
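The GET response above can be consumed with a short script. A minimal sketch using a trimmed-down stand-in for the JSON body (field names and values taken from the response above; real responses carry many more fields — `headers`, `content`, `diff`, and so on):

```python
import json

# Abbreviated copy of the GET /api/patches/1157159/ response shown above.
patch_json = """
{
  "id": 1157159,
  "state": "accepted",
  "project": {"link_name": "intel-wired-lan"},
  "submitter": {"name": "Tony Nguyen", "email": "anthony.l.nguyen@intel.com"},
  "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20190903083108.19593-7-anthony.l.nguyen@intel.com/mbox/",
  "series": [{"id": 128806, "version": 2}]
}
"""

patch = json.loads(patch_json)

# The mbox URL is the raw message form, suitable for piping to `git am`.
print(patch["state"])                    # accepted
print(patch["series"][0]["version"])     # 2
print(patch["mbox"].endswith("/mbox/"))  # True
```

Fetching the live object instead is a single `urllib.request.urlopen` call against the `url` field; the nested `series` list links each patch back to the series it was posted in (here series 128806, v2 of the 9-patch set).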