Patch Detail

GET:
Show a patch.

PATCH:
Partially update a patch.

PUT:
Update a patch.
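The update methods above require an authenticated request against the patch's URL. The following is a minimal sketch of building such a request with only the Python standard library; the endpoint shape matches this page, but the token is a placeholder and the writable field names (e.g. "state") are assumptions about what your account is allowed to change:

```python
import json
import urllib.request

def build_update_request(patch_id, token, fields):
    """Build an authenticated PATCH request for /api/patches/<id>/.

    `fields` is a dict of writable fields, e.g. {"state": "accepted"}.
    The request is only constructed here, not sent.
    """
    url = f"http://patchwork.ozlabs.org/api/patches/{patch_id}/"
    body = json.dumps(fields).encode()
    req = urllib.request.Request(url, data=body, method="PATCH")
    req.add_header("Content-Type", "application/json")
    # Patchwork uses DRF-style token auth: "Authorization: Token <token>"
    req.add_header("Authorization", f"Token {token}")
    return req

# Example (not sent): mark the patch shown below as accepted.
req = build_update_request(972046, "YOUR-API-TOKEN", {"state": "accepted"})
print(req.get_method(), req.full_url)
```

Sending it would be `urllib.request.urlopen(req)`; a 200 response returns the updated patch object in the same shape as the GET response below.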
GET /api/patches/972046/?format=api
{ "id": 972046, "url": "http://patchwork.ozlabs.org/api/patches/972046/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20180920002319.10971-6-anirudh.venkataramanan@intel.com/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20180920002319.10971-6-anirudh.venkataramanan@intel.com>", "list_archive_url": null, "date": "2018-09-20T00:23:08", "name": "[05/16] ice: Move common functions out of ice_main.c part 5/7", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "440b3376854a2d33e0b60d2903b1b950f993c369", "submitter": { "id": 73601, "url": "http://patchwork.ozlabs.org/api/people/73601/?format=api", "name": "Anirudh Venkataramanan", "email": "anirudh.venkataramanan@intel.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20180920002319.10971-6-anirudh.venkataramanan@intel.com/mbox/", "series": [ { "id": 66525, "url": "http://patchwork.ozlabs.org/api/series/66525/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=66525", "date": "2018-09-20T00:23:03", "name": "Implementation updates for ice", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/66525/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/972046/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/972046/checks/", "tags": {}, "related": [], "headers": { "Return-Path": 
"<intel-wired-lan-bounces@osuosl.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Authentication-Results": [ "ozlabs.org;\n\tspf=pass (mailfrom) smtp.mailfrom=osuosl.org\n\t(client-ip=140.211.166.138; helo=whitealder.osuosl.org;\n\tenvelope-from=intel-wired-lan-bounces@osuosl.org;\n\treceiver=<UNKNOWN>)", "ozlabs.org;\n\tdmarc=fail (p=none dis=none) header.from=intel.com" ], "Received": [ "from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 42FyBy1JxLz9sCP\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 20 Sep 2018 10:23:54 +1000 (AEST)", "from localhost (localhost [127.0.0.1])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id B5D7A88209;\n\tThu, 20 Sep 2018 00:23:52 +0000 (UTC)", "from whitealder.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id DyOjQR0VFpXe; Thu, 20 Sep 2018 00:23:46 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id CD236882D6;\n\tThu, 20 Sep 2018 00:23:36 +0000 (UTC)", "from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n\tby ash.osuosl.org (Postfix) with ESMTP id 88ACE1D0FC1\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 20 Sep 2018 00:23:33 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby silver.osuosl.org (Postfix) with ESMTP id 8472F201F5\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 20 Sep 2018 00:23:33 +0000 (UTC)", "from silver.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id eYxJdRpX18Au for <intel-wired-lan@lists.osuosl.org>;\n\tThu, 20 Sep 2018 00:23:29 +0000 (UTC)", "from mga05.intel.com (mga05.intel.com 
[192.55.52.43])\n\tby silver.osuosl.org (Postfix) with ESMTPS id 4D09730973\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 20 Sep 2018 00:23:25 +0000 (UTC)", "from fmsmga006.fm.intel.com ([10.253.24.20])\n\tby fmsmga105.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t19 Sep 2018 17:23:24 -0700", "from shasta.jf.intel.com ([10.166.241.11])\n\tby fmsmga006.fm.intel.com with ESMTP; 19 Sep 2018 17:23:19 -0700" ], "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6", "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos;i=\"5.53,396,1531810800\"; d=\"scan'208\";a=\"265057695\"", "From": "Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>", "To": "intel-wired-lan@lists.osuosl.org", "Date": "Wed, 19 Sep 2018 17:23:08 -0700", "Message-Id": "<20180920002319.10971-6-anirudh.venkataramanan@intel.com>", "X-Mailer": "git-send-email 2.14.3", "In-Reply-To": "<20180920002319.10971-1-anirudh.venkataramanan@intel.com>", "References": "<20180920002319.10971-1-anirudh.venkataramanan@intel.com>", "Subject": "[Intel-wired-lan] [PATCH 05/16] ice: Move common functions out of\n\tice_main.c part 5/7", "X-BeenThere": "intel-wired-lan@osuosl.org", "X-Mailman-Version": "2.1.24", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.osuosl.org>", "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>", "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>", "MIME-Version": "1.0", 
"Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "intel-wired-lan-bounces@osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>" }, "content": "This patch continues the code move out of ice_main.c\n\nThe following top level functions (and related dependency functions) were\nmoved to ice_lib.c:\nice_vsi_clear\nice_vsi_close\nice_vsi_free_arrays\nice_vsi_map_rings_to_vectors\n\nSigned-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>\n---\n drivers/net/ethernet/intel/ice/ice_lib.c | 133 ++++++++++++++++++++++++++++++\n drivers/net/ethernet/intel/ice/ice_lib.h | 8 ++\n drivers/net/ethernet/intel/ice/ice_main.c | 132 -----------------------------\n 3 files changed, 141 insertions(+), 132 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c\nindex fa96279139d0..c9f82e351802 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.c\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.c\n@@ -341,6 +341,71 @@ void ice_vsi_delete(struct ice_vsi *vsi)\n \t\t\tvsi->vsi_num);\n }\n \n+/**\n+ * ice_vsi_free_arrays - clean up vsi resources\n+ * @vsi: pointer to VSI being cleared\n+ * @free_qvectors: bool to specify if q_vectors should be deallocated\n+ */\n+void ice_vsi_free_arrays(struct ice_vsi *vsi, bool free_qvectors)\n+{\n+\tstruct ice_pf *pf = vsi->back;\n+\n+\t/* free the ring and vector containers */\n+\tif (free_qvectors && vsi->q_vectors) {\n+\t\tdevm_kfree(&pf->pdev->dev, vsi->q_vectors);\n+\t\tvsi->q_vectors = NULL;\n+\t}\n+\tif (vsi->tx_rings) {\n+\t\tdevm_kfree(&pf->pdev->dev, vsi->tx_rings);\n+\t\tvsi->tx_rings = NULL;\n+\t}\n+\tif (vsi->rx_rings) {\n+\t\tdevm_kfree(&pf->pdev->dev, vsi->rx_rings);\n+\t\tvsi->rx_rings = NULL;\n+\t}\n+}\n+\n+/**\n+ * ice_vsi_clear - clean up and deallocate the provided vsi\n+ * @vsi: pointer to VSI being cleared\n+ *\n+ * This deallocates the vsi's queue resources, removes it from 
the PF's\n+ * VSI array if necessary, and deallocates the VSI\n+ *\n+ * Returns 0 on success, negative on failure\n+ */\n+int ice_vsi_clear(struct ice_vsi *vsi)\n+{\n+\tstruct ice_pf *pf = NULL;\n+\n+\tif (!vsi)\n+\t\treturn 0;\n+\n+\tif (!vsi->back)\n+\t\treturn -EINVAL;\n+\n+\tpf = vsi->back;\n+\n+\tif (!pf->vsi[vsi->idx] || pf->vsi[vsi->idx] != vsi) {\n+\t\tdev_dbg(&pf->pdev->dev, \"vsi does not exist at pf->vsi[%d]\\n\",\n+\t\t\tvsi->idx);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tmutex_lock(&pf->sw_mutex);\n+\t/* updates the PF for this cleared vsi */\n+\n+\tpf->vsi[vsi->idx] = NULL;\n+\tif (vsi->idx < pf->next_vsi)\n+\t\tpf->next_vsi = vsi->idx;\n+\n+\tice_vsi_free_arrays(vsi, true);\n+\tmutex_unlock(&pf->sw_mutex);\n+\tdevm_kfree(&pf->pdev->dev, vsi);\n+\n+\treturn 0;\n+}\n+\n /**\n * ice_msix_clean_rings - MSIX mode Interrupt Handler\n * @irq: interrupt number\n@@ -700,6 +765,60 @@ int ice_vsi_alloc_rings(struct ice_vsi *vsi)\n \treturn -ENOMEM;\n }\n \n+/**\n+ * ice_vsi_map_rings_to_vectors - Map VSI rings to interrupt vectors\n+ * @vsi: the VSI being configured\n+ *\n+ * This function maps descriptor rings to the queue-specific vectors allotted\n+ * through the MSI-X enabling code. 
On a constrained vector budget, we map Tx\n+ * and Rx rings to the vector as \"efficiently\" as possible.\n+ */\n+void ice_vsi_map_rings_to_vectors(struct ice_vsi *vsi)\n+{\n+\tint q_vectors = vsi->num_q_vectors;\n+\tint tx_rings_rem, rx_rings_rem;\n+\tint v_id;\n+\n+\t/* initially assigning remaining rings count to VSIs num queue value */\n+\ttx_rings_rem = vsi->num_txq;\n+\trx_rings_rem = vsi->num_rxq;\n+\n+\tfor (v_id = 0; v_id < q_vectors; v_id++) {\n+\t\tstruct ice_q_vector *q_vector = vsi->q_vectors[v_id];\n+\t\tint tx_rings_per_v, rx_rings_per_v, q_id, q_base;\n+\n+\t\t/* Tx rings mapping to vector */\n+\t\ttx_rings_per_v = DIV_ROUND_UP(tx_rings_rem, q_vectors - v_id);\n+\t\tq_vector->num_ring_tx = tx_rings_per_v;\n+\t\tq_vector->tx.ring = NULL;\n+\t\tq_base = vsi->num_txq - tx_rings_rem;\n+\n+\t\tfor (q_id = q_base; q_id < (q_base + tx_rings_per_v); q_id++) {\n+\t\t\tstruct ice_ring *tx_ring = vsi->tx_rings[q_id];\n+\n+\t\t\ttx_ring->q_vector = q_vector;\n+\t\t\ttx_ring->next = q_vector->tx.ring;\n+\t\t\tq_vector->tx.ring = tx_ring;\n+\t\t}\n+\t\ttx_rings_rem -= tx_rings_per_v;\n+\n+\t\t/* Rx rings mapping to vector */\n+\t\trx_rings_per_v = DIV_ROUND_UP(rx_rings_rem, q_vectors - v_id);\n+\t\tq_vector->num_ring_rx = rx_rings_per_v;\n+\t\tq_vector->rx.ring = NULL;\n+\t\tq_base = vsi->num_rxq - rx_rings_rem;\n+\n+\t\tfor (q_id = q_base; q_id < (q_base + rx_rings_per_v); q_id++) {\n+\t\t\tstruct ice_ring *rx_ring = vsi->rx_rings[q_id];\n+\n+\t\t\trx_ring->q_vector = q_vector;\n+\t\t\trx_ring->next = q_vector->rx.ring;\n+\t\t\tq_vector->rx.ring = rx_ring;\n+\t\t}\n+\t\trx_rings_rem -= rx_rings_per_v;\n+\t}\n+}\n+\n /**\n * ice_add_mac_to_list - Add a mac address filter entry to the list\n * @vsi: the VSI to be forwarded to\n@@ -1385,6 +1504,20 @@ void ice_vsi_free_rx_rings(struct ice_vsi *vsi)\n \t\t\tice_free_rx_ring(vsi->rx_rings[i]);\n }\n \n+/**\n+ * ice_vsi_close - Shut down a VSI\n+ * @vsi: the VSI being shut down\n+ */\n+void ice_vsi_close(struct ice_vsi 
*vsi)\n+{\n+\tif (!test_and_set_bit(__ICE_DOWN, vsi->state))\n+\t\tice_down(vsi);\n+\n+\tice_vsi_free_irq(vsi);\n+\tice_vsi_free_tx_rings(vsi);\n+\tice_vsi_free_rx_rings(vsi);\n+}\n+\n /**\n * ice_free_res - free a block of resources\n * @res: pointer to the resource\ndiff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h\nindex ab5a4e1edcf8..002bbca8e7ea 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.h\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.h\n@@ -6,6 +6,8 @@\n \n #include \"ice.h\"\n \n+void ice_vsi_map_rings_to_vectors(struct ice_vsi *vsi);\n+\n int ice_vsi_alloc_rings(struct ice_vsi *vsi);\n \n void ice_vsi_set_rss_params(struct ice_vsi *vsi);\n@@ -16,6 +18,8 @@ int ice_get_free_slot(void *array, int size, int curr);\n \n int ice_vsi_init(struct ice_vsi *vsi);\n \n+void ice_vsi_free_arrays(struct ice_vsi *vsi, bool free_qvectors);\n+\n void ice_vsi_clear_rings(struct ice_vsi *vsi);\n \n int ice_vsi_alloc_arrays(struct ice_vsi *vsi, bool alloc_qvectors);\n@@ -51,6 +55,10 @@ int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena);\n \n void ice_vsi_delete(struct ice_vsi *vsi);\n \n+int ice_vsi_clear(struct ice_vsi *vsi);\n+\n+void ice_vsi_close(struct ice_vsi *vsi);\n+\n int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id);\n \n int\ndiff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c\nindex 9ef8611d13b0..40f4e0c9b722 100644\n--- a/drivers/net/ethernet/intel/ice/ice_main.c\n+++ b/drivers/net/ethernet/intel/ice/ice_main.c\n@@ -1313,60 +1313,6 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)\n \treturn ret;\n }\n \n-/**\n- * ice_vsi_map_rings_to_vectors - Map VSI rings to interrupt vectors\n- * @vsi: the VSI being configured\n- *\n- * This function maps descriptor rings to the queue-specific vectors allotted\n- * through the MSI-X enabling code. 
On a constrained vector budget, we map Tx\n- * and Rx rings to the vector as \"efficiently\" as possible.\n- */\n-static void ice_vsi_map_rings_to_vectors(struct ice_vsi *vsi)\n-{\n-\tint q_vectors = vsi->num_q_vectors;\n-\tint tx_rings_rem, rx_rings_rem;\n-\tint v_id;\n-\n-\t/* initially assigning remaining rings count to VSIs num queue value */\n-\ttx_rings_rem = vsi->num_txq;\n-\trx_rings_rem = vsi->num_rxq;\n-\n-\tfor (v_id = 0; v_id < q_vectors; v_id++) {\n-\t\tstruct ice_q_vector *q_vector = vsi->q_vectors[v_id];\n-\t\tint tx_rings_per_v, rx_rings_per_v, q_id, q_base;\n-\n-\t\t/* Tx rings mapping to vector */\n-\t\ttx_rings_per_v = DIV_ROUND_UP(tx_rings_rem, q_vectors - v_id);\n-\t\tq_vector->num_ring_tx = tx_rings_per_v;\n-\t\tq_vector->tx.ring = NULL;\n-\t\tq_base = vsi->num_txq - tx_rings_rem;\n-\n-\t\tfor (q_id = q_base; q_id < (q_base + tx_rings_per_v); q_id++) {\n-\t\t\tstruct ice_ring *tx_ring = vsi->tx_rings[q_id];\n-\n-\t\t\ttx_ring->q_vector = q_vector;\n-\t\t\ttx_ring->next = q_vector->tx.ring;\n-\t\t\tq_vector->tx.ring = tx_ring;\n-\t\t}\n-\t\ttx_rings_rem -= tx_rings_per_v;\n-\n-\t\t/* Rx rings mapping to vector */\n-\t\trx_rings_per_v = DIV_ROUND_UP(rx_rings_rem, q_vectors - v_id);\n-\t\tq_vector->num_ring_rx = rx_rings_per_v;\n-\t\tq_vector->rx.ring = NULL;\n-\t\tq_base = vsi->num_rxq - rx_rings_rem;\n-\n-\t\tfor (q_id = q_base; q_id < (q_base + rx_rings_per_v); q_id++) {\n-\t\t\tstruct ice_ring *rx_ring = vsi->rx_rings[q_id];\n-\n-\t\t\trx_ring->q_vector = q_vector;\n-\t\t\trx_ring->next = q_vector->rx.ring;\n-\t\t\tq_vector->rx.ring = rx_ring;\n-\t\t}\n-\t\trx_rings_rem -= rx_rings_per_v;\n-\t}\n-}\n-\n /**\n * ice_vsi_alloc - Allocates the next available struct vsi in the PF\n * @pf: board private structure\n@@ -1770,71 +1716,6 @@ static int ice_cfg_netdev(struct ice_vsi *vsi)\n \treturn 0;\n }\n \n-/**\n- * ice_vsi_free_arrays - clean up vsi resources\n- * @vsi: pointer to VSI being cleared\n- * @free_qvectors: bool to specify if q_vectors 
should be deallocated\n- */\n-static void ice_vsi_free_arrays(struct ice_vsi *vsi, bool free_qvectors)\n-{\n-\tstruct ice_pf *pf = vsi->back;\n-\n-\t/* free the ring and vector containers */\n-\tif (free_qvectors && vsi->q_vectors) {\n-\t\tdevm_kfree(&pf->pdev->dev, vsi->q_vectors);\n-\t\tvsi->q_vectors = NULL;\n-\t}\n-\tif (vsi->tx_rings) {\n-\t\tdevm_kfree(&pf->pdev->dev, vsi->tx_rings);\n-\t\tvsi->tx_rings = NULL;\n-\t}\n-\tif (vsi->rx_rings) {\n-\t\tdevm_kfree(&pf->pdev->dev, vsi->rx_rings);\n-\t\tvsi->rx_rings = NULL;\n-\t}\n-}\n-\n-/**\n- * ice_vsi_clear - clean up and deallocate the provided vsi\n- * @vsi: pointer to VSI being cleared\n- *\n- * This deallocates the vsi's queue resources, removes it from the PF's\n- * VSI array if necessary, and deallocates the VSI\n- *\n- * Returns 0 on success, negative on failure\n- */\n-static int ice_vsi_clear(struct ice_vsi *vsi)\n-{\n-\tstruct ice_pf *pf = NULL;\n-\n-\tif (!vsi)\n-\t\treturn 0;\n-\n-\tif (!vsi->back)\n-\t\treturn -EINVAL;\n-\n-\tpf = vsi->back;\n-\n-\tif (!pf->vsi[vsi->idx] || pf->vsi[vsi->idx] != vsi) {\n-\t\tdev_dbg(&pf->pdev->dev, \"vsi does not exist at pf->vsi[%d]\\n\",\n-\t\t\tvsi->idx);\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tmutex_lock(&pf->sw_mutex);\n-\t/* updates the PF for this cleared vsi */\n-\n-\tpf->vsi[vsi->idx] = NULL;\n-\tif (vsi->idx < pf->next_vsi)\n-\t\tpf->next_vsi = vsi->idx;\n-\n-\tice_vsi_free_arrays(vsi, true);\n-\tmutex_unlock(&pf->sw_mutex);\n-\tdevm_kfree(&pf->pdev->dev, vsi);\n-\n-\treturn 0;\n-}\n-\n /**\n * ice_vsi_alloc_q_vector - Allocate memory for a single interrupt vector\n * @vsi: the VSI being configured\n@@ -3733,19 +3614,6 @@ static int ice_vsi_open(struct ice_vsi *vsi)\n \treturn err;\n }\n \n-/**\n- * ice_vsi_close - Shut down a VSI\n- * @vsi: the VSI being shut down\n- */\n-static void ice_vsi_close(struct ice_vsi *vsi)\n-{\n-\tif (!test_and_set_bit(__ICE_DOWN, 
vsi->state))\n-\t\tice_down(vsi);\n-\n-\tice_vsi_free_irq(vsi);\n-\tice_vsi_free_tx_rings(vsi);\n-\tice_vsi_free_rx_rings(vsi);\n-}\n \n /**\n * ice_rss_clean - Delete RSS related VSI structures that hold user inputs\n", "prefixes": [ "05/16" ] }
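A response like the one above is plain JSON and easy to consume in a script. This sketch parses a trimmed copy of the payload shown (only a few of the many fields; the full object also carries "headers", "diff", "series", "checks", and so on) and pulls out the fields a triage script typically needs:

```python
import json

# Trimmed sample of the GET response shown above.
sample = """
{
  "id": 972046,
  "name": "[05/16] ice: Move common functions out of ice_main.c part 5/7",
  "state": "accepted",
  "submitter": {"name": "Anirudh Venkataramanan",
                "email": "anirudh.venkataramanan@intel.com"},
  "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20180920002319.10971-6-anirudh.venkataramanan@intel.com/mbox/"
}
"""

patch = json.loads(sample)

# One-line summary: id, review state, subject.
summary = f"#{patch['id']} [{patch['state']}] {patch['name']}"
print(summary)

# The "mbox" URL serves the raw patch, suitable for `git am`.
print("mbox:", patch["mbox"])
```

Against the live server, the same dict would come from `json.load(urllib.request.urlopen("http://patchwork.ozlabs.org/api/patches/972046/"))`.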