get:
Show a patch.

patch:
Update a patch (partial update; only the submitted fields are changed).

put:
Update a patch (full update).
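
For illustration, a minimal Python sketch of how these endpoints might be exercised with the requests library. The patch ID and the state/archived fields come from the example response below; the Token authorization header and the YOUR_API_TOKEN placeholder are assumptions (write access requires credentials, and the exact auth scheme may vary by Patchwork instance).

import requests

BASE = "http://patchwork.ozlabs.org/api"
PATCH_ID = 972053

# GET: read-only access needs no credentials.
resp = requests.get(f"{BASE}/patches/{PATCH_ID}/")
resp.raise_for_status()
patch = resp.json()
print(patch["name"], patch["state"])

# PATCH: partial update of writable fields; requires authentication.
# The header below assumes a DRF-style token scheme (an assumption).
headers = {"Authorization": "Token YOUR_API_TOKEN"}
resp = requests.patch(
    f"{BASE}/patches/{PATCH_ID}/",
    json={"state": "accepted", "archived": False},
    headers=headers,
)
resp.raise_for_status()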

GET /api/patches/972053/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 972053,
    "url": "http://patchwork.ozlabs.org/api/patches/972053/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20180920002319.10971-7-anirudh.venkataramanan@intel.com/",
    "project": {
        "id": 46,
        "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api",
        "name": "Intel Wired Ethernet development",
        "link_name": "intel-wired-lan",
        "list_id": "intel-wired-lan.osuosl.org",
        "list_email": "intel-wired-lan@osuosl.org",
        "web_url": "",
        "scm_url": "",
        "webscm_url": "",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20180920002319.10971-7-anirudh.venkataramanan@intel.com>",
    "list_archive_url": null,
    "date": "2018-09-20T00:23:09",
    "name": "[06/16] ice: Move common functions out of ice_main.c part 6/7",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": false,
    "hash": "91ccf7b6fe2e3eeac38e5ff0aa7045544f1f435d",
    "submitter": {
        "id": 73601,
        "url": "http://patchwork.ozlabs.org/api/people/73601/?format=api",
        "name": "Anirudh Venkataramanan",
        "email": "anirudh.venkataramanan@intel.com"
    },
    "delegate": {
        "id": 68,
        "url": "http://patchwork.ozlabs.org/api/users/68/?format=api",
        "username": "jtkirshe",
        "first_name": "Jeff",
        "last_name": "Kirsher",
        "email": "jeffrey.t.kirsher@intel.com"
    },
    "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20180920002319.10971-7-anirudh.venkataramanan@intel.com/mbox/",
    "series": [
        {
            "id": 66525,
            "url": "http://patchwork.ozlabs.org/api/series/66525/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=66525",
            "date": "2018-09-20T00:23:03",
            "name": "Implementation updates for ice",
            "version": 1,
            "mbox": "http://patchwork.ozlabs.org/series/66525/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/972053/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/972053/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<intel-wired-lan-bounces@osuosl.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@bilbo.ozlabs.org",
            "intel-wired-lan@lists.osuosl.org"
        ],
        "Authentication-Results": [
            "ozlabs.org;\n\tspf=pass (mailfrom) smtp.mailfrom=osuosl.org\n\t(client-ip=140.211.166.138; helo=whitealder.osuosl.org;\n\tenvelope-from=intel-wired-lan-bounces@osuosl.org;\n\treceiver=<UNKNOWN>)",
            "ozlabs.org;\n\tdmarc=fail (p=none dis=none) header.from=intel.com"
        ],
        "Received": [
            "from whitealder.osuosl.org (smtp1.osuosl.org [140.211.166.138])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 42FyC23DXHz9s9J\n\tfor <incoming@patchwork.ozlabs.org>;\n\tThu, 20 Sep 2018 10:23:58 +1000 (AEST)",
            "from localhost (localhost [127.0.0.1])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id F23F288209;\n\tThu, 20 Sep 2018 00:23:56 +0000 (UTC)",
            "from whitealder.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id RUz2aX+D3cKx; Thu, 20 Sep 2018 00:23:42 +0000 (UTC)",
            "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby whitealder.osuosl.org (Postfix) with ESMTP id 8C875882DE;\n\tThu, 20 Sep 2018 00:23:34 +0000 (UTC)",
            "from silver.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n\tby ash.osuosl.org (Postfix) with ESMTP id 772FC1C08AF\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 20 Sep 2018 00:23:30 +0000 (UTC)",
            "from localhost (localhost [127.0.0.1])\n\tby silver.osuosl.org (Postfix) with ESMTP id 73544227F5\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 20 Sep 2018 00:23:30 +0000 (UTC)",
            "from silver.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id DAghWKPpIOX2 for <intel-wired-lan@lists.osuosl.org>;\n\tThu, 20 Sep 2018 00:23:24 +0000 (UTC)",
            "from mga05.intel.com (mga05.intel.com [192.55.52.43])\n\tby silver.osuosl.org (Postfix) with ESMTPS id 96AE530937\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 20 Sep 2018 00:23:24 +0000 (UTC)",
            "from fmsmga006.fm.intel.com ([10.253.24.20])\n\tby fmsmga105.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t19 Sep 2018 17:23:23 -0700",
            "from shasta.jf.intel.com ([10.166.241.11])\n\tby fmsmga006.fm.intel.com with ESMTP; 19 Sep 2018 17:23:19 -0700"
        ],
        "X-Virus-Scanned": [
            "amavisd-new at osuosl.org",
            "amavisd-new at osuosl.org"
        ],
        "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6",
        "X-Amp-Result": "SKIPPED(no attachment in message)",
        "X-Amp-File-Uploaded": "False",
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.53,396,1531810800\"; d=\"scan'208\";a=\"265057696\"",
        "From": "Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>",
        "To": "intel-wired-lan@lists.osuosl.org",
        "Date": "Wed, 19 Sep 2018 17:23:09 -0700",
        "Message-Id": "<20180920002319.10971-7-anirudh.venkataramanan@intel.com>",
        "X-Mailer": "git-send-email 2.14.3",
        "In-Reply-To": "<20180920002319.10971-1-anirudh.venkataramanan@intel.com>",
        "References": "<20180920002319.10971-1-anirudh.venkataramanan@intel.com>",
        "Subject": "[Intel-wired-lan] [PATCH 06/16] ice: Move common functions out of\n\tice_main.c part 6/7",
        "X-BeenThere": "intel-wired-lan@osuosl.org",
        "X-Mailman-Version": "2.1.24",
        "Precedence": "list",
        "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.osuosl.org>",
        "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>",
        "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>",
        "List-Post": "<mailto:intel-wired-lan@osuosl.org>",
        "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>",
        "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain; charset=\"us-ascii\"",
        "Content-Transfer-Encoding": "7bit",
        "Errors-To": "intel-wired-lan-bounces@osuosl.org",
        "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>"
    },
    "content": "This patch continues the code move out of ice_main.c\n\nThe following top level functions (and related dependency functions) were\nmoved to ice_lib.c:\nice_vsi_setup_vector_base\nice_vsi_alloc_q_vectors\nice_vsi_get_qs\n\nThe following functions were made static again:\nice_vsi_free_arrays\nice_vsi_clear_rings\n\nAlso, in this patch, the netdev and NAPI registration logic was de-coupled\nfrom the VSI creation logic (ice_vsi_setup) as for SR-IOV, while we want to\ncreate VF VSIs using ice_vsi_setup, we don't want to create netdevs.\n\nSigned-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>\n---\n drivers/net/ethernet/intel/ice/ice_lib.c  | 463 +++++++++++++++++++++++++++-\n drivers/net/ethernet/intel/ice/ice_lib.h  |  16 +-\n drivers/net/ethernet/intel/ice/ice_main.c | 492 +++---------------------------\n 3 files changed, 521 insertions(+), 450 deletions(-)",
    "diff": "diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c\nindex c9f82e351802..b17cba3ae887 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.c\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.c\n@@ -346,7 +346,7 @@ void ice_vsi_delete(struct ice_vsi *vsi)\n  * @vsi: pointer to VSI being cleared\n  * @free_qvectors: bool to specify if q_vectors should be deallocated\n  */\n-void ice_vsi_free_arrays(struct ice_vsi *vsi, bool free_qvectors)\n+static void ice_vsi_free_arrays(struct ice_vsi *vsi, bool free_qvectors)\n {\n \tstruct ice_pf *pf = vsi->back;\n \n@@ -423,6 +423,141 @@ irqreturn_t ice_msix_clean_rings(int __always_unused irq, void *data)\n \treturn IRQ_HANDLED;\n }\n \n+/**\n+ * ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI\n+ * @vsi: the VSI getting queues\n+ *\n+ * Return 0 on success and a negative value on error\n+ */\n+static int ice_vsi_get_qs_contig(struct ice_vsi *vsi)\n+{\n+\tstruct ice_pf *pf = vsi->back;\n+\tint offset, ret = 0;\n+\n+\tmutex_lock(&pf->avail_q_mutex);\n+\t/* look for contiguous block of queues for tx */\n+\toffset = bitmap_find_next_zero_area(pf->avail_txqs, ICE_MAX_TXQS,\n+\t\t\t\t\t    0, vsi->alloc_txq, 0);\n+\tif (offset < ICE_MAX_TXQS) {\n+\t\tint i;\n+\n+\t\tbitmap_set(pf->avail_txqs, offset, vsi->alloc_txq);\n+\t\tfor (i = 0; i < vsi->alloc_txq; i++)\n+\t\t\tvsi->txq_map[i] = i + offset;\n+\t} else {\n+\t\tret = -ENOMEM;\n+\t\tvsi->tx_mapping_mode = ICE_VSI_MAP_SCATTER;\n+\t}\n+\n+\t/* look for contiguous block of queues for rx */\n+\toffset = bitmap_find_next_zero_area(pf->avail_rxqs, ICE_MAX_RXQS,\n+\t\t\t\t\t    0, vsi->alloc_rxq, 0);\n+\tif (offset < ICE_MAX_RXQS) {\n+\t\tint i;\n+\n+\t\tbitmap_set(pf->avail_rxqs, offset, vsi->alloc_rxq);\n+\t\tfor (i = 0; i < vsi->alloc_rxq; i++)\n+\t\t\tvsi->rxq_map[i] = i + offset;\n+\t} else {\n+\t\tret = -ENOMEM;\n+\t\tvsi->rx_mapping_mode = ICE_VSI_MAP_SCATTER;\n+\t}\n+\tmutex_unlock(&pf->avail_q_mutex);\n+\n+\treturn ret;\n+}\n+\n+/**\n+ * ice_vsi_get_qs_scatter - Assign a scattered queues to VSI\n+ * @vsi: the VSI getting queues\n+ *\n+ * Return 0 on success and a negative value on error\n+ */\n+static int ice_vsi_get_qs_scatter(struct ice_vsi *vsi)\n+{\n+\tstruct ice_pf *pf = vsi->back;\n+\tint i, index = 0;\n+\n+\tmutex_lock(&pf->avail_q_mutex);\n+\n+\tif (vsi->tx_mapping_mode == ICE_VSI_MAP_SCATTER) {\n+\t\tfor (i = 0; i < vsi->alloc_txq; i++) {\n+\t\t\tindex = find_next_zero_bit(pf->avail_txqs,\n+\t\t\t\t\t\t   ICE_MAX_TXQS, index);\n+\t\t\tif (index < ICE_MAX_TXQS) {\n+\t\t\t\tset_bit(index, pf->avail_txqs);\n+\t\t\t\tvsi->txq_map[i] = index;\n+\t\t\t} else {\n+\t\t\t\tgoto err_scatter_tx;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\tif (vsi->rx_mapping_mode == ICE_VSI_MAP_SCATTER) {\n+\t\tfor (i = 0; i < vsi->alloc_rxq; i++) {\n+\t\t\tindex = find_next_zero_bit(pf->avail_rxqs,\n+\t\t\t\t\t\t   ICE_MAX_RXQS, index);\n+\t\t\tif (index < ICE_MAX_RXQS) {\n+\t\t\t\tset_bit(index, pf->avail_rxqs);\n+\t\t\t\tvsi->rxq_map[i] = index;\n+\t\t\t} else {\n+\t\t\t\tgoto err_scatter_rx;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\tmutex_unlock(&pf->avail_q_mutex);\n+\treturn 0;\n+\n+err_scatter_rx:\n+\t/* unflag any queues we have grabbed (i is failed position) */\n+\tfor (index = 0; index < i; index++) {\n+\t\tclear_bit(vsi->rxq_map[index], pf->avail_rxqs);\n+\t\tvsi->rxq_map[index] = 0;\n+\t}\n+\ti = vsi->alloc_txq;\n+err_scatter_tx:\n+\t/* i is either position of failed attempt or vsi->alloc_txq */\n+\tfor (index = 0; index < i; index++) 
{\n+\t\tclear_bit(vsi->txq_map[index], pf->avail_txqs);\n+\t\tvsi->txq_map[index] = 0;\n+\t}\n+\n+\tmutex_unlock(&pf->avail_q_mutex);\n+\treturn -ENOMEM;\n+}\n+\n+/**\n+ * ice_vsi_get_qs - Assign queues from PF to VSI\n+ * @vsi: the VSI to assign queues to\n+ *\n+ * Returns 0 on success and a negative value on error\n+ */\n+int ice_vsi_get_qs(struct ice_vsi *vsi)\n+{\n+\tint ret = 0;\n+\n+\tvsi->tx_mapping_mode = ICE_VSI_MAP_CONTIG;\n+\tvsi->rx_mapping_mode = ICE_VSI_MAP_CONTIG;\n+\n+\t/* NOTE: ice_vsi_get_qs_contig() will set the rx/tx mapping\n+\t * modes individually to scatter if assigning contiguous queues\n+\t * to rx or tx fails\n+\t */\n+\tret = ice_vsi_get_qs_contig(vsi);\n+\tif (ret < 0) {\n+\t\tif (vsi->tx_mapping_mode == ICE_VSI_MAP_SCATTER)\n+\t\t\tvsi->alloc_txq = max_t(u16, vsi->alloc_txq,\n+\t\t\t\t\t       ICE_MAX_SCATTER_TXQS);\n+\t\tif (vsi->rx_mapping_mode == ICE_VSI_MAP_SCATTER)\n+\t\t\tvsi->alloc_rxq = max_t(u16, vsi->alloc_rxq,\n+\t\t\t\t\t       ICE_MAX_SCATTER_RXQS);\n+\t\tret = ice_vsi_get_qs_scatter(vsi);\n+\t}\n+\n+\treturn ret;\n+}\n+\n /**\n  * ice_vsi_put_qs - Release queues from VSI to PF\n  * @vsi: the VSI thats going to release queues\n@@ -447,6 +582,22 @@ void ice_vsi_put_qs(struct ice_vsi *vsi)\n \tmutex_unlock(&pf->avail_q_mutex);\n }\n \n+/**\n+ * ice_rss_clean - Delete RSS related VSI structures that hold user inputs\n+ * @vsi: the VSI being removed\n+ */\n+static void ice_rss_clean(struct ice_vsi *vsi)\n+{\n+\tstruct ice_pf *pf;\n+\n+\tpf = vsi->back;\n+\n+\tif (vsi->rss_hkey_user)\n+\t\tdevm_kfree(&pf->pdev->dev, vsi->rss_hkey_user);\n+\tif (vsi->rss_lut_user)\n+\t\tdevm_kfree(&pf->pdev->dev, vsi->rss_lut_user);\n+}\n+\n /**\n  * ice_vsi_set_rss_params - Setup RSS capabilities per VSI type\n  * @vsi: the VSI being configured\n@@ -685,11 +836,183 @@ int ice_vsi_init(struct ice_vsi *vsi)\n \treturn ret;\n }\n \n+/**\n+ * ice_free_q_vector - Free memory allocated for a specific interrupt vector\n+ * @vsi: VSI having the memory freed\n+ * @v_idx: index of the vector to be freed\n+ */\n+static void ice_free_q_vector(struct ice_vsi *vsi, int v_idx)\n+{\n+\tstruct ice_q_vector *q_vector;\n+\tstruct ice_ring *ring;\n+\n+\tif (!vsi->q_vectors[v_idx]) {\n+\t\tdev_dbg(&vsi->back->pdev->dev, \"Queue vector at index %d not found\\n\",\n+\t\t\tv_idx);\n+\t\treturn;\n+\t}\n+\tq_vector = vsi->q_vectors[v_idx];\n+\n+\tice_for_each_ring(ring, q_vector->tx)\n+\t\tring->q_vector = NULL;\n+\tice_for_each_ring(ring, q_vector->rx)\n+\t\tring->q_vector = NULL;\n+\n+\t/* only VSI with an associated netdev is set up with NAPI */\n+\tif (vsi->netdev)\n+\t\tnetif_napi_del(&q_vector->napi);\n+\n+\tdevm_kfree(&vsi->back->pdev->dev, q_vector);\n+\tvsi->q_vectors[v_idx] = NULL;\n+}\n+\n+/**\n+ * ice_vsi_free_q_vectors - Free memory allocated for interrupt vectors\n+ * @vsi: the VSI having memory freed\n+ */\n+void ice_vsi_free_q_vectors(struct ice_vsi *vsi)\n+{\n+\tint v_idx;\n+\n+\tfor (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++)\n+\t\tice_free_q_vector(vsi, v_idx);\n+}\n+\n+/**\n+ * ice_vsi_alloc_q_vector - Allocate memory for a single interrupt vector\n+ * @vsi: the VSI being configured\n+ * @v_idx: index of the vector in the vsi struct\n+ *\n+ * We allocate one q_vector.  
If allocation fails we return -ENOMEM.\n+ */\n+static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, int v_idx)\n+{\n+\tstruct ice_pf *pf = vsi->back;\n+\tstruct ice_q_vector *q_vector;\n+\n+\t/* allocate q_vector */\n+\tq_vector = devm_kzalloc(&pf->pdev->dev, sizeof(*q_vector), GFP_KERNEL);\n+\tif (!q_vector)\n+\t\treturn -ENOMEM;\n+\n+\tq_vector->vsi = vsi;\n+\tq_vector->v_idx = v_idx;\n+\t/* only set affinity_mask if the CPU is online */\n+\tif (cpu_online(v_idx))\n+\t\tcpumask_set_cpu(v_idx, &q_vector->affinity_mask);\n+\n+\t/* This will not be called in the driver load path because the netdev\n+\t * will not be created yet. All other cases with register the NAPI\n+\t * handler here (i.e. resume, reset/rebuild, etc.)\n+\t */\n+\tif (vsi->netdev)\n+\t\tnetif_napi_add(vsi->netdev, &q_vector->napi, ice_napi_poll,\n+\t\t\t       NAPI_POLL_WEIGHT);\n+\n+\t/* tie q_vector and vsi together */\n+\tvsi->q_vectors[v_idx] = q_vector;\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * ice_vsi_alloc_q_vectors - Allocate memory for interrupt vectors\n+ * @vsi: the VSI being configured\n+ *\n+ * We allocate one q_vector per queue interrupt.  If allocation fails we\n+ * return -ENOMEM.\n+ */\n+int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)\n+{\n+\tstruct ice_pf *pf = vsi->back;\n+\tint v_idx = 0, num_q_vectors;\n+\tint err;\n+\n+\tif (vsi->q_vectors[0]) {\n+\t\tdev_dbg(&pf->pdev->dev, \"VSI %d has existing q_vectors\\n\",\n+\t\t\tvsi->vsi_num);\n+\t\treturn -EEXIST;\n+\t}\n+\n+\tif (test_bit(ICE_FLAG_MSIX_ENA, pf->flags)) {\n+\t\tnum_q_vectors = vsi->num_q_vectors;\n+\t} else {\n+\t\terr = -EINVAL;\n+\t\tgoto err_out;\n+\t}\n+\n+\tfor (v_idx = 0; v_idx < num_q_vectors; v_idx++) {\n+\t\terr = ice_vsi_alloc_q_vector(vsi, v_idx);\n+\t\tif (err)\n+\t\t\tgoto err_out;\n+\t}\n+\n+\treturn 0;\n+\n+err_out:\n+\twhile (v_idx--)\n+\t\tice_free_q_vector(vsi, v_idx);\n+\n+\tdev_err(&pf->pdev->dev,\n+\t\t\"Failed to allocate %d q_vector for VSI %d, ret=%d\\n\",\n+\t\tvsi->num_q_vectors, vsi->vsi_num, err);\n+\tvsi->num_q_vectors = 0;\n+\treturn err;\n+}\n+\n+/**\n+ * ice_vsi_setup_vector_base - Set up the base vector for the given VSI\n+ * @vsi: ptr to the VSI\n+ *\n+ * This should only be called after ice_vsi_alloc() which allocates the\n+ * corresponding SW VSI structure and initializes num_queue_pairs for the\n+ * newly allocated VSI.\n+ *\n+ * Returns 0 on success or negative on failure\n+ */\n+int ice_vsi_setup_vector_base(struct ice_vsi *vsi)\n+{\n+\tstruct ice_pf *pf = vsi->back;\n+\tint num_q_vectors = 0;\n+\n+\tif (vsi->base_vector) {\n+\t\tdev_dbg(&pf->pdev->dev, \"VSI %d has non-zero base vector %d\\n\",\n+\t\t\tvsi->vsi_num, vsi->base_vector);\n+\t\treturn -EEXIST;\n+\t}\n+\n+\tif (!test_bit(ICE_FLAG_MSIX_ENA, pf->flags))\n+\t\treturn -ENOENT;\n+\n+\tswitch (vsi->type) {\n+\tcase ICE_VSI_PF:\n+\t\tnum_q_vectors = vsi->num_q_vectors;\n+\t\tbreak;\n+\tdefault:\n+\t\tdev_warn(&vsi->back->pdev->dev, \"Unknown VSI type %d\\n\",\n+\t\t\t vsi->type);\n+\t\tbreak;\n+\t}\n+\n+\tif (num_q_vectors)\n+\t\tvsi->base_vector = ice_get_res(pf, pf->irq_tracker,\n+\t\t\t\t\t       num_q_vectors, vsi->idx);\n+\n+\tif (vsi->base_vector < 0) {\n+\t\tdev_err(&pf->pdev->dev,\n+\t\t\t\"Failed to get tracking for %d vectors for VSI %d, err=%d\\n\",\n+\t\t\tnum_q_vectors, vsi->vsi_num, vsi->base_vector);\n+\t\treturn -ENOENT;\n+\t}\n+\n+\treturn 0;\n+}\n+\n /**\n  * ice_vsi_clear_rings - Deallocates the Tx and Rx rings for VSI\n  * @vsi: the VSI having rings deallocated\n  */\n-void ice_vsi_clear_rings(struct ice_vsi *vsi)\n+static void 
ice_vsi_clear_rings(struct ice_vsi *vsi)\n {\n \tint i;\n \n@@ -1674,6 +1997,142 @@ void ice_vsi_dis_irq(struct ice_vsi *vsi)\n \t}\n }\n \n+/**\n+ * ice_vsi_release - Delete a VSI and free its resources\n+ * @vsi: the VSI being removed\n+ *\n+ * Returns 0 on success or < 0 on error\n+ */\n+int ice_vsi_release(struct ice_vsi *vsi)\n+{\n+\tstruct ice_pf *pf;\n+\n+\tif (!vsi->back)\n+\t\treturn -ENODEV;\n+\tpf = vsi->back;\n+\t/* do not unregister and free netdevs while driver is in the reset\n+\t * recovery pending state. Since reset/rebuild happens through PF\n+\t * service task workqueue, its not a good idea to unregister netdev\n+\t * that is associated to the PF that is running the work queue items\n+\t * currently. This is done to avoid check_flush_dependency() warning\n+\t * on this wq\n+\t */\n+\tif (vsi->netdev && !ice_is_reset_recovery_pending(pf->state)) {\n+\t\tunregister_netdev(vsi->netdev);\n+\t\tfree_netdev(vsi->netdev);\n+\t\tvsi->netdev = NULL;\n+\t}\n+\n+\tif (test_bit(ICE_FLAG_RSS_ENA, pf->flags))\n+\t\tice_rss_clean(vsi);\n+\n+\t/* Disable VSI and free resources */\n+\tice_vsi_dis_irq(vsi);\n+\tice_vsi_close(vsi);\n+\n+\t/* reclaim interrupt vectors back to PF */\n+\tice_free_res(vsi->back->irq_tracker, vsi->base_vector, vsi->idx);\n+\tpf->num_avail_msix += vsi->num_q_vectors;\n+\n+\tice_remove_vsi_fltr(&pf->hw, vsi->vsi_num);\n+\tice_vsi_delete(vsi);\n+\tice_vsi_free_q_vectors(vsi);\n+\tice_vsi_clear_rings(vsi);\n+\n+\tice_vsi_put_qs(vsi);\n+\tpf->q_left_tx += vsi->alloc_txq;\n+\tpf->q_left_rx += vsi->alloc_rxq;\n+\n+\t/* retain SW VSI data structure since it is needed to unregister and\n+\t * free VSI netdev when PF is not in reset recovery pending state,\\\n+\t * for ex: during rmmod.\n+\t */\n+\tif (!ice_is_reset_recovery_pending(pf->state))\n+\t\tice_vsi_clear(vsi);\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * ice_vsi_rebuild - Rebuild VSI after reset\n+ * @vsi: vsi to be rebuild\n+ *\n+ * Returns 0 on success and negative value on failure\n+ */\n+int ice_vsi_rebuild(struct ice_vsi *vsi)\n+{\n+\tu16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };\n+\tint ret, i;\n+\n+\tif (!vsi)\n+\t\treturn -EINVAL;\n+\n+\tice_vsi_free_q_vectors(vsi);\n+\tice_free_res(vsi->back->irq_tracker, vsi->base_vector, vsi->idx);\n+\tvsi->base_vector = 0;\n+\tice_vsi_clear_rings(vsi);\n+\tice_vsi_free_arrays(vsi, false);\n+\tice_vsi_set_num_qs(vsi);\n+\n+\t/* Initialize VSI struct elements and create VSI in FW */\n+\tret = ice_vsi_init(vsi);\n+\tif (ret < 0)\n+\t\tgoto err_vsi;\n+\n+\tret = ice_vsi_alloc_arrays(vsi, false);\n+\tif (ret < 0)\n+\t\tgoto err_vsi;\n+\n+\tswitch (vsi->type) {\n+\tcase ICE_VSI_PF:\n+\t\tret = ice_vsi_alloc_q_vectors(vsi);\n+\t\tif (ret)\n+\t\t\tgoto err_rings;\n+\n+\t\tret = ice_vsi_setup_vector_base(vsi);\n+\t\tif (ret)\n+\t\t\tgoto err_vectors;\n+\n+\t\tret = ice_vsi_alloc_rings(vsi);\n+\t\tif (ret)\n+\t\t\tgoto err_vectors;\n+\n+\t\tice_vsi_map_rings_to_vectors(vsi);\n+\t\tbreak;\n+\tdefault:\n+\t\tbreak;\n+\t}\n+\n+\tice_vsi_set_tc_cfg(vsi);\n+\n+\t/* configure VSI nodes based on number of queues and TC's */\n+\tfor (i = 0; i < vsi->tc_cfg.numtc; i++)\n+\t\tmax_txqs[i] = vsi->num_txq;\n+\n+\tret = ice_cfg_vsi_lan(vsi->port_info, vsi->vsi_num,\n+\t\t\t      vsi->tc_cfg.ena_tc, max_txqs);\n+\tif (ret) {\n+\t\tdev_info(&vsi->back->pdev->dev,\n+\t\t\t \"Failed VSI lan queue config\\n\");\n+\t\tgoto err_vectors;\n+\t}\n+\treturn 0;\n+\n+err_vectors:\n+\tice_vsi_free_q_vectors(vsi);\n+err_rings:\n+\tif (vsi->netdev) {\n+\t\tvsi->current_netdev_flags = 
0;\n+\t\tunregister_netdev(vsi->netdev);\n+\t\tfree_netdev(vsi->netdev);\n+\t\tvsi->netdev = NULL;\n+\t}\n+err_vsi:\n+\tice_vsi_clear(vsi);\n+\tset_bit(__ICE_RESET_FAILED, vsi->back->state);\n+\treturn ret;\n+}\n+\n /**\n  * ice_is_reset_recovery_pending - schedule a reset\n  * @state: pf state field\ndiff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h\nindex 002bbca8e7ea..aaab3fc4b018 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.h\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.h\n@@ -6,6 +6,12 @@\n \n #include \"ice.h\"\n \n+int ice_vsi_setup_vector_base(struct ice_vsi *vsi);\n+\n+int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi);\n+\n+int ice_vsi_get_qs(struct ice_vsi *vsi);\n+\n void ice_vsi_map_rings_to_vectors(struct ice_vsi *vsi);\n \n int ice_vsi_alloc_rings(struct ice_vsi *vsi);\n@@ -18,10 +24,6 @@ int ice_get_free_slot(void *array, int size, int curr);\n \n int ice_vsi_init(struct ice_vsi *vsi);\n \n-void ice_vsi_free_arrays(struct ice_vsi *vsi, bool free_qvectors);\n-\n-void ice_vsi_clear_rings(struct ice_vsi *vsi);\n-\n int ice_vsi_alloc_arrays(struct ice_vsi *vsi, bool alloc_qvectors);\n \n int ice_add_mac_to_list(struct ice_vsi *vsi, struct list_head *add_list,\n@@ -57,6 +59,8 @@ void ice_vsi_delete(struct ice_vsi *vsi);\n \n int ice_vsi_clear(struct ice_vsi *vsi);\n \n+int ice_vsi_release(struct ice_vsi *vsi);\n+\n void ice_vsi_close(struct ice_vsi *vsi);\n \n int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id);\n@@ -64,8 +68,12 @@ int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id);\n int\n ice_get_res(struct ice_pf *pf, struct ice_res_tracker *res, u16 needed, u16 id);\n \n+int ice_vsi_rebuild(struct ice_vsi *vsi);\n+\n bool ice_is_reset_recovery_pending(unsigned long *state);\n \n+void ice_vsi_free_q_vectors(struct ice_vsi *vsi);\n+\n void ice_vsi_put_qs(struct ice_vsi *vsi);\n \n void ice_vsi_dis_irq(struct ice_vsi *vsi);\ndiff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c\nindex 40f4e0c9b722..2741ba729ab5 100644\n--- a/drivers/net/ethernet/intel/ice/ice_main.c\n+++ b/drivers/net/ethernet/intel/ice/ice_main.c\n@@ -32,7 +32,6 @@ static const struct net_device_ops ice_netdev_ops;\n \n static void ice_pf_dis_all_vsi(struct ice_pf *pf);\n static void ice_rebuild(struct ice_pf *pf);\n-static int ice_vsi_release(struct ice_vsi *vsi);\n \n static void ice_vsi_release_all(struct ice_pf *pf);\n static void ice_update_vsi_stats(struct ice_vsi *vsi);\n@@ -1465,185 +1464,43 @@ static int ice_req_irq_msix_misc(struct ice_pf *pf)\n }\n \n /**\n- * ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI\n- * @vsi: the VSI getting queues\n- *\n- * Return 0 on success and a negative value on error\n- */\n-static int ice_vsi_get_qs_contig(struct ice_vsi *vsi)\n-{\n-\tstruct ice_pf *pf = vsi->back;\n-\tint offset, ret = 0;\n-\n-\tmutex_lock(&pf->avail_q_mutex);\n-\t/* look for contiguous block of queues for tx */\n-\toffset = bitmap_find_next_zero_area(pf->avail_txqs, ICE_MAX_TXQS,\n-\t\t\t\t\t    0, vsi->alloc_txq, 0);\n-\tif (offset < ICE_MAX_TXQS) {\n-\t\tint i;\n-\n-\t\tbitmap_set(pf->avail_txqs, offset, vsi->alloc_txq);\n-\t\tfor (i = 0; i < vsi->alloc_txq; i++)\n-\t\t\tvsi->txq_map[i] = i + offset;\n-\t} else {\n-\t\tret = -ENOMEM;\n-\t\tvsi->tx_mapping_mode = ICE_VSI_MAP_SCATTER;\n-\t}\n-\n-\t/* look for contiguous block of queues for rx */\n-\toffset = bitmap_find_next_zero_area(pf->avail_rxqs, ICE_MAX_RXQS,\n-\t\t\t\t\t    0, vsi->alloc_rxq, 
0);\n-\tif (offset < ICE_MAX_RXQS) {\n-\t\tint i;\n-\n-\t\tbitmap_set(pf->avail_rxqs, offset, vsi->alloc_rxq);\n-\t\tfor (i = 0; i < vsi->alloc_rxq; i++)\n-\t\t\tvsi->rxq_map[i] = i + offset;\n-\t} else {\n-\t\tret = -ENOMEM;\n-\t\tvsi->rx_mapping_mode = ICE_VSI_MAP_SCATTER;\n-\t}\n-\tmutex_unlock(&pf->avail_q_mutex);\n-\n-\treturn ret;\n-}\n-\n-/**\n- * ice_vsi_get_qs_scatter - Assign a scattered queues to VSI\n- * @vsi: the VSI getting queues\n- *\n- * Return 0 on success and a negative value on error\n+ * ice_napi_del - Remove NAPI handler for the VSI\n+ * @vsi: VSI for which NAPI handler is to be removed\n  */\n-static int ice_vsi_get_qs_scatter(struct ice_vsi *vsi)\n+static void ice_napi_del(struct ice_vsi *vsi)\n {\n-\tstruct ice_pf *pf = vsi->back;\n-\tint i, index = 0;\n-\n-\tmutex_lock(&pf->avail_q_mutex);\n-\n-\tif (vsi->tx_mapping_mode == ICE_VSI_MAP_SCATTER) {\n-\t\tfor (i = 0; i < vsi->alloc_txq; i++) {\n-\t\t\tindex = find_next_zero_bit(pf->avail_txqs,\n-\t\t\t\t\t\t   ICE_MAX_TXQS, index);\n-\t\t\tif (index < ICE_MAX_TXQS) {\n-\t\t\t\tset_bit(index, pf->avail_txqs);\n-\t\t\t\tvsi->txq_map[i] = index;\n-\t\t\t} else {\n-\t\t\t\tgoto err_scatter_tx;\n-\t\t\t}\n-\t\t}\n-\t}\n-\n-\tif (vsi->rx_mapping_mode == ICE_VSI_MAP_SCATTER) {\n-\t\tfor (i = 0; i < vsi->alloc_rxq; i++) {\n-\t\t\tindex = find_next_zero_bit(pf->avail_rxqs,\n-\t\t\t\t\t\t   ICE_MAX_RXQS, index);\n-\t\t\tif (index < ICE_MAX_RXQS) {\n-\t\t\t\tset_bit(index, pf->avail_rxqs);\n-\t\t\t\tvsi->rxq_map[i] = index;\n-\t\t\t} else {\n-\t\t\t\tgoto err_scatter_rx;\n-\t\t\t}\n-\t\t}\n-\t}\n-\n-\tmutex_unlock(&pf->avail_q_mutex);\n-\treturn 0;\n+\tint v_idx;\n \n-err_scatter_rx:\n-\t/* unflag any queues we have grabbed (i is failed position) */\n-\tfor (index = 0; index < i; index++) {\n-\t\tclear_bit(vsi->rxq_map[index], pf->avail_rxqs);\n-\t\tvsi->rxq_map[index] = 0;\n-\t}\n-\ti = vsi->alloc_txq;\n-err_scatter_tx:\n-\t/* i is either position of failed attempt or vsi->alloc_txq */\n-\tfor (index = 0; index < i; index++) {\n-\t\tclear_bit(vsi->txq_map[index], pf->avail_txqs);\n-\t\tvsi->txq_map[index] = 0;\n-\t}\n+\tif (!vsi->netdev)\n+\t\treturn;\n \n-\tmutex_unlock(&pf->avail_q_mutex);\n-\treturn -ENOMEM;\n+\tfor (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++)\n+\t\tnetif_napi_del(&vsi->q_vectors[v_idx]->napi);\n }\n \n /**\n- * ice_vsi_get_qs - Assign queues from PF to VSI\n- * @vsi: the VSI to assign queues to\n+ * ice_napi_add - register NAPI handler for the VSI\n+ * @vsi: VSI for which NAPI handler is to be registered\n  *\n- * Returns 0 on success and a negative value on error\n+ * This function is only called in the driver's load path. Registering the NAPI\n+ * handler is done in ice_vsi_alloc_q_vector() for all other cases (i.e. 
resume,\n+ * reset/rebuild, etc.)\n  */\n-static int ice_vsi_get_qs(struct ice_vsi *vsi)\n+static void ice_napi_add(struct ice_vsi *vsi)\n {\n-\tint ret = 0;\n-\n-\tvsi->tx_mapping_mode = ICE_VSI_MAP_CONTIG;\n-\tvsi->rx_mapping_mode = ICE_VSI_MAP_CONTIG;\n-\n-\t/* NOTE: ice_vsi_get_qs_contig() will set the rx/tx mapping\n-\t * modes individually to scatter if assigning contiguous queues\n-\t * to rx or tx fails\n-\t */\n-\tret = ice_vsi_get_qs_contig(vsi);\n-\tif (ret < 0) {\n-\t\tif (vsi->tx_mapping_mode == ICE_VSI_MAP_SCATTER)\n-\t\t\tvsi->alloc_txq = max_t(u16, vsi->alloc_txq,\n-\t\t\t\t\t       ICE_MAX_SCATTER_TXQS);\n-\t\tif (vsi->rx_mapping_mode == ICE_VSI_MAP_SCATTER)\n-\t\t\tvsi->alloc_rxq = max_t(u16, vsi->alloc_rxq,\n-\t\t\t\t\t       ICE_MAX_SCATTER_RXQS);\n-\t\tret = ice_vsi_get_qs_scatter(vsi);\n-\t}\n-\n-\treturn ret;\n-}\n-\n-/**\n- * ice_free_q_vector - Free memory allocated for a specific interrupt vector\n- * @vsi: VSI having the memory freed\n- * @v_idx: index of the vector to be freed\n- */\n-static void ice_free_q_vector(struct ice_vsi *vsi, int v_idx)\n-{\n-\tstruct ice_q_vector *q_vector;\n-\tstruct ice_ring *ring;\n+\tint v_idx;\n \n-\tif (!vsi->q_vectors[v_idx]) {\n-\t\tdev_dbg(&vsi->back->pdev->dev, \"Queue vector at index %d not found\\n\",\n-\t\t\tv_idx);\n+\tif (!vsi->netdev)\n \t\treturn;\n-\t}\n-\tq_vector = vsi->q_vectors[v_idx];\n-\n-\tice_for_each_ring(ring, q_vector->tx)\n-\t\tring->q_vector = NULL;\n-\tice_for_each_ring(ring, q_vector->rx)\n-\t\tring->q_vector = NULL;\n-\n-\t/* only VSI with an associated netdev is set up with NAPI */\n-\tif (vsi->netdev)\n-\t\tnetif_napi_del(&q_vector->napi);\n-\n-\tdevm_kfree(&vsi->back->pdev->dev, q_vector);\n-\tvsi->q_vectors[v_idx] = NULL;\n-}\n-\n-/**\n- * ice_vsi_free_q_vectors - Free memory allocated for interrupt vectors\n- * @vsi: the VSI having memory freed\n- */\n-static void ice_vsi_free_q_vectors(struct ice_vsi *vsi)\n-{\n-\tint v_idx;\n \n \tfor (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++)\n-\t\tice_free_q_vector(vsi, v_idx);\n+\t\tnetif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx]->napi,\n+\t\t\t       ice_napi_poll, NAPI_POLL_WEIGHT);\n }\n \n /**\n- * ice_cfg_netdev - Setup the netdev flags\n- * @vsi: the VSI being configured\n+ * ice_cfg_netdev - Allocate, configure and register a netdev\n+ * @vsi: the VSI associated with the new netdev\n  *\n  * Returns 0 on success, negative value on failure\n  */\n@@ -1656,6 +1513,7 @@ static int ice_cfg_netdev(struct ice_vsi *vsi)\n \tstruct ice_netdev_priv *np;\n \tstruct net_device *netdev;\n \tu8 mac_addr[ETH_ALEN];\n+\tint err;\n \n \tnetdev = alloc_etherdev_mqs(sizeof(struct ice_netdev_priv),\n \t\t\t\t    vsi->alloc_txq, vsi->alloc_rxq);\n@@ -1713,130 +1571,14 @@ static int ice_cfg_netdev(struct ice_vsi *vsi)\n \tnetdev->min_mtu = ETH_MIN_MTU;\n \tnetdev->max_mtu = ICE_MAX_MTU;\n \n-\treturn 0;\n-}\n-\n-/**\n- * ice_vsi_alloc_q_vector - Allocate memory for a single interrupt vector\n- * @vsi: the VSI being configured\n- * @v_idx: index of the vector in the vsi struct\n- *\n- * We allocate one q_vector.  
If allocation fails we return -ENOMEM.\n- */\n-static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, int v_idx)\n-{\n-\tstruct ice_pf *pf = vsi->back;\n-\tstruct ice_q_vector *q_vector;\n-\n-\t/* allocate q_vector */\n-\tq_vector = devm_kzalloc(&pf->pdev->dev, sizeof(*q_vector), GFP_KERNEL);\n-\tif (!q_vector)\n-\t\treturn -ENOMEM;\n-\n-\tq_vector->vsi = vsi;\n-\tq_vector->v_idx = v_idx;\n-\t/* only set affinity_mask if the CPU is online */\n-\tif (cpu_online(v_idx))\n-\t\tcpumask_set_cpu(v_idx, &q_vector->affinity_mask);\n-\n-\tif (vsi->netdev)\n-\t\tnetif_napi_add(vsi->netdev, &q_vector->napi, ice_napi_poll,\n-\t\t\t       NAPI_POLL_WEIGHT);\n-\t/* tie q_vector and vsi together */\n-\tvsi->q_vectors[v_idx] = q_vector;\n-\n-\treturn 0;\n-}\n-\n-/**\n- * ice_vsi_alloc_q_vectors - Allocate memory for interrupt vectors\n- * @vsi: the VSI being configured\n- *\n- * We allocate one q_vector per queue interrupt.  If allocation fails we\n- * return -ENOMEM.\n- */\n-static int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)\n-{\n-\tstruct ice_pf *pf = vsi->back;\n-\tint v_idx = 0, num_q_vectors;\n-\tint err;\n-\n-\tif (vsi->q_vectors[0]) {\n-\t\tdev_dbg(&pf->pdev->dev, \"VSI %d has existing q_vectors\\n\",\n-\t\t\tvsi->vsi_num);\n-\t\treturn -EEXIST;\n-\t}\n-\n-\tif (test_bit(ICE_FLAG_MSIX_ENA, pf->flags)) {\n-\t\tnum_q_vectors = vsi->num_q_vectors;\n-\t} else {\n-\t\terr = -EINVAL;\n-\t\tgoto err_out;\n-\t}\n-\n-\tfor (v_idx = 0; v_idx < num_q_vectors; v_idx++) {\n-\t\terr = ice_vsi_alloc_q_vector(vsi, v_idx);\n-\t\tif (err)\n-\t\t\tgoto err_out;\n-\t}\n-\n-\treturn 0;\n-\n-err_out:\n-\twhile (v_idx--)\n-\t\tice_free_q_vector(vsi, v_idx);\n-\n-\tdev_err(&pf->pdev->dev,\n-\t\t\"Failed to allocate %d q_vector for VSI %d, ret=%d\\n\",\n-\t\tvsi->num_q_vectors, vsi->vsi_num, err);\n-\tvsi->num_q_vectors = 0;\n-\treturn err;\n-}\n-\n-/**\n- * ice_vsi_setup_vector_base - Set up the base vector for the given VSI\n- * @vsi: ptr to the VSI\n- *\n- * This should only be called after ice_vsi_alloc() which allocates the\n- * corresponding SW VSI structure and initializes num_queue_pairs for the\n- * newly allocated VSI.\n- *\n- * Returns 0 on success or negative on failure\n- */\n-static int ice_vsi_setup_vector_base(struct ice_vsi *vsi)\n-{\n-\tstruct ice_pf *pf = vsi->back;\n-\tint num_q_vectors = 0;\n-\n-\tif (vsi->base_vector) {\n-\t\tdev_dbg(&pf->pdev->dev, \"VSI %d has non-zero base vector %d\\n\",\n-\t\t\tvsi->vsi_num, vsi->base_vector);\n-\t\treturn -EEXIST;\n-\t}\n-\n-\tif (!test_bit(ICE_FLAG_MSIX_ENA, pf->flags))\n-\t\treturn -ENOENT;\n-\n-\tswitch (vsi->type) {\n-\tcase ICE_VSI_PF:\n-\t\tnum_q_vectors = vsi->num_q_vectors;\n-\t\tbreak;\n-\tdefault:\n-\t\tdev_warn(&vsi->back->pdev->dev, \"Unknown VSI type %d\\n\",\n-\t\t\t vsi->type);\n-\t\tbreak;\n-\t}\n+\terr = register_netdev(vsi->netdev);\n+\tif (err)\n+\t\treturn err;\n \n-\tif (num_q_vectors)\n-\t\tvsi->base_vector = ice_get_res(pf, pf->irq_tracker,\n-\t\t\t\t\t       num_q_vectors, vsi->idx);\n+\tnetif_carrier_off(vsi->netdev);\n \n-\tif (vsi->base_vector < 0) {\n-\t\tdev_err(&pf->pdev->dev,\n-\t\t\t\"Failed to get tracking for %d vectors for VSI %d, err=%d\\n\",\n-\t\t\tnum_q_vectors, vsi->vsi_num, vsi->base_vector);\n-\t\treturn -ENOENT;\n-\t}\n+\t/* make sure transmit queues start off as stopped */\n+\tnetif_tx_stop_all_queues(vsi->netdev);\n \n \treturn 0;\n }\n@@ -1918,87 +1660,6 @@ static int ice_vsi_cfg_rss(struct ice_vsi *vsi)\n \treturn err;\n }\n \n-/**\n- * ice_vsi_rebuild - Rebuild VSI after reset\n- * @vsi: vsi to be rebuild\n- 
*\n- * Returns 0 on success and negative value on failure\n- */\n-static int ice_vsi_rebuild(struct ice_vsi *vsi)\n-{\n-\tu16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };\n-\tint ret, i;\n-\n-\tif (!vsi)\n-\t\treturn -EINVAL;\n-\n-\tice_vsi_free_q_vectors(vsi);\n-\tice_free_res(vsi->back->irq_tracker, vsi->base_vector, vsi->idx);\n-\tvsi->base_vector = 0;\n-\tice_vsi_clear_rings(vsi);\n-\tice_vsi_free_arrays(vsi, false);\n-\tice_vsi_set_num_qs(vsi);\n-\n-\t/* Initialize VSI struct elements and create VSI in FW */\n-\tret = ice_vsi_init(vsi);\n-\tif (ret < 0)\n-\t\tgoto err_vsi;\n-\n-\tret = ice_vsi_alloc_arrays(vsi, false);\n-\tif (ret < 0)\n-\t\tgoto err_vsi;\n-\n-\tswitch (vsi->type) {\n-\tcase ICE_VSI_PF:\n-\t\t/* fall through */\n-\t\tret = ice_vsi_alloc_q_vectors(vsi);\n-\t\tif (ret)\n-\t\t\tgoto err_rings;\n-\n-\t\tret = ice_vsi_setup_vector_base(vsi);\n-\t\tif (ret)\n-\t\t\tgoto err_vectors;\n-\n-\t\tret = ice_vsi_alloc_rings(vsi);\n-\t\tif (ret)\n-\t\t\tgoto err_vectors;\n-\n-\t\tice_vsi_map_rings_to_vectors(vsi);\n-\t\tbreak;\n-\tdefault:\n-\t\tbreak;\n-\t}\n-\n-\tice_vsi_set_tc_cfg(vsi);\n-\n-\t/* configure VSI nodes based on number of queues and TC's */\n-\tfor (i = 0; i < vsi->tc_cfg.numtc; i++)\n-\t\tmax_txqs[i] = vsi->num_txq;\n-\n-\tret = ice_cfg_vsi_lan(vsi->port_info, vsi->vsi_num,\n-\t\t\t      vsi->tc_cfg.ena_tc, max_txqs);\n-\tif (ret) {\n-\t\tdev_info(&vsi->back->pdev->dev,\n-\t\t\t \"Failed VSI lan queue config\\n\");\n-\t\tgoto err_vectors;\n-\t}\n-\treturn 0;\n-\n-err_vectors:\n-\tice_vsi_free_q_vectors(vsi);\n-err_rings:\n-\tif (vsi->netdev) {\n-\t\tvsi->current_netdev_flags = 0;\n-\t\tunregister_netdev(vsi->netdev);\n-\t\tfree_netdev(vsi->netdev);\n-\t\tvsi->netdev = NULL;\n-\t}\n-err_vsi:\n-\tice_vsi_clear(vsi);\n-\tset_bit(__ICE_RESET_FAILED, vsi->back->state);\n-\treturn ret;\n-}\n-\n /**\n  * ice_vsi_setup - Set up a VSI by a given type\n  * @pf: board private structure\n@@ -2237,6 +1898,18 @@ static int ice_setup_pf_sw(struct ice_pf *pf)\n \t\tgoto unroll_vsi_setup;\n \t}\n \n+\tstatus = ice_cfg_netdev(vsi);\n+\tif (status) {\n+\t\tstatus = -ENODEV;\n+\t\tgoto unroll_vsi_setup;\n+\t}\n+\n+\t/* registering the NAPI handler requires both the queues and\n+\t * netdev to be created, which are done in ice_pf_vsi_setup()\n+\t * and ice_cfg_netdev() respectively\n+\t */\n+\tice_napi_add(vsi);\n+\n \t/* To add a MAC filter, first add the MAC to a list and then\n \t * pass the list to ice_add_mac.\n \t */\n@@ -2245,7 +1918,7 @@ static int ice_setup_pf_sw(struct ice_pf *pf)\n \tstatus = ice_add_mac_to_list(vsi, &tmp_add_list,\n \t\t\t\t     vsi->port_info->mac.perm_addr);\n \tif (status)\n-\t\tgoto unroll_vsi_setup;\n+\t\tgoto unroll_napi_add;\n \n \t/* VSI needs to receive broadcast traffic, so add the broadcast\n \t * MAC address to the list as well.\n@@ -2269,16 +1942,20 @@ static int ice_setup_pf_sw(struct ice_pf *pf)\n free_mac_list:\n \tice_free_fltr_list(&pf->pdev->dev, &tmp_add_list);\n \n-unroll_vsi_setup:\n+unroll_napi_add:\n \tif (vsi) {\n-\t\tice_vsi_free_q_vectors(vsi);\n-\t\tif (vsi->netdev && vsi->netdev->reg_state == NETREG_REGISTERED)\n-\t\t\tunregister_netdev(vsi->netdev);\n+\t\tice_napi_del(vsi);\n \t\tif (vsi->netdev) {\n+\t\t\tif (vsi->netdev->reg_state == NETREG_REGISTERED)\n+\t\t\t\tunregister_netdev(vsi->netdev);\n \t\t\tfree_netdev(vsi->netdev);\n \t\t\tvsi->netdev = NULL;\n \t\t}\n+\t}\n \n+unroll_vsi_setup:\n+\tif (vsi) {\n+\t\tice_vsi_free_q_vectors(vsi);\n \t\tice_vsi_delete(vsi);\n \t\tice_vsi_put_qs(vsi);\n \t\tpf->q_left_tx += 
vsi->alloc_txq;\n@@ -3614,79 +3291,6 @@ static int ice_vsi_open(struct ice_vsi *vsi)\n \treturn err;\n }\n \n-\n-/**\n- * ice_rss_clean - Delete RSS related VSI structures that hold user inputs\n- * @vsi: the VSI being removed\n- */\n-static void ice_rss_clean(struct ice_vsi *vsi)\n-{\n-\tstruct ice_pf *pf;\n-\n-\tpf = vsi->back;\n-\n-\tif (vsi->rss_hkey_user)\n-\t\tdevm_kfree(&pf->pdev->dev, vsi->rss_hkey_user);\n-\tif (vsi->rss_lut_user)\n-\t\tdevm_kfree(&pf->pdev->dev, vsi->rss_lut_user);\n-}\n-\n-/**\n- * ice_vsi_release - Delete a VSI and free its resources\n- * @vsi: the VSI being removed\n- *\n- * Returns 0 on success or < 0 on error\n- */\n-static int ice_vsi_release(struct ice_vsi *vsi)\n-{\n-\tstruct ice_pf *pf;\n-\n-\tif (!vsi->back)\n-\t\treturn -ENODEV;\n-\tpf = vsi->back;\n-\t/* do not unregister and free netdevs while driver is in the reset\n-\t * recovery pending state. Since reset/rebuild happens through PF\n-\t * service task workqueue, its not a good idea to unregister netdev\n-\t * that is associated to the PF that is running the work queue items\n-\t * currently. This is done to avoid check_flush_dependency() warning\n-\t * on this wq\n-\t */\n-\tif (vsi->netdev && !ice_is_reset_recovery_pending(pf->state)) {\n-\t\tunregister_netdev(vsi->netdev);\n-\t\tfree_netdev(vsi->netdev);\n-\t\tvsi->netdev = NULL;\n-\t}\n-\n-\tif (test_bit(ICE_FLAG_RSS_ENA, pf->flags))\n-\t\tice_rss_clean(vsi);\n-\n-\t/* Disable VSI and free resources */\n-\tice_vsi_dis_irq(vsi);\n-\tice_vsi_close(vsi);\n-\n-\t/* reclaim interrupt vectors back to PF */\n-\tice_free_res(vsi->back->irq_tracker, vsi->base_vector, vsi->idx);\n-\tpf->num_avail_msix += vsi->num_q_vectors;\n-\n-\tice_remove_vsi_fltr(&pf->hw, vsi->vsi_num);\n-\tice_vsi_delete(vsi);\n-\tice_vsi_free_q_vectors(vsi);\n-\tice_vsi_clear_rings(vsi);\n-\n-\tice_vsi_put_qs(vsi);\n-\tpf->q_left_tx += vsi->alloc_txq;\n-\tpf->q_left_rx += vsi->alloc_rxq;\n-\n-\t/* retain SW VSI data structure since it is needed to unregister and\n-\t * free VSI netdev when PF is not in reset recovery pending state,\\\n-\t * for ex: during rmmod.\n-\t */\n-\tif (!ice_is_reset_recovery_pending(pf->state))\n-\t\tice_vsi_clear(vsi);\n-\n-\treturn 0;\n-}\n-\n /**\n  * ice_vsi_release_all - Delete all VSIs\n  * @pf: PF from which all VSIs are being removed\n",
    "prefixes": [
        "06/16"
    ]
}
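
The hyperlinked fields in the response (mbox, comments, checks, series) can be followed directly. Below is a short illustrative sketch in Python, under the assumption that the comments and checks endpoints return JSON lists with submitter/context/state fields; the local mbox filename is arbitrary.

import requests

patch = requests.get("http://patchwork.ozlabs.org/api/patches/972053/").json()

# Save the raw mbox; the file can then be applied with `git am`.
mbox = requests.get(patch["mbox"])
mbox.raise_for_status()
with open("972053.mbox", "wb") as f:  # arbitrary local filename
    f.write(mbox.content)

# Follow the embedded links to reviewer comments and CI checks.
# Field names below are assumptions about the linked resources' schemas.
for comment in requests.get(patch["comments"]).json():
    print(comment.get("submitter", {}).get("name"))
for check in requests.get(patch["checks"]).json():
    print(check.get("context"), check.get("state"))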