Patch Detail

get: Show a patch.
patch: Update a patch.
put: Update a patch.
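The three methods above can be exercised with Python's standard library. This is a sketch only: the requests are constructed but not sent, and the `Authorization` header follows Django REST framework's token scheme (`Token <key>`), which Patchwork's API uses; the key value shown is a placeholder, not a real credential.

```python
import json
import urllib.request

BASE = "http://patchwork.ozlabs.org/api/patches/1245995/"

# get: read the patch; no authentication needed for a public project
show = urllib.request.Request(BASE, method="GET")

# patch: partially update the patch; requires a maintainer's API token.
# "state" is one of the writable fields visible in the response below.
body = json.dumps({"state": "accepted"}).encode()
update = urllib.request.Request(
    BASE,
    data=body,
    method="PATCH",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Token <your-api-key>",  # placeholder token
    },
)

# Requests are only built here, never sent:
print(show.get_method(), show.full_url)
print(update.get_method(), update.full_url)
```

Sending either request would be a call to `urllib.request.urlopen(show)`; a `put` differs from `patch` only in that the body must carry the complete writable representation rather than the changed fields.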
GET /api/patches/1245995/?format=api
{ "id": 1245995, "url": "http://patchwork.ozlabs.org/api/patches/1245995/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20200227181505.61720-2-anthony.l.nguyen@intel.com/", "project": { "id": 46, "url": "http://patchwork.ozlabs.org/api/projects/46/?format=api", "name": "Intel Wired Ethernet development", "link_name": "intel-wired-lan", "list_id": "intel-wired-lan.osuosl.org", "list_email": "intel-wired-lan@osuosl.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20200227181505.61720-2-anthony.l.nguyen@intel.com>", "list_archive_url": null, "date": "2020-02-27T18:14:52", "name": "[S40,02/15] ice: allow bigger VFs", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "839794172ba84a2e2fb5e8e5c65e4ba2612a7f9b", "submitter": { "id": 68875, "url": "http://patchwork.ozlabs.org/api/people/68875/?format=api", "name": "Tony Nguyen", "email": "anthony.l.nguyen@intel.com" }, "delegate": { "id": 68, "url": "http://patchwork.ozlabs.org/api/users/68/?format=api", "username": "jtkirshe", "first_name": "Jeff", "last_name": "Kirsher", "email": "jeffrey.t.kirsher@intel.com" }, "mbox": "http://patchwork.ozlabs.org/project/intel-wired-lan/patch/20200227181505.61720-2-anthony.l.nguyen@intel.com/mbox/", "series": [ { "id": 161280, "url": "http://patchwork.ozlabs.org/api/series/161280/?format=api", "web_url": "http://patchwork.ozlabs.org/project/intel-wired-lan/list/?series=161280", "date": "2020-02-27T18:14:51", "name": "[S40,01/15] iavf: Enable support for up to 16 queues", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/161280/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/1245995/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/1245995/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<intel-wired-lan-bounces@osuosl.org>", "X-Original-To": [ 
"incoming@patchwork.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "intel-wired-lan@lists.osuosl.org" ], "Authentication-Results": [ "ozlabs.org; spf=pass (sender SPF authorized)\n\tsmtp.mailfrom=osuosl.org (client-ip=140.211.166.133;\n\thelo=hemlock.osuosl.org;\n\tenvelope-from=intel-wired-lan-bounces@osuosl.org;\n\treceiver=<UNKNOWN>)", "ozlabs.org;\n\tdmarc=fail (p=none dis=none) header.from=intel.com" ], "Received": [ "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 48T17n2vHQz9sSm\n\tfor <incoming@patchwork.ozlabs.org>;\n\tFri, 28 Feb 2020 05:16:05 +1100 (AEDT)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id EFA5285AA0;\n\tThu, 27 Feb 2020 18:16:03 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id EBKqvA72tq8u; Thu, 27 Feb 2020 18:16:00 +0000 (UTC)", "from ash.osuosl.org (ash.osuosl.org [140.211.166.34])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id E26C788035;\n\tThu, 27 Feb 2020 18:15:59 +0000 (UTC)", "from hemlock.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\tby ash.osuosl.org (Postfix) with ESMTP id C859B1BF3D2\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 27 Feb 2020 18:15:55 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n\tby hemlock.osuosl.org (Postfix) with ESMTP id C417087FF7\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 27 Feb 2020 18:15:55 +0000 (UTC)", "from hemlock.osuosl.org ([127.0.0.1])\n\tby localhost (.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id 5ITD-KMpIzb1 for <intel-wired-lan@lists.osuosl.org>;\n\tThu, 27 Feb 2020 18:15:53 +0000 (UTC)", "from mga07.intel.com (mga07.intel.com [134.134.136.100])\n\tby hemlock.osuosl.org (Postfix) with ESMTPS 
id C7FDB87FE7\n\tfor <intel-wired-lan@lists.osuosl.org>;\n\tThu, 27 Feb 2020 18:15:52 +0000 (UTC)", "from fmsmga003.fm.intel.com ([10.253.24.29])\n\tby orsmga105.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t27 Feb 2020 10:15:51 -0800", "from unknown (HELO localhost.jf.intel.com) ([10.166.244.174])\n\tby FMSMGA003.fm.intel.com with ESMTP; 27 Feb 2020 10:15:51 -0800" ], "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.7.6", "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos;i=\"5.70,492,1574150400\"; d=\"scan'208\";a=\"285408841\"", "From": "Tony Nguyen <anthony.l.nguyen@intel.com>", "To": "intel-wired-lan@lists.osuosl.org", "Date": "Thu, 27 Feb 2020 10:14:52 -0800", "Message-Id": "<20200227181505.61720-2-anthony.l.nguyen@intel.com>", "X-Mailer": "git-send-email 2.20.1", "In-Reply-To": "<20200227181505.61720-1-anthony.l.nguyen@intel.com>", "References": "<20200227181505.61720-1-anthony.l.nguyen@intel.com>", "MIME-Version": "1.0", "Subject": "[Intel-wired-lan] [PATCH S40 02/15] ice: allow bigger VFs", "X-BeenThere": "intel-wired-lan@osuosl.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "Intel Wired Ethernet Linux Kernel Driver Development\n\t<intel-wired-lan.osuosl.org>", "List-Unsubscribe": "<https://lists.osuosl.org/mailman/options/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=unsubscribe>", "List-Archive": "<http://lists.osuosl.org/pipermail/intel-wired-lan/>", "List-Post": "<mailto:intel-wired-lan@osuosl.org>", "List-Help": "<mailto:intel-wired-lan-request@osuosl.org?subject=help>", "List-Subscribe": "<https://lists.osuosl.org/mailman/listinfo/intel-wired-lan>, \n\t<mailto:intel-wired-lan-request@osuosl.org?subject=subscribe>", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": 
"intel-wired-lan-bounces@osuosl.org", "Sender": "\"Intel-wired-lan\" <intel-wired-lan-bounces@osuosl.org>" }, "content": "From: Mitch Williams <mitch.a.williams@intel.com>\n\nUnlike the XL710 series, 800-series hardware can allocate more than 4\nMSI-X vectors per VF. This patch enables that functionality. We\ndynamically allocate vectors and queues depending on how many VFs are\nenabled. Allocating the maximum number of VFs replicates XL710\nbehavior with 4 queues and 4 vectors. But allocating a smaller number\nof VFs will give you 16 queues and 16 vectors.\n\nSigned-off-by: Mitch Williams <mitch.a.williams@intel.com>\nSigned-off-by: Brett Creeley <brett.creeley@intel.com>\nSigned-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>\n---\n drivers/net/ethernet/intel/ice/ice.h | 1 -\n drivers/net/ethernet/intel/ice/ice_lib.c | 9 +-\n .../net/ethernet/intel/ice/ice_virtchnl_pf.c | 279 +++++++++---------\n .../net/ethernet/intel/ice/ice_virtchnl_pf.h | 15 +-\n 4 files changed, 146 insertions(+), 158 deletions(-)", "diff": "diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h\nindex 92e44a87f905..21a384ed962f 100644\n--- a/drivers/net/ethernet/intel/ice/ice.h\n+++ b/drivers/net/ethernet/intel/ice/ice.h\n@@ -72,7 +72,6 @@ extern const char ice_drv_ver[];\n #define ICE_Q_WAIT_RETRY_LIMIT\t10\n #define ICE_Q_WAIT_MAX_RETRY\t(5 * ICE_Q_WAIT_RETRY_LIMIT)\n #define ICE_MAX_LG_RSS_QS\t256\n-#define ICE_MAX_SMALL_RSS_QS\t8\n #define ICE_RES_VALID_BIT\t0x8000\n #define ICE_RES_MISC_VEC_ID\t(ICE_RES_VALID_BIT - 1)\n #define ICE_RDMA_NUM_VECS\t4\ndiff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c\nindex 1ab5fb20523b..0a89ce957dd2 100644\n--- a/drivers/net/ethernet/intel/ice/ice_lib.c\n+++ b/drivers/net/ethernet/intel/ice/ice_lib.c\n@@ -582,12 +582,11 @@ static void ice_vsi_set_rss_params(struct ice_vsi *vsi)\n \t\tvsi->rss_lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF;\n \t\tbreak;\n \tcase 
ICE_VSI_VF:\n-\t\t/* VF VSI will gets a small RSS table\n-\t\t * For VSI_LUT, LUT size should be set to 64 bytes\n+\t\t/* VF VSI will get a small RSS table.\n+\t\t * For VSI_LUT, LUT size should be set to 64 bytes.\n \t\t */\n \t\tvsi->rss_table_size = ICE_VSIQF_HLUT_ARRAY_SIZE;\n-\t\tvsi->rss_size = min_t(int, num_online_cpus(),\n-\t\t\t\t BIT(cap->rss_table_entry_width));\n+\t\tvsi->rss_size = ICE_MAX_RSS_QS_PER_VF;\n \t\tvsi->rss_lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_VSI;\n \t\tbreak;\n \tcase ICE_VSI_LB:\n@@ -695,7 +694,7 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)\n \t\t\tif (vsi->type == ICE_VSI_PF)\n \t\t\t\tmax_rss = ICE_MAX_LG_RSS_QS;\n \t\t\telse\n-\t\t\t\tmax_rss = ICE_MAX_SMALL_RSS_QS;\n+\t\t\t\tmax_rss = ICE_MAX_RSS_QS_PER_VF;\n \t\t\tqcount_rx = min_t(int, rx_numq_tc, max_rss);\n \t\t\tif (!vsi->req_rxq)\n \t\t\t\tqcount_rx = min_t(int, qcount_rx,\ndiff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\nindex 603994a64fd6..d1912a2a3caa 100644\n--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c\n@@ -99,8 +99,8 @@ ice_set_pfe_link(struct ice_vf *vf, struct virtchnl_pf_event *pfe,\n */\n static bool ice_vf_has_no_qs_ena(struct ice_vf *vf)\n {\n-\treturn (!bitmap_weight(vf->rxq_ena, ICE_MAX_BASE_QS_PER_VF) &&\n-\t\t!bitmap_weight(vf->txq_ena, ICE_MAX_BASE_QS_PER_VF));\n+\treturn (!bitmap_weight(vf->rxq_ena, ICE_MAX_RSS_QS_PER_VF) &&\n+\t\t!bitmap_weight(vf->txq_ena, ICE_MAX_RSS_QS_PER_VF));\n }\n \n /**\n@@ -232,11 +232,7 @@ static void ice_dis_vf_mappings(struct ice_vf *vf)\n * ice_sriov_free_msix_res - Reset/free any used MSIX resources\n * @pf: pointer to the PF structure\n *\n- * If MSIX entries from the pf->irq_tracker were needed then we need to\n- * reset the irq_tracker->end and give back the entries we needed to\n- * num_avail_sw_msix.\n- *\n- * If no MSIX entries were taken from the 
pf->irq_tracker then just clear\n+ * Since no MSIX entries are taken from the pf->irq_tracker then just clear\n * the pf->sriov_base_vector.\n *\n * Returns 0 on success, and -EINVAL on error.\n@@ -253,11 +249,7 @@ static int ice_sriov_free_msix_res(struct ice_pf *pf)\n \t\treturn -EINVAL;\n \n \t/* give back irq_tracker resources used */\n-\tif (pf->sriov_base_vector < res->num_entries) {\n-\t\tres->end = res->num_entries;\n-\t\tpf->num_avail_sw_msix +=\n-\t\t\tres->num_entries - pf->sriov_base_vector;\n-\t}\n+\tWARN_ON(pf->sriov_base_vector < res->num_entries);\n \n \tpf->sriov_base_vector = 0;\n \n@@ -271,8 +263,8 @@ static int ice_sriov_free_msix_res(struct ice_pf *pf)\n void ice_set_vf_state_qs_dis(struct ice_vf *vf)\n {\n \t/* Clear Rx/Tx enabled queues flag */\n-\tbitmap_zero(vf->txq_ena, ICE_MAX_BASE_QS_PER_VF);\n-\tbitmap_zero(vf->rxq_ena, ICE_MAX_BASE_QS_PER_VF);\n+\tbitmap_zero(vf->txq_ena, ICE_MAX_RSS_QS_PER_VF);\n+\tbitmap_zero(vf->rxq_ena, ICE_MAX_RSS_QS_PER_VF);\n \tclear_bit(ICE_VF_STATE_QS_ENA, vf->vf_states);\n }\n \n@@ -604,7 +596,7 @@ static int ice_alloc_vf_res(struct ice_vf *vf)\n \t */\n \ttx_rx_queue_left = min_t(int, ice_get_avail_txq_count(pf),\n \t\t\t\t ice_get_avail_rxq_count(pf));\n-\ttx_rx_queue_left += ICE_DFLT_QS_PER_VF;\n+\ttx_rx_queue_left += pf->num_vf_qps;\n \tif (vf->num_req_qs && vf->num_req_qs <= tx_rx_queue_left &&\n \t vf->num_req_qs != vf->num_vf_qs)\n \t\tvf->num_vf_qs = vf->num_req_qs;\n@@ -803,127 +795,108 @@ static int ice_get_max_valid_res_idx(struct ice_res_tracker *res)\n * @num_msix_needed: number of MSIX vectors needed for all SR-IOV VFs\n *\n * This function allows SR-IOV resources to be taken from the end of the PF's\n- * allowed HW MSIX vectors so in many cases the irq_tracker will not\n- * be needed. In these cases we just set the pf->sriov_base_vector and return\n- * success.\n+ * allowed HW MSIX vectors so that the irq_tracker will not be affected. 
We\n+ * just set the pf->sriov_base_vector and return success.\n *\n- * If SR-IOV needs to use any pf->irq_tracker entries it updates the\n- * irq_tracker->end based on the first entry needed for SR-IOV. This makes it\n- * so any calls to ice_get_res() using the irq_tracker will not try to use\n- * resources at or beyond the newly set value.\n+ * If there are not enough resources available, return an error. This should\n+ * always be caught by ice_set_per_vf_res().\n *\n * Return 0 on success, and -EINVAL when there are not enough MSIX vectors in\n * in the PF's space available for SR-IOV.\n */\n static int ice_sriov_set_msix_res(struct ice_pf *pf, u16 num_msix_needed)\n {\n-\tint max_valid_res_idx = ice_get_max_valid_res_idx(pf->irq_tracker);\n-\tu16 pf_total_msix_vectors =\n-\t\tpf->hw.func_caps.common_cap.num_msix_vectors;\n-\tstruct ice_res_tracker *res = pf->irq_tracker;\n+\tu16 total_vectors = pf->hw.func_caps.common_cap.num_msix_vectors;\n+\tint vectors_used = pf->irq_tracker->num_entries;\n \tint sriov_base_vector;\n \n-\tif (max_valid_res_idx < 0)\n-\t\treturn max_valid_res_idx;\n-\n-\tsriov_base_vector = pf_total_msix_vectors - num_msix_needed;\n+\tsriov_base_vector = total_vectors - num_msix_needed;\n \n \t/* make sure we only grab irq_tracker entries from the list end and\n \t * that we have enough available MSIX vectors\n \t */\n-\tif (sriov_base_vector <= max_valid_res_idx)\n+\tif (sriov_base_vector < vectors_used)\n \t\treturn -EINVAL;\n \n \tpf->sriov_base_vector = sriov_base_vector;\n \n-\t/* dip into irq_tracker entries and update used resources */\n-\tif (num_msix_needed > (pf_total_msix_vectors - res->num_entries)) {\n-\t\tpf->num_avail_sw_msix -=\n-\t\t\tres->num_entries - pf->sriov_base_vector;\n-\t\tres->end = pf->sriov_base_vector;\n-\t}\n-\n \treturn 0;\n }\n \n /**\n- * ice_check_avail_res - check if vectors and queues are available\n+ * ice_set_per_vf_res - check if vectors and queues are available\n * @pf: pointer to the PF structure\n 
*\n- * This function is where we calculate actual number of resources for VF VSIs,\n- * we don't reserve ahead of time during probe. Returns success if vectors and\n- * queues resources are available, otherwise returns error code\n+ * First, determine HW interrupts from common pool. If we allocate fewer VFs, we\n+ * get more vectors and can enable more queues per VF. Note that this does not\n+ * grab any vectors from the SW pool already allocated. Also note, that all\n+ * vector counts include one for each VF's miscellaneous interrupt vector\n+ * (i.e. OICR).\n+ *\n+ * Minimum VFs - 2 vectors, 1 queue pair\n+ * Small VFs - 5 vectors, 4 queue pairs\n+ * Medium VFs - 17 vectors, 16 queue pairs\n+ *\n+ * Second, determine number of queue pairs per VF by starting with a pre-defined\n+ * maximum each VF supports. If this is not possible, then we adjust based on\n+ * queue pairs available on the device.\n+ *\n+ * Lastly, set queue and MSI-X VF variables tracked by the PF so it can be used\n+ * by each VF during VF initialization and reset.\n */\n-static int ice_check_avail_res(struct ice_pf *pf)\n+static int ice_set_per_vf_res(struct ice_pf *pf)\n {\n \tint max_valid_res_idx = ice_get_max_valid_res_idx(pf->irq_tracker);\n-\tu16 num_msix, num_txq, num_rxq, num_avail_msix;\n \tstruct device *dev = ice_pf_to_dev(pf);\n+\tu16 num_msix, num_txq, num_rxq;\n+\tint v;\n \n \tif (!pf->num_alloc_vfs || max_valid_res_idx < 0)\n \t\treturn -EINVAL;\n \n-\t/* add 1 to max_valid_res_idx to account for it being 0-based */\n-\tnum_avail_msix = pf->hw.func_caps.common_cap.num_msix_vectors -\n-\t\t(max_valid_res_idx + 1);\n-\n-\t/* Grab from HW interrupts common pool\n-\t * Note: By the time the user decides it needs more vectors in a VF\n-\t * its already too late since one must decide this prior to creating the\n-\t * VF interface. So the best we can do is take a guess as to what the\n-\t * user might want.\n-\t *\n-\t * We have two policies for vector allocation:\n-\t * 1. 
if num_alloc_vfs is from 1 to 16, then we consider this as small\n-\t * number of NFV VFs used for NFV appliances, since this is a special\n-\t * case, we try to assign maximum vectors per VF (65) as much as\n-\t * possible, based on determine_resources algorithm.\n-\t * 2. if num_alloc_vfs is from 17 to 256, then its large number of\n-\t * regular VFs which are not used for any special purpose. Hence try to\n-\t * grab default interrupt vectors (5 as supported by AVF driver).\n-\t */\n-\tif (pf->num_alloc_vfs <= 16) {\n-\t\tnum_msix = ice_determine_res(pf, num_avail_msix,\n-\t\t\t\t\t ICE_MAX_INTR_PER_VF,\n-\t\t\t\t\t ICE_MIN_INTR_PER_VF);\n-\t} else if (pf->num_alloc_vfs <= ICE_MAX_VF_COUNT) {\n-\t\tnum_msix = ice_determine_res(pf, num_avail_msix,\n-\t\t\t\t\t ICE_DFLT_INTR_PER_VF,\n-\t\t\t\t\t ICE_MIN_INTR_PER_VF);\n+\t/* determine MSI-X resources per VF */\n+\tv = (pf->hw.func_caps.common_cap.num_msix_vectors -\n+\t pf->irq_tracker->num_entries) / pf->num_alloc_vfs;\n+\tif (v >= ICE_NUM_VF_MSIX_MED) {\n+\t\tnum_msix = ICE_NUM_VF_MSIX_MED;\n+\t} else if (v >= ICE_NUM_VF_MSIX_SMALL) {\n+\t\tnum_msix = ICE_NUM_VF_MSIX_SMALL;\n+\t} else if (v >= ICE_MIN_INTR_PER_VF) {\n+\t\tnum_msix = ICE_MIN_INTR_PER_VF;\n \t} else {\n-\t\tdev_err(dev, \"Number of VFs %d exceeds max VF count %d\\n\",\n-\t\t\tpf->num_alloc_vfs, ICE_MAX_VF_COUNT);\n+\t\tdev_err(dev, \"Not enough vectors to support %d VFs\\n\",\n+\t\t\tpf->num_alloc_vfs);\n \t\treturn -EIO;\n \t}\n \n-\tif (!num_msix)\n-\t\treturn -EIO;\n-\n-\t/* Grab from the common pool\n-\t * start by requesting Default queues (4 as supported by AVF driver),\n-\t * Note that, the main difference between queues and vectors is, latter\n-\t * can only be reserved at init time but queues can be requested by VF\n-\t * at runtime through Virtchnl, that is the reason we start by reserving\n-\t * few queues.\n-\t */\n+\t/* determine queue resources per VF */\n \tnum_txq = ice_determine_res(pf, ice_get_avail_txq_count(pf),\n-\t\t\t\t 
ICE_DFLT_QS_PER_VF, ICE_MIN_QS_PER_VF);\n+\t\t\t\t min_t(u16, num_msix - 1,\n+\t\t\t\t\t ICE_MAX_RSS_QS_PER_VF),\n+\t\t\t\t ICE_MIN_QS_PER_VF);\n \n \tnum_rxq = ice_determine_res(pf, ice_get_avail_rxq_count(pf),\n-\t\t\t\t ICE_DFLT_QS_PER_VF, ICE_MIN_QS_PER_VF);\n+\t\t\t\t min_t(u16, num_msix - 1,\n+\t\t\t\t\t ICE_MAX_RSS_QS_PER_VF),\n+\t\t\t\t ICE_MIN_QS_PER_VF);\n \n-\tif (!num_txq || !num_rxq)\n+\tif (!num_txq || !num_rxq) {\n+\t\tdev_err(dev, \"Not enough queues to support %d VFs\\n\",\n+\t\t\tpf->num_alloc_vfs);\n \t\treturn -EIO;\n+\t}\n \n-\tif (ice_sriov_set_msix_res(pf, num_msix * pf->num_alloc_vfs))\n+\tif (ice_sriov_set_msix_res(pf, num_msix * pf->num_alloc_vfs)) {\n+\t\tdev_err(dev, \"Unable to set MSI-X resources for %d VFs\\n\",\n+\t\t\tpf->num_alloc_vfs);\n \t\treturn -EINVAL;\n+\t}\n \n-\t/* since AVF driver works with only queue pairs which means, it expects\n-\t * to have equal number of Rx and Tx queues, so take the minimum of\n-\t * available Tx or Rx queues\n-\t */\n+\t/* only allow equal Tx/Rx queue count (i.e. 
queue pairs) */\n \tpf->num_vf_qps = min_t(int, num_txq, num_rxq);\n \tpf->num_vf_msix = num_msix;\n+\tdev_info(dev, \"Enabling %d VFs with %d vectors and %d queues per VF\\n\",\n+\t\t pf->num_alloc_vfs, num_msix, pf->num_vf_qps);\n \n \treturn 0;\n }\n@@ -1032,7 +1005,7 @@ static bool ice_config_res_vfs(struct ice_pf *pf)\n \tstruct ice_hw *hw = &pf->hw;\n \tint v;\n \n-\tif (ice_check_avail_res(pf)) {\n+\tif (ice_set_per_vf_res(pf)) {\n \t\tdev_err(dev, \"Cannot allocate VF resources, try with fewer number of VFs\\n\");\n \t\treturn false;\n \t}\n@@ -2101,8 +2074,8 @@ static int ice_vc_get_stats_msg(struct ice_vf *vf, u8 *msg)\n static bool ice_vc_validate_vqs_bitmaps(struct virtchnl_queue_select *vqs)\n {\n \tif ((!vqs->rx_queues && !vqs->tx_queues) ||\n-\t vqs->rx_queues >= BIT(ICE_MAX_BASE_QS_PER_VF) ||\n-\t vqs->tx_queues >= BIT(ICE_MAX_BASE_QS_PER_VF))\n+\t vqs->rx_queues >= BIT(ICE_MAX_RSS_QS_PER_VF) ||\n+\t vqs->tx_queues >= BIT(ICE_MAX_RSS_QS_PER_VF))\n \t\treturn false;\n \n \treturn true;\n@@ -2151,7 +2124,7 @@ static int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)\n \t * programmed using ice_vsi_cfg_txqs\n \t */\n \tq_map = vqs->rx_queues;\n-\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_BASE_QS_PER_VF) {\n+\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_RSS_QS_PER_VF) {\n \t\tif (!ice_vc_isvalid_q_id(vf, vqs->vsi_id, vf_q_id)) {\n \t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n \t\t\tgoto error_param;\n@@ -2172,7 +2145,7 @@ static int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)\n \t}\n \n \tq_map = vqs->tx_queues;\n-\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_BASE_QS_PER_VF) {\n+\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_RSS_QS_PER_VF) {\n \t\tif (!ice_vc_isvalid_q_id(vf, vqs->vsi_id, vf_q_id)) {\n \t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n \t\t\tgoto error_param;\n@@ -2229,12 +2202,6 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)\n \t\tgoto error_param;\n \t}\n \n-\tif (vqs->rx_queues > ICE_MAX_BASE_QS_PER_VF ||\n-\t vqs->tx_queues > 
ICE_MAX_BASE_QS_PER_VF) {\n-\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n-\t\tgoto error_param;\n-\t}\n-\n \tvsi = pf->vsi[vf->lan_vsi_idx];\n \tif (!vsi) {\n \t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n@@ -2244,7 +2211,7 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)\n \tif (vqs->tx_queues) {\n \t\tq_map = vqs->tx_queues;\n \n-\t\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_BASE_QS_PER_VF) {\n+\t\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_RSS_QS_PER_VF) {\n \t\t\tstruct ice_ring *ring = vsi->tx_rings[vf_q_id];\n \t\t\tstruct ice_txq_meta txq_meta = { 0 };\n \n@@ -2275,7 +2242,7 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)\n \tq_map = vqs->rx_queues;\n \t/* speed up Rx queue disable by batching them if possible */\n \tif (q_map &&\n-\t bitmap_equal(&q_map, vf->rxq_ena, ICE_MAX_BASE_QS_PER_VF)) {\n+\t bitmap_equal(&q_map, vf->rxq_ena, ICE_MAX_RSS_QS_PER_VF)) {\n \t\tif (ice_vsi_stop_all_rx_rings(vsi)) {\n \t\t\tdev_err(ice_pf_to_dev(vsi->back), \"Failed to stop all Rx rings on VSI %d\\n\",\n \t\t\t\tvsi->vsi_num);\n@@ -2283,9 +2250,9 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)\n \t\t\tgoto error_param;\n \t\t}\n \n-\t\tbitmap_zero(vf->rxq_ena, ICE_MAX_BASE_QS_PER_VF);\n+\t\tbitmap_zero(vf->rxq_ena, ICE_MAX_RSS_QS_PER_VF);\n \t} else if (q_map) {\n-\t\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_BASE_QS_PER_VF) {\n+\t\tfor_each_set_bit(vf_q_id, &q_map, ICE_MAX_RSS_QS_PER_VF) {\n \t\t\tif (!ice_vc_isvalid_q_id(vf, vqs->vsi_id, vf_q_id)) {\n \t\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n \t\t\t\tgoto error_param;\n@@ -2318,6 +2285,57 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)\n \t\t\t\t NULL, 0);\n }\n \n+/**\n+ * ice_cfg_interrupt\n+ * @vf: pointer to the VF info\n+ * @vsi: the VSI being configured\n+ * @vector_id: vector ID\n+ * @map: vector map for mapping vectors to queues\n+ * @q_vector: structure for interrupt vector\n+ * configure the IRQ to queue map\n+ */\n+static int\n+ice_cfg_interrupt(struct ice_vf *vf, struct 
ice_vsi *vsi, u16 vector_id,\n+\t\t struct virtchnl_vector_map *map,\n+\t\t struct ice_q_vector *q_vector)\n+{\n+\tu16 vsi_q_id, vsi_q_id_idx;\n+\tunsigned long qmap;\n+\n+\tq_vector->num_ring_rx = 0;\n+\tq_vector->num_ring_tx = 0;\n+\n+\tqmap = map->rxq_map;\n+\tfor_each_set_bit(vsi_q_id_idx, &qmap, ICE_MAX_RSS_QS_PER_VF) {\n+\t\tvsi_q_id = vsi_q_id_idx;\n+\n+\t\tif (!ice_vc_isvalid_q_id(vf, vsi->vsi_num, vsi_q_id))\n+\t\t\treturn VIRTCHNL_STATUS_ERR_PARAM;\n+\n+\t\tq_vector->num_ring_rx++;\n+\t\tq_vector->rx.itr_idx = map->rxitr_idx;\n+\t\tvsi->rx_rings[vsi_q_id]->q_vector = q_vector;\n+\t\tice_cfg_rxq_interrupt(vsi, vsi_q_id, vector_id,\n+\t\t\t\t q_vector->rx.itr_idx);\n+\t}\n+\n+\tqmap = map->txq_map;\n+\tfor_each_set_bit(vsi_q_id_idx, &qmap, ICE_MAX_RSS_QS_PER_VF) {\n+\t\tvsi_q_id = vsi_q_id_idx;\n+\n+\t\tif (!ice_vc_isvalid_q_id(vf, vsi->vsi_num, vsi_q_id))\n+\t\t\treturn VIRTCHNL_STATUS_ERR_PARAM;\n+\n+\t\tq_vector->num_ring_tx++;\n+\t\tq_vector->tx.itr_idx = map->txitr_idx;\n+\t\tvsi->tx_rings[vsi_q_id]->q_vector = q_vector;\n+\t\tice_cfg_txq_interrupt(vsi, vsi_q_id, vector_id,\n+\t\t\t\t q_vector->tx.itr_idx);\n+\t}\n+\n+\treturn VIRTCHNL_STATUS_SUCCESS;\n+}\n+\n /**\n * ice_vc_cfg_irq_map_msg\n * @vf: pointer to the VF info\n@@ -2328,13 +2346,11 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)\n static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)\n {\n \tenum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;\n+\tu16 num_q_vectors_mapped, vsi_id, vector_id;\n \tstruct virtchnl_irq_map_info *irqmap_info;\n-\tu16 vsi_id, vsi_q_id, vector_id;\n \tstruct virtchnl_vector_map *map;\n \tstruct ice_pf *pf = vf->pf;\n-\tu16 num_q_vectors_mapped;\n \tstruct ice_vsi *vsi;\n-\tunsigned long qmap;\n \tint i;\n \n \tirqmap_info = (struct virtchnl_irq_map_info *)msg;\n@@ -2346,7 +2362,7 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)\n \t */\n \tif (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states) ||\n \t pf->num_vf_msix < 
num_q_vectors_mapped ||\n-\t !irqmap_info->num_vectors) {\n+\t !num_q_vectors_mapped) {\n \t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n \t\tgoto error_param;\n \t}\n@@ -2367,7 +2383,7 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)\n \t\t/* vector_id is always 0-based for each VF, and can never be\n \t\t * larger than or equal to the max allowed interrupts per VF\n \t\t */\n-\t\tif (!(vector_id < ICE_MAX_INTR_PER_VF) ||\n+\t\tif (!(vector_id < pf->num_vf_msix) ||\n \t\t !ice_vc_isvalid_vsi_id(vf, vsi_id) ||\n \t\t (!vector_id && (map->rxq_map || map->txq_map))) {\n \t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n@@ -2388,33 +2404,10 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)\n \t\t}\n \n \t\t/* lookout for the invalid queue index */\n-\t\tqmap = map->rxq_map;\n-\t\tq_vector->num_ring_rx = 0;\n-\t\tfor_each_set_bit(vsi_q_id, &qmap, ICE_MAX_BASE_QS_PER_VF) {\n-\t\t\tif (!ice_vc_isvalid_q_id(vf, vsi_id, vsi_q_id)) {\n-\t\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n-\t\t\t\tgoto error_param;\n-\t\t\t}\n-\t\t\tq_vector->num_ring_rx++;\n-\t\t\tq_vector->rx.itr_idx = map->rxitr_idx;\n-\t\t\tvsi->rx_rings[vsi_q_id]->q_vector = q_vector;\n-\t\t\tice_cfg_rxq_interrupt(vsi, vsi_q_id, vector_id,\n-\t\t\t\t\t q_vector->rx.itr_idx);\n-\t\t}\n-\n-\t\tqmap = map->txq_map;\n-\t\tq_vector->num_ring_tx = 0;\n-\t\tfor_each_set_bit(vsi_q_id, &qmap, ICE_MAX_BASE_QS_PER_VF) {\n-\t\t\tif (!ice_vc_isvalid_q_id(vf, vsi_id, vsi_q_id)) {\n-\t\t\t\tv_ret = VIRTCHNL_STATUS_ERR_PARAM;\n-\t\t\t\tgoto error_param;\n-\t\t\t}\n-\t\t\tq_vector->num_ring_tx++;\n-\t\t\tq_vector->tx.itr_idx = map->txitr_idx;\n-\t\t\tvsi->tx_rings[vsi_q_id]->q_vector = q_vector;\n-\t\t\tice_cfg_txq_interrupt(vsi, vsi_q_id, vector_id,\n-\t\t\t\t\t q_vector->tx.itr_idx);\n-\t\t}\n+\t\tv_ret = (enum virtchnl_status_code)\n+\t\t\tice_cfg_interrupt(vf, vsi, vector_id, map, q_vector);\n+\t\tif (v_ret)\n+\t\t\tgoto error_param;\n \t}\n \n error_param:\n@@ -2457,7 +2450,7 @@ static int 
ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)\n \t\tgoto error_param;\n \t}\n \n-\tif (qci->num_queue_pairs > ICE_MAX_BASE_QS_PER_VF ||\n+\tif (qci->num_queue_pairs > ICE_MAX_RSS_QS_PER_VF ||\n \t qci->num_queue_pairs > min_t(u16, vsi->alloc_txq, vsi->alloc_rxq)) {\n \t\tdev_err(ice_pf_to_dev(pf), \"VF-%d requesting more than supported number of queues: %d\\n\",\n \t\t\tvf->vf_id, min_t(u16, vsi->alloc_txq, vsi->alloc_rxq));\n@@ -2764,16 +2757,16 @@ static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg)\n \tif (!req_queues) {\n \t\tdev_err(dev, \"VF %d tried to request 0 queues. Ignoring.\\n\",\n \t\t\tvf->vf_id);\n-\t} else if (req_queues > ICE_MAX_BASE_QS_PER_VF) {\n+\t} else if (req_queues > ICE_MAX_RSS_QS_PER_VF) {\n \t\tdev_err(dev, \"VF %d tried to request more than %d queues.\\n\",\n-\t\t\tvf->vf_id, ICE_MAX_BASE_QS_PER_VF);\n-\t\tvfres->num_queue_pairs = ICE_MAX_BASE_QS_PER_VF;\n+\t\t\tvf->vf_id, ICE_MAX_RSS_QS_PER_VF);\n+\t\tvfres->num_queue_pairs = ICE_MAX_RSS_QS_PER_VF;\n \t} else if (req_queues > cur_queues &&\n \t\t req_queues - cur_queues > tx_rx_queue_left) {\n \t\tdev_warn(dev, \"VF %d requested %u more queues, but only %u left.\\n\",\n \t\t\t vf->vf_id, req_queues - cur_queues, tx_rx_queue_left);\n \t\tvfres->num_queue_pairs = min_t(u16, max_allowed_vf_queues,\n-\t\t\t\t\t ICE_MAX_BASE_QS_PER_VF);\n+\t\t\t\t\t ICE_MAX_RSS_QS_PER_VF);\n \t} else {\n \t\t/* request is successful, then reset VF */\n \t\tvf->num_req_qs = req_queues;\ndiff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h\nindex 36dad0eba3db..3f9464269bd2 100644\n--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h\n+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h\n@@ -21,18 +21,15 @@\n #define ICE_PCI_CIAD_WAIT_COUNT\t\t100\n #define ICE_PCI_CIAD_WAIT_DELAY_US\t1\n \n-/* VF resources default values and limitation */\n+/* VF resource constraints */\n #define ICE_MAX_VF_COUNT\t\t256\n-#define 
ICE_MAX_QS_PER_VF\t\t256\n #define ICE_MIN_QS_PER_VF\t\t1\n-#define ICE_DFLT_QS_PER_VF\t\t4\n #define ICE_NONQ_VECS_VF\t\t1\n #define ICE_MAX_SCATTER_QS_PER_VF\t16\n-#define ICE_MAX_BASE_QS_PER_VF\t\t16\n-#define ICE_MAX_INTR_PER_VF\t\t65\n-#define ICE_MAX_POLICY_INTR_PER_VF\t33\n+#define ICE_MAX_RSS_QS_PER_VF\t\t16\n+#define ICE_NUM_VF_MSIX_MED\t\t17\n+#define ICE_NUM_VF_MSIX_SMALL\t\t5\n #define ICE_MIN_INTR_PER_VF\t\t(ICE_MIN_QS_PER_VF + 1)\n-#define ICE_DFLT_INTR_PER_VF\t\t(ICE_DFLT_QS_PER_VF + 1)\n #define ICE_MAX_VF_RESET_TRIES\t\t40\n #define ICE_MAX_VF_RESET_SLEEP_MS\t20\n \n@@ -75,8 +72,8 @@ struct ice_vf {\n \tstruct virtchnl_version_info vf_ver;\n \tu32 driver_caps;\t\t/* reported by VF driver */\n \tstruct virtchnl_ether_addr dflt_lan_addr;\n-\tDECLARE_BITMAP(txq_ena, ICE_MAX_BASE_QS_PER_VF);\n-\tDECLARE_BITMAP(rxq_ena, ICE_MAX_BASE_QS_PER_VF);\n+\tDECLARE_BITMAP(txq_ena, ICE_MAX_RSS_QS_PER_VF);\n+\tDECLARE_BITMAP(rxq_ena, ICE_MAX_RSS_QS_PER_VF);\n \tu16 port_vlan_info;\t\t/* Port VLAN ID and QoS */\n \tu8 pf_set_mac:1;\t\t/* VF MAC address set by VMM admin */\n \tu8 trusted:1;\n", "prefixes": [ "S40", "02/15" ] }
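Once fetched, the interesting fields can be pulled straight out of the JSON body. The payload below is a trimmed, hand-copied subset of the response above (not a live fetch), kept small for illustration:

```python
import json

# Trimmed copy of the GET /api/patches/1245995/ response shown above
payload = """
{
  "id": 1245995,
  "name": "[S40,02/15] ice: allow bigger VFs",
  "state": "accepted",
  "submitter": {"name": "Tony Nguyen", "email": "anthony.l.nguyen@intel.com"},
  "series": [{"id": 161280, "version": 1}],
  "check": "pending"
}
"""

patch = json.loads(payload)

# Summarize the patch the way a dashboard or CI script might
print(f'{patch["name"]} [{patch["state"]}]')
print(f'from {patch["submitter"]["name"]} <{patch["submitter"]["email"]}>')
for series in patch["series"]:
    print(f'series {series["id"]} v{series["version"]}, check: {patch["check"]}')
```

The same field names apply to the full response; fields like `headers`, `content`, and `diff` are plain strings, while `submitter`, `delegate`, and `series` are nested objects and can be traversed the same way.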