Message ID: 20170528003411.17603-3-jakub.kicinski@netronome.com
State: Changes Requested, archived
Delegated to: David Miller
> 	pf->limit_vfs = nfp_rtsym_read_le(pf->cpp, "nfd_vf_cfg_max_vfs",
> 					  &err);
> 	if (!err)
> -		return;
> +		return pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);

While you're at it, if you're going to enforce the limit at the PCI level,
shouldn't you retire 'limit_vfs' altogether?

BTW, under which conditions would you expect to find a difference in the
maximal number of VFs?
On Sun, 28 May 2017 14:49:58 +0000, Mintz, Yuval wrote:
> > 	pf->limit_vfs = nfp_rtsym_read_le(pf->cpp, "nfd_vf_cfg_max_vfs",
> > 					  &err);
> > 	if (!err)
> > -		return;
> > +		return pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);
>
> While you're at it, if you're going to enforce the limit at the PCI level,
> shouldn't you retire 'limit_vfs' altogether?

I don't think so, unfortunately.  Sometimes FW sets this value to 0, which
means no VFs should be used, but the PCIe subsystem uses 0 as "driver limit
not set" :(  I will put that in the commit message.

> BTW, under which conditions would you expect to find a difference
> in the maximal number of VFs?

It mostly comes down to how FW projects choose to partition PCIe-side
resources on the NFP.  Some project for which SR-IOV is not a priority may
want to disable it completely.  The NFP is very software-driven, including
most of PCIe interactions, descriptor formats etc.  It's really up to
particular projects to shape how the card works.
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/ethernet/netronome/nfp/nfp_main.c
index f22f56c9218f..ba174e163834 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_main.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c
@@ -73,20 +73,22 @@ static const struct pci_device_id nfp_pci_device_ids[] = {
 };
 MODULE_DEVICE_TABLE(pci, nfp_pci_device_ids);
 
-static void nfp_pcie_sriov_read_nfd_limit(struct nfp_pf *pf)
+static int nfp_pcie_sriov_read_nfd_limit(struct nfp_pf *pf)
 {
-#ifdef CONFIG_PCI_IOV
 	int err;
 
 	pf->limit_vfs = nfp_rtsym_read_le(pf->cpp, "nfd_vf_cfg_max_vfs", &err);
 	if (!err)
-		return;
+		return pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);
 
 	pf->limit_vfs = ~0;
+	pci_sriov_set_totalvfs(pf->pdev, 0); /* 0 is unset */
 	/* Allow any setting for backwards compatibility if symbol not found */
-	if (err != -ENOENT)
-		nfp_warn(pf->cpp, "Warning: VF limit read failed: %d\n", err);
-#endif
+	if (err == -ENOENT)
+		return 0;
+
+	nfp_warn(pf->cpp, "Warning: VF limit read failed: %d\n", err);
+	return err;
 }
 
 static int nfp_pcie_sriov_enable(struct pci_dev *pdev, int num_vfs)
@@ -373,14 +375,18 @@ static int nfp_pci_probe(struct pci_dev *pdev,
 	if (err)
 		goto err_devlink_unreg;
 
-	nfp_pcie_sriov_read_nfd_limit(pf);
+	err = nfp_pcie_sriov_read_nfd_limit(pf);
+	if (err)
+		goto err_fw_unload;
 
 	err = nfp_net_pci_probe(pf);
 	if (err)
-		goto err_fw_unload;
+		goto err_sriov_unlimit;
 
 	return 0;
 
+err_sriov_unlimit:
+	pci_sriov_set_totalvfs(pf->pdev, 0);
 err_fw_unload:
 	if (pf->fw_loaded)
 		nfp_fw_unload(pf);
@@ -411,6 +417,7 @@ static void nfp_pci_remove(struct pci_dev *pdev)
 	nfp_net_pci_remove(pf);
 
 	nfp_pcie_sriov_disable(pdev);
+	pci_sriov_set_totalvfs(pf->pdev, 0);
 
 	devlink_unregister(devlink);
The PCI subsystem has support for drivers limiting the number of VFs
available below what the IOV capability claims.  Make use of it.

While at it, remove the #ifdef/#endif on CONFIG_PCI_IOV; it was there
to avoid unnecessary warnings in case the device read failed but the
kernel doesn't have SR-IOV support anyway.  Device reads should not fail.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/nfp_main.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)