
[mlx5-next,v6,1/4] PCI: Add sysfs callback to allow MSI-X table size change of SR-IOV VFs

Message ID 20210209133445.700225-2-leon@kernel.org
State New
Series Dynamically assign MSI-X vectors count

Commit Message

Leon Romanovsky Feb. 9, 2021, 1:34 p.m. UTC
From: Leon Romanovsky <leonro@nvidia.com>

Extend PCI sysfs interface with a new callback that allows configuration
of the number of MSI-X vectors for specific SR-IOV VF. This is needed
to optimize the performance of VFs devices by allocating the number of
vectors based on the administrator knowledge of the intended use of the VF.

This function is applicable for SR-IOV VF because such devices allocate
their MSI-X table before they will run on the VMs and HW can't guess the
right number of vectors, so some devices allocate them statically and equally.

1) The newly added /sys/bus/pci/devices/.../sriov_vf_msix_count
file will be seen for the VFs and it is writable as long as a driver is not
bound to the VF.

The values accepted are:
 * > 0 - this will be number reported by the Table Size in the VF's MSI-X Message
         Control register
 * < 0 - not valid
 * = 0 - will reset to the device default value

2) In order to make management easy, provide new read-only sysfs file that
returns a total number of possible to configure MSI-X vectors.

cat /sys/bus/pci/devices/.../sriov_vf_total_msix
  = 0 - feature is not supported
  > 0 - total number of MSI-X vectors available for distribution among the VFs
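As a sketch of how a management tool might consume these two files (illustrative Python; the device paths are hypothetical, and the fallback behavior is an assumption based on the semantics described above, not part of this patch):

```python
from pathlib import Path

def vf_total_msix(pf_sysfs_dir):
    """Total MSI-X vectors available for VFs, or 0 when the file is
    absent (old kernel) or the feature is unsupported."""
    f = Path(pf_sysfs_dir) / "sriov_vf_total_msix"
    try:
        return int(f.read_text())
    except (FileNotFoundError, ValueError):
        return 0

def set_vf_msix_count(vf_sysfs_dir, count):
    """Request `count` vectors for one VF; 0 restores the device default.
    The kernel rejects negative values (-EINVAL) and a VF with a bound
    driver (-EBUSY); those kernel errors surface here as OSError."""
    if count < 0:
        raise ValueError("negative values are not valid")
    (Path(vf_sysfs_dir) / "sriov_vf_msix_count").write_text(f"{count}\n")

# Hypothetical PF address; real ones come from e.g. `lspci -D`.
total = vf_total_msix("/sys/bus/pci/devices/0000:01:00.0")
```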

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
 Documentation/ABI/testing/sysfs-bus-pci |  28 +++++
 drivers/pci/iov.c                       | 153 ++++++++++++++++++++++++
 include/linux/pci.h                     |  12 ++
 3 files changed, 193 insertions(+)

--
2.29.2

Comments

Bjorn Helgaas Feb. 15, 2021, 9:01 p.m. UTC | #1
On Tue, Feb 09, 2021 at 03:34:42PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
> 
> Extend PCI sysfs interface with a new callback that allows configuration
> of the number of MSI-X vectors for specific SR-IOV VF. This is needed
> to optimize the performance of VFs devices by allocating the number of
> vectors based on the administrator knowledge of the intended use of the VF.
> 
> This function is applicable for SR-IOV VF because such devices allocate
> their MSI-X table before they will run on the VMs and HW can't guess the
> right number of vectors, so some devices allocate them statically and equally.

This commit log should be clear that this functionality is motivated
by *mlx5* behavior.  The description above makes it sound like this is
generic PCI spec behavior, and it is not.

It may be a reasonable design that conforms to the spec, and we hope
the model will be usable by other designs, but it is not required by
the spec and AFAIK there is nothing in the spec you can point to as
background for this.

So don't *remove* the text you have above, but please *add* some
preceding background information about how mlx5 works.

> 1) The newly added /sys/bus/pci/devices/.../sriov_vf_msix_count
> file will be seen for the VFs and it is writable as long as a driver is not
> bound to the VF.

  This adds /sys/bus/pci/devices/.../sriov_vf_msix_count for VF
  devices and is writable ...

> The values accepted are:
>  * > 0 - this will be number reported by the Table Size in the VF's MSI-X Message
>          Control register
>  * < 0 - not valid
>  * = 0 - will reset to the device default value

  = 0 - will reset to a device-specific default value

> 2) In order to make management easy, provide new read-only sysfs file that
> returns a total number of possible to configure MSI-X vectors.

  For PF devices, this adds a read-only
  /sys/bus/pci/devices/.../sriov_vf_total_msix file that contains the
  total number of MSI-X vectors available for distribution among VFs.

Just as in sysfs-bus-pci, this file should be listed first, because
you must read it before you can use vf_msix_count.

> cat /sys/bus/pci/devices/.../sriov_vf_total_msix
>   = 0 - feature is not supported
>   > 0 - total number of MSI-X vectors available for distribution among the VFs
> 
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  Documentation/ABI/testing/sysfs-bus-pci |  28 +++++
>  drivers/pci/iov.c                       | 153 ++++++++++++++++++++++++
>  include/linux/pci.h                     |  12 ++
>  3 files changed, 193 insertions(+)
> 
> diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
> index 25c9c39770c6..7dadc3610959 100644
> --- a/Documentation/ABI/testing/sysfs-bus-pci
> +++ b/Documentation/ABI/testing/sysfs-bus-pci
> @@ -375,3 +375,31 @@ Description:
>  		The value comes from the PCI kernel device state and can be one
>  		of: "unknown", "error", "D0", D1", "D2", "D3hot", "D3cold".
>  		The file is read only.
> +
> +What:		/sys/bus/pci/devices/.../sriov_vf_total_msix
> +Date:		January 2021
> +Contact:	Leon Romanovsky <leonro@nvidia.com>
> +Description:
> +		This file is associated with the SR-IOV PFs.
> +		It contains the total number of MSI-X vectors available for
> +		assignment to all VFs associated with this PF. It may be zero
> +		if the device doesn't support this functionality.

s/associated with the/associated with/

> +What:		/sys/bus/pci/devices/.../sriov_vf_msix_count
> +Date:		January 2021
> +Contact:	Leon Romanovsky <leonro@nvidia.com>
> +Description:
> +		This file is associated with the SR-IOV VFs.
> +		It allows configuration of the number of MSI-X vectors for
> +		the VF. This is needed to optimize performance of newly bound
> +		devices by allocating the number of vectors based on the
> +		administrator knowledge of targeted VM.

s/associated with the/associated with/
s/knowledge of targeted VM/knowledge of how the VF will be used/

> +		The values accepted are:
> +		 * > 0 - this will be number reported by the VF's MSI-X
> +			 capability

  this number will be reported as the Table Size in the VF's MSI-X
  capability

> +		 * < 0 - not valid
> +		 * = 0 - will reset to the device default value
> +
> +		The file is writable if the PF is bound to a driver that
> +		implements ->sriov_set_msix_vec_count().
> diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
> index 4afd4ee4f7f0..c0554aa6b90a 100644
> --- a/drivers/pci/iov.c
> +++ b/drivers/pci/iov.c
> @@ -31,6 +31,7 @@ int pci_iov_virtfn_devfn(struct pci_dev *dev, int vf_id)
>  	return (dev->devfn + dev->sriov->offset +
>  		dev->sriov->stride * vf_id) & 0xff;
>  }
> +EXPORT_SYMBOL_GPL(pci_iov_virtfn_devfn);
> 
>  /*
>   * Per SR-IOV spec sec 3.3.10 and 3.3.11, First VF Offset and VF Stride may
> @@ -157,6 +158,158 @@ int pci_iov_sysfs_link(struct pci_dev *dev,
>  	return rc;
>  }
> 
> +#ifdef CONFIG_PCI_MSI
> +static ssize_t sriov_vf_msix_count_store(struct device *dev,
> +					 struct device_attribute *attr,
> +					 const char *buf, size_t count)
> +{
> +	struct pci_dev *vf_dev = to_pci_dev(dev);
> +	struct pci_dev *pdev = pci_physfn(vf_dev);
> +	int val, ret;
> +
> +	ret = kstrtoint(buf, 0, &val);
> +	if (ret)
> +		return ret;
> +
> +	if (val < 0)
> +		return -EINVAL;
> +
> +	device_lock(&pdev->dev);
> +	if (!pdev->driver || !pdev->driver->sriov_set_msix_vec_count) {
> +		ret = -EOPNOTSUPP;
> +		goto err_pdev;
> +	}
> +
> +	device_lock(&vf_dev->dev);
> +	if (vf_dev->driver) {
> +		/*
> +		 * Driver already probed this VF and configured itself
> +		 * based on previously configured (or default) MSI-X vector
> +		 * count. It is too late to change this field for this
> +		 * specific VF.
> +		 */
> +		ret = -EBUSY;
> +		goto err_dev;
> +	}
> +
> +	ret = pdev->driver->sriov_set_msix_vec_count(vf_dev, val);
> +
> +err_dev:
> +	device_unlock(&vf_dev->dev);
> +err_pdev:
> +	device_unlock(&pdev->dev);
> +	return ret ? : count;
> +}
> +static DEVICE_ATTR_WO(sriov_vf_msix_count);
> +
> +static ssize_t sriov_vf_total_msix_show(struct device *dev,
> +					struct device_attribute *attr,
> +					char *buf)
> +{
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +	u32 vf_total_msix;
> +
> +	device_lock(dev);
> +	if (!pdev->driver || !pdev->driver->sriov_get_vf_total_msix) {
> +		device_unlock(dev);
> +		return -EOPNOTSUPP;
> +	}
> +	vf_total_msix = pdev->driver->sriov_get_vf_total_msix(pdev);
> +	device_unlock(dev);
> +
> +	return sysfs_emit(buf, "%u\n", vf_total_msix);
> +}
> +static DEVICE_ATTR_RO(sriov_vf_total_msix);
> +#endif
> +
> +static const struct attribute *sriov_pf_dev_attrs[] = {
> +#ifdef CONFIG_PCI_MSI
> +	&dev_attr_sriov_vf_total_msix.attr,
> +#endif
> +	NULL,
> +};
> +
> +static const struct attribute *sriov_vf_dev_attrs[] = {
> +#ifdef CONFIG_PCI_MSI
> +	&dev_attr_sriov_vf_msix_count.attr,
> +#endif
> +	NULL,
> +};
> +
> +/*
> + * The PF can change the specific properties of associated VFs. Such
> + * functionality is usually known after PF probed and PCI sysfs files
> + * were already created.

s/The PF can/The PF may be able to/

> + * The function below is driven by such PF. It adds sysfs files to already
> + * existing PF/VF sysfs device hierarchies.

  pci_enable_vf_overlay() and pci_disable_vf_overlay() should be
  called by PF drivers that support changing the number of MSI-X
  vectors assigned to their VFs.

> + */
> +int pci_enable_vf_overlay(struct pci_dev *dev)
> +{
> +	struct pci_dev *virtfn;
> +	int id, ret;
> +
> +	if (!dev->is_physfn || !dev->sriov->num_VFs)
> +		return 0;
> +
> +	ret = sysfs_create_files(&dev->dev.kobj, sriov_pf_dev_attrs);

But I still don't like the fact that we're calling
sysfs_create_files() and sysfs_remove_files() directly.  It makes
complication and opportunities for errors.

I don't see the advantage of creating these files only when the PF
driver supports this.  The management tools have to deal with
sriov_vf_total_msix == 0 and sriov_vf_msix_count == 0 anyway.
Having the sysfs files not be present at all might be slightly
prettier to the person running "ls", but I'm not sure the code
complication is worth that.

I see a hint that Alex might have requested this "only visible when PF
driver supports it" functionality, but I don't see that email on
linux-pci, so I missed the background.

It's true that we have a clump of "sriov_*" sysfs files and this makes
the clump a little bigger.  I wish we had put them all inside an "iov"
directory to begin with, but that's water under the bridge.

> +	if (ret)
> +		return ret;
> +
> +	for (id = 0; id < dev->sriov->num_VFs; id++) {
> +		virtfn = pci_get_domain_bus_and_slot(
> +			pci_domain_nr(dev->bus), pci_iov_virtfn_bus(dev, id),
> +			pci_iov_virtfn_devfn(dev, id));
> +
> +		if (!virtfn)
> +			continue;
> +
> +		ret = sysfs_create_files(&virtfn->dev.kobj,
> +					 sriov_vf_dev_attrs);
> +		if (ret)
> +			goto out;
> +	}
> +	return 0;
> +
> +out:
> +	while (id--) {
> +		virtfn = pci_get_domain_bus_and_slot(
> +			pci_domain_nr(dev->bus), pci_iov_virtfn_bus(dev, id),
> +			pci_iov_virtfn_devfn(dev, id));
> +
> +		if (!virtfn)
> +			continue;
> +
> +		sysfs_remove_files(&virtfn->dev.kobj, sriov_vf_dev_attrs);
> +	}
> +	sysfs_remove_files(&dev->dev.kobj, sriov_pf_dev_attrs);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(pci_enable_vf_overlay);
> +
> +void pci_disable_vf_overlay(struct pci_dev *dev)
> +{
> +	struct pci_dev *virtfn;
> +	int id;
> +
> +	if (!dev->is_physfn || !dev->sriov->num_VFs)
> +		return;
> +
> +	id = dev->sriov->num_VFs;
> +	while (id--) {
> +		virtfn = pci_get_domain_bus_and_slot(
> +			pci_domain_nr(dev->bus), pci_iov_virtfn_bus(dev, id),
> +			pci_iov_virtfn_devfn(dev, id));
> +
> +		if (!virtfn)
> +			continue;
> +
> +		sysfs_remove_files(&virtfn->dev.kobj, sriov_vf_dev_attrs);
> +	}
> +	sysfs_remove_files(&dev->dev.kobj, sriov_pf_dev_attrs);
> +}
> +EXPORT_SYMBOL_GPL(pci_disable_vf_overlay);
> +
>  int pci_iov_add_virtfn(struct pci_dev *dev, int id)
>  {
>  	int i;
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index b32126d26997..732611937574 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -856,6 +856,11 @@ struct module;
>   *		e.g. drivers/net/e100.c.
>   * @sriov_configure: Optional driver callback to allow configuration of
>   *		number of VFs to enable via sysfs "sriov_numvfs" file.
> + * @sriov_set_msix_vec_count: Driver callback to change number of MSI-X vectors
> + *              to configure via sysfs "sriov_vf_msix_count" entry. This will
> + *              change MSI-X Table Size in their Message Control registers.

s/Driver callback/PF driver callback/
s/in their/in VF/

> + * @sriov_get_vf_total_msix: Total number of MSI-X veectors to distribute
> + *              to the VFs

s/Total number/PF driver callback to get the total number/
s/veectors/vectors/
s/to distribute/available for distribution/

>   * @err_handler: See Documentation/PCI/pci-error-recovery.rst
>   * @groups:	Sysfs attribute groups.
>   * @driver:	Driver model structure.
> @@ -871,6 +876,8 @@ struct pci_driver {
>  	int  (*resume)(struct pci_dev *dev);	/* Device woken up */
>  	void (*shutdown)(struct pci_dev *dev);
>  	int  (*sriov_configure)(struct pci_dev *dev, int num_vfs); /* On PF */
> +	int  (*sriov_set_msix_vec_count)(struct pci_dev *vf, int msix_vec_count); /* On PF */
> +	u32  (*sriov_get_vf_total_msix)(struct pci_dev *pf);
>  	const struct pci_error_handlers *err_handler;
>  	const struct attribute_group **groups;
>  	struct device_driver	driver;
> @@ -2059,6 +2066,9 @@ void __iomem *pci_ioremap_wc_bar(struct pci_dev *pdev, int bar);
>  int pci_iov_virtfn_bus(struct pci_dev *dev, int id);
>  int pci_iov_virtfn_devfn(struct pci_dev *dev, int id);
> 
> +int pci_enable_vf_overlay(struct pci_dev *dev);
> +void pci_disable_vf_overlay(struct pci_dev *dev);
> +
>  int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn);
>  void pci_disable_sriov(struct pci_dev *dev);
> 
> @@ -2100,6 +2110,8 @@ static inline int pci_iov_add_virtfn(struct pci_dev *dev, int id)
>  }
>  static inline void pci_iov_remove_virtfn(struct pci_dev *dev,
>  					 int id) { }
> +static inline int pci_enable_vf_overlay(struct pci_dev *dev) { return 0; }
> +static inline void pci_disable_vf_overlay(struct pci_dev *dev) { }
>  static inline void pci_disable_sriov(struct pci_dev *dev) { }
>  static inline int pci_num_vf(struct pci_dev *dev) { return 0; }
>  static inline int pci_vfs_assigned(struct pci_dev *dev)
> --
> 2.29.2
>
Leon Romanovsky Feb. 16, 2021, 7:33 a.m. UTC | #2
On Mon, Feb 15, 2021 at 03:01:06PM -0600, Bjorn Helgaas wrote:
> On Tue, Feb 09, 2021 at 03:34:42PM +0200, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
> >
> > Extend PCI sysfs interface with a new callback that allows configuration
> > of the number of MSI-X vectors for specific SR-IOV VF. This is needed
> > to optimize the performance of VFs devices by allocating the number of
> > vectors based on the administrator knowledge of the intended use of the VF.
> >
> > This function is applicable for SR-IOV VF because such devices allocate
> > their MSI-X table before they will run on the VMs and HW can't guess the
> > right number of vectors, so some devices allocate them statically and equally.
>
> This commit log should be clear that this functionality is motivated
> by *mlx5* behavior.  The description above makes it sound like this is
> generic PCI spec behavior, and it is not.
>
> It may be a reasonable design that conforms to the spec, and we hope
> the model will be usable by other designs, but it is not required by
> the spec and AFAIK there is nothing in the spec you can point to as
> background for this.
>
> So don't *remove* the text you have above, but please *add* some
> preceding background information about how mlx5 works.
>
> > 1) The newly added /sys/bus/pci/devices/.../sriov_vf_msix_count
> > file will be seen for the VFs and it is writable as long as a driver is not
> > bound to the VF.
>
>   This adds /sys/bus/pci/devices/.../sriov_vf_msix_count for VF
>   devices and is writable ...
>
> > The values accepted are:
> >  * > 0 - this will be number reported by the Table Size in the VF's MSI-X Message
> >          Control register
> >  * < 0 - not valid
> >  * = 0 - will reset to the device default value
>
>   = 0 - will reset to a device-specific default value
>
> > 2) In order to make management easy, provide new read-only sysfs file that
> > returns a total number of possible to configure MSI-X vectors.
>
>   For PF devices, this adds a read-only
>   /sys/bus/pci/devices/.../sriov_vf_total_msix file that contains the
>   total number of MSI-X vectors available for distribution among VFs.
>
> Just as in sysfs-bus-pci, this file should be listed first, because
> you must read it before you can use vf_msix_count.

No problem, I'll change, just remember that we are talking about the commit
message; in the Documentation file, the order is exactly as you request.

>
> > cat /sys/bus/pci/devices/.../sriov_vf_total_msix
> >   = 0 - feature is not supported
> >   > 0 - total number of MSI-X vectors available for distribution among the VFs
> >
> > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > ---
> >  Documentation/ABI/testing/sysfs-bus-pci |  28 +++++
> >  drivers/pci/iov.c                       | 153 ++++++++++++++++++++++++
> >  include/linux/pci.h                     |  12 ++
> >  3 files changed, 193 insertions(+)

<...>

> > + */
> > +int pci_enable_vf_overlay(struct pci_dev *dev)
> > +{
> > +	struct pci_dev *virtfn;
> > +	int id, ret;
> > +
> > +	if (!dev->is_physfn || !dev->sriov->num_VFs)
> > +		return 0;
> > +
> > +	ret = sysfs_create_files(&dev->dev.kobj, sriov_pf_dev_attrs);
>
> But I still don't like the fact that we're calling
> sysfs_create_files() and sysfs_remove_files() directly.  It makes
> complication and opportunities for errors.

It is not different from any other code that we have in the kernel.
Let's be concrete, can you point to the errors in this code that I
should fix?

>
> I don't see the advantage of creating these files only when the PF
> driver supports this.  The management tools have to deal with
> sriov_vf_total_msix == 0 and sriov_vf_msix_count == 0 anyway.
> Having the sysfs files not be present at all might be slightly
> prettier to the person running "ls", but I'm not sure the code
> complication is worth that.

It is more than "ls"; right now sriov_numvfs is visible without relation
to the driver, even if the driver doesn't implement ".sriov_configure", which
is IMHO bad. We didn't want to repeat that.

Right now, we have many devices that support SR-IOV, but only a small number
of them are capable of rewriting their VF MSI-X table size. We don't want
"to punish" them and clutter their sysfs.

>
> I see a hint that Alex might have requested this "only visible when PF
> driver supports it" functionality, but I don't see that email on
> linux-pci, so I missed the background.

First version of this patch had static files solution.
https://lore.kernel.org/linux-pci/20210103082440.34994-2-leon@kernel.org/#Z30drivers:pci:iov.c

Thanks
Bjorn Helgaas Feb. 16, 2021, 4:12 p.m. UTC | #3
Proposed subject:

  PCI/IOV: Add dynamic MSI-X vector assignment sysfs interface

On Tue, Feb 16, 2021 at 09:33:44AM +0200, Leon Romanovsky wrote:
> On Mon, Feb 15, 2021 at 03:01:06PM -0600, Bjorn Helgaas wrote:
> > On Tue, Feb 09, 2021 at 03:34:42PM +0200, Leon Romanovsky wrote:
> > > From: Leon Romanovsky <leonro@nvidia.com>

Here's a draft of the sort of thing I'm looking for here:

  A typical cloud provider SR-IOV use case is to create many VFs for
  use by guest VMs.  The VFs may not be assigned to a VM until a
  customer requests a VM of a certain size, e.g., number of CPUs.  A
  VF may need MSI-X vectors proportional to the number of CPUs in the
  VM, but there is no standard way to change the number of MSI-X
  vectors supported by a VF.

  Some Mellanox ConnectX devices support dynamic assignment of MSI-X
  vectors to SR-IOV VFs.  This can be done by the PF driver after VFs
  are enabled, and it can be done without affecting VFs that are
  already in use.  The hardware supports a limited pool of MSI-X
  vectors that can be assigned to the PF or to individual VFs.  This
  is device-specific behavior that requires support in the PF driver.

  Add a read-only "sriov_vf_total_msix" sysfs file for the PF and a
  writable "sriov_vf_msix_count" file for each VF.  Management
  software may use these to learn how many MSI-X vectors are available
  and to dynamically assign them to VFs before the VFs are passed
  through to a VM.

  If the PF driver implements the ->sriov_get_vf_total_msix()
  callback, "sriov_vf_total_msix" contains the total number of MSI-X
  vectors available for distribution among VFs.

  If no driver is bound to the VF, writing "N" to
  "sriov_vf_msix_count" uses the PF driver ->sriov_set_msix_vec_count()
  callback to assign "N" MSI-X vectors to the VF.  When a VF driver
  subsequently reads the MSI-X Message Control register, it will see
  the new Table Size "N".
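To make the intended workflow concrete, here is a hedged sketch (illustrative Python; the proportional policy is an example of what management software might do, not something this patch or the hardware prescribes) of dividing the pool reported by sriov_vf_total_msix among VFs sized by vCPU count:

```python
def distribute_msix(total, vcpus):
    """Split `total` MSI-X vectors across VFs in proportion to each
    VM's vCPU count; leftover vectors from integer division go to the
    largest VMs first."""
    weight = sum(vcpus)
    shares = [total * c // weight for c in vcpus]
    leftover = total - sum(shares)
    # Hand out the remainder, biggest VMs first (stable sort keeps order
    # deterministic for equal sizes).
    for i in sorted(range(len(vcpus)), key=lambda i: -vcpus[i])[:leftover]:
        shares[i] += 1
    return shares

# A 64-vector pool split among VMs with 2, 4, and 2 vCPUs:
# distribute_msix(64, [2, 4, 2]) -> [16, 32, 16]
```

Each resulting share would then be written to that VF's sriov_vf_msix_count before the VF is passed through to its VM.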

> > > Extend PCI sysfs interface with a new callback that allows configuration
> > > of the number of MSI-X vectors for specific SR-IOV VF. This is needed
> > > to optimize the performance of VFs devices by allocating the number of
> > > vectors based on the administrator knowledge of the intended use of the VF.
> > >
> > > This function is applicable for SR-IOV VF because such devices allocate
> > > their MSI-X table before they will run on the VMs and HW can't guess the
> > > right number of vectors, so some devices allocate them statically and equally.
> >
> > This commit log should be clear that this functionality is motivated
> > by *mlx5* behavior.  The description above makes it sound like this is
> > generic PCI spec behavior, and it is not.
> >
> > It may be a reasonable design that conforms to the spec, and we hope
> > the model will be usable by other designs, but it is not required by
> > the spec and AFAIK there is nothing in the spec you can point to as
> > background for this.
> >
> > So don't *remove* the text you have above, but please *add* some
> > preceding background information about how mlx5 works.
> >
> > > 1) The newly added /sys/bus/pci/devices/.../sriov_vf_msix_count
> > > file will be seen for the VFs and it is writable as long as a driver is not
> > > bound to the VF.
> >
> >   This adds /sys/bus/pci/devices/.../sriov_vf_msix_count for VF
> >   devices and is writable ...
> >
> > > The values accepted are:
> > >  * > 0 - this will be number reported by the Table Size in the VF's MSI-X Message
> > >          Control register
> > >  * < 0 - not valid
> > >  * = 0 - will reset to the device default value
> >
> >   = 0 - will reset to a device-specific default value
> >
> > > 2) In order to make management easy, provide new read-only sysfs file that
> > > returns a total number of possible to configure MSI-X vectors.
> >
> >   For PF devices, this adds a read-only
> >   /sys/bus/pci/devices/.../sriov_vf_total_msix file that contains the
> >   total number of MSI-X vectors available for distribution among VFs.
> >
> > Just as in sysfs-bus-pci, this file should be listed first, because
> > you must read it before you can use vf_msix_count.
> 
> No problem, I'll change, just remember that we are talking about the commit
> message; in the Documentation file, the order is exactly as you request.

Yes, I noticed that, thank you!  It will be good to have them in the
same order in both the commit log and the Documentation file.  I think
it will make more sense to readers.

> > > cat /sys/bus/pci/devices/.../sriov_vf_total_msix
> > >   = 0 - feature is not supported
> > >   > 0 - total number of MSI-X vectors available for distribution among the VFs
> > >
> > > Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> > > ---
> > >  Documentation/ABI/testing/sysfs-bus-pci |  28 +++++
> > >  drivers/pci/iov.c                       | 153 ++++++++++++++++++++++++
> > >  include/linux/pci.h                     |  12 ++
> > >  3 files changed, 193 insertions(+)
> 
> <...>
> 
> > > + */
> > > +int pci_enable_vf_overlay(struct pci_dev *dev)
> > > +{
> > > +	struct pci_dev *virtfn;
> > > +	int id, ret;
> > > +
> > > +	if (!dev->is_physfn || !dev->sriov->num_VFs)
> > > +		return 0;
> > > +
> > > +	ret = sysfs_create_files(&dev->dev.kobj, sriov_pf_dev_attrs);
> >
> > But I still don't like the fact that we're calling
> > sysfs_create_files() and sysfs_remove_files() directly.  It makes
> > complication and opportunities for errors.
> 
> It is not different from any other code that we have in the kernel.

It *is* different.  There is a general rule that drivers should not
call sysfs_* [1].  The PCI core is arguably not a "driver," but it is
still true that callers of sysfs_create_files() are very special, and
I'd prefer not to add another one.

> Let's be concrete, can you point to the errors in this code that I
> should fix?

I'm not saying there are current errors; I'm saying the additional
code makes errors possible in future code.  For example, we hope that
other drivers can use these sysfs interfaces, and it's possible they
may not call pci_enable_vf_overlay() or pci_disable_vfs_overlay()
correctly.

Or there may be races in device addition/removal.  We have current
issues in this area, e.g., [2], and they're fairly subtle.  I'm not
saying your patches have these issues; only that extra code makes more
chances for mistakes and it's more work to validate it.

> > I don't see the advantage of creating these files only when the PF
> > driver supports this.  The management tools have to deal with
> > sriov_vf_total_msix == 0 and sriov_vf_msix_count == 0 anyway.
> > Having the sysfs files not be present at all might be slightly
> > prettier to the person running "ls", but I'm not sure the code
> > complication is worth that.
> 
> It is more than "ls"; right now sriov_numvfs is visible without relation
> to the driver, even if the driver doesn't implement ".sriov_configure", which
> is IMHO bad. We didn't want to repeat that.
> 
> Right now, we have many devices that support SR-IOV, but only a small number
> of them are capable of rewriting their VF MSI-X table size. We don't want
> "to punish" them and clutter their sysfs.

I agree, it's clutter, but at least it's just cosmetic clutter (but
I'm willing to hear discussion about why it's more than cosmetic; see
below).

From the management software point of view, I don't think it matters.
That software already needs to deal with files that don't exist (on
old kernels) and files that contain zero (feature not supported or no
vectors are available).

From my point of view, pci_enable_vf_overlay() or
pci_disable_vfs_overlay() are also clutter, at least compared to
static sysfs attributes.

> > I see a hint that Alex might have requested this "only visible when PF
> > driver supports it" functionality, but I don't see that email on
> > linux-pci, so I missed the background.
> 
> First version of this patch had static files solution.
> https://lore.kernel.org/linux-pci/20210103082440.34994-2-leon@kernel.org/#Z30drivers:pci:iov.c

Thanks for the pointer to the patch.  Can you point me to the
discussion about why we should use the "only visible when PF driver
supports it" model?

Bjorn

[1] https://lore.kernel.org/linux-pci/YBmG7qgIDYIveDfX@kroah.com/
[2] https://lore.kernel.org/linux-pci/20200716110423.xtfyb3n6tn5ixedh@pali/
Leon Romanovsky Feb. 16, 2021, 7:58 p.m. UTC | #4
On Tue, Feb 16, 2021 at 10:12:12AM -0600, Bjorn Helgaas wrote:
> Proposed subject:
>
>   PCI/IOV: Add dynamic MSI-X vector assignment sysfs interface
>
> On Tue, Feb 16, 2021 at 09:33:44AM +0200, Leon Romanovsky wrote:
> > On Mon, Feb 15, 2021 at 03:01:06PM -0600, Bjorn Helgaas wrote:
> > > On Tue, Feb 09, 2021 at 03:34:42PM +0200, Leon Romanovsky wrote:
> > > > From: Leon Romanovsky <leonro@nvidia.com>
>
> Here's a draft of the sort of thing I'm looking for here:
>
>   A typical cloud provider SR-IOV use case is to create many VFs for
>   use by guest VMs.  The VFs may not be assigned to a VM until a
>   customer requests a VM of a certain size, e.g., number of CPUs.  A
>   VF may need MSI-X vectors proportional to the number of CPUs in the
>   VM, but there is no standard way to change the number of MSI-X
>   vectors supported by a VF.
>
>   Some Mellanox ConnectX devices support dynamic assignment of MSI-X
>   vectors to SR-IOV VFs.  This can be done by the PF driver after VFs
>   are enabled, and it can be done without affecting VFs that are
>   already in use.  The hardware supports a limited pool of MSI-X
>   vectors that can be assigned to the PF or to individual VFs.  This
>   is device-specific behavior that requires support in the PF driver.
>
>   Add a read-only "sriov_vf_total_msix" sysfs file for the PF and a
>   writable "sriov_vf_msix_count" file for each VF.  Management
>   software may use these to learn how many MSI-X vectors are available
>   and to dynamically assign them to VFs before the VFs are passed
>   through to a VM.
>
>   If the PF driver implements the ->sriov_get_vf_total_msix()
>   callback, "sriov_vf_total_msix" contains the total number of MSI-X
>   vectors available for distribution among VFs.
>
>   If no driver is bound to the VF, writing "N" to
>   "sriov_vf_msix_count" uses the PF driver ->sriov_set_msix_vec_count()
>   callback to assign "N" MSI-X vectors to the VF.  When a VF driver
>   subsequently reads the MSI-X Message Control register, it will see
>   the new Table Size "N".

It is extremely helpful, I will copy/paste it to the commit message. Thanks.

>

<...>

> > > > + */
> > > > +int pci_enable_vf_overlay(struct pci_dev *dev)
> > > > +{
> > > > +	struct pci_dev *virtfn;
> > > > +	int id, ret;
> > > > +
> > > > +	if (!dev->is_physfn || !dev->sriov->num_VFs)
> > > > +		return 0;
> > > > +
> > > > +	ret = sysfs_create_files(&dev->dev.kobj, sriov_pf_dev_attrs);
> > >
> > > But I still don't like the fact that we're calling
> > > sysfs_create_files() and sysfs_remove_files() directly.  It makes
> > > complication and opportunities for errors.
> >
> > It is not different from any other code that we have in the kernel.
>
> It *is* different.  There is a general rule that drivers should not
> call sysfs_* [1].  The PCI core is arguably not a "driver," but it is
> still true that callers of sysfs_create_files() are very special, and
> I'd prefer not to add another one.

PCI for me is a bus, and the bus is the right place to manage sysfs.
But it doesn't matter, we understand each other's positions.

>
> > Let's be concrete, can you point to the errors in this code that I
> > should fix?
>
> I'm not saying there are current errors; I'm saying the additional
> code makes errors possible in future code.  For example, we hope that
> other drivers can use these sysfs interfaces, and it's possible they
> may not call pci_enable_vf_overlay() or pci_disable_vfs_overlay()
> correctly.

If not, we will fix it; we just need to ensure that the sysfs name won't
change, everything else is easy to change.

>
> Or there may be races in device addition/removal.  We have current
> issues in this area, e.g., [2], and they're fairly subtle.  I'm not
> saying your patches have these issues; only that extra code makes more
> chances for mistakes and it's more work to validate it.
>
> > > I don't see the advantage of creating these files only when the PF
> > > driver supports this.  The management tools have to deal with
> > > sriov_vf_total_msix == 0 and sriov_vf_msix_count == 0 anyway.
> > > Having the sysfs files not be present at all might be slightly
> > > prettier to the person running "ls", but I'm not sure the code
> > > complication is worth that.
> >
> > It is more than "ls"; right now sriov_numvfs is visible without relation
> > to the driver, even if the driver doesn't implement ".sriov_configure", which
> > is IMHO bad. We didn't want to repeat that.
> >
> > Right now, we have many devices that support SR-IOV, but only a small number
> > of them are capable of rewriting their VF MSI-X table size. We don't want
> > "to punish" them and clutter their sysfs.
>
> I agree, it's clutter, but at least it's just cosmetic clutter (but
> I'm willing to hear discussion about why it's more than cosmetic; see
> below).

It is more than cosmetic, and IMHO it is related to the driver role.
This feature is advertised, managed and configured by the PF. It is a
very natural request that the PF driver show/hide those sysfs files.

>
> From the management software point of view, I don't think it matters.
> That software already needs to deal with files that don't exist (on
> old kernels) and files that contain zero (feature not supported or no
> vectors are available).

Right, in v0, I used static approach.

>
> From my point of view, pci_enable_vf_overlay() or
> pci_disable_vfs_overlay() are also clutter, at least compared to
> static sysfs attributes.
>
> > > I see a hint that Alex might have requested this "only visible when PF
> > > driver supports it" functionality, but I don't see that email on
> > > linux-pci, so I missed the background.
> >
> > First version of this patch had static files solution.
> > https://lore.kernel.org/linux-pci/20210103082440.34994-2-leon@kernel.org/#Z30drivers:pci:iov.c
>
> Thanks for the pointer to the patch.  Can you point me to the
> discussion about why we should use the "only visible when PF driver
> supports it" model?

It is hard to pinpoint a specific sentence; this discussion is spread
across many emails, and I implemented it in v4.

See this request from Alex:
https://lore.kernel.org/linux-pci/20210114170543.143cce49@omen.home.shazbot.org/
and this is my acknowledgement:
https://lore.kernel.org/linux-pci/20210116082331.GL944463@unreal/

BTW, I asked more than once how these sysfs knobs should be handled in the PCI/core.

Thanks

>
> Bjorn
>
> [1] https://lore.kernel.org/linux-pci/YBmG7qgIDYIveDfX@kroah.com/
> [2] https://lore.kernel.org/linux-pci/20200716110423.xtfyb3n6tn5ixedh@pali/
Jason Gunthorpe Feb. 16, 2021, 8:37 p.m. UTC | #5
On Tue, Feb 16, 2021 at 10:12:12AM -0600, Bjorn Helgaas wrote:
> > >
> > > But I still don't like the fact that we're calling
> > > sysfs_create_files() and sysfs_remove_files() directly.  It makes
> > > complication and opportunities for errors.
> > 
> > It is not different from any other code that we have in the kernel.
> 
> It *is* different.  There is a general rule that drivers should not
> call sysfs_* [1].  The PCI core is arguably not a "driver," but it is
> still true that callers of sysfs_create_files() are very special, and
> I'd prefer not to add another one.

I think the point of [1] is that people should be setting up their sysfs
in the struct device attribute groups/etc before doing device_add() and
allowing the driver core to handle everything. This can be done in a
lot of cases, e.g. we have examples of building a dynamic list of
attributes.

In other cases, calling wrappers like device_create_file() introduces
a bit more type safety, so adding a device_create_files() would be
trivial enough.

Other places in PCI are using sysfs_create_group() (and there are over
400 calls to this function in all sorts of device drivers):

drivers/pci/msi.c:      ret = sysfs_create_groups(&pdev->dev.kobj, msi_irq_groups);
drivers/pci/p2pdma.c:   error = sysfs_create_group(&pdev->dev.kobj, &p2pmem_group);
drivers/pci/pci-label.c:        return sysfs_create_group(&pdev->dev.kobj, &smbios_attr_group);
drivers/pci/pci-label.c:        return sysfs_create_group(&pdev->dev.kobj, &acpi_attr_group);

For post-driver_add() stuff, maybe this should do the same, an
"sriov_vf/" group?
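As a sketch, such a group could mirror the p2pdma one. The attribute
name below is taken from this series, but the named grouping itself is
hypothetical:

```c
/* Hypothetical "sriov_vf" named group; files would land under
 * .../sriov_vf/ instead of directly in the device directory. */
static struct attribute *sriov_vf_attrs[] = {
	&dev_attr_sriov_vf_total_msix.attr,
	NULL,
};

static const struct attribute_group sriov_vf_group = {
	.name  = "sriov_vf",
	.attrs = sriov_vf_attrs,
};
```

The PF path would then pair sysfs_create_group(&pdev->dev.kobj,
&sriov_vf_group) on enable with the matching sysfs_remove_group() on
disable, just like msi.c and p2pdma.c do.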

And a minor cleanup would change these to use device_create_bin_file():

drivers/pci/pci-sysfs.c:        retval = sysfs_create_bin_file(&pdev->dev.kobj, res_attr);
drivers/pci/pci-sysfs.c:                retval = sysfs_create_bin_file(&pdev->dev.kobj, &pcie_config_attr);
drivers/pci/pci-sysfs.c:                retval = sysfs_create_bin_file(&pdev->dev.kobj, &pci_config_attr);
drivers/pci/pci-sysfs.c:                retval = sysfs_create_bin_file(&pdev->dev.kobj, attr);
drivers/pci/vpd.c:      retval = sysfs_create_bin_file(&dev->dev.kobj, attr);

I haven't worked out why pci_create_firmware_label_files() and all of
this needs to be after device_add() though.. Would be slick to put
that in the normal attribute list - we've got some examples of dynamic
pre-device_add() attribute lists in the tree (eg tpm, rdma) that work
nicely.

Jason
Bjorn Helgaas Feb. 17, 2021, 6:02 p.m. UTC | #6
[+cc Greg in case he wants to chime in on the sysfs discussion.
TL;DR: we're trying to add/remove sysfs files when a PCI driver that
supports certain callbacks binds or unbinds; series at
https://lore.kernel.org/r/20210209133445.700225-1-leon@kernel.org]

On Tue, Feb 16, 2021 at 09:58:25PM +0200, Leon Romanovsky wrote:
> On Tue, Feb 16, 2021 at 10:12:12AM -0600, Bjorn Helgaas wrote:
> > On Tue, Feb 16, 2021 at 09:33:44AM +0200, Leon Romanovsky wrote:
> > > On Mon, Feb 15, 2021 at 03:01:06PM -0600, Bjorn Helgaas wrote:
> > > > On Tue, Feb 09, 2021 at 03:34:42PM +0200, Leon Romanovsky wrote:
> > > > > From: Leon Romanovsky <leonro@nvidia.com>

> > > > > +int pci_enable_vf_overlay(struct pci_dev *dev)
> > > > > +{
> > > > > +	struct pci_dev *virtfn;
> > > > > +	int id, ret;
> > > > > +
> > > > > +	if (!dev->is_physfn || !dev->sriov->num_VFs)
> > > > > +		return 0;
> > > > > +
> > > > > +	ret = sysfs_create_files(&dev->dev.kobj, sriov_pf_dev_attrs);
> > > >
> > > > But I still don't like the fact that we're calling
> > > > sysfs_create_files() and sysfs_remove_files() directly.  It makes
> > > > complication and opportunities for errors.
> > >
> > > It is not different from any other code that we have in the kernel.
> >
> > It *is* different.  There is a general rule that drivers should not
> > call sysfs_* [1].  The PCI core is arguably not a "driver," but it is
> > still true that callers of sysfs_create_files() are very special, and
> > I'd prefer not to add another one.
> 
> PCI for me is a bus, and the bus is the right place to manage sysfs.
> But it doesn't matter, we understand each other's positions.
> 
> > > Let's be concrete, can you point to the errors in this code that I
> > > should fix?
> >
> > I'm not saying there are current errors; I'm saying the additional
> > code makes errors possible in future code.  For example, we hope that
> > other drivers can use these sysfs interfaces, and it's possible they
> > may not call pci_enable_vf_overlay() or pci_disable_vfs_overlay()
> > correctly.
> 
> If not, we will fix it; we just need to ensure that the sysfs name won't
> change, everything else is easy to change.
> 
> > Or there may be races in device addition/removal.  We have current
> > issues in this area, e.g., [2], and they're fairly subtle.  I'm not
> > saying your patches have these issues; only that extra code makes more
> > chances for mistakes and it's more work to validate it.
> >
> > > > I don't see the advantage of creating these files only when
> > > > the PF driver supports this.  The management tools have to
> > > > deal with sriov_vf_total_msix == 0 and sriov_vf_msix_count ==
> > > > 0 anyway.  Having the sysfs files not be present at all might
> > > > be slightly prettier to the person running "ls", but I'm not
> > > > sure the code complication is worth that.
> > >
> > > It is more than "ls"; right now sriov_numvfs is visible without
> > > relation to the driver, even if the driver doesn't implement
> > > ".sriov_configure", which IMHO is bad. We didn't want to repeat that.
> > >
> > > Right now, we have many devices that support SR-IOV, but only a small
> > > number of them are capable of rewriting their VF MSI-X table size.
> > > We don't want "to punish" them and clutter their sysfs.
> >
> > I agree, it's clutter, but at least it's just cosmetic clutter
> > (but I'm willing to hear discussion about why it's more than
> > cosmetic; see below).
> 
> It is more than cosmetic, and IMHO it is related to the driver role.
> This feature is advertised, managed and configured by the PF. It is a
> very natural request that the PF driver show/hide those sysfs files.

Agreed, it's natural if the PF driver adds/removes those files.  But I
don't think it's *essential*, and they *could* be static because of
this:

> > From the management software point of view, I don't think it matters.
> > That software already needs to deal with files that don't exist (on
> > old kernels) and files that contain zero (feature not supported or no
> > vectors are available).

I wonder if sysfs_update_group() would let us have our cake and eat
it, too?  Maybe we could define these files as static attributes and
call sysfs_update_group() when the PF driver binds or unbinds?

Makes me wonder if the device core could call sysfs_update_group()
when binding/unbinding drivers.  But there are only a few existing
callers, and it looks like none of them are for the bind/unbind
situation, so maybe that would be pointless.
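A rough sketch of that cake-and-eat-it idea, where
pci_vf_overlay_supported() and the sriov_vf_dev_attrs[] list are
hypothetical stand-ins for "a capable PF driver is bound" and the static
attribute list from this series:

```c
/* Static attributes, registered once at device creation; visibility is
 * decided by .is_visible rather than by creating/removing files. */
static umode_t sriov_vf_attrs_visible(struct kobject *kobj,
				      struct attribute *a, int n)
{
	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));

	/* pci_vf_overlay_supported() is hypothetical */
	return pci_vf_overlay_supported(pdev) ? a->mode : 0;
}

static const struct attribute_group sriov_vf_group = {
	.attrs      = sriov_vf_dev_attrs,
	.is_visible = sriov_vf_attrs_visible,
};
```

Then, when the PF driver binds or unbinds, something would call
sysfs_update_group(&pdev->dev.kobj, &sriov_vf_group) to re-evaluate the
visibility of the whole group.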

> > From my point of view, pci_enable_vf_overlay() or
> > pci_disable_vfs_overlay() are also clutter, at least compared to
> > static sysfs attributes.
> >
> > > > I see a hint that Alex might have requested this "only visible when PF
> > > > driver supports it" functionality, but I don't see that email on
> > > > linux-pci, so I missed the background.
> > >
> > > First version of this patch had static files solution.
> > > https://lore.kernel.org/linux-pci/20210103082440.34994-2-leon@kernel.org/#Z30drivers:pci:iov.c
> >
> > Thanks for the pointer to the patch.  Can you point me to the
> > discussion about why we should use the "only visible when PF driver
> > supports it" model?
> 
> It is hard to pinpoint a specific sentence; this discussion is spread
> across many emails, and I implemented it in v4.
> 
> See this request from Alex:
> https://lore.kernel.org/linux-pci/20210114170543.143cce49@omen.home.shazbot.org/
> and this is my acknowledgement:
> https://lore.kernel.org/linux-pci/20210116082331.GL944463@unreal/
> 
> BTW, I asked more than once how these sysfs knobs should be handled
> in the PCI/core.

Thanks for the pointers.  This is the first instance I can think of
where we want to create PCI core sysfs files based on a driver
binding, so there really isn't a precedent.

> > [1] https://lore.kernel.org/linux-pci/YBmG7qgIDYIveDfX@kroah.com/
> > [2] https://lore.kernel.org/linux-pci/20200716110423.xtfyb3n6tn5ixedh@pali/
Jason Gunthorpe Feb. 17, 2021, 7:25 p.m. UTC | #7
On Wed, Feb 17, 2021 at 12:02:39PM -0600, Bjorn Helgaas wrote:

> > BTW, I asked more than once how these sysfs knobs should be handled
> > in the PCI/core.
> 
> Thanks for the pointers.  This is the first instance I can think of
> where we want to create PCI core sysfs files based on a driver
> binding, so there really isn't a precedent.

The MSI stuff does it today, doesn't it? eg:

virtblk_probe (this is a driver bind)
  init_vq
   virtio_find_vqs
    vp_modern_find_vqs
     vp_find_vqs
      vp_find_vqs_msix
       vp_request_msix_vectors
        pci_alloc_irq_vectors_affinity
         __pci_enable_msi_range
          msi_capability_init
	   populate_msi_sysfs
	    	ret = sysfs_create_groups(&pdev->dev.kobj, msi_irq_groups);

And the sysfs is removed during pci_disable_msi(), also called by the
driver

Jason
Bjorn Helgaas Feb. 17, 2021, 8:28 p.m. UTC | #8
On Wed, Feb 17, 2021 at 03:25:22PM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 17, 2021 at 12:02:39PM -0600, Bjorn Helgaas wrote:
> 
> > > BTW, I asked more than once how these sysfs knobs should be handled
> > > in the PCI/core.
> > 
> > Thanks for the pointers.  This is the first instance I can think of
> > where we want to create PCI core sysfs files based on a driver
> > binding, so there really isn't a precedent.
> 
> The MSI stuff does it today, doesn't it? eg:
> 
> virtblk_probe (this is a driver bind)
>   init_vq
>    virtio_find_vqs
>     vp_modern_find_vqs
>      vp_find_vqs
>       vp_find_vqs_msix
>        vp_request_msix_vectors
>         pci_alloc_irq_vectors_affinity
>          __pci_enable_msi_range
>           msi_capability_init
> 	   populate_msi_sysfs
> 	    	ret = sysfs_create_groups(&pdev->dev.kobj, msi_irq_groups);
> 
> And the sysfs is removed during pci_disable_msi(), also called by the
> driver

Yes, you're right, I didn't notice that one.

I'm not quite convinced that we clean up correctly in all cases --
pci_disable_msix(), pci_disable_msi(), pci_free_irq_vectors(),
pcim_release(), etc are called by several drivers, but in my quick
look I didn't see a guaranteed-to-be-called path to the cleanup during
driver unbind.  I probably just missed it.
Jason Gunthorpe Feb. 17, 2021, 11:52 p.m. UTC | #9
On Wed, Feb 17, 2021 at 02:28:35PM -0600, Bjorn Helgaas wrote:
> On Wed, Feb 17, 2021 at 03:25:22PM -0400, Jason Gunthorpe wrote:
> > On Wed, Feb 17, 2021 at 12:02:39PM -0600, Bjorn Helgaas wrote:
> > 
> > > > BTW, I asked more than once how these sysfs knobs should be handled
> > > > in the PCI/core.
> > > 
> > > Thanks for the pointers.  This is the first instance I can think of
> > > where we want to create PCI core sysfs files based on a driver
> > > binding, so there really isn't a precedent.
> > 
> > The MSI stuff does it today, doesn't it? eg:
> > 
> > virtblk_probe (this is a driver bind)
> >   init_vq
> >    virtio_find_vqs
> >     vp_modern_find_vqs
> >      vp_find_vqs
> >       vp_find_vqs_msix
> >        vp_request_msix_vectors
> >         pci_alloc_irq_vectors_affinity
> >          __pci_enable_msi_range
> >           msi_capability_init
> > 	   populate_msi_sysfs
> > 	    	ret = sysfs_create_groups(&pdev->dev.kobj, msi_irq_groups);
> > 
> > And the sysfs is removed during pci_disable_msi(), also called by the
> > driver
> 
> Yes, you're right, I didn't notice that one.
> 
> I'm not quite convinced that we clean up correctly in all cases --
> pci_disable_msix(), pci_disable_msi(), pci_free_irq_vectors(),
> pcim_release(), etc are called by several drivers, but in my quick
> look I didn't see a guaranteed-to-be-called path to the cleanup during
> driver unbind.  I probably just missed it.
 
I think the contract is the driver has to pair the msi enable with the
msi disable on its own? It is very similar to what is happening here.
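For concreteness, that contract is the usual probe/remove pairing
(foo_* names are hypothetical; pci_free_irq_vectors() is what ends up
tearing the MSI sysfs entries back down):

```c
/* Sketch: the driver that enables MSI-X in probe is responsible for
 * disabling it (and thus removing the msi sysfs files) in remove. */
static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int nvec;

	/* creates the msi_irqs/ sysfs entries via msi_capability_init() */
	nvec = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_MSIX);
	if (nvec < 0)
		return nvec;
	/* ... request IRQs, set up the device ... */
	return 0;
}

static void foo_remove(struct pci_dev *pdev)
{
	/* ... free IRQs ... */
	pci_free_irq_vectors(pdev);	/* removes the msi sysfs entries */
}
```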

Probably there are bugs in drivers on error paths, but there are
always bugs in drivers on error paths..

Jason
Leon Romanovsky Feb. 18, 2021, 10:15 a.m. UTC | #10
On Wed, Feb 17, 2021 at 12:02:39PM -0600, Bjorn Helgaas wrote:
> [+cc Greg in case he wants to chime in on the sysfs discussion.
> TL;DR: we're trying to add/remove sysfs files when a PCI driver that
> supports certain callbacks binds or unbinds; series at
> https://lore.kernel.org/r/20210209133445.700225-1-leon@kernel.org]
>
> On Tue, Feb 16, 2021 at 09:58:25PM +0200, Leon Romanovsky wrote:
> > On Tue, Feb 16, 2021 at 10:12:12AM -0600, Bjorn Helgaas wrote:
> > > On Tue, Feb 16, 2021 at 09:33:44AM +0200, Leon Romanovsky wrote:
> > > > On Mon, Feb 15, 2021 at 03:01:06PM -0600, Bjorn Helgaas wrote:
> > > > > On Tue, Feb 09, 2021 at 03:34:42PM +0200, Leon Romanovsky wrote:
> > > > > > From: Leon Romanovsky <leonro@nvidia.com>
>
> > > > > > +int pci_enable_vf_overlay(struct pci_dev *dev)
> > > > > > +{
> > > > > > +	struct pci_dev *virtfn;
> > > > > > +	int id, ret;
> > > > > > +
> > > > > > +	if (!dev->is_physfn || !dev->sriov->num_VFs)
> > > > > > +		return 0;
> > > > > > +
> > > > > > +	ret = sysfs_create_files(&dev->dev.kobj, sriov_pf_dev_attrs);
> > > > >
> > > > > But I still don't like the fact that we're calling
> > > > > sysfs_create_files() and sysfs_remove_files() directly.  It makes
> > > > > complication and opportunities for errors.
> > > >
> > > > It is not different from any other code that we have in the kernel.
> > >
> > > It *is* different.  There is a general rule that drivers should not
> > > call sysfs_* [1].  The PCI core is arguably not a "driver," but it is
> > > still true that callers of sysfs_create_files() are very special, and
> > > I'd prefer not to add another one.
> >
> > PCI for me is a bus, and the bus is the right place to manage sysfs.
> > But it doesn't matter, we understand each other's positions.
> >
> > > > Let's be concrete, can you point to the errors in this code that I
> > > > should fix?
> > >
> > > I'm not saying there are current errors; I'm saying the additional
> > > code makes errors possible in future code.  For example, we hope that
> > > other drivers can use these sysfs interfaces, and it's possible they
> > > may not call pci_enable_vf_overlay() or pci_disable_vfs_overlay()
> > > correctly.
> >
> > If not, we will fix it; we just need to ensure that the sysfs name won't
> > change, everything else is easy to change.
> >
> > > Or there may be races in device addition/removal.  We have current
> > > issues in this area, e.g., [2], and they're fairly subtle.  I'm not
> > > saying your patches have these issues; only that extra code makes more
> > > chances for mistakes and it's more work to validate it.
> > >
> > > > > I don't see the advantage of creating these files only when
> > > > > the PF driver supports this.  The management tools have to
> > > > > deal with sriov_vf_total_msix == 0 and sriov_vf_msix_count ==
> > > > > 0 anyway.  Having the sysfs files not be present at all might
> > > > > be slightly prettier to the person running "ls", but I'm not
> > > > > sure the code complication is worth that.
> > > >
> > > > It is more than "ls"; right now sriov_numvfs is visible without
> > > > relation to the driver, even if the driver doesn't implement
> > > > ".sriov_configure", which IMHO is bad. We didn't want to repeat that.
> > > >
> > > > Right now, we have many devices that support SR-IOV, but only a small
> > > > number of them are capable of rewriting their VF MSI-X table size.
> > > > We don't want "to punish" them and clutter their sysfs.
> > >
> > > I agree, it's clutter, but at least it's just cosmetic clutter
> > > (but I'm willing to hear discussion about why it's more than
> > > cosmetic; see below).
> >
> > It is more than cosmetic, and IMHO it is related to the driver role.
> > This feature is advertised, managed and configured by the PF. It is a
> > very natural request that the PF driver show/hide those sysfs files.
>
> Agreed, it's natural if the PF driver adds/removes those files.  But I
> don't think it's *essential*, and they *could* be static because of
> this:
>
> > > From the management software point of view, I don't think it matters.
> > > That software already needs to deal with files that don't exist (on
> > > old kernels) and files that contain zero (feature not supported or no
> > > vectors are available).
>
> I wonder if sysfs_update_group() would let us have our cake and eat
> it, too?  Maybe we could define these files as static attributes and
> call sysfs_update_group() when the PF driver binds or unbinds?
>
> Makes me wonder if the device core could call sysfs_update_group()
> when binding/unbinding drivers.  But there are only a few existing
> callers, and it looks like none of them are for the bind/unbind
> situation, so maybe that would be pointless.

Also, it will not be an easy task to do in the driver core. Our
attributes need to be visible if a driver is bound -> we would call
sysfs_update_group() after the ->bind() callback. It means that in the
unwind path, we would call sysfs_update_group() before ->unbind(), while
the driver is still bound. So a check like is_supported() for "driver
exists or not" won't be possible.
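The ordering problem can be sketched like this (hand-drawn, not real
driver-core code):

```
bind:    driver->probe(dev)        /* driver becomes bound                */
         sysfs_update_group(...)   /* is_visible sees a bound driver: OK */

unbind:  sysfs_update_group(...)   /* would run BEFORE ->remove(), so    */
         driver->remove(dev)       /* is_visible still sees a bound      */
                                   /* driver and the files never hide    */
```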

So I tried something similar in the bus/pci code and it was super hacky -
SR-IOV code in the generic PCI path.

BTW, I found the sentence which sent me to do the directory layout.
https://lore.kernel.org/linux-pci/20210110150727.1965295-1-leon@kernel.org/T/#u
-------------------------------------------------------------------------------
> In addition you could probably even create a directory on the PF with
> the new control you had added for getting the master count as well as
> look at adding symlinks to the VF files so that you could manage all
> of the resources in one spot. That would result in the controls being
> nicely organized and easy to use.

Thanks for your inputs.

I'll try different variants offline and will post v4 soon.
---------------------------------------------------------------------------------
>
> > > From my point of view, pci_enable_vf_overlay() or
> > > pci_disable_vfs_overlay() are also clutter, at least compared to
> > > static sysfs attributes.
> > >
> > > > > I see a hint that Alex might have requested this "only visible when PF
> > > > > driver supports it" functionality, but I don't see that email on
> > > > > linux-pci, so I missed the background.
> > > >
> > > > First version of this patch had static files solution.
> > > > https://lore.kernel.org/linux-pci/20210103082440.34994-2-leon@kernel.org/#Z30drivers:pci:iov.c
> > >
> > > Thanks for the pointer to the patch.  Can you point me to the
> > > discussion about why we should use the "only visible when PF driver
> > > supports it" model?
> >
> > It is hard to pinpoint a specific sentence; this discussion is spread
> > across many emails, and I implemented it in v4.
> >
> > See this request from Alex:
> > https://lore.kernel.org/linux-pci/20210114170543.143cce49@omen.home.shazbot.org/
> > and this is my acknowledgement:
> > https://lore.kernel.org/linux-pci/20210116082331.GL944463@unreal/
> >
> > BTW, I asked more than once how these sysfs knobs should be handled
> > in the PCI/core.
>
> Thanks for the pointers.  This is the first instance I can think of
> where we want to create PCI core sysfs files based on a driver
> binding, so there really isn't a precedent.

It is always nice to be first :).

>
> > > [1] https://lore.kernel.org/linux-pci/YBmG7qgIDYIveDfX@kroah.com/
> > > [2] https://lore.kernel.org/linux-pci/20200716110423.xtfyb3n6tn5ixedh@pali/
Bjorn Helgaas Feb. 18, 2021, 10:39 p.m. UTC | #11
On Thu, Feb 18, 2021 at 12:15:51PM +0200, Leon Romanovsky wrote:
> On Wed, Feb 17, 2021 at 12:02:39PM -0600, Bjorn Helgaas wrote:
> > [+cc Greg in case he wants to chime in on the sysfs discussion.
> > TL;DR: we're trying to add/remove sysfs files when a PCI driver that
> > supports certain callbacks binds or unbinds; series at
> > https://lore.kernel.org/r/20210209133445.700225-1-leon@kernel.org]
> >
> > On Tue, Feb 16, 2021 at 09:58:25PM +0200, Leon Romanovsky wrote:
> > > On Tue, Feb 16, 2021 at 10:12:12AM -0600, Bjorn Helgaas wrote:
> > > > On Tue, Feb 16, 2021 at 09:33:44AM +0200, Leon Romanovsky wrote:
> > > > > On Mon, Feb 15, 2021 at 03:01:06PM -0600, Bjorn Helgaas wrote:
> > > > > > On Tue, Feb 09, 2021 at 03:34:42PM +0200, Leon Romanovsky wrote:
> > > > > > > From: Leon Romanovsky <leonro@nvidia.com>
> >
> > > > > > > +int pci_enable_vf_overlay(struct pci_dev *dev)
> > > > > > > +{
> > > > > > > +	struct pci_dev *virtfn;
> > > > > > > +	int id, ret;
> > > > > > > +
> > > > > > > +	if (!dev->is_physfn || !dev->sriov->num_VFs)
> > > > > > > +		return 0;
> > > > > > > +
> > > > > > > +	ret = sysfs_create_files(&dev->dev.kobj, sriov_pf_dev_attrs);
> > > > > >
> > > > > > But I still don't like the fact that we're calling
> > > > > > sysfs_create_files() and sysfs_remove_files() directly.  It makes
> > > > > > complication and opportunities for errors.
> > > > >
> > > > > It is not different from any other code that we have in the kernel.
> > > >
> > > > It *is* different.  There is a general rule that drivers should not
> > > > call sysfs_* [1].  The PCI core is arguably not a "driver," but it is
> > > > still true that callers of sysfs_create_files() are very special, and
> > > > I'd prefer not to add another one.
> > >
> > > PCI for me is a bus, and the bus is the right place to manage sysfs.
> > > But it doesn't matter, we understand each other's positions.
> > >
> > > > > Let's be concrete, can you point to the errors in this code that I
> > > > > should fix?
> > > >
> > > > I'm not saying there are current errors; I'm saying the additional
> > > > code makes errors possible in future code.  For example, we hope that
> > > > other drivers can use these sysfs interfaces, and it's possible they
> > > > may not call pci_enable_vf_overlay() or pci_disable_vfs_overlay()
> > > > correctly.
> > >
> > > If not, we will fix it; we just need to ensure that the sysfs name won't
> > > change, everything else is easy to change.
> > >
> > > > Or there may be races in device addition/removal.  We have current
> > > > issues in this area, e.g., [2], and they're fairly subtle.  I'm not
> > > > saying your patches have these issues; only that extra code makes more
> > > > chances for mistakes and it's more work to validate it.
> > > >
> > > > > > I don't see the advantage of creating these files only when
> > > > > > the PF driver supports this.  The management tools have to
> > > > > > deal with sriov_vf_total_msix == 0 and sriov_vf_msix_count ==
> > > > > > 0 anyway.  Having the sysfs files not be present at all might
> > > > > > be slightly prettier to the person running "ls", but I'm not
> > > > > > sure the code complication is worth that.
> > > > >
> > > > > It is more than "ls"; right now sriov_numvfs is visible without
> > > > > relation to the driver, even if the driver doesn't implement
> > > > > ".sriov_configure", which IMHO is bad. We didn't want to repeat that.
> > > > >
> > > > > Right now, we have many devices that support SR-IOV, but only a small
> > > > > number of them are capable of rewriting their VF MSI-X table size.
> > > > > We don't want "to punish" them and clutter their sysfs.
> > > >
> > > > I agree, it's clutter, but at least it's just cosmetic clutter
> > > > (but I'm willing to hear discussion about why it's more than
> > > > cosmetic; see below).
> > >
> > > It is more than cosmetic, and IMHO it is related to the driver role.
> > > This feature is advertised, managed and configured by the PF. It is a
> > > very natural request that the PF driver show/hide those sysfs files.
> >
> > Agreed, it's natural if the PF driver adds/removes those files.  But I
> > don't think it's *essential*, and they *could* be static because of
> > this:
> >
> > > > From the management software point of view, I don't think it matters.
> > > > That software already needs to deal with files that don't exist (on
> > > > old kernels) and files that contain zero (feature not supported or no
> > > > vectors are available).
> >
> > I wonder if sysfs_update_group() would let us have our cake and eat
> > it, too?  Maybe we could define these files as static attributes and
> > call sysfs_update_group() when the PF driver binds or unbinds?
> >
> > Makes me wonder if the device core could call sysfs_update_group()
> > when binding/unbinding drivers.  But there are only a few existing
> > callers, and it looks like none of them are for the bind/unbind
> > situation, so maybe that would be pointless.
> 
> Also, it will not be an easy task to do in the driver core. Our
> attributes need to be visible if a driver is bound -> we would call
> sysfs_update_group() after the ->bind() callback. It means that in the
> unwind path, we would call sysfs_update_group() before ->unbind(),
> while the driver is still bound. So a check like is_supported() for
> "driver exists or not" won't be possible.

Poking around some more, I found .dev_groups, which might be
applicable?  The test patch below applies to v5.11 and makes the "bh"
file visible in devices bound to the uhci_hcd driver if the function
number is odd.

This thread has more details and some samples:
https://lore.kernel.org/lkml/20190731124349.4474-1-gregkh@linuxfoundation.org/

On qemu, with 00:1a.[012] and 00:1d.[012] set up as uhci_hcd devices:

  root@ubuntu:~# ls /sys/bus/pci/drivers/uhci_hcd
  0000:00:1a.0  0000:00:1a.2  0000:00:1d.1  bind    new_id     uevent
  0000:00:1a.1  0000:00:1d.0  0000:00:1d.2  module  remove_id  unbind
  root@ubuntu:~# grep . /sys/devices/pci0000:00/0000:00:*/bh /dev/null
  /sys/devices/pci0000:00/0000:00:1a.1/bh:hi bjorn
  /sys/devices/pci0000:00/0000:00:1d.1/bh:hi bjorn

diff --git a/drivers/usb/host/uhci-pci.c b/drivers/usb/host/uhci-pci.c
index 9b88745d247f..17ea5bf0dab0 100644
--- a/drivers/usb/host/uhci-pci.c
+++ b/drivers/usb/host/uhci-pci.c
@@ -297,6 +297,38 @@ static int uhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	return usb_hcd_pci_probe(dev, id, &uhci_driver);
 }
 
+static ssize_t bh_show(struct device *dev, struct device_attribute *attr,
+			char *buf)
+{
+	return snprintf(buf, PAGE_SIZE, "hi bjorn\n");
+}
+static DEVICE_ATTR_RO(bh);
+
+static umode_t bh_is_visible(struct kobject *kobj, struct attribute *a, int n)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct pci_dev *pdev = to_pci_dev(dev);
+	umode_t mode = (PCI_FUNC(pdev->devfn) % 2) ? 0444 : 0;
+
+	dev_info(dev, "%s mode %o\n", __func__, mode);
+	return mode;
+}
+
+static struct attribute *bh_attrs[] = {
+	&dev_attr_bh.attr,
+	NULL,
+};
+
+static const struct attribute_group bh_group = {
+	.attrs = bh_attrs,
+	.is_visible = bh_is_visible,
+};
+
+static const struct attribute_group *bh_groups[] = {
+	&bh_group,
+	NULL
+};
+
 static struct pci_driver uhci_pci_driver = {
 	.name =		hcd_name,
 	.id_table =	uhci_pci_ids,
@@ -307,7 +339,8 @@ static struct pci_driver uhci_pci_driver = {
 
 #ifdef CONFIG_PM
 	.driver =	{
-		.pm =	&usb_hcd_pci_pm_ops
+		.pm =	&usb_hcd_pci_pm_ops,
+		.dev_groups = bh_groups,
 	},
 #endif
 };
Leon Romanovsky Feb. 19, 2021, 7:52 a.m. UTC | #12
On Thu, Feb 18, 2021 at 04:39:50PM -0600, Bjorn Helgaas wrote:
> On Thu, Feb 18, 2021 at 12:15:51PM +0200, Leon Romanovsky wrote:
> > On Wed, Feb 17, 2021 at 12:02:39PM -0600, Bjorn Helgaas wrote:
> > > [+cc Greg in case he wants to chime in on the sysfs discussion.
> > > TL;DR: we're trying to add/remove sysfs files when a PCI driver that
> > > supports certain callbacks binds or unbinds; series at
> > > https://lore.kernel.org/r/20210209133445.700225-1-leon@kernel.org]
> > >
> > > On Tue, Feb 16, 2021 at 09:58:25PM +0200, Leon Romanovsky wrote:
> > > > On Tue, Feb 16, 2021 at 10:12:12AM -0600, Bjorn Helgaas wrote:
> > > > > On Tue, Feb 16, 2021 at 09:33:44AM +0200, Leon Romanovsky wrote:
> > > > > > On Mon, Feb 15, 2021 at 03:01:06PM -0600, Bjorn Helgaas wrote:
> > > > > > > On Tue, Feb 09, 2021 at 03:34:42PM +0200, Leon Romanovsky wrote:
> > > > > > > > From: Leon Romanovsky <leonro@nvidia.com>
> > >
> > > > > > > > +int pci_enable_vf_overlay(struct pci_dev *dev)
> > > > > > > > +{
> > > > > > > > +	struct pci_dev *virtfn;
> > > > > > > > +	int id, ret;
> > > > > > > > +
> > > > > > > > +	if (!dev->is_physfn || !dev->sriov->num_VFs)
> > > > > > > > +		return 0;
> > > > > > > > +
> > > > > > > > +	ret = sysfs_create_files(&dev->dev.kobj, sriov_pf_dev_attrs);
> > > > > > >
> > > > > > > But I still don't like the fact that we're calling
> > > > > > > sysfs_create_files() and sysfs_remove_files() directly.  It makes
> > > > > > > complication and opportunities for errors.
> > > > > >
> > > > > > It is not different from any other code that we have in the kernel.
> > > > >
> > > > > It *is* different.  There is a general rule that drivers should not
> > > > > call sysfs_* [1].  The PCI core is arguably not a "driver," but it is
> > > > > still true that callers of sysfs_create_files() are very special, and
> > > > > I'd prefer not to add another one.
> > > >
> > > > PCI for me is a bus, and the bus is the right place to manage sysfs.
> > > > But it doesn't matter, we understand each other's positions.
> > > >
> > > > > > Let's be concrete, can you point to the errors in this code that I
> > > > > > should fix?
> > > > >
> > > > > I'm not saying there are current errors; I'm saying the additional
> > > > > code makes errors possible in future code.  For example, we hope that
> > > > > other drivers can use these sysfs interfaces, and it's possible they
> > > > > may not call pci_enable_vf_overlay() or pci_disable_vfs_overlay()
> > > > > correctly.
> > > >
> > > > If not, we will fix it; we just need to ensure that the sysfs name
> > > > won't change, everything else is easy to change.
> > > >
> > > > > Or there may be races in device addition/removal.  We have current
> > > > > issues in this area, e.g., [2], and they're fairly subtle.  I'm not
> > > > > saying your patches have these issues; only that extra code makes more
> > > > > chances for mistakes and it's more work to validate it.
> > > > >
> > > > > > > I don't see the advantage of creating these files only when
> > > > > > > the PF driver supports this.  The management tools have to
> > > > > > > deal with sriov_vf_total_msix == 0 and sriov_vf_msix_count ==
> > > > > > > 0 anyway.  Having the sysfs files not be present at all might
> > > > > > > be slightly prettier to the person running "ls", but I'm not
> > > > > > > sure the code complication is worth that.
> > > > > >
> > > > > > It is more than "ls"; right now sriov_numvfs is visible without
> > > > > > relation to the driver, even if the driver doesn't implement
> > > > > > ".sriov_configure", which IMHO is bad. We didn't want to repeat that.
> > > > > >
> > > > > > Right now, we have many devices that support SR-IOV, but only a small
> > > > > > number of them are capable of rewriting their VF MSI-X table size.
> > > > > > We don't want "to punish" them and clutter their sysfs.
> > > > >
> > > > > I agree, it's clutter, but at least it's just cosmetic clutter
> > > > > (but I'm willing to hear discussion about why it's more than
> > > > > cosmetic; see below).
> > > >
> > > > It is more than cosmetic and IMHO it is related to the driver role.
> > > > This feature is advertised, managed and configured by the PF. It is a
> > > > very natural request that the PF will show/hide those sysfs files.
> > >
> > > Agreed, it's natural if the PF driver adds/removes those files.  But I
> > > don't think it's *essential*, and they *could* be static because of
> > > this:
> > >
> > > > > From the management software point of view, I don't think it matters.
> > > > > That software already needs to deal with files that don't exist (on
> > > > > old kernels) and files that contain zero (feature not supported or no
> > > > > vectors are available).
> > >
> > > I wonder if sysfs_update_group() would let us have our cake and eat
> > > it, too?  Maybe we could define these files as static attributes and
> > > call sysfs_update_group() when the PF driver binds or unbinds?
> > >
> > > Makes me wonder if the device core could call sysfs_update_group()
> > > when binding/unbinding drivers.  But there are only a few existing
> > > callers, and it looks like none of them are for the bind/unbind
> > > situation, so maybe that would be pointless.
> >
> > Also, it will not be an easy task to do in driver/core. Our
> > attributes need to be visible while the driver is bound -> we would
> > call sysfs_update_group() after the ->bind() callback. It means that
> > on unwind, we would call sysfs_update_group() before ->unbind(), while
> > the driver is still bound. So checking whether the driver implements
> > is_supported() or not won't be possible.
>
> Poking around some more, I found .dev_groups, which might be
> applicable?  The test patch below applies to v5.11 and makes the "bh"
> file visible in devices bound to the uhci_hcd driver if the function
> number is odd.

This solution can be applicable for generic drivers, where we can afford
to have custom sysfs files per driver. In our case, we are talking about
a hardware device driver. Both RDMA and netdev are against allowing such
drivers to create their own sysfs. It would be a real nightmare to have
different names/layout/output for the same functionality.

This .dev_groups approach moves responsibility over sysfs to the drivers,
and that is a no-go for us.

Another problem with this approach is the addition of VFs: not only will
every driver start to manage its own sysfs, but it will also need to
iterate over the PCI bus or internal lists to find the VFs, because we
want to create .set_msix_vec on the VFs after the PF is bound.

So instead of one controlled place, we will find ourselves with many
genius implementations of the same thing in the drivers.

Bjorn, we really do a standard enable/disable flow with our overlay thing.

Thanks

>
> This thread has more details and some samples:
> https://lore.kernel.org/lkml/20190731124349.4474-1-gregkh@linuxfoundation.org/
>
> On qemu, with 00:1a.[012] and 00:1d.[012] set up as uhci_hcd devices:
>
>   root@ubuntu:~# ls /sys/bus/pci/drivers/uhci_hcd
>   0000:00:1a.0  0000:00:1a.2  0000:00:1d.1  bind    new_id     uevent
>   0000:00:1a.1  0000:00:1d.0  0000:00:1d.2  module  remove_id  unbind
>   root@ubuntu:~# grep . /sys/devices/pci0000:00/0000:00:*/bh /dev/null
>   /sys/devices/pci0000:00/0000:00:1a.1/bh:hi bjorn
>   /sys/devices/pci0000:00/0000:00:1d.1/bh:hi bjorn
>
> diff --git a/drivers/usb/host/uhci-pci.c b/drivers/usb/host/uhci-pci.c
> index 9b88745d247f..17ea5bf0dab0 100644
> --- a/drivers/usb/host/uhci-pci.c
> +++ b/drivers/usb/host/uhci-pci.c
> @@ -297,6 +297,38 @@ static int uhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
>  	return usb_hcd_pci_probe(dev, id, &uhci_driver);
>  }
>
> +static ssize_t bh_show(struct device *dev, struct device_attribute *attr,
> +			char *buf)
> +{
> +	return snprintf(buf, PAGE_SIZE, "hi bjorn\n");
> +}
> +static DEVICE_ATTR_RO(bh);
> +
> +static umode_t bh_is_visible(struct kobject *kobj, struct attribute *a, int n)
> +{
> +	struct device *dev = kobj_to_dev(kobj);
> +	struct pci_dev *pdev = to_pci_dev(dev);
> +	umode_t mode = (PCI_FUNC(pdev->devfn) % 2) ? 0444 : 0;
> +
> +	dev_info(dev, "%s mode %o\n", __func__, mode);
> +	return mode;
> +}
> +
> +static struct attribute *bh_attrs[] = {
> +	&dev_attr_bh.attr,
> +	NULL,
> +};
> +
> +static const struct attribute_group bh_group = {
> +	.attrs = bh_attrs,
> +	.is_visible = bh_is_visible,
> +};
> +
> +static const struct attribute_group *bh_groups[] = {
> +	&bh_group,
> +	NULL
> +};
> +
>  static struct pci_driver uhci_pci_driver = {
>  	.name =		hcd_name,
>  	.id_table =	uhci_pci_ids,
> @@ -307,7 +339,8 @@ static struct pci_driver uhci_pci_driver = {
>
>  #ifdef CONFIG_PM
>  	.driver =	{
> -		.pm =	&usb_hcd_pci_pm_ops
> +		.pm =	&usb_hcd_pci_pm_ops,
> +		.dev_groups = bh_groups,
>  	},
>  #endif
>  };
Greg Kroah-Hartman Feb. 19, 2021, 8:20 a.m. UTC | #13
On Fri, Feb 19, 2021 at 09:52:12AM +0200, Leon Romanovsky wrote:
> On Thu, Feb 18, 2021 at 04:39:50PM -0600, Bjorn Helgaas wrote:
> > Poking around some more, I found .dev_groups, which might be
> > applicable?  The test patch below applies to v5.11 and makes the "bh"
> > file visible in devices bound to the uhci_hcd driver if the function
> > number is odd.
> 
> This solution can be applicable for generic drivers where we can afford
> to have custom sysfs files for this driver. In our case, we are talking
> about hardware device driver. Both RDMA and netdev are against allowing
> for such drivers to create their own sysfs. It will be real nightmare to
> have different names/layout/output for the same functionality.
> 
> This .dev_groups moves responsibility over sysfs to the drivers and it
> is no-go for us.

But it _is_ the driver's responsibility for sysfs files, right?

If not, what exactly are you trying to do here, as I am very confused.

> Another problem with this approach is addition of VFs, not only every
> driver will start to manage its own sysfs, but it will need to iterate
> over PCI bus or internal lists to find VFs, because we want to create
> .set_msix_vec on VFs after PF is bound.

What?  I don't understand at all.

> So instead of one, controlled place, we will find ourselves with many
> genius implementations of the same thing in the drivers.

Same _what_ thing?

> Bjorn, we really do standard enable/disable flow with out overlay thing.

Ok, can you step back and try to explain what problem you are trying to
solve first, before getting bogged down in odd details?  I find it
highly unlikely that this is something "unique", but I could be wrong as
I do not understand what you are wanting to do here at all.

thanks,

greg k-h
Leon Romanovsky Feb. 19, 2021, 4:58 p.m. UTC | #14
On Fri, Feb 19, 2021 at 09:20:18AM +0100, Greg Kroah-Hartman wrote:
> On Fri, Feb 19, 2021 at 09:52:12AM +0200, Leon Romanovsky wrote:
> > On Thu, Feb 18, 2021 at 04:39:50PM -0600, Bjorn Helgaas wrote:
> > > Poking around some more, I found .dev_groups, which might be
> > > applicable?  The test patch below applies to v5.11 and makes the "bh"
> > > file visible in devices bound to the uhci_hcd driver if the function
> > > number is odd.
> >
> > This solution can be applicable for generic drivers where we can afford
> > to have custom sysfs files for this driver. In our case, we are talking
> > about hardware device driver. Both RDMA and netdev are against allowing
> > for such drivers to create their own sysfs. It will be real nightmare to
> > have different names/layout/output for the same functionality.
> >
> > This .dev_groups moves responsibility over sysfs to the drivers and it
> > is no-go for us.
>
> But it _is_ the driver's responsibility for sysfs files, right?

It depends on how you define "responsibility". Direct creation/deletion of
sysfs files is prohibited in the netdev and RDMA subsystems. We want to
provide our users and the stack a uniform way of interacting with the system.

It is super painful to manage a large fleet of NICs and/or HCAs if every
device driver provides something different for the same feature.

>
> If not, what exactly are you trying to do here, as I am very confused.

https://lore.kernel.org/linux-rdma/20210216203726.GH4247@nvidia.com/T/#m899d883c8a10d95959ac0cd2833762f93729b8ef
Please see more details below.

>
> > Another problem with this approach is addition of VFs, not only every
> > driver will start to manage its own sysfs, but it will need to iterate
> > over PCI bus or internal lists to find VFs, because we want to create
> > .set_msix_vec on VFs after PF is bound.
>
> What?  I don't understand at all.
>
> > So instead of one, controlled place, we will find ourselves with many
> > genius implementations of the same thing in the drivers.
>
> Same _what_ thing?

This thread is part of a conversation with Bjorn, where he is looking for
a way to avoid creating the sysfs files in the PCI core.
https://lore.kernel.org/linux-rdma/20210216203726.GH4247@nvidia.com/T/#madc000cf04b5246b450f7183a1d80abdf408a949

https://lore.kernel.org/linux-rdma/20210216203726.GH4247@nvidia.com/T/#Z2e.:..:20210209133445.700225-2-leon::40kernel.org:0drivers:pci:iov.c

>
> > Bjorn, we really do standard enable/disable flow with out overlay thing.
>
> Ok, can you step back and try to explain what problem you are trying to
> solve first, before getting bogged down in odd details?  I find it
> highly unlikely that this is something "unique", but I could be wrong as
> I do not understand what you are wanting to do here at all.

I don't know if you are familiar with SR-IOV concepts; if so, just skip
the following paragraph.

SR-IOV capable devices have two types of hardware functions, both visible
as PCI devices: physical functions (PF) and virtual functions (VF). Both
types have a PCI BDF and a driver that probes them during initialization.
The PF has extra properties and is the one that creates (spawns) new VFs
(everything according to the PCI-SIG).

This series adds new sysfs files to VFs that are not bound yet (no driver
attached) while the PF driver is loaded. The change to the VFs needs to be
done before their driver is loaded, because the MSI-X table vector size (the
property we are changing) is used very early in the initialization sequence.
https://lore.kernel.org/linux-rdma/20210216203726.GH4247@nvidia.com/T/#m899d883c8a10d95959ac0cd2833762f93729b8ef

We have two different flows for supported devices:
1. PF starts and initiates VFs.
2. PF starts and connects to already existing VFs.

So there is nothing "unique" here, as long as this logic is handled by the PCI/core.
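For reference, the value semantics from the series' commit message (> 0 programs the VF's MSI-X Table Size, == 0 resets to the device default, < 0 is invalid) reduce to a small validation step before the PF callback is invoked. Here is a minimal userspace sketch of that check; the function name is illustrative, and the comparison against the PF's sriov_vf_total_msix budget is an assumption about the policy, not a copy of the actual kernel code:

```c
#include <assert.h>
#include <errno.h>

/*
 * Model of validating a write to sriov_vf_msix_count.
 * "vf_total" stands for what the PF's sriov_vf_total_msix reports.
 * Returns 0 when the value is acceptable, a negative errno otherwise.
 */
static int vf_msix_count_check(int val, int vf_total)
{
	if (val < 0)
		return -EINVAL;   /* negative counts are not valid */
	if (val > vf_total)
		return -ERANGE;   /* exceeds the PF's distributable vectors */
	return 0;                 /* 0 means "reset to the device default" */
}
```

A management tool writing to the sysfs file would see these as -EINVAL/-ERANGE write errors.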

Thanks

>
> thanks,
>
> greg k-h
Bjorn Helgaas Feb. 20, 2021, 7:06 p.m. UTC | #15
On Fri, Feb 19, 2021 at 09:20:18AM +0100, Greg Kroah-Hartman wrote:

> Ok, can you step back and try to explain what problem you are trying to
> solve first, before getting bogged down in odd details?  I find it
> highly unlikely that this is something "unique", but I could be wrong as
> I do not understand what you are wanting to do here at all.

We want to add two new sysfs files:

  sriov_vf_total_msix, for PF devices
  sriov_vf_msix_count, for VF devices associated with the PF

AFAICT it is *acceptable* if they are both present always.  But it
would be *ideal* if they were only present when a driver that
implements the ->sriov_get_vf_total_msix() callback is bound to the
PF.
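Bjorn's ideal case maps to a simple predicate: the files should be visible if and only if the bound PF driver implements ->sriov_get_vf_total_msix(). A rough userspace model of what a static attribute group's .is_visible() hook would compute follows; the structs are simplified stand-ins for struct pci_dev/struct pci_driver, not the real kernel types:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned short umode_t;

/* Simplified stand-ins for the kernel structures in question. */
struct model_driver {
	int (*sriov_get_vf_total_msix)(void *pdev); /* optional PF callback */
};

struct model_pci_dev {
	int is_physfn;                /* PF (1) vs. VF (0) */
	struct model_driver *driver;  /* NULL when nothing is bound */
};

/* Example callback so a "capable" driver can be modeled. */
static int model_get_vf_total_msix(void *pdev)
{
	(void)pdev;
	return 32;
}

/*
 * Visibility of sriov_vf_total_msix: world-readable (0444) only when
 * the device is a PF whose bound driver implements the callback, and
 * hidden (0) otherwise.  This is the check an .is_visible() hook would
 * perform on each sysfs_update_group() call at bind/unbind time.
 */
static umode_t sriov_vf_total_msix_visible(const struct model_pci_dev *pdev)
{
	if (!pdev->is_physfn)
		return 0;
	if (!pdev->driver || !pdev->driver->sriov_get_vf_total_msix)
		return 0;
	return 0444;
}
```

With static attributes plus this predicate, no code outside the PCI core would ever call sysfs_create_files()/sysfs_remove_files() directly.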
Leon Romanovsky Feb. 21, 2021, 6:59 a.m. UTC | #16
On Sat, Feb 20, 2021 at 01:06:00PM -0600, Bjorn Helgaas wrote:
> We want to add two new sysfs files:
>
>   sriov_vf_total_msix, for PF devices
>   sriov_vf_msix_count, for VF devices associated with the PF
>
> AFAICT it is *acceptable* if they are both present always.  But it
> would be *ideal* if they were only present when a driver that
> implements the ->sriov_get_vf_total_msix() callback is bound to the
> PF.

BTW, we already have all possible combinations: static, static with
folder, with and without "sriov_" prefix, dynamic with and without
folders on VFs.

I need to know which version will get an Acked-by, and that is the version
I will resubmit.

Thanks
Greg Kroah-Hartman Feb. 21, 2021, 1 p.m. UTC | #17
On Sat, Feb 20, 2021 at 01:06:00PM -0600, Bjorn Helgaas wrote:
> We want to add two new sysfs files:
> 
>   sriov_vf_total_msix, for PF devices
>   sriov_vf_msix_count, for VF devices associated with the PF
> 
> AFAICT it is *acceptable* if they are both present always.  But it
> would be *ideal* if they were only present when a driver that
> implements the ->sriov_get_vf_total_msix() callback is bound to the
> PF.

Ok, so in the pci bus probe function, if the driver that successfully
binds to the device is of this type, then create the sysfs files.

The driver core will properly emit a KOBJ_BIND message when the driver
is bound to the device, so userspace knows it is now safe to rescan the
device to see any new attributes.

Here's some horrible pseudo-patch for where this probably should be
done:


diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index ec44a79e951a..5a854a5e3977 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -307,8 +307,14 @@ static long local_pci_probe(void *_ddi)
 	pm_runtime_get_sync(dev);
 	pci_dev->driver = pci_drv;
 	rc = pci_drv->probe(pci_dev, ddi->id);
-	if (!rc)
+	if (!rc) {
+		/* If a PF or VF driver was bound, let's add some more sysfs files */
+		if (pci_drv->is_pf)
+			device_add_groups(&pci_dev->dev, pf_groups);
+		if (pci_drv->is_vf)
+			device_add_groups(&pci_dev->dev, vf_groups);
 		return rc;
+	}
 	if (rc < 0) {
 		pci_dev->driver = NULL;
 		pm_runtime_put_sync(dev);




Add some proper error handling if device_add_groups() fails, and then do
the same thing to remove the sysfs files when the device is unbound from
the driver, and you should be good to go.
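The error handling and the unbind path that the pseudo-patch leaves as an exercise can be sketched the same way. This is a userspace model of the intended bind/unbind symmetry; all model_* and demo_* names are illustrative, not driver-core API:

```c
#include <assert.h>
#include <errno.h>

/* Userspace stand-in for a device that may carry the extra sysfs groups. */
struct model_dev {
	int is_pf;
	int groups_added;  /* 1 while the extra sysfs files exist */
};

static int model_add_groups(struct model_dev *d)
{
	if (d->groups_added)
		return -EEXIST;
	d->groups_added = 1;
	return 0;
}

static void model_remove_groups(struct model_dev *d)
{
	d->groups_added = 0;
}

/*
 * Probe: bind the driver first; only on success add the groups.
 * If adding the groups fails, unwind the bind instead of leaving
 * the device half-initialized.
 */
static int model_probe(struct model_dev *d, int (*drv_probe)(void),
		       void (*drv_remove)(void))
{
	int rc = drv_probe();

	if (rc)
		return rc;
	if (d->is_pf) {
		rc = model_add_groups(d);
		if (rc) {
			drv_remove();
			return rc;
		}
	}
	return 0;
}

/* Remove: the mirror image; drop the groups, then unbind the driver. */
static void model_remove(struct model_dev *d, void (*drv_remove)(void))
{
	if (d->is_pf)
		model_remove_groups(d);
	drv_remove();
}

static int demo_probe_ok(void)   { return 0; }
static int demo_probe_fail(void) { return -ENODEV; }
static void demo_remove(void)    { }
```

The point of the symmetry is that the groups' lifetime exactly brackets the driver's bound lifetime, which is also what makes the KOBJ_BIND/KOBJ_UNBIND uevents a reliable rescan signal for userspace.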

Or is this what you all are talking about already and I'm just totally
confused?

thanks,

greg k-h
Leon Romanovsky Feb. 21, 2021, 1:55 p.m. UTC | #18
On Sun, Feb 21, 2021 at 02:00:41PM +0100, Greg Kroah-Hartman wrote:
> On Sat, Feb 20, 2021 at 01:06:00PM -0600, Bjorn Helgaas wrote:
> > We want to add two new sysfs files:
> >
> >   sriov_vf_total_msix, for PF devices
> >   sriov_vf_msix_count, for VF devices associated with the PF
> >
> > AFAICT it is *acceptable* if they are both present always.  But it
> > would be *ideal* if they were only present when a driver that
> > implements the ->sriov_get_vf_total_msix() callback is bound to the
> > PF.
>
> Ok, so in the pci bus probe function, if the driver that successfully
> binds to the device is of this type, then create the sysfs files.
>
> The driver core will properly emit a KOBJ_BIND message when the driver
> is bound to the device, so userspace knows it is now safe to rescan the
> device to see any new attributes.
>
> Here's some horrible pseudo-patch for where this probably should be
> done:
>
>
> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
> index ec44a79e951a..5a854a5e3977 100644
> --- a/drivers/pci/pci-driver.c
> +++ b/drivers/pci/pci-driver.c
> @@ -307,8 +307,14 @@ static long local_pci_probe(void *_ddi)
>  	pm_runtime_get_sync(dev);
>  	pci_dev->driver = pci_drv;
>  	rc = pci_drv->probe(pci_dev, ddi->id);
> -	if (!rc)
> +	if (!rc) {
> +		/* If PF or FV driver was bound, let's add some more sysfs files */
> +		if (pci_drv->is_pf)
> +			device_add_groups(pci_dev->dev, pf_groups);
> +		if (pci_drv->is_fv)
> +			device_add_groups(pci_dev->dev, fv_groups);
>  		return rc;
> +	}
>  	if (rc < 0) {
>  		pci_dev->driver = NULL;
>  		pm_runtime_put_sync(dev);
>
>
>
>
> Add some proper error handling if device_add_groups() fails, and then do
> the same thing to remove the sysfs files when the device is unbound from
> the driver, and you should be good to go.
>
> Or is this what you all are talking about already and I'm just totally
> confused?

There are two different things here. First, we need to add the sysfs files
for the VFs on the event of the PF driver bind, not when a VF binds.

In your pseudo code, it will look:
  	rc = pci_drv->probe(pci_dev, ddi->id);
 -	if (!rc)
 +	if (!rc) {
 +		/* If a PF driver was bound, add sysfs files to the PF and all its VFs */
 +		if (pci_drv->is_pf) {
 +			int i;
 +
 +			device_add_groups(&pci_dev->dev, pf_groups);
 +			for (i = 0; i < pci_dev->totalVF; i++) {
 +				struct pci_dev *vf_dev = find_vf_device(pci_dev, i);
 +
 +				device_add_groups(&vf_dev->dev, vf_groups);
 +			}
 +		}
  		return rc;

Second, the code proposed by me does exactly that, but with a driver
callback that the PF calls during init/uninit.

Thanks

>
> thanks,
>
> greg k-h
Greg Kroah-Hartman Feb. 21, 2021, 3:01 p.m. UTC | #19
On Sun, Feb 21, 2021 at 03:55:18PM +0200, Leon Romanovsky wrote:
> There are two different things here. First we need to add sysfs files
> for VF as the event of PF driver bind, not for the VF binds.
> 
> In your pseudo code, it will look:
>   	rc = pci_drv->probe(pci_dev, ddi->id);
>  -	if (!rc)
>  +	if (!rc) {
>  +		/* If PF or FV driver was bound, let's add some more sysfs files */
>  +		if (pci_drv->is_pf) {
>  +                      int i = 0;
>  +			device_add_groups(pci_dev->dev, pf_groups);
>  +                      for (i; i < pci_dev->totalVF; i++) {
>  +                              struct pci_device vf_dev = find_vf_device(pci_dev, i);
>  +
>  +				device_add_groups(vf_dev->dev, fv_groups);

Hahaha, no.

You are randomly adding new sysfs files to a _DIFFERENT_ device than the
one that is currently undergoing the probe() call?  That's crazy.  And
will break userspace.

Why would you want that?  The device should ONLY change when the device
that controls it has a driver bound/unbound to it, that should NEVER
cause random other devices on the bus to change state or sysfs files.

>  +                      }
>  +              }
>   		return rc;
> 
> Second, the code proposed by me does that but with driver callback that
> PF calls during init/uninit.

That works too, but really, why not just have the pci core do it for
you?  That way you do not have to go and modify each and every PCI
driver to get this type of support.  PCI core things belong in the PCI
core, not in each individual driver.

thanks,

greg k-h
Leon Romanovsky Feb. 21, 2021, 3:30 p.m. UTC | #20
On Sun, Feb 21, 2021 at 04:01:32PM +0100, Greg Kroah-Hartman wrote:
> On Sun, Feb 21, 2021 at 03:55:18PM +0200, Leon Romanovsky wrote:
> > On Sun, Feb 21, 2021 at 02:00:41PM +0100, Greg Kroah-Hartman wrote:
> > > On Sat, Feb 20, 2021 at 01:06:00PM -0600, Bjorn Helgaas wrote:
> > > > On Fri, Feb 19, 2021 at 09:20:18AM +0100, Greg Kroah-Hartman wrote:
> > > >
> > > > > Ok, can you step back and try to explain what problem you are trying to
> > > > > solve first, before getting bogged down in odd details?  I find it
> > > > > highly unlikely that this is something "unique", but I could be wrong as
> > > > > I do not understand what you are wanting to do here at all.
> > > >
> > > > We want to add two new sysfs files:
> > > >
> > > >   sriov_vf_total_msix, for PF devices
> > > >   sriov_vf_msix_count, for VF devices associated with the PF
> > > >
> > > > AFAICT it is *acceptable* if they are both present always.  But it
> > > > would be *ideal* if they were only present when a driver that
> > > > implements the ->sriov_get_vf_total_msix() callback is bound to the
> > > > PF.
> > >
> > > Ok, so in the pci bus probe function, if the driver that successfully
> > > binds to the device is of this type, then create the sysfs files.
> > >
> > > The driver core will properly emit a KOBJ_BIND message when the driver
> > > is bound to the device, so userspace knows it is now safe to rescan the
> > > device to see any new attributes.
> > >
> > > Here's some horrible pseudo-patch for where this probably should be
> > > done:
> > >
> > >
> > > diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
> > > index ec44a79e951a..5a854a5e3977 100644
> > > --- a/drivers/pci/pci-driver.c
> > > +++ b/drivers/pci/pci-driver.c
> > > @@ -307,8 +307,14 @@ static long local_pci_probe(void *_ddi)
> > >  	pm_runtime_get_sync(dev);
> > >  	pci_dev->driver = pci_drv;
> > >  	rc = pci_drv->probe(pci_dev, ddi->id);
> > > -	if (!rc)
> > > +	if (!rc) {
> > > +		/* If PF or FV driver was bound, let's add some more sysfs files */
> > > +		if (pci_drv->is_pf)
> > > +			device_add_groups(pci_dev->dev, pf_groups);
> > > +		if (pci_drv->is_fv)
> > > +			device_add_groups(pci_dev->dev, fv_groups);
> > >  		return rc;
> > > +	}
> > >  	if (rc < 0) {
> > >  		pci_dev->driver = NULL;
> > >  		pm_runtime_put_sync(dev);
> > >
> > >
> > >
> > >
> > > Add some proper error handling if device_add_groups() fails, and then do
> > > the same thing to remove the sysfs files when the device is unbound from
> > > the driver, and you should be good to go.
> > >
> > > Or is this what you all are talking about already and I'm just totally
> > > confused?
> >
> > There are two different things here. First we need to add sysfs files
> > for VF as the event of PF driver bind, not for the VF binds.
> >
> > In your pseudo code, it will look:
> >   	rc = pci_drv->probe(pci_dev, ddi->id);
> >  -	if (!rc)
> >  +	if (!rc) {
> >  +		/* If PF or FV driver was bound, let's add some more sysfs files */
> >  +		if (pci_drv->is_pf) {
> >  +                      int i = 0;
> >  +			device_add_groups(pci_dev->dev, pf_groups);
> >  +                      for (i; i < pci_dev->totalVF; i++) {
> >  +                              struct pci_device vf_dev = find_vf_device(pci_dev, i);
> >  +
> >  +				device_add_groups(vf_dev->dev, fv_groups);
>
> Hahaha, no.
>
> You are randomly adding new sysfs files to a _DIFFERENT_ device than the
> one that is currently undergoing the probe() call?  That's crazy.  And
> will break userspace.

It is more complex than a _DIFFERENT_ device: we are talking about SR-IOV
capable devices and their VFs, which are created by the associated PF.

And the VF MUST NOT be probed; we check for this and protect the flow.
To summarize: the PF must be bound to a driver, the VF must not be.

>
> Why would you want that?  The device should ONLY change when the device
> that controls it has a driver bound/unbound to it, that should NEVER
> cause random other devices on the bus to change state or sysfs files.

Greg, I don't know if you are familiar with the SR-IOV concepts that I
explained before, but we are not talking about random devices. The PF device
owns the VFs; this is visible on the bus through symlinks, and the PF driver
even iterates over those VFs during its probe.

The PF driver is the one who starts and stops those VF devices.

>
> >  +                      }
> >  +              }
> >   		return rc;
> >
> > Second, the code proposed by me does that but with driver callback that
> > PF calls during init/uninit.
>
> That works too, but really, why not just have the pci core do it for
> you?  That way you do not have to go and modify each and every PCI
> driver to get this type of support.  PCI core things belong in the PCI
> core, not in each individual driver.

There are not many drivers that support this specific configuration flow.
It requires an SR-IOV capable device with the ability to overwrite a
specific PCI default through the PF device for its VF devices. For now, we
are talking about one device on the market, and I can imagine an extra 2-3
vendors in the world will support this flow.

During review, I was asked to create an API that exposes those sysfs files
only for specific devices that explicitly acknowledge support.

Greg, please take a look on the cover letter and Documentation.
https://lore.kernel.org/linux-rdma/20210216203726.GH4247@nvidia.com/T/#m899d883c8a10d95959ac0cd2833762f93729b8ef

Thanks

>
> thanks,
>
> greg k-h
Bjorn Helgaas Feb. 23, 2021, 9:07 p.m. UTC | #21
On Sun, Feb 21, 2021 at 08:59:18AM +0200, Leon Romanovsky wrote:
> On Sat, Feb 20, 2021 at 01:06:00PM -0600, Bjorn Helgaas wrote:
> > On Fri, Feb 19, 2021 at 09:20:18AM +0100, Greg Kroah-Hartman wrote:
> >
> > > Ok, can you step back and try to explain what problem you are trying to
> > > solve first, before getting bogged down in odd details?  I find it
> > > highly unlikely that this is something "unique", but I could be wrong as
> > > I do not understand what you are wanting to do here at all.
> >
> > We want to add two new sysfs files:
> >
> >   sriov_vf_total_msix, for PF devices
> >   sriov_vf_msix_count, for VF devices associated with the PF
> >
> > AFAICT it is *acceptable* if they are both present always.  But it
> > would be *ideal* if they were only present when a driver that
> > implements the ->sriov_get_vf_total_msix() callback is bound to the
> > PF.
> 
> BTW, we already have all possible combinations: static, static with
> folder, with and without "sriov_" prefix, dynamic with and without
> folders on VFs.
> 
> I need to know on which version I'll get Acked-by and that version I
> will resubmit.

I propose that you make static attributes for both files, so
"sriov_vf_total_msix" is visible for *every* PF in the system and
"sriov_vf_msix_count" is visible for *every* VF in the system.

The PF "sriov_vf_total_msix" show function can return zero if there's
no PF driver or it doesn't support ->sriov_get_vf_total_msix().
(Incidentally, I think the documentation should mention that when it
*is* supported, the contents of this file are *constant*, i.e., it
does not decrease as vectors are assigned to VFs.)

The "sriov_vf_msix_count" set function can ignore writes if there's no
PF driver or it doesn't support ->sriov_get_vf_total_msix(), or if a
VF driver is bound.

Any userspace software must be able to deal with those scenarios
anyway, so I don't think the mere presence or absence of the files is
a meaningful signal to that software.

If we figure out a way to make the files visible only when the
appropriate driver is bound, that might be nice and could always be
done later.  But I don't think it's essential.

Bjorn
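Bjorn's point above, that userspace must cope with the attribute being absent or reading 0 either way, can be sketched from the consumer side. This is a hypothetical userspace helper (the function name and any path are invented for illustration, not part of the patch):

```c
#include <stdio.h>

/*
 * Read an "sriov_vf_total_msix"-style sysfs attribute.  Per the
 * discussion above, a missing file and a value of 0 mean the same
 * thing to management software: the feature is not supported.
 * Hypothetical helper, for illustration only.
 */
static long read_vf_total_msix(const char *path)
{
	FILE *f = fopen(path, "r");
	long val;

	if (!f)
		return 0;	/* attribute absent: not supported */
	if (fscanf(f, "%ld", &val) != 1 || val < 0)
		val = 0;	/* unreadable value: treat as unsupported */
	fclose(f);
	return val;
}
```

A caller branches on 0 the same way whether the file was missing or the PF driver returned 0, which is why the mere presence of the file carries no extra signal.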
Greg Kroah-Hartman Feb. 24, 2021, 8:09 a.m. UTC | #22
On Tue, Feb 23, 2021 at 03:07:43PM -0600, Bjorn Helgaas wrote:
> On Sun, Feb 21, 2021 at 08:59:18AM +0200, Leon Romanovsky wrote:
> > On Sat, Feb 20, 2021 at 01:06:00PM -0600, Bjorn Helgaas wrote:
> > > On Fri, Feb 19, 2021 at 09:20:18AM +0100, Greg Kroah-Hartman wrote:
> > >
> > > > Ok, can you step back and try to explain what problem you are trying to
> > > > solve first, before getting bogged down in odd details?  I find it
> > > > highly unlikely that this is something "unique", but I could be wrong as
> > > > I do not understand what you are wanting to do here at all.
> > >
> > > We want to add two new sysfs files:
> > >
> > >   sriov_vf_total_msix, for PF devices
> > >   sriov_vf_msix_count, for VF devices associated with the PF
> > >
> > > AFAICT it is *acceptable* if they are both present always.  But it
> > > would be *ideal* if they were only present when a driver that
> > > implements the ->sriov_get_vf_total_msix() callback is bound to the
> > > PF.
> > 
> > BTW, we already have all possible combinations: static, static with
> > folder, with and without "sriov_" prefix, dynamic with and without
> > folders on VFs.
> > 
> > I need to know on which version I'll get Acked-by and that version I
> > will resubmit.
> 
> I propose that you make static attributes for both files, so
> "sriov_vf_total_msix" is visible for *every* PF in the system and
> "sriov_vf_msix_count" is visible for *every* VF in the system.
> 
> The PF "sriov_vf_total_msix" show function can return zero if there's
> no PF driver or it doesn't support ->sriov_get_vf_total_msix().
> (Incidentally, I think the documentation should mention that when it
> *is* supported, the contents of this file are *constant*, i.e., it
> does not decrease as vectors are assigned to VFs.)
> 
> The "sriov_vf_msix_count" set function can ignore writes if there's no
> PF driver or it doesn't support ->sriov_get_vf_total_msix(), or if a
> VF driver is bound.
> 
> Any userspace software must be able to deal with those scenarios
> anyway, so I don't think the mere presence or absence of the files is
> a meaningful signal to that software.

Hopefully, good luck with that!

> If we figure out a way to make the files visible only when the
> appropriate driver is bound, that might be nice and could always be
> done later.  But I don't think it's essential.

That seems reasonable, feel free to cc: me on the next patch series and
I'll try to review it, which should make more sense to me than this
email thread :)

thanks,

greg k-h
Leon Romanovsky Feb. 24, 2021, 9:53 a.m. UTC | #23
On Tue, Feb 23, 2021 at 03:07:43PM -0600, Bjorn Helgaas wrote:
> On Sun, Feb 21, 2021 at 08:59:18AM +0200, Leon Romanovsky wrote:
> > On Sat, Feb 20, 2021 at 01:06:00PM -0600, Bjorn Helgaas wrote:
> > > On Fri, Feb 19, 2021 at 09:20:18AM +0100, Greg Kroah-Hartman wrote:
> > >
> > > > Ok, can you step back and try to explain what problem you are trying to
> > > > solve first, before getting bogged down in odd details?  I find it
> > > > highly unlikely that this is something "unique", but I could be wrong as
> > > > I do not understand what you are wanting to do here at all.
> > >
> > > We want to add two new sysfs files:
> > >
> > >   sriov_vf_total_msix, for PF devices
> > >   sriov_vf_msix_count, for VF devices associated with the PF
> > >
> > > AFAICT it is *acceptable* if they are both present always.  But it
> > > would be *ideal* if they were only present when a driver that
> > > implements the ->sriov_get_vf_total_msix() callback is bound to the
> > > PF.
> >
> > BTW, we already have all possible combinations: static, static with
> > folder, with and without "sriov_" prefix, dynamic with and without
> > folders on VFs.
> >
> > I need to know on which version I'll get Acked-by and that version I
> > will resubmit.
>
> I propose that you make static attributes for both files, so
> "sriov_vf_total_msix" is visible for *every* PF in the system and
> "sriov_vf_msix_count" is visible for *every* VF in the system.

No problem, this is close to v0/v1.

>
> The PF "sriov_vf_total_msix" show function can return zero if there's
> no PF driver or it doesn't support ->sriov_get_vf_total_msix().
> (Incidentally, I think the documentation should mention that when it
> *is* supported, the contents of this file are *constant*, i.e., it
> does not decrease as vectors are assigned to VFs.)
>
> The "sriov_vf_msix_count" set function can ignore writes if there's no
> PF driver or it doesn't support ->sriov_get_vf_total_msix(), or if a
> VF driver is bound.

Just to be clear, why don't we return EINVAL/EOPNOTSUPP instead of
silently ignore?

Thanks
Bjorn Helgaas Feb. 24, 2021, 3:07 p.m. UTC | #24
On Wed, Feb 24, 2021 at 11:53:30AM +0200, Leon Romanovsky wrote:
> On Tue, Feb 23, 2021 at 03:07:43PM -0600, Bjorn Helgaas wrote:
> > On Sun, Feb 21, 2021 at 08:59:18AM +0200, Leon Romanovsky wrote:
> > > On Sat, Feb 20, 2021 at 01:06:00PM -0600, Bjorn Helgaas wrote:
> > > > On Fri, Feb 19, 2021 at 09:20:18AM +0100, Greg Kroah-Hartman wrote:
> > > >
> > > > > Ok, can you step back and try to explain what problem you are trying to
> > > > > solve first, before getting bogged down in odd details?  I find it
> > > > > highly unlikely that this is something "unique", but I could be wrong as
> > > > > I do not understand what you are wanting to do here at all.
> > > >
> > > > We want to add two new sysfs files:
> > > >
> > > >   sriov_vf_total_msix, for PF devices
> > > >   sriov_vf_msix_count, for VF devices associated with the PF
> > > >
> > > > AFAICT it is *acceptable* if they are both present always.  But it
> > > > would be *ideal* if they were only present when a driver that
> > > > implements the ->sriov_get_vf_total_msix() callback is bound to the
> > > > PF.
> > >
> > > BTW, we already have all possible combinations: static, static with
> > > folder, with and without "sriov_" prefix, dynamic with and without
> > > folders on VFs.
> > >
> > > I need to know on which version I'll get Acked-by and that version I
> > > will resubmit.
> >
> > I propose that you make static attributes for both files, so
> > "sriov_vf_total_msix" is visible for *every* PF in the system and
> > "sriov_vf_msix_count" is visible for *every* VF in the system.
> 
> No problem, this is close to v0/v1.
> 
> > The PF "sriov_vf_total_msix" show function can return zero if there's
> > no PF driver or it doesn't support ->sriov_get_vf_total_msix().
> > (Incidentally, I think the documentation should mention that when it
> > *is* supported, the contents of this file are *constant*, i.e., it
> > does not decrease as vectors are assigned to VFs.)
> >
> > The "sriov_vf_msix_count" set function can ignore writes if there's no
> > PF driver or it doesn't support ->sriov_get_vf_total_msix(), or if a
> > VF driver is bound.
> 
> Just to be clear, why don't we return EINVAL/EOPNOTSUPP instead of
> silently ignore?

Returning some error is fine.  I just meant that the reads/writes
would have no effect on the PCI core or the device driver.
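The error-return behavior agreed on here shapes the userspace side of the write path. A hedged sketch of a writer for "sriov_vf_msix_count" (the helper name is invented; the errno values are the ones the store function in this patch can produce):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Write a vector count to an "sriov_vf_msix_count"-style attribute and
 * return 0 or a negative errno.  Mirrors the kernel checks discussed
 * above: -EINVAL for a negative count; -EOPNOTSUPP (no capable PF
 * driver bound) and -EBUSY (VF driver already bound) surface through
 * errno from write().  Hypothetical helper, for illustration only.
 */
static int set_vf_msix_count(const char *path, int count)
{
	char buf[16];
	int fd, len, ret = 0;

	if (count < 0)
		return -EINVAL;		/* the kernel rejects this too */

	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -errno;		/* e.g. -ENOENT: attribute absent */

	len = snprintf(buf, sizeof(buf), "%d", count);
	if (write(fd, buf, len) != len)
		ret = -errno;		/* -EOPNOTSUPP, -EBUSY, ... */
	close(fd);
	return ret;
}
```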
Don Dutile Feb. 24, 2021, 9:37 p.m. UTC | #25
On 2/24/21 3:09 AM, Greg Kroah-Hartman wrote:
> On Tue, Feb 23, 2021 at 03:07:43PM -0600, Bjorn Helgaas wrote:
>> On Sun, Feb 21, 2021 at 08:59:18AM +0200, Leon Romanovsky wrote:
>>> On Sat, Feb 20, 2021 at 01:06:00PM -0600, Bjorn Helgaas wrote:
>>>> On Fri, Feb 19, 2021 at 09:20:18AM +0100, Greg Kroah-Hartman wrote:
>>>>
>>>>> Ok, can you step back and try to explain what problem you are trying to
>>>>> solve first, before getting bogged down in odd details?  I find it
>>>>> highly unlikely that this is something "unique", but I could be wrong as
>>>>> I do not understand what you are wanting to do here at all.
>>>> We want to add two new sysfs files:
>>>>
>>>>    sriov_vf_total_msix, for PF devices
>>>>    sriov_vf_msix_count, for VF devices associated with the PF
>>>>
>>>> AFAICT it is *acceptable* if they are both present always.  But it
>>>> would be *ideal* if they were only present when a driver that
>>>> implements the ->sriov_get_vf_total_msix() callback is bound to the
>>>> PF.
>>> BTW, we already have all possible combinations: static, static with
>>> folder, with and without "sriov_" prefix, dynamic with and without
>>> folders on VFs.
>>>
>>> I need to know on which version I'll get Acked-by and that version I
>>> will resubmit.
>> I propose that you make static attributes for both files, so
>> "sriov_vf_total_msix" is visible for *every* PF in the system and
>> "sriov_vf_msix_count" is visible for *every* VF in the system.
>>
>> The PF "sriov_vf_total_msix" show function can return zero if there's
>> no PF driver or it doesn't support ->sriov_get_vf_total_msix().
>> (Incidentally, I think the documentation should mention that when it
>> *is* supported, the contents of this file are *constant*, i.e., it
>> does not decrease as vectors are assigned to VFs.)
>>
>> The "sriov_vf_msix_count" set function can ignore writes if there's no
>> PF driver or it doesn't support ->sriov_get_vf_total_msix(), or if a
>> VF driver is bound.
>>
>> Any userspace software must be able to deal with those scenarios
>> anyway, so I don't think the mere presence or absence of the files is
>> a meaningful signal to that software.
> Hopefully, good luck with that!
Management sw is used to dealing with optional sysfs files.
libvirt does that now with the VF files for a PF -- not all PFs have VFs.
The VF files are only created if a VF ext-cfg-hdr exists.

So, as Bjorn said, mgmt sw related to optionally tuning PCIe devices is designed to check for file existence.

>> If we figure out a way to make the files visible only when the
>> appropriate driver is bound, that might be nice and could always be
>> done later.  But I don't think it's essential.
> That seems reasonable, feel free to cc: me on the next patch series and
> I'll try to review it, which should make more sense to me than this
> email thread :)
>
> thanks,
>
> greg k-h
>
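Before the patch itself, a hedged sketch of the callback-based flow Leon describes, where the PF driver calls the overlay helpers during init/uninit. Everything here except pci_enable_sriov()/pci_disable_sriov() and the two overlay helpers added by the patch below is invented for illustration; this is kernel code and not buildable standalone:

```c
/* Hypothetical PF driver glue; only the SR-IOV and overlay helpers are real. */
static int example_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
	int ret;

	if (num_vfs == 0) {
		/* Remove the per-VF sysfs files before the VFs go away. */
		pci_disable_vf_overlay(pdev);
		pci_disable_sriov(pdev);
		return 0;
	}

	ret = pci_enable_sriov(pdev, num_vfs);
	if (ret)
		return ret;

	/* The VF devices now exist, so the overlay can attach files to them. */
	ret = pci_enable_vf_overlay(pdev);
	if (ret) {
		pci_disable_sriov(pdev);
		return ret;
	}
	return num_vfs;
}
```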
Patch

diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
index 25c9c39770c6..7dadc3610959 100644
--- a/Documentation/ABI/testing/sysfs-bus-pci
+++ b/Documentation/ABI/testing/sysfs-bus-pci
@@ -375,3 +375,31 @@  Description:
 		The value comes from the PCI kernel device state and can be one
 		of: "unknown", "error", "D0", D1", "D2", "D3hot", "D3cold".
 		The file is read only.
+
+What:		/sys/bus/pci/devices/.../sriov_vf_total_msix
+Date:		January 2021
+Contact:	Leon Romanovsky <leonro@nvidia.com>
+Description:
+		This file is associated with the SR-IOV PFs.
+		It contains the total number of MSI-X vectors available for
+		assignment to all VFs associated with this PF. It may be zero
+		if the device doesn't support this functionality.
+
+What:		/sys/bus/pci/devices/.../sriov_vf_msix_count
+Date:		January 2021
+Contact:	Leon Romanovsky <leonro@nvidia.com>
+Description:
+		This file is associated with the SR-IOV VFs.
+		It allows configuration of the number of MSI-X vectors for
+		the VF. This is needed to optimize the performance of newly
+		bound devices by allocating the number of vectors based on
+		the administrator's knowledge of the targeted VM.
+
+		The values accepted are:
+		 * > 0 - this will be the number reported by the VF's MSI-X
+			 capability
+		 * < 0 - not valid
+		 * = 0 - will reset to the device default value
+
+		The file is writable if the PF is bound to a driver that
+		implements ->sriov_set_msix_vec_count().
diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
index 4afd4ee4f7f0..c0554aa6b90a 100644
--- a/drivers/pci/iov.c
+++ b/drivers/pci/iov.c
@@ -31,6 +31,7 @@  int pci_iov_virtfn_devfn(struct pci_dev *dev, int vf_id)
 	return (dev->devfn + dev->sriov->offset +
 		dev->sriov->stride * vf_id) & 0xff;
 }
+EXPORT_SYMBOL_GPL(pci_iov_virtfn_devfn);

 /*
  * Per SR-IOV spec sec 3.3.10 and 3.3.11, First VF Offset and VF Stride may
@@ -157,6 +158,158 @@  int pci_iov_sysfs_link(struct pci_dev *dev,
 	return rc;
 }

+#ifdef CONFIG_PCI_MSI
+static ssize_t sriov_vf_msix_count_store(struct device *dev,
+					 struct device_attribute *attr,
+					 const char *buf, size_t count)
+{
+	struct pci_dev *vf_dev = to_pci_dev(dev);
+	struct pci_dev *pdev = pci_physfn(vf_dev);
+	int val, ret;
+
+	ret = kstrtoint(buf, 0, &val);
+	if (ret)
+		return ret;
+
+	if (val < 0)
+		return -EINVAL;
+
+	device_lock(&pdev->dev);
+	if (!pdev->driver || !pdev->driver->sriov_set_msix_vec_count) {
+		ret = -EOPNOTSUPP;
+		goto err_pdev;
+	}
+
+	device_lock(&vf_dev->dev);
+	if (vf_dev->driver) {
+		/*
+		 * Driver already probed this VF and configured itself
+		 * based on previously configured (or default) MSI-X vector
+		 * count. It is too late to change this field for this
+		 * specific VF.
+		 */
+		ret = -EBUSY;
+		goto err_dev;
+	}
+
+	ret = pdev->driver->sriov_set_msix_vec_count(vf_dev, val);
+
+err_dev:
+	device_unlock(&vf_dev->dev);
+err_pdev:
+	device_unlock(&pdev->dev);
+	return ret ? : count;
+}
+static DEVICE_ATTR_WO(sriov_vf_msix_count);
+
+static ssize_t sriov_vf_total_msix_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	u32 vf_total_msix;
+
+	device_lock(dev);
+	if (!pdev->driver || !pdev->driver->sriov_get_vf_total_msix) {
+		device_unlock(dev);
+		return -EOPNOTSUPP;
+	}
+	vf_total_msix = pdev->driver->sriov_get_vf_total_msix(pdev);
+	device_unlock(dev);
+
+	return sysfs_emit(buf, "%u\n", vf_total_msix);
+}
+static DEVICE_ATTR_RO(sriov_vf_total_msix);
+#endif
+
+static const struct attribute *sriov_pf_dev_attrs[] = {
+#ifdef CONFIG_PCI_MSI
+	&dev_attr_sriov_vf_total_msix.attr,
+#endif
+	NULL,
+};
+
+static const struct attribute *sriov_vf_dev_attrs[] = {
+#ifdef CONFIG_PCI_MSI
+	&dev_attr_sriov_vf_msix_count.attr,
+#endif
+	NULL,
+};
+
+/*
+ * The PF can change specific properties of its associated VFs. Such
+ * functionality is usually known only after the PF has been probed,
+ * when the PCI sysfs files have already been created.
+ *
+ * The function below is driven by such a PF. It adds sysfs files to the
+ * already existing PF/VF sysfs device hierarchies.
+ */
+int pci_enable_vf_overlay(struct pci_dev *dev)
+{
+	struct pci_dev *virtfn;
+	int id, ret;
+
+	if (!dev->is_physfn || !dev->sriov->num_VFs)
+		return 0;
+
+	ret = sysfs_create_files(&dev->dev.kobj, sriov_pf_dev_attrs);
+	if (ret)
+		return ret;
+
+	for (id = 0; id < dev->sriov->num_VFs; id++) {
+		virtfn = pci_get_domain_bus_and_slot(
+			pci_domain_nr(dev->bus), pci_iov_virtfn_bus(dev, id),
+			pci_iov_virtfn_devfn(dev, id));
+
+		if (!virtfn)
+			continue;
+
+		ret = sysfs_create_files(&virtfn->dev.kobj,
+					 sriov_vf_dev_attrs);
+		if (ret)
+			goto out;
+	}
+	return 0;
+
+out:
+	while (id--) {
+		virtfn = pci_get_domain_bus_and_slot(
+			pci_domain_nr(dev->bus), pci_iov_virtfn_bus(dev, id),
+			pci_iov_virtfn_devfn(dev, id));
+
+		if (!virtfn)
+			continue;
+
+		sysfs_remove_files(&virtfn->dev.kobj, sriov_vf_dev_attrs);
+	}
+	sysfs_remove_files(&dev->dev.kobj, sriov_pf_dev_attrs);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pci_enable_vf_overlay);
+
+void pci_disable_vf_overlay(struct pci_dev *dev)
+{
+	struct pci_dev *virtfn;
+	int id;
+
+	if (!dev->is_physfn || !dev->sriov->num_VFs)
+		return;
+
+	id = dev->sriov->num_VFs;
+	while (id--) {
+		virtfn = pci_get_domain_bus_and_slot(
+			pci_domain_nr(dev->bus), pci_iov_virtfn_bus(dev, id),
+			pci_iov_virtfn_devfn(dev, id));
+
+		if (!virtfn)
+			continue;
+
+		sysfs_remove_files(&virtfn->dev.kobj, sriov_vf_dev_attrs);
+	}
+	sysfs_remove_files(&dev->dev.kobj, sriov_pf_dev_attrs);
+}
+EXPORT_SYMBOL_GPL(pci_disable_vf_overlay);
+
 int pci_iov_add_virtfn(struct pci_dev *dev, int id)
 {
 	int i;
diff --git a/include/linux/pci.h b/include/linux/pci.h
index b32126d26997..732611937574 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -856,6 +856,11 @@  struct module;
  *		e.g. drivers/net/e100.c.
  * @sriov_configure: Optional driver callback to allow configuration of
  *		number of VFs to enable via sysfs "sriov_numvfs" file.
+ * @sriov_set_msix_vec_count: Driver callback to change the number of MSI-X
+ *              vectors configured via the sysfs "sriov_vf_msix_count" entry.
+ *              This changes the MSI-X Table Size in the VF's Message Control register.
+ * @sriov_get_vf_total_msix: Total number of MSI-X vectors to distribute
+ *              to the VFs
  * @err_handler: See Documentation/PCI/pci-error-recovery.rst
  * @groups:	Sysfs attribute groups.
  * @driver:	Driver model structure.
@@ -871,6 +876,8 @@  struct pci_driver {
 	int  (*resume)(struct pci_dev *dev);	/* Device woken up */
 	void (*shutdown)(struct pci_dev *dev);
 	int  (*sriov_configure)(struct pci_dev *dev, int num_vfs); /* On PF */
+	int  (*sriov_set_msix_vec_count)(struct pci_dev *vf, int msix_vec_count); /* On PF */
+	u32  (*sriov_get_vf_total_msix)(struct pci_dev *pf);
 	const struct pci_error_handlers *err_handler;
 	const struct attribute_group **groups;
 	struct device_driver	driver;
@@ -2059,6 +2066,9 @@  void __iomem *pci_ioremap_wc_bar(struct pci_dev *pdev, int bar);
 int pci_iov_virtfn_bus(struct pci_dev *dev, int id);
 int pci_iov_virtfn_devfn(struct pci_dev *dev, int id);

+int pci_enable_vf_overlay(struct pci_dev *dev);
+void pci_disable_vf_overlay(struct pci_dev *dev);
+
 int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn);
 void pci_disable_sriov(struct pci_dev *dev);

@@ -2100,6 +2110,8 @@  static inline int pci_iov_add_virtfn(struct pci_dev *dev, int id)
 }
 static inline void pci_iov_remove_virtfn(struct pci_dev *dev,
 					 int id) { }
+static inline int pci_enable_vf_overlay(struct pci_dev *dev) { return 0; }
+static inline void pci_disable_vf_overlay(struct pci_dev *dev) { }
 static inline void pci_disable_sriov(struct pci_dev *dev) { }
 static inline int pci_num_vf(struct pci_dev *dev) { return 0; }
 static inline int pci_vfs_assigned(struct pci_dev *dev)