
PCI/MSI: don't try to apply MSI(-X) affinity for single vectors

Message ID 20170726201741.4842-1-hch@lst.de
State Not Applicable

Commit Message

Christoph Hellwig July 26, 2017, 8:17 p.m. UTC
We'll always get NULL back in that case, so skip the call and the
resulting warning.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/pci/msi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Bjorn Helgaas Aug. 2, 2017, 6:24 p.m. UTC | #1
On Wed, Jul 26, 2017 at 10:17:41PM +0200, Christoph Hellwig wrote:
> We'll always get NULL back in that case, so skip the call and the
> resulting warning.

1. I'm not sure PCI_IRQ_AFFINITY was the right name.  IIUC, an
MSI/MSI-X vector is always basically bound to a CPU, so we always have
affinity.  The only difference with PCI_IRQ_AFFINITY is that instead
of binding them all to the same CPU, we spread them around.  Maybe
PCI_IRQ_SPREAD would be more suggestive.  But whatever, it is what it
is, and I'll expand the changelog to something like this:

  Calling pci_alloc_irq_vectors() with PCI_IRQ_AFFINITY indicates
  that we should spread the MSI vectors around the available CPUs.
  But if we're only allocating one vector, there's nothing to spread
  around.

2. The patch makes sense in that if we're only allocating a single
vector, there's nothing to spread around and there's no need to
allocate a cpumask.  But I haven't figured out why we get a warning.
I assume it's because we're getting NULL back when we call
irq_create_affinity_masks() with nvecs==1, but that only happens if
affv==0 or the zalloc fails, and I don't see why either would be the
case.
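
For reference, a minimal driver-side sketch of the pci_alloc_irq_vectors_affinity() call under discussion; the function name, device pointer, vector counts, and pre_vectors value below are illustrative, not taken from this thread:

#include <linux/pci.h>
#include <linux/interrupt.h>

/* Hypothetical driver setup: request spread-out MSI/MSI-X vectors. */
static int example_setup_irqs(struct pci_dev *pdev)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* e.g. one admin vector excluded from spreading */
	};
	int nvecs;

	/* Ask for 1..8 vectors; PCI_IRQ_AFFINITY requests spreading over CPUs. */
	nvecs = pci_alloc_irq_vectors_affinity(pdev, 1, 8,
			PCI_IRQ_MSI | PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
			&affd);
	if (nvecs < 0)
		return nvecs;

	/* If only one vector was granted, there is nothing to spread around. */
	return 0;
}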

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/pci/msi.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index 253d92409bb3..19653e5cb68f 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -538,7 +538,7 @@ msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
>  	struct msi_desc *entry;
>  	u16 control;
>  
> -	if (affd) {
> +	if (affd && nvec > 1) {
>  		masks = irq_create_affinity_masks(nvec, affd);
>  		if (!masks)
>  			dev_err(&dev->dev, "can't allocate MSI affinity masks for %d vectors\n",
> @@ -679,7 +679,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
>  	struct msi_desc *entry;
>  	int ret, i;
>  
> -	if (affd) {
> +	if (affd && nvec > 1) {
>  		masks = irq_create_affinity_masks(nvec, affd);
>  		if (!masks)
>  			dev_err(&dev->dev, "can't allocate MSI-X affinity masks for %d vectors\n",
> -- 
> 2.11.0
>
Bjorn Helgaas Aug. 14, 2017, 8:33 p.m. UTC | #2
On Wed, Aug 02, 2017 at 01:24:58PM -0500, Bjorn Helgaas wrote:
> On Wed, Jul 26, 2017 at 10:17:41PM +0200, Christoph Hellwig wrote:
> > We'll always get NULL back in that case, so skip the call and the
> > resulting warning.
> ...

> 2. The patch makes sense in that if we're only allocating a single
> vector, there's nothing to spread around and there's no need to
> allocate a cpumask.  But I haven't figured out why we get a warning.
> I assume it's because we're getting NULL back when we call
> irq_create_affinity_masks() with nvecs==1, but that only happens if
> affv==0 or the zalloc fails, and I don't see why either would be the
> case.

Waiting for clarification on this question...

> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > ---
> >  drivers/pci/msi.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> > index 253d92409bb3..19653e5cb68f 100644
> > --- a/drivers/pci/msi.c
> > +++ b/drivers/pci/msi.c
> > @@ -538,7 +538,7 @@ msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
> >  	struct msi_desc *entry;
> >  	u16 control;
> >  
> > -	if (affd) {
> > +	if (affd && nvec > 1) {
> >  		masks = irq_create_affinity_masks(nvec, affd);
> >  		if (!masks)
> >  			dev_err(&dev->dev, "can't allocate MSI affinity masks for %d vectors\n",
> > @@ -679,7 +679,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
> >  	struct msi_desc *entry;
> >  	int ret, i;
> >  
> > -	if (affd) {
> > +	if (affd && nvec > 1) {
> >  		masks = irq_create_affinity_masks(nvec, affd);
> >  		if (!masks)
> >  			dev_err(&dev->dev, "can't allocate MSI-X affinity masks for %d vectors\n",
> > -- 
> > 2.11.0
> >
Christoph Hellwig Aug. 21, 2017, 6:39 p.m. UTC | #3
On Wed, Aug 02, 2017 at 01:24:58PM -0500, Bjorn Helgaas wrote:
> On Wed, Jul 26, 2017 at 10:17:41PM +0200, Christoph Hellwig wrote:
> > We'll always get NULL back in that case, so skip the call and the
> > resulting warning.
> 
> 1. I'm not sure PCI_IRQ_AFFINITY was the right name.  IIUC, an
> MSI/MSI-X vector is always basically bound to a CPU,

This will depend on your architecture.

> so we always have
> affinity.  The only difference with PCI_IRQ_AFFINITY is that instead
> of binding them all to the same CPU, we spread them around.  Maybe
> PCI_IRQ_SPREAD would be more suggestive.  But whatever, it is what it
> is, and I'll expand the changelog to something like this:

Yes, that might be a better name.  We don't have that many callers
yet, so we could probably still change it.

> 
>   Calling pci_alloc_irq_vectors() with PCI_IRQ_AFFINITY indicates
>   that we should spread the MSI vectors around the available CPUs.
>   But if we're only allocating one vector, there's nothing to spread
>   around.

Ok.

> 2. The patch makes sense in that if we're only allocating a single
> vector, there's nothing to spread around and there's no need to
> allocate a cpumask.  But I haven't figured out why we get a warning.
> I assume it's because we're getting NULL back when we call
> irq_create_affinity_masks() with nvecs==1, but that only happens if
> affv==0 or the zalloc fails, and I don't see why either would be the
> case.

It happens for the !CONFIG_SMP case.  It also happens when pre_vectors
or post_vectors reduces the affinity vector count to 1 inside
irq_create_affinity_masks, so maybe this patch isn't the best approach
and the warning should either move into irq_create_affinity_masks or
just be removed entirely.
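
For readers following along, here is a rough pseudocode summary of the main NULL-return paths named in this thread (a simplified sketch of the logic described above, not a copy of the kernel sources of the time):

#ifndef CONFIG_SMP
/* UP build: the whole function is a stub, so callers always see NULL. */
static inline struct cpumask *
irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
{
	return NULL;
}
#else
struct cpumask *
irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
{
	/* Vectors reserved by the driver are excluded from spreading. */
	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
	struct cpumask *masks;

	if (!affv)
		return NULL;		/* nothing left to spread (the "affv==0" case above) */

	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
	if (!masks)
		return NULL;		/* internal allocation failed (the "zalloc fails" case above) */

	/* ... spreading of the affv middle vectors across CPUs elided ... */
	return masks;
}
#endif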
Bjorn Helgaas Aug. 26, 2017, 12:02 a.m. UTC | #4
On Mon, Aug 21, 2017 at 08:39:05PM +0200, Christoph Hellwig wrote:
> On Wed, Aug 02, 2017 at 01:24:58PM -0500, Bjorn Helgaas wrote:
> > On Wed, Jul 26, 2017 at 10:17:41PM +0200, Christoph Hellwig wrote:
> > > We'll always get NULL back in that case, so skip the call and the
> > > resulting warning.
> > 
> > 1. I'm not sure PCI_IRQ_AFFINITY was the right name.  IIUC, an
> > MSI/MSI-X vector is always basically bound to a CPU,
> 
> This will depend on your architecture.
> 
> > so we always have
> > affinity.  The only difference with PCI_IRQ_AFFINITY is that instead
> > of binding them all to the same CPU, we spread them around.  Maybe
> > PCI_IRQ_SPREAD would be more suggestive.  But whatever, it is what it
> > is, and I'll expand the changelog to something like this:
> 
> Yes, that might be a better name.  We don't have that many callers
> yet, so we could probably still change it.
> 
> > 
> >   Calling pci_alloc_irq_vectors() with PCI_IRQ_AFFINITY indicates
> >   that we should spread the MSI vectors around the available CPUs.
> >   But if we're only allocating one vector, there's nothing to spread
> >   around.
> 
> Ok.
> 
> > 2. The patch makes sense in that if we're only allocating a single
> > vector, there's nothing to spread around and there's no need to
> > allocate a cpumask.  But I haven't figured out why we get a warning.
> > I assume it's because we're getting NULL back when we call
> > irq_create_affinity_masks() with nvecs==1, but that only happens if
> > affv==0 or the zalloc fails, and I don't see why either would be the
> > case.
> 
> It happens for the !CONFIG_SMP case.  It also happens when pre_vectors
> or post_vectors reduces the affinity vector count to 1 inside
> irq_create_affinity_masks, so maybe this patch isn't the best approach
> and the warning should either move into irq_create_affinity_masks or
> just be removed entirely.

Oh, thanks, I totally missed the !CONFIG_SMP case, sorry about that.

I applied the follow-on patch, which I think obsoletes this one.

Patch

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 253d92409bb3..19653e5cb68f 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -538,7 +538,7 @@ msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
 	struct msi_desc *entry;
 	u16 control;
 
-	if (affd) {
+	if (affd && nvec > 1) {
 		masks = irq_create_affinity_masks(nvec, affd);
 		if (!masks)
 			dev_err(&dev->dev, "can't allocate MSI affinity masks for %d vectors\n",
@@ -679,7 +679,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 	struct msi_desc *entry;
 	int ret, i;
 
-	if (affd) {
+	if (affd && nvec > 1) {
 		masks = irq_create_affinity_masks(nvec, affd);
 		if (!masks)
 			dev_err(&dev->dev, "can't allocate MSI-X affinity masks for %d vectors\n",