
[1/3] PCI: ensure the PCI device is locked over ->reset_notify calls

Message ID 20170601111039.8913-2-hch@lst.de
State Accepted

Commit Message

Christoph Hellwig June 1, 2017, 11:10 a.m. UTC
Without this ->notify_reset instance may race with ->remove calls,
which can be easily triggered in NVMe.

Reported-by: Rakesh Pandit <rakesh@tuxera.com>
Tested-by: Rakesh Pandit <rakesh@tuxera.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/pci/pci.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

Comments

Bjorn Helgaas June 6, 2017, 5:31 a.m. UTC | #1
On Thu, Jun 01, 2017 at 01:10:37PM +0200, Christoph Hellwig wrote:
> Without this ->notify_reset instance may race with ->remove calls,

s/notify_reset/reset_notify/

> which can be easily triggered in NVMe.

OK, sorry to be dense; it's taking me a long time to work out the
details here.  It feels like there should be a general principle to
help figure out where we need locking, and it would be really awesome
if we could include that in the changelog.  But it's not obvious to me
what that principle would be.

I think the two racing paths are these:

PATH 1 (write to sysfs "reset" file):

  sysfs_kf_write                      # <-- A (sysfs write)
    reset_store
      pci_reset_function
        pci_dev_lock                  # <-- patch moves lock here
          device_lock
        pci_dev_save_and_disable
          pci_reset_notify(dev, true)
            err_handler->reset_notify
              nvme_reset_notify       # nvme_err_handler.reset_notify
                nvme_dev_disable      # prepare == true
                  # shutdown == false
                  nvme_pci_disable
          pci_save_state
        pci_dev_reset
          pci_dev_lock                # <-- lock was originally here
          __pci_dev_reset
            pcie_flr                  # <-- B (issue reset)
          pci_dev_unlock              # <-- unlock was originally here
        pci_dev_restore
          pci_restore_state
          pci_reset_notify(dev, false)
            err_handler->reset_notify
              nvme_reset_notify       # nvme_err_handler.reset_notify
                dev = pci_get_drvdata(pdev)   # <-- F (dev == NULL)
                nvme_reset            # prepare == false
                  queue_work(..., &dev->reset_work)   # nvme_reset_work
        pci_dev_unlock                # <-- patch moves unlock here

  ...
  nvme_reset_work
    nvme_remove_dead_ctrl
      nvme_dev_disable
        if (!schedule_work(&dev->remove_work)) # nvme_remove_dead_ctrl_work
          nvme_put_ctrl

  ...
  nvme_remove_dead_ctrl_work
    if (pci_get_drvdata(pdev))
      device_release_driver(&pdev->dev)
        ...
          __device_release_driver
            drv->remove
              nvme_remove
                pci_set_drvdata(pdev, NULL)

PATH 2 (AER recovery):

  do_recovery                         # <-- C (AER interrupt)
    if (severity == AER_FATAL)
      state = pci_channel_io_frozen
    status = broadcast_error_message(..., report_error_detected)
      pci_walk_bus
        report_error_detected
          err_handler->error_detected
            nvme_error_detected
              return PCI_ERS_RESULT_NEED_RESET        # frozen case
    # status == PCI_ERS_RESULT_NEED_RESET
    if (severity == AER_FATAL)
      reset_link
    if (status == PCI_ERS_RESULT_NEED_RESET)
      broadcast_error_message(..., report_slot_reset)
        pci_walk_bus
          report_slot_reset
            device_lock               # <-- D (acquire device lock)
            err_handler->slot_reset
              nvme_slot_reset
                nvme_reset
                  queue_work(..., &dev->reset_work)   # nvme_reset_work
            device_unlock             # <-- unlock

  ...
  nvme_reset_work
    ...
      schedule_work(&dev->remove_work)  # nvme_remove_dead_ctrl_work

  ...
  nvme_remove_dead_ctrl_work
    ...
      drv->remove
        nvme_remove                     # <-- E (driver remove() method)
          pci_set_drvdata(pdev, NULL)

So the critical point is that with the current code, we can have this
sequence:

  A sysfs write occurs
  B sysfs write thread issues FLR to device
  C AER thread takes an interrupt as a result of the FLR
  D AER thread acquires device lock
  E AER thread calls the driver remove() method and clears drvdata
  F sysfs write thread reads drvdata which is now NULL

The AER thread acquires the device lock before calling
err_handler->slot_reset, and this patch makes the sysfs thread hold
the lock until after it has read drvdata, so the patch avoids the NULL
pointer dereference at F by changing the sequence to this:

  A sysfs write occurs
  B sysfs write thread issues FLR to device
  C AER thread takes an interrupt as a result of the FLR
  F sysfs write thread reads drvdata
  D AER thread acquires device lock
  E AER thread calls the driver remove() method and clears drvdata

But I'm still nervous because I think both threads will queue
nvme_reset_work() work items for the same device, and I'm not sure
they're prepared to run concurrently.

I don't really think it should be the driver's responsibility to
understand issues like this and worry about things like
nvme_reset_work() running concurrently.  So I'm thinking maybe the PCI
core needs to be a little stricter here, but I don't know exactly
*how*.

What do you think?

Bjorn

> Reported-by: Rakesh Pandit <rakesh@tuxera.com>
> Tested-by: Rakesh Pandit <rakesh@tuxera.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/pci/pci.c | 20 ++++++++++++--------
>  1 file changed, 12 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 563901cd9c06..92f7e5ae2e5e 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -4276,11 +4276,13 @@ int pci_reset_function(struct pci_dev *dev)
>  	if (rc)
>  		return rc;
>  
> +	pci_dev_lock(dev);
>  	pci_dev_save_and_disable(dev);
>  
> -	rc = pci_dev_reset(dev, 0);
> +	rc = __pci_dev_reset(dev, 0);
>  
>  	pci_dev_restore(dev);
> +	pci_dev_unlock(dev);
>  
>  	return rc;
>  }
> @@ -4300,16 +4302,14 @@ int pci_try_reset_function(struct pci_dev *dev)
>  	if (rc)
>  		return rc;
>  
> -	pci_dev_save_and_disable(dev);
> +	if (!pci_dev_trylock(dev))
> +		return -EAGAIN;
>  
> -	if (pci_dev_trylock(dev)) {
> -		rc = __pci_dev_reset(dev, 0);
> -		pci_dev_unlock(dev);
> -	} else
> -		rc = -EAGAIN;
> +	pci_dev_save_and_disable(dev);
> +	rc = __pci_dev_reset(dev, 0);
> +	pci_dev_unlock(dev);
>  
>  	pci_dev_restore(dev);
> -
>  	return rc;
>  }
>  EXPORT_SYMBOL_GPL(pci_try_reset_function);
> @@ -4459,7 +4459,9 @@ static void pci_bus_save_and_disable(struct pci_bus *bus)
>  	struct pci_dev *dev;
>  
>  	list_for_each_entry(dev, &bus->devices, bus_list) {
> +		pci_dev_lock(dev);
>  		pci_dev_save_and_disable(dev);
> +		pci_dev_unlock(dev);
>  		if (dev->subordinate)
>  			pci_bus_save_and_disable(dev->subordinate);
>  	}
> @@ -4474,7 +4476,9 @@ static void pci_bus_restore(struct pci_bus *bus)
>  	struct pci_dev *dev;
>  
>  	list_for_each_entry(dev, &bus->devices, bus_list) {
> +		pci_dev_lock(dev);
>  		pci_dev_restore(dev);
> +		pci_dev_unlock(dev);
>  		if (dev->subordinate)
>  			pci_bus_restore(dev->subordinate);
>  	}
> -- 
> 2.11.0
>
Marta Rybczynska June 6, 2017, 7:28 a.m. UTC | #2
> 
> But I'm still nervous because I think both threads will queue
> nvme_reset_work() work items for the same device, and I'm not sure
> they're prepared to run concurrently.
> 
> I don't really think it should be the driver's responsibility to
> understand issues like this and worry about things like
> nvme_reset_work() running concurrently.  So I'm thinking maybe the PCI
> core needs to be a little stricter here, but I don't know exactly
> *how*.
> 
> What do you think?

From what I can see, nvme_reset_work may disable the controller (the
out label) if run concurrently. If it runs twice it will also
initialize the controller twice, which isn't great either.

I think that running nvme_reset_work twice should be prevented. Maybe
add a piece of state saying that the device is in the reset procedure,
so that nvme_reset_work runs just once?

Marta
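
For illustration, a guard of the kind Marta suggests might look roughly
like this (a sketch only: the NVME_DEV_RESETTING bit and the dev->flags
field are assumptions, not the driver's actual state machine):

/*
 * Sketch: serialize resets with an atomic flag so that the sysfs reset
 * path and the AER slot_reset path cannot both queue nvme_reset_work
 * for the same controller.
 */
static int nvme_reset(struct nvme_dev *dev)
{
	/* Whoever sets the bit first queues the work; everyone else backs off. */
	if (test_and_set_bit(NVME_DEV_RESETTING, &dev->flags))
		return -EBUSY;

	queue_work(nvme_workq, &dev->reset_work);
	return 0;
}

/* ...and nvme_reset_work() clears NVME_DEV_RESETTING on its way out. */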
Christoph Hellwig June 6, 2017, 10:48 a.m. UTC | #3
On Tue, Jun 06, 2017 at 12:31:42AM -0500, Bjorn Helgaas wrote:
> OK, sorry to be dense; it's taking me a long time to work out the
> details here.  It feels like there should be a general principle to
> help figure out where we need locking, and it would be really awesome
> if we could include that in the changelog.  But it's not obvious to me
> what that principle would be.

The principle is very simple: every method in struct device_driver
or structures derived from it, like struct pci_driver, MUST provide
exclusion vs ->remove.  Usually by using device_lock().

If we don't provide such an exclusion the method call can race with
a removal in one form or another.
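
To make that concrete: the driver core already calls ->remove() under
the device lock, so any other method call made under the same lock
cannot overlap a removal.  Roughly (the removal side is simplified from
drivers/base/dd.c; the caller side is how pci_reset_function() looks
with this patch applied, with the probe step elided):

/* Removal side (driver core, simplified): ->remove() runs under the device lock. */
void device_release_driver(struct device *dev)
{
	device_lock(dev);
	__device_release_driver(dev);	/* eventually calls the driver's ->remove() */
	device_unlock(dev);
}

/* Caller side: taking the same lock around the whole reset keeps the
 * driver bound (or unbound) for the full sequence, including both
 * ->reset_notify() calls.
 */
int pci_reset_function(struct pci_dev *dev)
{
	int rc;

	/* ... probe whether the device supports reset at all (unchanged) ... */

	pci_dev_lock(dev);		/* blocks user config access + device_lock() */
	pci_dev_save_and_disable(dev);	/* -> err_handler->reset_notify(dev, true) */
	rc = __pci_dev_reset(dev, 0);
	pci_dev_restore(dev);		/* -> err_handler->reset_notify(dev, false) */
	pci_dev_unlock(dev);

	return rc;
}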

> But I'm still nervous because I think both threads will queue
> nvme_reset_work() work items for the same device, and I'm not sure
> they're prepared to run concurrently.

We had another bug in that area, and the fix for that is hopefully
going to go into the next 4.12-rc.

> I don't really think it should be the driver's responsibility to
> understand issues like this and worry about things like
> nvme_reset_work() running concurrently.  So I'm thinking maybe the PCI
> core needs to be a little stricter here, but I don't know exactly
> *how*.
> 
> What do you think?

The driver core / bus driver must ensure that method calls don't
race with ->remove.  There is nothing the driver can do about it,
and the race is just as possible with explicit user removals or
hardware hotplug.
Bjorn Helgaas June 6, 2017, 9:14 p.m. UTC | #4
On Tue, Jun 06, 2017 at 12:48:36PM +0200, Christoph Hellwig wrote:
> On Tue, Jun 06, 2017 at 12:31:42AM -0500, Bjorn Helgaas wrote:
> > OK, sorry to be dense; it's taking me a long time to work out the
> > details here.  It feels like there should be a general principle to
> > help figure out where we need locking, and it would be really awesome
> > if we could include that in the changelog.  But it's not obvious to me
> > what that principle would be.
> 
> The principle is very simple: every method in struct device_driver
> or structures derived from it, like struct pci_driver, MUST provide
> exclusion vs ->remove.  Usually by using device_lock().
> 
> If we don't provide such an exclusion the method call can race with
> a removal in one form or another.

So I guess the method here is
dev->driver->err_handler->reset_notify(), and the PCI core should be
holding device_lock() while calling it?  That makes sense to me;
thanks a lot for articulating that!

1) The current patch protects the err_handler->reset_notify() uses by
adding or expanding device_lock regions in the paths that lead to
pci_reset_notify().  Could we simplify it by doing the locking
directly in pci_reset_notify()?  Then it would be easy to verify the
locking, and we would be less likely to add new callers without the
proper locking.
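
A sketch of that alternative, with the lock taken directly in
pci_reset_notify() (hypothetical; the posted series takes
pci_dev_lock() in the callers instead):

static void pci_reset_notify(struct pci_dev *dev, bool prepare)
{
	const struct pci_error_handlers *err_handler;

	/*
	 * Hold the device lock so ->reset_notify() cannot race ->remove():
	 * dev->driver is stable for the duration of the callback.
	 */
	device_lock(&dev->dev);
	err_handler = dev->driver ? dev->driver->err_handler : NULL;
	if (err_handler && err_handler->reset_notify)
		err_handler->reset_notify(dev, prepare);
	device_unlock(&dev->dev);
}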

2) Stating the rule explicitly helps look for other problems, and I
think we have a similar problem in all the pcie_portdrv_err_handler
methods.  These are all called in the AER do_recovery() path, and the
functions there, e.g., report_error_detected() do hold device_lock().
But pcie_portdrv_error_detected() propagates this to all the children,
and we *don't* hold the lock for the children.

Bjorn
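
A sketch of what closing that hole could look like in the
pci_walk_bus() callback (simplified; the iterator shape is an
assumption and merge_result() stands in for however the AER code
combines the per-device results):

static int error_detected_iter(struct pci_dev *dev, void *data)
{
	struct aer_broadcast_data *result_data = data;
	pci_ers_result_t vote;

	/*
	 * Take each child's device lock so its ->error_detected() cannot
	 * race with ->remove() of that child.
	 */
	device_lock(&dev->dev);
	if (dev->driver && dev->driver->err_handler &&
	    dev->driver->err_handler->error_detected) {
		vote = dev->driver->err_handler->error_detected(dev,
							result_data->state);
		result_data->result = merge_result(result_data->result, vote);
	}
	device_unlock(&dev->dev);
	return 0;
}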
Christoph Hellwig June 7, 2017, 6:29 p.m. UTC | #5
On Tue, Jun 06, 2017 at 04:14:43PM -0500, Bjorn Helgaas wrote:
> So I guess the method here is
> dev->driver->err_handler->reset_notify(), and the PCI core should be
> holding device_lock() while calling it?  That makes sense to me;
> thanks a lot for articulating that!

Yes.

> 1) The current patch protects the err_handler->reset_notify() uses by
> adding or expanding device_lock regions in the paths that lead to
> pci_reset_notify().  Could we simplify it by doing the locking
> directly in pci_reset_notify()?  Then it would be easy to verify the
> locking, and we would be less likely to add new callers without the
> proper locking.

We could do that, except that I'd rather hold the lock over a longer
period if we have many calls following each other.  I also have
a patch to actually kill pci_reset_notify() later in the series as
well, as the calling convention for it and ->reset_notify() are
awkward - depending on the prepare parameter they do two entirely
different things.  That being said I could also add new
pci_reset_prepare() and pci_reset_done() helpers.
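
For reference, a sketch of those two helpers (hypothetical; they would
just give pci_reset_notify()'s two cases separate, self-describing
entry points, with the caller still expected to hold the device lock):

static void pci_reset_prepare(struct pci_dev *dev)
{
	const struct pci_error_handlers *err_handler =
		dev->driver ? dev->driver->err_handler : NULL;

	/* Caller holds the device lock (see above). */
	if (err_handler && err_handler->reset_notify)
		err_handler->reset_notify(dev, true);
}

static void pci_reset_done(struct pci_dev *dev)
{
	const struct pci_error_handlers *err_handler =
		dev->driver ? dev->driver->err_handler : NULL;

	/* Caller holds the device lock (see above). */
	if (err_handler && err_handler->reset_notify)
		err_handler->reset_notify(dev, false);
}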

> 2) Stating the rule explicitly helps look for other problems, and I
> think we have a similar problem in all the pcie_portdrv_err_handler
> methods.

Yes, I mentioned this earlier, and I also vaguely remember we got
bug reports from IBM on power for this a while ago.  I just don't
feel confident enough to touch all these without a good test plan.
Bjorn Helgaas June 12, 2017, 11:14 p.m. UTC | #6
On Wed, Jun 07, 2017 at 08:29:36PM +0200, Christoph Hellwig wrote:
> On Tue, Jun 06, 2017 at 04:14:43PM -0500, Bjorn Helgaas wrote:
> > So I guess the method here is
> > dev->driver->err_handler->reset_notify(), and the PCI core should be
> > holding device_lock() while calling it?  That makes sense to me;
> > thanks a lot for articulating that!
> 
> Yes.
> 
> > 1) The current patch protects the err_handler->reset_notify() uses by
> > adding or expanding device_lock regions in the paths that lead to
> > pci_reset_notify().  Could we simplify it by doing the locking
> > directly in pci_reset_notify()?  Then it would be easy to verify the
> > locking, and we would be less likely to add new callers without the
> > proper locking.
> 
> We could do that, except that I'd rather hold the lock over a longer
> period if we have many calls following each other.  

My main concern is being able to verify the locking.  I think that is
much easier if the locking is adjacent to the method invocation.  But
if you just add a comment at the method invocation about where the
locking is, that should be sufficient.

> I also have
> a patch to actually kill pci_reset_notify() later in the series as
> well, as the calling convention for it and ->reset_notify() are
> awkward - depending on the prepare parameter they do two entirely
> different things.  That being said I could also add new
> pci_reset_prepare() and pci_reset_done() helpers.

I like your pci_reset_notify() changes; they make that much clearer.
I don't think new helpers are necessary.

> > 2) Stating the rule explicitly helps look for other problems, and I
> > think we have a similar problem in all the pcie_portdrv_err_handler
> > methods.
> 
> Yes, I mentioned this earlier, and I also vaguely remember we got
> bug reports from IBM on power for this a while ago.  I just don't
> feel confident enough to touch all these without a good test plan.

Hmmm.  I see your point, but I hate leaving a known bug unfixed.  I
wonder if some enterprising soul could tickle this bug by injecting
errors while removing and rescanning devices below the bridge?

Bjorn
Christoph Hellwig June 13, 2017, 7:08 a.m. UTC | #7
On Mon, Jun 12, 2017 at 06:14:23PM -0500, Bjorn Helgaas wrote:
> My main concern is being able to verify the locking.  I think that is
> much easier if the locking is adjacent to the method invocation.  But
> if you just add a comment at the method invocation about where the
> locking is, that should be sufficient.

Ok.  I can add comments for all the methods as a separate patch,
similar to Documentation/filesystems/Locking
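
Such a comment block might look roughly like this next to struct
pci_error_handlers (a sketch; the exact set of methods and the wording
are assumptions):

/*
 * Locking rules for struct pci_error_handlers methods:
 *
 *	method			caller holds
 *	----------------------------------------------------
 *	error_detected		device_lock()
 *	mmio_enabled		device_lock()
 *	slot_reset		device_lock()
 *	resume			device_lock()
 *	reset_notify		device_lock(), via pci_dev_lock()
 *
 * Holding the device lock excludes these callbacks against a
 * concurrent ->probe() or ->remove() of the same device.
 */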

> > Yes, I mentioned this earlier, and I also vaguely remember we got
> > bug reports from IBM on power for this a while ago.  I just don't
> > feel confident enough to touch all these without a good test plan.
> 
> Hmmm.  I see your point, but I hate leaving a known bug unfixed.  I
> wonder if some enterprising soul could tickle this bug by injecting
> errors while removing and rescanning devices below the bridge?

I'm completely loaded up at the moment, but this sounds like a good
idea.

In the meantime how do you want to proceed with this patch?
Bjorn Helgaas June 13, 2017, 2:05 p.m. UTC | #8
On Tue, Jun 13, 2017 at 09:08:10AM +0200, Christoph Hellwig wrote:
> On Mon, Jun 12, 2017 at 06:14:23PM -0500, Bjorn Helgaas wrote:
> > My main concern is being able to verify the locking.  I think that is
> > much easier if the locking is adjacent to the method invocation.  But
> > if you just add a comment at the method invocation about where the
> > locking is, that should be sufficient.
> 
> Ok.  I can add comments for all the methods as a separate patch,
> similar to Documentation/filesystems/Locking
> 
> > > Yes, I mentioned this earlier, and I also vaguely remember we got
> > > bug reports from IBM on power for this a while ago.  I just don't
> > > feel confident enough to touch all these without a good test plan.
> > 
> > Hmmm.  I see your point, but I hate leaving a known bug unfixed.  I
> > wonder if some enterprising soul could tickle this bug by injecting
> > errors while removing and rescanning devices below the bridge?
> 
> I'm completely loaded up at the moment, but this sounds like a good
> idea.
> 
> In the meantime how do you want to proceed with this patch?

Can you just add comments about the locking?  I'd prefer that in the
same patch that adds the locking because that's what I had a hard time
reviewing.  I'm not thinking of anything fancy like
Documentation/filesystems/Locking; I'm just thinking of something
along the lines of "caller must hold pci_dev_lock() to protect
err_handler->reset_notify from racing ->remove()".  And the changelog
should contain the general principle about the locking strategy.

Bjorn
Guilherme G. Piccoli June 22, 2017, 8:41 p.m. UTC | #9
On 06/12/2017 08:14 PM, Bjorn Helgaas wrote:
> On Wed, Jun 07, 2017 at 08:29:36PM +0200, Christoph Hellwig wrote:
>> On Tue, Jun 06, 2017 at 04:14:43PM -0500, Bjorn Helgaas wrote:
>>> So I guess the method here is
>>> dev->driver->err_handler->reset_notify(), and the PCI core should be
>>> holding device_lock() while calling it?  That makes sense to me;
>>> thanks a lot for articulating that!
>>
>> Yes.
>>
>>> 1) The current patch protects the err_handler->reset_notify() uses by
>>> adding or expanding device_lock regions in the paths that lead to
>>> pci_reset_notify().  Could we simplify it by doing the locking
>>> directly in pci_reset_notify()?  Then it would be easy to verify the
>>> locking, and we would be less likely to add new callers without the
>>> proper locking.
>>
>> We could do that, except that I'd rather hold the lock over a longer
>> period if we have many calls following each other.  
> 
> My main concern is being able to verify the locking.  I think that is
> much easier if the locking is adjacent to the method invocation.  But
> if you just add a comment at the method invocation about where the
> locking is, that should be sufficient.
> 
>> I also have
>> a patch to actually kill pci_reset_notify() later in the series as
>> well, as the calling convention for it and ->reset_notify() are
>> awkward - depending on the prepare parameter they do two entirely
>> different things.  That being said I could also add new
>> pci_reset_prepare() and pci_reset_done() helpers.
> 
> I like your pci_reset_notify() changes; they make that much clearer.
> I don't think new helpers are necessary.
> 
>>> 2) Stating the rule explicitly helps look for other problems, and I
>>> think we have a similar problem in all the pcie_portdrv_err_handler
>>> methods.
>>
>> Yes, I mentioned this earlier, and I also vaguely remember we got
>> bug reports from IBM on power for this a while ago.  I just don't
>> feel confident enough to touch all these without a good test plan.
> 
> Hmmm.  I see your point, but I hate leaving a known bug unfixed.  I
> wonder if some enterprising soul could tickle this bug by injecting
> errors while removing and rescanning devices below the bridge?

Well, although I don't consider myself an enterprising soul...heheh
I can test it; just CC me on the next spin and provide some comments on
how to test (or point me to the thread of the original report).

I guess I was the reporter of that issue; I tried a simple fix for our
case and Christoph mentioned the issue was more generic and needed a
proper fix.

Hopefully this one is that fix!
Thanks,


Guilherme

> 
> Bjorn
>

Patch

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 563901cd9c06..92f7e5ae2e5e 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -4276,11 +4276,13 @@  int pci_reset_function(struct pci_dev *dev)
 	if (rc)
 		return rc;
 
+	pci_dev_lock(dev);
 	pci_dev_save_and_disable(dev);
 
-	rc = pci_dev_reset(dev, 0);
+	rc = __pci_dev_reset(dev, 0);
 
 	pci_dev_restore(dev);
+	pci_dev_unlock(dev);
 
 	return rc;
 }
@@ -4300,16 +4302,14 @@  int pci_try_reset_function(struct pci_dev *dev)
 	if (rc)
 		return rc;
 
-	pci_dev_save_and_disable(dev);
+	if (!pci_dev_trylock(dev))
+		return -EAGAIN;
 
-	if (pci_dev_trylock(dev)) {
-		rc = __pci_dev_reset(dev, 0);
-		pci_dev_unlock(dev);
-	} else
-		rc = -EAGAIN;
+	pci_dev_save_and_disable(dev);
+	rc = __pci_dev_reset(dev, 0);
+	pci_dev_unlock(dev);
 
 	pci_dev_restore(dev);
-
 	return rc;
 }
 EXPORT_SYMBOL_GPL(pci_try_reset_function);
@@ -4459,7 +4459,9 @@  static void pci_bus_save_and_disable(struct pci_bus *bus)
 	struct pci_dev *dev;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
+		pci_dev_lock(dev);
 		pci_dev_save_and_disable(dev);
+		pci_dev_unlock(dev);
 		if (dev->subordinate)
 			pci_bus_save_and_disable(dev->subordinate);
 	}
@@ -4474,7 +4476,9 @@  static void pci_bus_restore(struct pci_bus *bus)
 	struct pci_dev *dev;
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
+		pci_dev_lock(dev);
 		pci_dev_restore(dev);
+		pci_dev_unlock(dev);
 		if (dev->subordinate)
 			pci_bus_restore(dev->subordinate);
 	}