[2/5,V2] PCI: pciehp: check and wait port status out of DPC before handling DLLSC and PDC

Message ID 20200927032829.11321-3-haifeng.zhao@intel.com
State New
Series Fix DPC hotplug race and enhance error handling

Commit Message

Zhao, Haifeng Sept. 27, 2020, 3:28 a.m. UTC
When the root port has DPC capability and DPC is enabled, an error can
trigger DPC together with DLLSC and PDC interrupts, which are delivered
to the DPC driver and the pciehp driver at the same time.
That causes the following results:

1. The link and device are recovered by hardware DPC and the software
   DPC driver; the device is not removed, but pciehp may treat it as if
   it had been hot-removed.

2. A race occurs between pciehp_unconfigure_device(), called from
   pciehp_ist() in the pciehp driver, and pcie_do_recovery(), called
   from dpc_handler() in the DPC driver. Unfortunately, there is no lock
   to protect pci_stop_and_remove_bus_device() against pci_walk_bus():
   they hold a different semaphore and mutex.
   pci_stop_and_remove_bus_device() runs under pci_rescan_remove_lock (a
   mutex), while pci_walk_bus() holds pci_bus_sem (a semaphore), as the
   abridged sketch below shows.
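
For reference, the two racing paths look roughly like this (abridged
from drivers/pci/hotplug/pciehp_pci.c and drivers/pci/bus.c; the real
functions carry more logic, e.g. pci_walk_bus() also descends into
subordinate buses):

  /* pciehp removal path (abridged): serialized only by
   * pci_rescan_remove_lock, taken inside pci_lock_rescan_remove() */
  void pciehp_unconfigure_device(struct controller *ctrl, bool presence)
  {
          struct pci_dev *dev, *temp;
          struct pci_bus *parent = ctrl->pcie->port->subordinate;

          pci_lock_rescan_remove();
          list_for_each_entry_safe_reverse(dev, temp, &parent->devices,
                                           bus_list) {
                  pci_dev_get(dev);
                  pci_stop_and_remove_bus_device(dev); /* frees device state */
                  pci_dev_put(dev);
          }
          pci_unlock_rescan_remove();
  }

  /* DPC recovery path (abridged): serialized only by pci_bus_sem */
  void pci_walk_bus(struct pci_bus *top,
                    int (*cb)(struct pci_dev *, void *), void *userdata)
  {
          struct pci_dev *dev;

          down_read(&pci_bus_sem);   /* never takes pci_rescan_remove_lock */
          list_for_each_entry(dev, &top->devices, bus_list)
                  cb(dev, userdata); /* e.g. report_frozen_detected() */
          up_read(&pci_bus_sem);
  }

Since neither path takes the other's lock, the bus walk can hand a
pci_dev to the DPC error callbacks while pciehp is concurrently tearing
the same device down.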

This race condition is not just theoretical code analysis; it can be
triggered with the following command sequence:

  # setpci -s 64:02.0 0x196.w=000a // 64:02.0 root port has DPC capability
  # setpci -s 65:00.0 0x04.w=0544  // 65:00.0 NVMe SSD populated in port
  # mount /dev/nvme0n1p1 nvme

A single run causes a system panic with a NULL pointer dereference
(tested on stable 5.8 on an ICS (Ice Lake SP) platform, see
https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server)):

   Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
   BUG: kernel NULL pointer dereference, address: 0000000000000050
   #PF: supervisor read access in kernel mode
   #PF: error_code(0x0000) - not-present page
   PGD 0
   Oops: 0000 [#1] SMP NOPTI
   CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
   RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
   Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
   65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
   50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
   RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
   RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
   RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
   RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
   R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
   R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
   FS:  0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knlGS:0000000000000000
   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
   CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
   DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
   DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
   PKRU: 55555554
   Call Trace:
   ? report_normal_detected+0x20/0x20
   report_frozen_detected+0x16/0x20
   pci_walk_bus+0x75/0x90
   ? dpc_irq+0x90/0x90
   pcie_do_recovery+0x157/0x201
   ? irq_finalize_oneshot.part.47+0xe0/0xe0
   dpc_handler+0x29/0x40
   irq_thread_fn+0x24/0x60
   irq_thread+0xea/0x170
   ? irq_forced_thread_fn+0x80/0x80
   ? irq_thread_check_affinity+0xf0/0xf0
   kthread+0x124/0x140
   ? kthread_park+0x90/0x90
   ret_from_fork+0x1f/0x30
   Modules linked in: nft_fib_inet.........
   CR2: 0000000000000050

With this patch, the handling of DPC containment and hotplug events is
partly ordered and serialized: hardware DPC performs the controller
reset and other recovery actions first, then the DPC driver handles the
callbacks into the device drivers and clears the DPC status, and
finally pciehp handles the DLLSC and PDC events.
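
pci_wait_port_outdpc() is added earlier in this series and is not shown
here. A minimal sketch of what such a wait could look like, assuming it
simply polls the DPC Status register's trigger bit until containment is
released (the actual helper may differ):

  /* Sketch only: poll the DPC Status trigger bit until the port exits
   * DPC containment or a timeout expires. Assumes the DPC capability
   * offset is cached in pdev->dpc_cap. */
  static bool pci_wait_port_outdpc(struct pci_dev *pdev)
  {
          u16 cap = pdev->dpc_cap, status;
          int timeout = 1000;     /* ms */

          if (!cap)
                  return true;    /* no DPC capability, nothing to wait for */

          pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
          while (status & PCI_EXP_DPC_STATUS_TRIGGER && timeout > 0) {
                  msleep(10);
                  timeout -= 10;
                  pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
                                       &status);
          }
          return !(status & PCI_EXP_DPC_STATUS_TRIGGER);
  }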

Signed-off-by: Ethan Zhao <haifeng.zhao@intel.com>
Tested-by: Wen Jin <wen.jin@intel.com>
Tested-by: Shanshan Zhang <ShanshanX.Zhang@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
Changes:
 V2: revise doc according to Andy's suggestion.
 drivers/pci/hotplug/pciehp_hpc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

Sinan Kaya Sept. 27, 2020, 3:27 p.m. UTC | #1
On 9/26/2020 11:28 PM, Ethan Zhao wrote:
> diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
> index 53433b37e181..6f271160f18d 100644
> --- a/drivers/pci/hotplug/pciehp_hpc.c
> +++ b/drivers/pci/hotplug/pciehp_hpc.c
> @@ -710,8 +710,10 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
>  	down_read(&ctrl->reset_lock);
>  	if (events & DISABLE_SLOT)
>  		pciehp_handle_disable_request(ctrl);
> -	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
> +	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
> +		pci_wait_port_outdpc(pdev);
>  		pciehp_handle_presence_or_link_change(ctrl, events);
> +	}
>  	up_read(&ctrl->reset_lock);

This looks like a hack TBH.

Lukas, Keith;

What is your take on this?
Why is device lock not protecting this situation?

Is there a lock missing in hotplug driver?

Sinan
Zhao, Haifeng Sept. 28, 2020, 2:01 a.m. UTC | #2
Sinan,
   I explained why locks don't protect this case in the patch description:
the write side and the read side hold a different semaphore and mutex.

Thanks,
Ethan

Sinan Kaya Sept. 28, 2020, 11:10 a.m. UTC | #3
On 9/27/2020 10:01 PM, Zhao, Haifeng wrote:
> Sinan,
>    I explained why locks don't protect this case in the patch description:
> the write side and the read side hold a different semaphore and mutex.
> 

I have been thinking about it for some time, but is there any reason
why we have to handle all port AER/DPC/HP events in different threads?

Can we move to a single-threaded event loop for all port driver events?

This will require some refactoring, but it will eliminate the locking
nightmares we are having.

This means no sleeping. All sleeps need to happen outside of the loop.

I wanted to see what you all are thinking about this.

It might become a performance problem if the system is
continuously observing hotplug/AER/DPC events.

I always think that these should be rare events.
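
A rough sketch of this idea, assuming a shared ordered workqueue for
all port service events (all names here are hypothetical, for
illustration only):

  /* Hypothetical sketch: funnel AER/DPC/hotplug events through one
   * ordered workqueue so their handlers can never run concurrently. */
  static struct workqueue_struct *pcie_portdrv_wq;  /* hypothetical */

  struct pcie_event {                               /* hypothetical */
          struct work_struct work;
          struct pci_dev *port;
          void (*handler)(struct pci_dev *port);
  };

  static void pcie_event_fn(struct work_struct *work)
  {
          struct pcie_event *ev = container_of(work, struct pcie_event, work);

          ev->handler(ev->port);  /* e.g. the DPC, AER or hotplug handler */
          pci_dev_put(ev->port);
          kfree(ev);
  }

  /* Called from the AER/DPC/pciehp hardirq paths instead of waking
   * per-service IRQ threads. */
  static int pcie_queue_event(struct pci_dev *port,
                              void (*handler)(struct pci_dev *port))
  {
          struct pcie_event *ev = kzalloc(sizeof(*ev), GFP_ATOMIC);

          if (!ev)
                  return -ENOMEM;
          ev->port = pci_dev_get(port);  /* hold a reference while queued */
          ev->handler = handler;
          INIT_WORK(&ev->work, pcie_event_fn);
          queue_work(pcie_portdrv_wq, &ev->work);
          return 0;
  }

  /* At init, an ordered workqueue guarantees strictly serial execution:
   * pcie_portdrv_wq = alloc_ordered_workqueue("pcie_portdrv", 0); */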
Sinan Kaya Sept. 28, 2020, 4:43 p.m. UTC | #4
On 9/28/2020 7:10 AM, Sinan Kaya wrote:
> On 9/27/2020 10:01 PM, Zhao, Haifeng wrote:
>> Sinan,
>>    I explained why locks don't protect this case in the patch description:
>> the write side and the read side hold a different semaphore and mutex.
>>
> I have been thinking about it for some time, but is there any reason
> why we have to handle all port AER/DPC/HP events in different threads?
> 
> Can we move to a single-threaded event loop for all port driver events?
> 
> This will require some refactoring, but it will eliminate the locking
> nightmares we are having.
> 
> This means no sleeping. All sleeps need to happen outside of the loop.
> 
> I wanted to see what you all are thinking about this.
> 
> It might become a performance problem if the system is
> continuously observing hotplug/AER/DPC events.
> 
> I always think that these should be rare events.

If restructuring would be too costly, the preferred solution should be
to fix the locks in the hotplug driver rather than throwing a random
wait call in there.
Kuppuswamy, Sathyanarayanan Sept. 28, 2020, 4:44 p.m. UTC | #5
On 9/28/20 9:43 AM, Sinan Kaya wrote:
> On 9/28/2020 7:10 AM, Sinan Kaya wrote:
>> On 9/27/2020 10:01 PM, Zhao, Haifeng wrote:
>>> Sinan,
>>>     I explained why locks don't protect this case in the patch description:
>>> the write side and the read side hold a different semaphore and mutex.
>>>
>> I have been thinking about it for some time, but is there any reason
>> why we have to handle all port AER/DPC/HP events in different threads?
>>
>> Can we move to a single-threaded event loop for all port driver events?
>>
>> This will require some refactoring, but it will eliminate the locking
>> nightmares we are having.
>>
>> This means no sleeping. All sleeps need to happen outside of the loop.
>>
>> I wanted to see what you all are thinking about this.
>>
>> It might become a performance problem if the system is
>> continuously observing hotplug/AER/DPC events.
>>
>> I always think that these should be rare events.
> If restructuring would be too costly, the preferred solution should be
> to fix the locks in the hotplug driver rather than throwing a random
> wait call in there.
Since the current race condition is detected between DPC and
hotplug, I recommend synchronizing them.
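
One way to synchronize them, sketched here with hypothetical names (a
completion flag set by the DPC handler plus a wait queue that pciehp
blocks on; this illustrates the suggestion and is not the series'
actual implementation):

  /* Hypothetical sketch: event-driven synchronization between the DPC
   * handler and pciehp instead of polling the DPC Status register. */
  static DECLARE_WAIT_QUEUE_HEAD(dpc_completed_waitqueue);

  /* DPC side: signal waiters once recovery has finished.
   * 'dpc_completed' is a hypothetical per-device flag. */
  static void dpc_signal_completion(struct pci_dev *pdev)
  {
          pdev->dpc_completed = true;
          wake_up_all(&dpc_completed_waitqueue);
  }

  /* pciehp side: block until DPC has finished, or give up after 4 s */
  static bool pciehp_wait_for_dpc(struct pci_dev *pdev)
  {
          return wait_event_timeout(dpc_completed_waitqueue,
                                    pdev->dpc_completed,
                                    msecs_to_jiffies(4000)) > 0;
  }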
Ethan Zhao Sept. 29, 2020, 2:28 a.m. UTC | #6
On Tue, Sep 29, 2020 at 12:45 AM Kuppuswamy, Sathyanarayanan
<sathyanarayanan.kuppuswamy@intel.com> wrote:
>
>
> On 9/28/20 9:43 AM, Sinan Kaya wrote:
> > On 9/28/2020 7:10 AM, Sinan Kaya wrote:
> >> On 9/27/2020 10:01 PM, Zhao, Haifeng wrote:
> >>> Sinan,
> >>>     I explained why locks don't protect this case in the patch description:
> >>> the write side and the read side hold a different semaphore and mutex.
> >>>
> >> I have been thinking about it for some time, but is there any reason
> >> why we have to handle all port AER/DPC/HP events in different threads?
> >>
> >> Can we move to a single-threaded event loop for all port driver events?
> >>
> >> This will require some refactoring, but it will eliminate the locking
> >> nightmares we are having.
> >>
> >> This means no sleeping. All sleeps need to happen outside of the loop.
> >>
> >> I wanted to see what you all are thinking about this.
> >>
> >> It might become a performance problem if the system is
> >> continuously observing hotplug/AER/DPC events.
> >>
> >> I always think that these should be rare events.
> > If restructuring would be too costly, the preferred solution should be
> > to fix the locks in the hotplug driver rather than throwing a random
> > wait call in there.
> Since the current race condition is detected between DPC and
> hotplug, I recommend synchronizing them.

The locks are the first place to look for the root cause and to try a
fix, but refactoring the rescan/remove mutex and the bus-walk semaphore
is not easy: it is too expensive, since every piece of code that uses
them would have to be reworked.

Thanks,
Ethan

>
> --
> Sathyanarayanan Kuppuswamy
> Linux Kernel Developer
>
Ethan Zhao Sept. 29, 2020, 2:50 a.m. UTC | #7
On Tue, Sep 29, 2020 at 12:44 AM Sinan Kaya <okaya@kernel.org> wrote:
>
> On 9/28/2020 7:10 AM, Sinan Kaya wrote:
> > On 9/27/2020 10:01 PM, Zhao, Haifeng wrote:
> >> Sinan,
> >>    I explained why locks don't protect this case in the patch description:
> >> the write side and the read side hold a different semaphore and mutex.
> >>
> > I have been thinking about it for some time, but is there any reason
> > why we have to handle all port AER/DPC/HP events in different threads?
> >
> > Can we move to a single-threaded event loop for all port driver events?
> >
> > This will require some refactoring, but it will eliminate the locking
> > nightmares we are having.
> >
> > This means no sleeping. All sleeps need to happen outside of the loop.
> >
> > I wanted to see what you all are thinking about this.
> >
> > It might become a performance problem if the system is
> > continuously observing hotplug/AER/DPC events.
> >
> > I always think that these should be rare events.
>
> If restructuring would be too costly, the preferred solution should be
> to fix the locks in the hotplug driver rather than throwing a random
> wait call in there.

  My first thought was to unify pci_bus_sem and pci_rescan_remove_lock
into one sleepable lock, but verifying every locking scenario to sort
out deadlock warnings is a horrible job. I gave up on that and instead
used the device-status-wait trick to work around it:

index 03d37128a24f..477d4c499f87 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -3223,17 +3223,19 @@ EXPORT_SYMBOL_GPL(pci_rescan_bus);
  * pci_rescan_bus(), pci_rescan_bus_bridge_resize() and PCI device removal
  * routines should always be executed under this mutex.
  */
-static DEFINE_MUTEX(pci_rescan_remove_lock);
+/* static DEFINE_MUTEX(pci_rescan_remove_lock); */
 
 void pci_lock_rescan_remove(void)
 {
-	mutex_lock(&pci_rescan_remove_lock);
+	/* mutex_lock(&pci_rescan_remove_lock); */
+	down_write(&pci_bus_sem);
 }
 EXPORT_SYMBOL_GPL(pci_lock_rescan_remove);
 
 void pci_unlock_rescan_remove(void)
 {
-	mutex_unlock(&pci_rescan_remove_lock);
+	/* mutex_unlock(&pci_rescan_remove_lock); */
+	up_write(&pci_bus_sem);
 }
 EXPORT_SYMBOL_GPL(pci_unlock_rescan_remove);

Thanks,
Ethan
Lukas Wunner Sept. 29, 2020, 8:18 a.m. UTC | #8
On Sun, Sep 27, 2020 at 11:27:46AM -0400, Sinan Kaya wrote:
> On 9/26/2020 11:28 PM, Ethan Zhao wrote:
> > --- a/drivers/pci/hotplug/pciehp_hpc.c
> > +++ b/drivers/pci/hotplug/pciehp_hpc.c
> > @@ -710,8 +710,10 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
> >  	down_read(&ctrl->reset_lock);
> >  	if (events & DISABLE_SLOT)
> >  		pciehp_handle_disable_request(ctrl);
> > -	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
> > +	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
> > +		pci_wait_port_outdpc(pdev);
> >  		pciehp_handle_presence_or_link_change(ctrl, events);
> > +	}
> >  	up_read(&ctrl->reset_lock);
> 
> This looks like a hack TBH.
> 
> Lukas, Keith;
> 
> What is your take on this?
> Why is device lock not protecting this situation?
> 
> Is there a lock missing in hotplug driver?

According to Ethan's commit message, there are two issues here:
One, that pciehp may remove a device even though DPC recovered the error,
and two, that a null pointer deref occurs.

The latter is most certainly not a locking issue but failure of DPC
to hold a reference on the pci_dev.
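
As a minimal sketch of the kind of fix this implies (abridged, and the
placement of the reference-taking is illustrative only; patch 3/5 of
this series is the actual fix):

  /* Sketch only: pin the pci_dev across recovery so that a concurrent
   * pciehp removal cannot free it while DPC is still using it. */
  static irqreturn_t dpc_handler(int irq, void *context)
  {
          struct pci_dev *pdev = pci_dev_get(context);  /* take a ref */

          pcie_do_recovery(pdev, pci_channel_io_frozen, dpc_reset_link);
          pci_dev_put(pdev);                            /* drop the ref */
          return IRQ_HANDLED;
  }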

Thanks,

Lukas
Ethan Zhao Sept. 29, 2020, 9:46 a.m. UTC | #9
On Tue, Sep 29, 2020 at 4:29 PM Lukas Wunner <lukas@wunner.de> wrote:
>
> On Sun, Sep 27, 2020 at 11:27:46AM -0400, Sinan Kaya wrote:
> > On 9/26/2020 11:28 PM, Ethan Zhao wrote:
> > > --- a/drivers/pci/hotplug/pciehp_hpc.c
> > > +++ b/drivers/pci/hotplug/pciehp_hpc.c
> > > @@ -710,8 +710,10 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
> > >     down_read(&ctrl->reset_lock);
> > >     if (events & DISABLE_SLOT)
> > >             pciehp_handle_disable_request(ctrl);
> > > -   else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
> > > +   else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
> > > +           pci_wait_port_outdpc(pdev);
> > >             pciehp_handle_presence_or_link_change(ctrl, events);
> > > +   }
> > >     up_read(&ctrl->reset_lock);
> >
> > This looks like a hack TBH.
> >
> > Lukas, Keith;
> >
> > What is your take on this?
> > Why is device lock not protecting this situation?
> >
> > Is there a lock missing in hotplug driver?
>
> According to Ethan's commit message, there are two issues here:
> One, that pciehp may remove a device even though DPC recovered the error,
> and two, that a null pointer deref occurs.
>
> The latter is most certainly not a locking issue but failure of DPC
> to hold a reference on the pci_dev.

This is what patch 3/5 proposed to fix, while this one reorders the
mixed DPC recovery procedure and DLLSC/PDC event handling, so that
pciehp knows the exact result of DPC recovery for the malfunctioning
device: the link recovered and the device is still there, or it was
removed from the slot.

Thanks,
Ethan

>
> Thanks,
>
> Lukas
Lukas Wunner Sept. 29, 2020, 10:07 a.m. UTC | #10
On Tue, Sep 29, 2020 at 05:46:41PM +0800, Ethan Zhao wrote:
> On Tue, Sep 29, 2020 at 4:29 PM Lukas Wunner <lukas@wunner.de> wrote:
> > On Sun, Sep 27, 2020 at 11:27:46AM -0400, Sinan Kaya wrote:
> > > On 9/26/2020 11:28 PM, Ethan Zhao wrote:
> > > > --- a/drivers/pci/hotplug/pciehp_hpc.c
> > > > +++ b/drivers/pci/hotplug/pciehp_hpc.c
> > > > @@ -710,8 +710,10 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
> > > >     down_read(&ctrl->reset_lock);
> > > >     if (events & DISABLE_SLOT)
> > > >             pciehp_handle_disable_request(ctrl);
> > > > -   else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
> > > > +   else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
> > > > +           pci_wait_port_outdpc(pdev);
> > > >             pciehp_handle_presence_or_link_change(ctrl, events);
> > > > +   }
> > > >     up_read(&ctrl->reset_lock);
> > >
> > > This looks like a hack TBH.
[...]
> > > Why is device lock not protecting this situation?
> > > Is there a lock missing in hotplug driver?
> >
> > According to Ethan's commit message, there are two issues here:
> > One, that pciehp may remove a device even though DPC recovered the error,
> > and two, that a null pointer deref occurs.
> >
> > The latter is most certainly not a locking issue but failure of DPC
> > to hold a reference on the pci_dev.
> 
> This is what patch 3/5 proposed to fix.

Please reorder the series to fix the null pointer deref first,
i.e. move patch 3 before patch 2.  If the null pointer deref is
fixed by patch 3, do not mention it in patch 2.

Thanks,

Lukas
Ethan Zhao Sept. 30, 2020, 2:20 a.m. UTC | #11
On Tue, Sep 29, 2020 at 6:08 PM Lukas Wunner <lukas@wunner.de> wrote:
>
> On Tue, Sep 29, 2020 at 05:46:41PM +0800, Ethan Zhao wrote:
> > On Tue, Sep 29, 2020 at 4:29 PM Lukas Wunner <lukas@wunner.de> wrote:
> > > On Sun, Sep 27, 2020 at 11:27:46AM -0400, Sinan Kaya wrote:
> > > > On 9/26/2020 11:28 PM, Ethan Zhao wrote:
> > > > > --- a/drivers/pci/hotplug/pciehp_hpc.c
> > > > > +++ b/drivers/pci/hotplug/pciehp_hpc.c
> > > > > @@ -710,8 +710,10 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
> > > > >     down_read(&ctrl->reset_lock);
> > > > >     if (events & DISABLE_SLOT)
> > > > >             pciehp_handle_disable_request(ctrl);
> > > > > -   else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
> > > > > +   else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
> > > > > +           pci_wait_port_outdpc(pdev);
> > > > >             pciehp_handle_presence_or_link_change(ctrl, events);
> > > > > +   }
> > > > >     up_read(&ctrl->reset_lock);
> > > >
> > > > This looks like a hack TBH.
> [...]
> > > > Why is device lock not protecting this situation?
> > > > Is there a lock missing in hotplug driver?
> > >
> > > According to Ethan's commit message, there are two issues here:
> > > One, that pciehp may remove a device even though DPC recovered the error,
> > > and two, that a null pointer deref occurs.
> > >
> > > The latter is most certainly not a locking issue but failure of DPC
> > > to hold a reference on the pci_dev.
> >
> > This is what patch 3/5 proposed to fix.
>
> Please reorder the series to fix the null pointer deref first,
> i.e. move patch 3 before patch 2.  If the null pointer deref is
> fixed by patch 3, do not mention it in patch 2.

Makes sense.

Thanks,
Ethan
>
> Thanks,
>
> Lukas

Patch

diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
index 53433b37e181..6f271160f18d 100644
--- a/drivers/pci/hotplug/pciehp_hpc.c
+++ b/drivers/pci/hotplug/pciehp_hpc.c
@@ -710,8 +710,10 @@  static irqreturn_t pciehp_ist(int irq, void *dev_id)
 	down_read(&ctrl->reset_lock);
 	if (events & DISABLE_SLOT)
 		pciehp_handle_disable_request(ctrl);
-	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
+	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
+		pci_wait_port_outdpc(pdev);
 		pciehp_handle_presence_or_link_change(ctrl, events);
+	}
 	up_read(&ctrl->reset_lock);
 
 	ret = IRQ_HANDLED;