
[v5,1/1,SRU,OEM-5.6] UBUNTU: SAUCE: PCI: Do not use pcie_get_speed_cap() to determine when to start waiting

Message ID 20200514070554.17069-2-koba.ko@canonical.com
State New
Series UBUNTU: SAUCE: pci: Speed up the process of s3 resume

Commit Message

Koba Ko May 14, 2020, 7:05 a.m. UTC
From: Mika Westerberg <mika.westerberg@linux.intel.com>

BugLink: https://bugs.launchpad.net/bugs/1876844

Kai-Heng Feng reported that it takes a long time (>1 s) to resume
Thunderbolt-connected PCIe devices from both runtime suspend and system
sleep (s2idle).

On these PCIe downstream ports the second link capability register
(PCI_EXP_LNKCAP2) announces support for speeds > 5 GT/s, but the speed
is then capped to 2.5 GT/s by the second link control register
(PCI_EXP_LNKCTL2). This possibility was not considered in
pci_bridge_wait_for_secondary_bus(), so it ended up waiting for 1100 ms,
as these ports do not support active link layer reporting either.

PCIe spec 5.0 section 6.6.1 mandates that we must wait a minimum of
100 ms before sending a configuration request to the device below if the
port does not support speeds > 5 GT/s; if it does, we first need to wait
for the data link layer to become active before waiting for that 100 ms.
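
In rough C-like terms the two paths look like this (an illustrative
sketch only; port_supports_speeds_above_5gts and wait_for_dll_active()
are hypothetical stand-ins, not kernel APIs):

	if (port_supports_speeds_above_5gts) {
		/* fast port: DLL Link Active must be observed first */
		wait_for_dll_active(port);
	}
	/* ...then wait at least 100 ms before config requests */
	msleep(100);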

PCIe spec 5.0 section 7.5.3.6 further says that all downstream ports
that support speeds > 5 GT/s must support active link layer reporting,
so instead of looking at the speed we can check for the active link
layer reporting capability and determine how to wait based on that (the
two go hand in hand).
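
For reference, link_active_reporting is derived during enumeration from
the Data Link Layer Link Active Reporting Capable bit in the Link
Capabilities register; a minimal sketch, mirroring what
set_pcie_port_type() in drivers/pci/probe.c does:

	u32 linkcap;

	pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &linkcap);
	pdev->link_active_reporting = !!(linkcap & PCI_EXP_LNKCAP_DLLLARC);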

Link: https://bugzilla.kernel.org/show_bug.cgi?id=206837
Reported-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Tested-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
(cherry picked from
https://lore.kernel.org/linux-pci/20200416083245.73957-1-mika.westerberg@linux.intel.com/)
Signed-off-by: Koba Ko <koba.ko@canonical.com>
---
 drivers/pci/pci.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Comments

Kai-Heng Feng May 15, 2020, 5:37 a.m. UTC | #1
> On May 14, 2020, at 15:05, koba.ko@canonical.com wrote:
> 
> From: Mika Westerberg <mika.westerberg@linux.intel.com>

Mika proposed a v2 here:
https://lore.kernel.org/linux-pci/20200514133043.27429-1-mika.westerberg@linux.intel.com/

I haven't tested it though.

Kai-Heng

> 
> BugLink: https://bugs.launchpad.net/bugs/1876844

If this is not that urgent, we can wait until it lands in the PCI maintainer's tree.

Kai-Heng


Patch

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index d828ca835a98..ebe626ad1b79 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -4765,7 +4765,13 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 	if (!pcie_downstream_port(dev))
 		return;
 
-	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
+	/*
+	 * The PCIe spec mandates that all downstream ports that support
+	 * speeds greater than 5 GT/s must support data link layer active
+	 * reporting, so we use that here to determine when the delay
+	 * should be issued.
+	 */
+	if (!dev->link_active_reporting) {
 		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
 		msleep(delay);
 	} else {
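
For context, the else branch (truncated by the hunk) handles ports with
active link layer reporting by waiting for the link to actually train
before applying the delay; in mainline v5.6 this is done by
pcie_wait_for_link_delay(). That wait boils down to polling the Data
Link Layer Link Active bit in the Link Status register; a simplified
sketch of such a poll (timeout handling abbreviated):

	u16 lnksta;
	int tries = 100;		/* ~1 s total */
	bool link_up = false;

	do {
		pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
		link_up = !!(lnksta & PCI_EXP_LNKSTA_DLLLA);
		if (link_up)
			break;
		msleep(10);
	} while (--tries);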