
[2/2] PCI: vmd: enable PCI PM's L1 substates of remapped PCIe port and NVMe

Message ID 20240130100050.14182-2-jhp@endlessos.org
State New
Series [1/2] ata: ahci: Add force LPM policy quirk for ASUS B1400CEAE

Commit Message

Jian-Hong Pan Jan. 30, 2024, 10 a.m. UTC
The remmapped PCIe port and NVMe have PCI PM L1 substates capability on
ASUS B1400CEAE, but they are disabled originally:

Capabilities: [900 v1] L1 PM Substates
        L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+
                  PortCommonModeRestoreTime=32us PortTPowerOnTime=10us
        L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1-
                   T_CommonMode=0us LTR1.2_Threshold=0ns
        L1SubCtl2: T_PwrOn=10us

Power on all of the VMD remapped PCI devices before enable PCI-PM L1 PM
Substates by following "Section 5.5.4 of PCIe Base Spec Revision 5.0
Version 0.1". Then, PCI PM's L1 substates control are enabled
accordingly.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=218394
Signed-off-by: Jian-Hong Pan <jhp@endlessos.org>
---
 drivers/pci/controller/vmd.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

Comments

Bjorn Helgaas Jan. 30, 2024, 4:35 p.m. UTC | #1
Capitalize subject line to match history ("Enable PM L1 Substates ...")

On Tue, Jan 30, 2024 at 06:00:51PM +0800, Jian-Hong Pan wrote:
> The remmapped PCIe port and NVMe have PCI PM L1 substates capability on
> ASUS B1400CEAE, but they are disabled originally:

s/remmapped/remapped/
s/PCIe port/PCIe Root Port/ (all devices with Links have Ports, so we
can be a little more specific here)

I doubt "ASUS B1400CEAE" is relevant here.

> Capabilities: [900 v1] L1 PM Substates
>         L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+
>                   PortCommonModeRestoreTime=32us PortTPowerOnTime=10us
>         L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1-
>                    T_CommonMode=0us LTR1.2_Threshold=0ns
>         L1SubCtl2: T_PwrOn=10us
> 
> Power on all of the VMD remapped PCI devices before enable PCI-PM L1 PM
> Substates by following "Section 5.5.4 of PCIe Base Spec Revision 5.0
> Version 0.1". Then, PCI PM's L1 substates control are enabled
> accordingly.

Reference PCIe r6.0 or r6.1, since r5.0 is almost 5 years old now.

> Link: https://bugzilla.kernel.org/show_bug.cgi?id=218394
> Signed-off-by: Jian-Hong Pan <jhp@endlessos.org>
> ---
>  drivers/pci/controller/vmd.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 87b7856f375a..b1bbe8e6075a 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -738,6 +738,12 @@ static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
>  	vmd_bridge->native_dpc = root_bridge->native_dpc;
>  }
>  
> +static int vmd_power_on_pci_device(struct pci_dev *pdev, void *userdata)
> +{
> +	pci_set_power_state(pdev, PCI_D0);
> +	return 0;
> +}
> +
>  /*
>   * Enable ASPM and LTR settings on devices that aren't configured by BIOS.
>   */
> @@ -928,6 +934,13 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  	vmd_acpi_begin();
>  
>  	pci_scan_child_bus(vmd->bus);
> +
> +	/*
> +	 * Make PCI devices at D0 when enable PCI-PM L1 PM Substates from
> +	 * Section 5.5.4 of PCIe Base Spec Revision 5.0 Version 0.1
> +	 */
> +	pci_walk_bus(vmd->bus, vmd_power_on_pci_device, NULL);

Sec 5.5.4 indeed says "If setting either or both of the enable bits
for PCI-PM L1 PM Substates, both ports must be configured ... while in
D0."

This applies to *all* PCIe devices, not just those below a VMD bridge,
so I'm not sure this is the right place to do this.  Is there anything
that prevents a similar issue for non-VMD hierarchies?

I guess the bridges (Root Ports and Switch Ports) must already be in
D0, or we wouldn't be able to enumerate anything below them, right?

It would be nice to connect this more closely with the L1 PM Substates
configuration.  I don't quite see the connection here.  The only path
I see for L1SS configuration is this:

  pci_scan_slot
    pcie_aspm_init_link_state
      pcie_aspm_cap_init
	aspm_l1ss_init

which of course is inside pci_scan_child_bus(), which happens *before*
this patch puts the devices in D0.  Where does the L1SS configuration
happen after this vmd_power_on_pci_device()?

>  	vmd_domain_reset(vmd);
>  
>  	/* When Intel VMD is enabled, the OS does not discover the Root Ports
> -- 
> 2.43.0
>
Bjorn Helgaas Jan. 30, 2024, 5:28 p.m. UTC | #2
[+cc Johan since qcom has similar code]

On Tue, Jan 30, 2024 at 10:35:19AM -0600, Bjorn Helgaas wrote:
> On Tue, Jan 30, 2024 at 06:00:51PM +0800, Jian-Hong Pan wrote:
> > [...]
> >
> > diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> > index 87b7856f375a..b1bbe8e6075a 100644
> > --- a/drivers/pci/controller/vmd.c
> > +++ b/drivers/pci/controller/vmd.c
> > @@ -738,6 +738,12 @@ static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
> >  	vmd_bridge->native_dpc = root_bridge->native_dpc;
> >  }
> >  
> > +static int vmd_power_on_pci_device(struct pci_dev *pdev, void *userdata)
> > +{
> > +	pci_set_power_state(pdev, PCI_D0);
> > +	return 0;
> > +}
> > +
> >  /*
> >   * Enable ASPM and LTR settings on devices that aren't configured by BIOS.
> >   */
> > @@ -928,6 +934,13 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> >  	vmd_acpi_begin();
> >  
> >  	pci_scan_child_bus(vmd->bus);
> > +
> > +	/*
> > +	 * Make PCI devices at D0 when enable PCI-PM L1 PM Substates from
> > +	 * Section 5.5.4 of PCIe Base Spec Revision 5.0 Version 0.1
> > +	 */
> > +	pci_walk_bus(vmd->bus, vmd_power_on_pci_device, NULL);
> 
> Sec 5.5.4 indeed says "If setting either or both of the enable bits
> for PCI-PM L1 PM Substates, both ports must be configured ... while in
> D0."
> 
> This applies to *all* PCIe devices, not just those below a VMD bridge,
> so I'm not sure this is the right place to do this.  Is there anything
> that prevents a similar issue for non-VMD hierarchies?
> 
> I guess the bridges (Root Ports and Switch Ports) must already be in
> D0, or we wouldn't be able to enumerate anything below them, right?
> 
> It would be nice to connect this more closely with the L1 PM Substates
> configuration.  I don't quite see the connection here.  The only path
> I see for L1SS configuration is this:
> 
>   pci_scan_slot
>     pcie_aspm_init_link_state
>       pcie_aspm_cap_init
> 	aspm_l1ss_init
> 
> which of course is inside pci_scan_child_bus(), which happens *before*
> this patch puts the devices in D0.  Where does the L1SS configuration
> happen after this vmd_power_on_pci_device()?

I think I found it; a more complete call tree is like this, where the
L1SS configuration is inside the pci_enable_link_state_locked() done
by the vmd_pm_enable_quirk() bus walk:

  vmd_enable_domain
    pci_scan_child_bus
      ...
        pci_scan_slot
          pcie_aspm_init_link_state
            pcie_aspm_cap_init
              aspm_l1ss_init
+   pci_walk_bus(vmd->bus, vmd_power_on_pci_device, NULL)
    ...
    pci_walk_bus(vmd->bus, vmd_pm_enable_quirk, &features)
      vmd_pm_enable_quirk
        pci_enable_link_state_locked(PCIE_LINK_STATE_ALL)
          __pci_enable_link_state
            link->aspm_default |= ...
            pcie_config_aspm_link
              pcie_config_aspm_l1ss
                pci_clear_and_set_config_dword(PCI_L1SS_CTL1)  # <--
              pcie_config_aspm_dev
                pcie_capability_clear_and_set_word(PCI_EXP_LNKCTL)

qcom_pcie_enable_aspm() does a similar pci_set_power_state(PCI_D0)
before calling pci_enable_link_state_locked().  I would prefer to
avoid the D0 precondition, but at the very least, I think we should
add a note to the pci_enable_link_state_locked() kernel-doc saying the
caller must ensure the device is in D0.
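
E.g., something along these lines (the exact wording is only a sketch, not a
tested patch):

```c
/**
 * pci_enable_link_state_locked - ...
 *
 * Note: per PCIe r6.1, sec 5.5.4, the L1 PM Substates enable bits may
 * only be written while both ports on the link are in D0, so the caller
 * must ensure the device is in D0 before calling this.
 */
```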

I think vmd_pm_enable_quirk() is also problematic because it
configures the LTR Capability *after* calling
pci_enable_link_state_locked(PCIE_LINK_STATE_ALL).

The pci_enable_link_state_locked() will enable ASPM L1.2, but sec
5.5.4 says LTR_L1.2_THRESHOLD must be programmed *before* ASPM L1.2
Enable is set.
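
If so, the quirk would presumably need to program the LTR Capability first
and enable the link states afterwards, roughly (untested, heavily
abbreviated sketch):

```c
static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
{
	unsigned long features = *(unsigned long *)userdata;

	if (!(features & VMD_FEAT_BIOS_PM_QUIRK))
		return 0;

	/*
	 * ... the existing PCI_EXT_CAP_ID_LTR programming moves up here,
	 * so the LTR values are valid before ASPM L1.2 Enable is set ...
	 */

	/* ... and only then enable ASPM, including L1.2 */
	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);

	return 0;
}
```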

Bjorn
Jian-Hong Pan Jan. 31, 2024, 7:37 a.m. UTC | #3
Bjorn Helgaas <helgaas@kernel.org> wrote on Wed, Jan 31, 2024 at 1:28 AM:
>
> [+cc Johan since qcom has similar code]
>
> [...]
>
> I think I found it; a more complete call tree is like this, where the
> L1SS configuration is inside the pci_enable_link_state_locked() done
> by the vmd_pm_enable_quirk() bus walk:
>
>   vmd_enable_domain
>     pci_scan_child_bus
>       ...
>         pci_scan_slot
>           pcie_aspm_init_link_state
>             pcie_aspm_cap_init
>               aspm_l1ss_init
> +   pci_walk_bus(vmd->bus, vmd_power_on_pci_device, NULL)
>     ...
>     pci_walk_bus(vmd->bus, vmd_pm_enable_quirk, &features)
>       vmd_pm_enable_quirk
>         pci_enable_link_state_locked(PCIE_LINK_STATE_ALL)
>           __pci_enable_link_state
>             link->aspm_default |= ...
>             pcie_config_aspm_link
>               pcie_config_aspm_l1ss
>                 pci_clear_and_set_config_dword(PCI_L1SS_CTL1)  # <--
>               pcie_config_aspm_dev
>                 pcie_capability_clear_and_set_word(PCI_EXP_LNKCTL)
>
> qcom_pcie_enable_aspm() does a similar pci_set_power_state(PCI_D0)
> before calling pci_enable_link_state_locked().  I would prefer to
> avoid the D0 precondition, but at the very least, I think we should
> add a note to the pci_enable_link_state_locked() kernel-doc saying the
> caller must ensure the device is in D0.
>
> I think vmd_pm_enable_quirk() is also problematic because it
> configures the LTR Capability *after* calling
> pci_enable_link_state_locked(PCIE_LINK_STATE_ALL).
>
> The pci_enable_link_state_locked() will enable ASPM L1.2, but sec
> 5.5.4 says LTR_L1.2_THRESHOLD must be programmed *before* ASPM L1.2
> Enable is set.

Thanks!  This gives me a hint about why LTR1.2_Threshold is 0ns here.

Jian-Hong Pan
Johan Hovold Jan. 31, 2024, 8:33 a.m. UTC | #4
On Tue, Jan 30, 2024 at 06:00:51PM +0800, Jian-Hong Pan wrote:
> [...]

> +static int vmd_power_on_pci_device(struct pci_dev *pdev, void *userdata)
> +{
> +	pci_set_power_state(pdev, PCI_D0);

As I believe Bjorn already hinted at, this should probably be done in
vmd_pm_enable_quirk().

Also, you need to use the new pci_set_power_state_locked() helper since
these callbacks are called from pci_walk_bus() with the bus semaphore
held:

	https://lore.kernel.org/lkml/20240130100243.11011-1-johan+linaro@kernel.org/

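That is, something like (untested):

```c
static int vmd_power_on_pci_device(struct pci_dev *pdev, void *userdata)
{
	pci_set_power_state_locked(pdev, PCI_D0);
	return 0;
}
```
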
> +	return 0;
> +}
> +
>  /*
>   * Enable ASPM and LTR settings on devices that aren't configured by BIOS.
>   */
> @@ -928,6 +934,13 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  	vmd_acpi_begin();
>  
>  	pci_scan_child_bus(vmd->bus);
> +
> +	/*
> +	 * Make PCI devices at D0 when enable PCI-PM L1 PM Substates from
> +	 * Section 5.5.4 of PCIe Base Spec Revision 5.0 Version 0.1
> +	 */
> +	pci_walk_bus(vmd->bus, vmd_power_on_pci_device, NULL);
> +
>  	vmd_domain_reset(vmd);
>  
>  	/* When Intel VMD is enabled, the OS does not discover the Root Ports

Johan

Patch

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 87b7856f375a..b1bbe8e6075a 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -738,6 +738,12 @@  static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
 	vmd_bridge->native_dpc = root_bridge->native_dpc;
 }
 
+static int vmd_power_on_pci_device(struct pci_dev *pdev, void *userdata)
+{
+	pci_set_power_state(pdev, PCI_D0);
+	return 0;
+}
+
 /*
  * Enable ASPM and LTR settings on devices that aren't configured by BIOS.
  */
@@ -928,6 +934,13 @@  static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	vmd_acpi_begin();
 
 	pci_scan_child_bus(vmd->bus);
+
+	/*
+	 * Make PCI devices at D0 when enable PCI-PM L1 PM Substates from
+	 * Section 5.5.4 of PCIe Base Spec Revision 5.0 Version 0.1
+	 */
+	pci_walk_bus(vmd->bus, vmd_power_on_pci_device, NULL);
+
 	vmd_domain_reset(vmd);
 
 	/* When Intel VMD is enabled, the OS does not discover the Root Ports