
[05/10] dt-bindings: PCI: tegra: Add device tree support for T194

Message ID 1553613207-3988-6-git-send-email-vidyas@nvidia.com
State: Changes Requested
Series: Add Tegra194 PCIe support

Commit Message

Vidya Sagar March 26, 2019, 3:13 p.m. UTC
Add support for Tegra194 PCIe controllers. These controllers are based
on Synopsys DesignWare core IP.

Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
---
 .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
 .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
 2 files changed, 243 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
 create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt

Comments

Jon Hunter March 27, 2019, 10:10 a.m. UTC | #1
On 26/03/2019 15:13, Vidya Sagar wrote:
> Add support for Tegra194 PCIe controllers. These controllers are based
> on Synopsys DesignWare core IP.
> 
> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> ---
>  .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
>  .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
>  2 files changed, 243 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>  create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> 
> diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> new file mode 100644
> index 000000000000..31527283a0cd
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> @@ -0,0 +1,209 @@
> +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
> +
> +This PCIe host controller is based on the Synopsys DesignWare PCIe IP
> +and thus inherits all the common properties defined in designware-pcie.txt.
> +
> +Required properties:
> +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
> +- device_type: Must be "pci"
> +- reg: A list of physical base address and length for each set of controller
> +  registers. Must contain an entry for each entry in the reg-names property.
> +- reg-names: Must include the following entries:
> +  "appl": Controller's application logic registers
> +  "window1": This is the aperture of the controller available below the
> +             4 GiB boundary (i.e. within 32-bit space). It is typically
> +             used to access the config space of the root port itself and
> +             of connected endpoints (by appropriately programming the
> +             internal Address Translation Unit's (iATU) outbound regions)
> +             and to map prefetchable/non-prefetchable BARs.
> +  "config": As per the definition in designware-pcie.txt
> +  "atu_dma": iATU and DMA registers. This is where the iATU (internal
> +             Address Translation Unit) registers of the PCIe core are
> +             made available for SW access.
> +  "dbi": The aperture where the root port's own configuration registers
> +         are available
> +  "window2": This is the larger (compared to window1) aperture available
> +             above the 4 GiB boundary (i.e. in 64-bit space). It is
> +             typically used to map prefetchable/non-prefetchable BARs of
> +             endpoints
> +- interrupts: A list of interrupt outputs of the controller. Must contain an
> +  entry for each entry in the interrupt-names property.
> +- interrupt-names: Must include the following entries:
> +  "intr": The Tegra interrupt that is asserted for controller interrupts
> +  "msi": The Tegra interrupt that is asserted when an MSI is received
> +- bus-range: Range of bus numbers associated with this controller
> +- #address-cells: Address representation for root ports (must be 3)
> +  - cell 0 specifies the bus and device numbers of the root port:
> +    [23:16]: bus number
> +    [15:11]: device number
> +  - cell 1 denotes the upper 32 address bits and should be 0
> +  - cell 2 contains the lower 32 address bits and is used to translate to the
> +    CPU address space
> +- #size-cells: Size representation for root ports (must be 2)
> +- ranges: Describes the translation of addresses for root ports and standard
> +  PCI regions. The entries must be 7 cells each, where the first three cells
> +  correspond to the address as described for the #address-cells property
> +  above, the fourth and fifth cells are for the physical CPU address to
> +  translate to and the sixth and seventh cells are as described for the
> +  #size-cells property above.
> +  - Entries setup the mapping for the standard I/O, memory and
> +    prefetchable PCI regions. The first cell determines the type of region
> +    that is setup:
> +    - 0x81000000: I/O memory region
> +    - 0x82000000: non-prefetchable memory region
> +    - 0xc2000000: prefetchable memory region
> +  Please refer to the standard PCI bus binding document for a more detailed
> +  explanation.
> +- #interrupt-cells: Size representation for interrupts (must be 1)
> +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
> +  Please refer to the standard PCI bus binding document for a more detailed
> +  explanation.
> +- clocks: Must contain an entry for each entry in clock-names.
> +  See ../clocks/clock-bindings.txt for details.
> +- clock-names: Must include the following entries:
> +  - core_clk
> +- resets: Must contain an entry for each entry in reset-names.
> +  See ../reset/reset.txt for details.
> +- reset-names: Must include the following entries:
> +  - core_apb_rst
> +  - core_rst
> +- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
> +- phy-names: Must include an entry for each active lane.
> +  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes
> +- Controller dependent register offsets
> +  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
> +      0x168 - FPGA
> +      0x1a8 - C1, C2 and C3
> +      0x1c4 - C4
> +      0x1d8 - C0 and C5
> +  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
> +      0x16c - FPGA
> +      0x1ac - C1, C2 and C3
> +      0x1c8 - C4
> +      0x1dc - C0 and C5
> +- nvidia,controller-id : Controller specific ID
> +      0x0 - C0
> +      0x1 - C1
> +      0x2 - C2
> +      0x3 - C3
> +      0x4 - C4
> +      0x5 - C5
> +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals

I don't see any power-domains listed here. We should have these for T194
right?

Cheers
Jon
Vidya Sagar March 27, 2019, 10:53 a.m. UTC | #2
On 3/27/2019 3:40 PM, Jon Hunter wrote:
> 
> On 26/03/2019 15:13, Vidya Sagar wrote:
>> [...]
>> +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
> 
> I don't see any power-domains listed here. We should have these for T194
> right?
Thanks for catching this.
Yes, PCIe uses power-domains and I'll add that in a future patch.

> 
> Cheers
> Jon
>
Thierry Reding March 28, 2019, 1:15 p.m. UTC | #3
On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
> Add support for Tegra194 PCIe controllers. These controllers are based
> on Synopsys DesignWare core IP.
> 
> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> ---
> [...]
> +- reg-names: Must include the following entries:
> +  "appl": Controller's application logic registers
> +  "window1": This is the aperture of the controller available below the
> +             4 GiB boundary (i.e. within 32-bit space). It is typically
> +             used to access the config space of the root port itself and
> +             of connected endpoints (by appropriately programming the
> +             internal Address Translation Unit's (iATU) outbound regions)
> +             and to map prefetchable/non-prefetchable BARs.
> +  "config": As per the definition in designware-pcie.txt

I see that you set this to a 256 KiB region for all controllers. Since
each function can have up to 4 KiB of extended configuration space, that
means you have space to address:

    256 KiB = 4 KiB * 8 functions * 8 devices

Each bus can have up to 32 devices (including the root port) and there
can be 256 busses, so I wonder how this is supposed to work. How does
the mapping work for configuration space? Does the controller allow
moving this 256 KiB window around so that more devices' configuration
space can be accessed?
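For reference, the window arithmetic above can be sanity-checked with a small calculation (illustrative only, not part of the patch):

```c
/* Each PCIe function exposes up to 4 KiB of extended (ECAM)
 * configuration space. */
#define CFG_PER_FN   (4u * 1024u)
#define FNS_PER_DEV  8u
#define DEVS_PER_BUS 32u

/* Functions addressable through a config window of the given size
 * without relocating it. */
static unsigned int window_functions(unsigned int window_size)
{
    return window_size / CFG_PER_FN;
}

/* Devices (of 8 functions each) addressable through the window. */
static unsigned int window_devices(unsigned int window_size)
{
    return window_functions(window_size) / FNS_PER_DEV;
}

/* ECAM space a full hierarchy of the given bus count would need. */
static unsigned long long full_ecam_size(unsigned int busses)
{
    return (unsigned long long)busses * DEVS_PER_BUS * FNS_PER_DEV
           * CFG_PER_FN;
}
```

A 256 KiB window covers 64 functions (8 devices of 8 functions each), matching the figure above, while a full 256-bus hierarchy would need 256 MiB of ECAM space — hence the question about relocating the window.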

> +  "atu_dma": iATU and DMA registers. This is where the iATU (internal
> +             Address Translation Unit) registers of the PCIe core are
> +             made available for SW access.
> +  "dbi": The aperture where the root port's own configuration registers
> +         are available

This is slightly confusing because you already said in the description
of "window1" that it is used to access the configuration space of the
root port itself.

Is the root port configuration space available via the regular
configuration space registers?

> [...]
> +- clocks: Must contain an entry for each entry in clock-names.
> +  See ../clocks/clock-bindings.txt for details.
> +- clock-names: Must include the following entries:
> +  - core_clk

It's redundant to name a clock _clk. Is this already required by the
standard Designware bindings or is this new?

> +- resets: Must contain an entry for each entry in reset-names.
> +  See ../reset/reset.txt for details.
> +- reset-names: Must include the following entries:
> +  - core_apb_rst
> +  - core_rst

Same comment as for clock-names.

> +- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
> +- phy-names: Must include an entry for each active lane.
> +  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes

I'd leave away the "pcie-" prefix since the surrounding context already
makes it clear that this is for PCIe.

> +- Controller dependent register offsets
> +  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
> +      0x168 - FPGA
> +      0x1a8 - C1, C2 and C3
> +      0x1c4 - C4
> +      0x1d8 - C0 and C5
> +  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
> +      0x16c - FPGA
> +      0x1ac - C1, C2 and C3
> +      0x1c8 - C4
> +      0x1dc - C0 and C5
> +- nvidia,controller-id : Controller specific ID
> +      0x0 - C0
> +      0x1 - C1
> +      0x2 - C2
> +      0x3 - C3
> +      0x4 - C4
> +      0x5 - C5

It's redundant to have both a controller ID and parameterized register
offsets based on that controller ID. I would recommend keeping the
controller ID and then moving the register offsets to the driver and
decide based on the controller ID.
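A minimal sketch of that approach (the struct and table names are illustrative; the offsets are the ones listed in the binding above):

```c
/* Per-controller event-counter register offsets, indexed by the
 * nvidia,controller-id value (C0..C5), so the DT only needs the ID. */
struct tegra194_pcie_of_data {
    unsigned int event_cntr_ctrl;
    unsigned int event_cntr_data;
};

static const struct tegra194_pcie_of_data tegra194_reg_offsets[] = {
    [0] = { 0x1d8, 0x1dc },  /* C0 */
    [1] = { 0x1a8, 0x1ac },  /* C1 */
    [2] = { 0x1a8, 0x1ac },  /* C2 */
    [3] = { 0x1a8, 0x1ac },  /* C3 */
    [4] = { 0x1c4, 0x1c8 },  /* C4 */
    [5] = { 0x1d8, 0x1dc },  /* C5 */
};
```

With a table like this, the nvidia,event-cntr-ctrl and nvidia,event-cntr-data properties become unnecessary.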

> +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
> +
> +Optional properties:
> +- nvidia,max-speed: limits the controller's max speed to this value.
> +    1 - Gen-1 (2.5 GT/s)
> +    2 - Gen-2 (5 GT/s)
> +    3 - Gen-3 (8 GT/s)
> +    4 - Gen-4 (16 GT/s)
> +- nvidia,init-speed: limits the controller's init speed to this value.
> +    1 - Gen-1 (2.5 GT/s)
> +    2 - Gen-2 (5 GT/s)
> +    3 - Gen-3 (8 GT/s)
> +    4 - Gen-4 (16 GT/s)
> +- nvidia,disable-aspm-states : controls advertisement of ASPM states
> +    bit-0 to '1' : disables advertisement of ASPM-L0s
> +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
> +                 advertisement of ASPM-L1.1 and ASPM-L1.2
> +    bit-2 to '1' : disables advertisement of ASPM-L1.1
> +    bit-3 to '1' : disables advertisement of ASPM-L1.2

These seem more like configuration options rather than hardware
description.

> +- nvidia,disable-clock-request : gives a hint to the driver that there is
> +    no CLKREQ signal routing on the board
> +- nvidia,update-fc-fixup : needed to improve perf when a platform is designed
> +    in such a way that it satisfies at least one of the following conditions
> +    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
> +    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
> +       a) speed is Gen-2 and MPS is 256B
> +       b) speed is >= Gen-3 with any MPS

If we know these conditions, can we not determine that the fixup is
needed at runtime?
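As a sketch of what such a runtime check could look like (helper and parameter names are illustrative; a real driver would read the negotiated width and speed from the Link Status register):

```c
/* Mirrors the conditions in the binding text: the fixup applies when the
 * link trains to x1/x2, or runs at the controller's maximum width with
 * Gen-2 + 256-byte MPS, or at Gen-3 or higher with any MPS.
 * (The restriction of condition 1 to C0/C4/C5 is omitted for brevity.) */
static int needs_update_fc_fixup(unsigned int link_width,
                                 unsigned int max_width,
                                 unsigned int link_gen,
                                 unsigned int mps_bytes)
{
    if (link_width == 1 || link_width == 2)
        return 1;

    if (link_width == max_width) {
        if (link_gen == 2 && mps_bytes == 256)
            return 1;
        if (link_gen >= 3)
            return 1;
    }

    return 0;
}
```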

> +- nvidia,cdm-check : Enables CDM checking. For more information, refer to
> +    the Synopsys DesignWare Cores PCI Express Controller Databook r4.90a,
> +    Chapter S.4

Why should this be configurable through device tree?

> +- nvidia,enable-power-down : Enables power down of respective controller and
> +    corresponding PLLs if they are not shared by any other entity

Wouldn't we want this to be the default? Why keep things powered up if
they are not needed?

> +- "nvidia,pex-wake" : Add PEX_WAKE gpio number to provide wake support.
> +- "nvidia,plat-gpios" : Add gpio number that needs to be configured before
> +   system goes for enumeration. There could be platforms where enabling 3.3V and
> +   12V power supplies are done through GPIOs, in which case, list of all such
> +   GPIOs can be specified through this property.

For power supplies we usually use the regulator bindings. Are there any
other cases where we'd need this?

> +- "nvidia,aspm-cmrt" : Common Mode Restore time for proper operation of ASPM to
> +   be specified in microseconds
> +- "nvidia,aspm-pwr-on-t" : Power On time for proper operation of ASPM to be
> +   specified in microseconds
> +- "nvidia,aspm-l0s-entrance-latency" : ASPM L0s entrance latency to be specified
> +   in microseconds
> +
> +Examples:
> +=========
> +
> +Tegra194:
> +--------
> +
> +SoC DTSI:
> +
> +	pcie@14180000 {
> +		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";

It doesn't seem to me like claiming compatibility with "snps,dw-pcie" is
correct. There's a bunch of NVIDIA- or Tegra-specific properties below
and code in the driver. Would this device be able to function if no
driver was binding against the "nvidia,tegra194-pcie" compatible string?
Would it work if you left that out? I don't think so, so we should also
not list it here.

> +		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
> +		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
> +		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
> +		       0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K)  */
> +		reg-names = "appl", "config", "atu_dma";
> +
> +		status = "disabled";
> +
> +		#address-cells = <3>;
> +		#size-cells = <2>;
> +		device_type = "pci";
> +		num-lanes = <8>;
> +		linux,pci-domain = <0>;
> +
> +		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
> +		clock-names = "core_clk";
> +
> +		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
> +			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
> +		reset-names = "core_apb_rst", "core_rst";
> +
> +		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,	/* controller interrupt */
> +			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;	/* MSI interrupt */
> +		interrupt-names = "intr", "msi";
> +
> +		#interrupt-cells = <1>;
> +		interrupt-map-mask = <0 0 0 0>;
> +		interrupt-map = <0 0 0 0 &gic 0 72 0x04>;
> +
> +		nvidia,bpmp = <&bpmp>;
> +
> +		nvidia,max-speed = <4>;
> +		nvidia,disable-aspm-states = <0xf>;
> +		nvidia,controller-id = <&bpmp 0x0>;

Why is there a reference to the BPMP in this property?

> +		nvidia,aux-clk-freq = <0x13>;
> +		nvidia,preset-init = <0x5>;

aux-clk-freq and preset-init are not defined in the binding above.

> +		nvidia,aspm-cmrt = <0x3C>;
> +		nvidia,aspm-pwr-on-t = <0x14>;
> +		nvidia,aspm-l0s-entrance-latency = <0x3>;

These should be in decimal notation to make them easier to deal with. I
don't usually read time in hexadecimal.

> +
> +		bus-range = <0x0 0xff>;
> +		ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000      /* downstream I/O (1MB) */
> +			  0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000      /* non-prefetchable memory (30MB) */
> +			  0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>;  /* prefetchable memory (16GB) */
> +
> +		nvidia,cfg-link-cap-l1sub = <0x1c4>;
> +		nvidia,cap-pl16g-status = <0x174>;
> +		nvidia,cap-pl16g-cap-off = <0x188>;
> +		nvidia,event-cntr-ctrl = <0x1d8>;
> +		nvidia,event-cntr-data = <0x1dc>;
> +		nvidia,dl-feature-cap = <0x30c>;

These are not defined in the binding above.

> +	};
> +
> +Board DTS:
> +
> +	pcie@14180000 {
> +		status = "okay";
> +
> +		vddio-pex-ctl-supply = <&vdd_1v8ao>;
> +
> +		phys = <&p2u_2>,
> +		       <&p2u_3>,
> +		       <&p2u_4>,
> +		       <&p2u_5>;
> +		phy-names = "pcie-p2u-0", "pcie-p2u-1", "pcie-p2u-2",
> +			    "pcie-p2u-3";
> +	};
> diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt

Might be better to split this into a separate patch.

> new file mode 100644
> index 000000000000..cc0de8e8e8db
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> @@ -0,0 +1,34 @@
> +NVIDIA Tegra194 P2U binding
> +
> +Tegra194 has two PHY bricks namely HSIO (High Speed IO) and NVHS (NVIDIA High
> +Speed) each interfacing with 12 and 8 P2U instances respectively.
> +A P2U instance is a glue logic between Synopsys DesignWare Core PCIe IP's PIPE
> +interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
> +lane.
> +
> +Required properties:
> +- compatible: For Tegra19x, must contain "nvidia,tegra194-phy-p2u".

Isn't the "phy-" implied by "p2u"? The name of the hardware block is
"Tegra194 P2U", so that "phy-" seems gratuitous to me.

> +- reg: Should be the physical address space and length of respective each P2U
> +       instance.
> +- reg-names: Must include the entry "base".

"base" is a bad name. Each of these entries will be a "base" of the
given region. The name should specify what region it is the base of.

Thierry

> +
> +Required properties for PHY port node:
> +- #phy-cells: Defined by generic PHY bindings.  Must be 0.
> +
> +Refer to phy/phy-bindings.txt for the generic PHY binding properties.
> +
> +Example:
> +
> +hsio-p2u {
> +	compatible = "simple-bus";
> +	#address-cells = <2>;
> +	#size-cells = <2>;
> +	ranges;
> +	p2u_0: p2u@03e10000 {
> +		compatible = "nvidia,tegra194-phy-p2u";
> +		reg = <0x0 0x03e10000 0x0 0x00010000>;
> +		reg-names = "base";
> +
> +		#phy-cells = <0>;
> +	};
> +}
> -- 
> 2.7.4
>
Rob Herring (Arm) March 31, 2019, 6:42 a.m. UTC | #4
On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
> Add support for Tegra194 PCIe controllers. These controllers are based
> on Synopsys DesignWare core IP.
> 
> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> ---
> [...]
> +- reg-names: Must include the following entries:
> +  "appl": Controller's application logic registers
> +  "window1": This is the aperture of the controller available below the
> +             4 GiB boundary (i.e. within 32-bit space). It is typically
> +             used to access the config space of the root port itself and
> +             of connected endpoints (by appropriately programming the
> +             internal Address Translation Unit's (iATU) outbound regions)
> +             and to map prefetchable/non-prefetchable BARs.

This is usually represented in 'ranges' for BARs.

> +  "config": As per the definition in designware-pcie.txt
> +  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
> +             Translation Unit) registers of the PCIe core are made available
> +             fow SW access.
> +  "dbi": The aperture where root port's own configuration registers are
> +         available
> +  "window2": This is the larger (compared to window1) aperture available above
> +             4GB boundary (i.e. in 64-bit space). This is typically used for
> +             mapping prefetchable/non-prefetchable BARs of endpoints
> +- interrupts: A list of interrupt outputs of the controller. Must contain an
> +  entry for each entry in the interrupt-names property.
> +- interrupt-names: Must include the following entries:
> +  "intr": The Tegra interrupt that is asserted for controller interrupts
> +  "msi": The Tegra interrupt that is asserted when an MSI is received
> +- bus-range: Range of bus numbers associated with this controller
> +- #address-cells: Address representation for root ports (must be 3)
> +  - cell 0 specifies the bus and device numbers of the root port:
> +    [23:16]: bus number
> +    [15:11]: device number
> +  - cell 1 denotes the upper 32 address bits and should be 0
> +  - cell 2 contains the lower 32 address bits and is used to translate to the
> +    CPU address space
> +- #size-cells: Size representation for root ports (must be 2)
> +- ranges: Describes the translation of addresses for root ports and standard
> +  PCI regions. The entries must be 7 cells each, where the first three cells
> +  correspond to the address as described for the #address-cells property
> +  above, the fourth and fifth cells are for the physical CPU address to
> +  translate to and the sixth and seventh cells are as described for the
> +  #size-cells property above.
> +  - Entries setup the mapping for the standard I/O, memory and
> +    prefetchable PCI regions. The first cell determines the type of region
> +    that is setup:
> +    - 0x81000000: I/O memory region
> +    - 0x82000000: non-prefetchable memory region
> +    - 0xc2000000: prefetchable memory region
> +  Please refer to the standard PCI bus binding document for a more detailed
> +  explanation.
> +- #interrupt-cells: Size representation for interrupts (must be 1)
> +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
> +  Please refer to the standard PCI bus binding document for a more detailed
> +  explanation.
> +- clocks: Must contain an entry for each entry in clock-names.
> +  See ../clocks/clock-bindings.txt for details.
> +- clock-names: Must include the following entries:
> +  - core_clk
> +- resets: Must contain an entry for each entry in reset-names.
> +  See ../reset/reset.txt for details.
> +- reset-names: Must include the following entries:
> +  - core_apb_rst
> +  - core_rst
> +- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
> +- phy-names: Must include an entry for each active lane.
> +  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes
> +- Controller dependent register offsets
> +  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
> +      0x168 - FPGA
> +      0x1a8 - C1, C2 and C3
> +      0x1c4 - C4
> +      0x1d8 - C0 and C5
> +  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
> +      0x16c - FPGA
> +      0x1ac - C1, C2 and C3
> +      0x1c8 - C4
> +      0x1dc - C0 and C5
> +- nvidia,controller-id : Controller specific ID
> +      0x0 - C0
> +      0x1 - C1
> +      0x2 - C2
> +      0x3 - C3
> +      0x4 - C4
> +      0x5 - C5
> +- vddio-pex-ctl-supply: Regulator supply for PCIe sideband signals
> +
> +Optional properties:
> +- nvidia,max-speed: limits the controller's maximum speed to this value.
> +    1 - Gen-1 (2.5 GT/s)
> +    2 - Gen-2 (5 GT/s)
> +    3 - Gen-3 (8 GT/s)
> +    4 - Gen-4 (16 GT/s)
> +- nvidia,init-speed: limits the controller's initial speed to this value.
> +    1 - Gen-1 (2.5 GT/s)
> +    2 - Gen-2 (5 GT/s)
> +    3 - Gen-3 (8 GT/s)
> +    4 - Gen-4 (16 GT/s)

Don't we have standard speed properties?

Why do we need 2 values?

> +- nvidia,disable-aspm-states : controls advertisement of ASPM states
> +    bit-0 to '1' : disables advertisement of ASPM-L0s
> +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
> +                 advertisement of ASPM-L1.1 and ASPM-L1.2
> +    bit-2 to '1' : disables advertisement of ASPM-L1.1
> +    bit-3 to '1' : disables advertisement of ASPM-L1.2

Seems like these too should be common.

> +- nvidia,disable-clock-request : gives a hint to the driver that there is no
> +    CLKREQ signal routing on the board
> +- nvidia,update-fc-fixup : needed to improve performance when a platform is
> +    designed in such a way that it satisfies at least one of the following conditions
> +    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
> +    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and

What is Cx?

> +       a) speed is Gen-2 and MPS is 256B
> +       b) speed is >= Gen-3 with any MPS
> +- nvidia,cdm-check : Enables CDM checking. For more information, refer to the
> +    Synopsys DesignWare Cores PCI Express Controller Databook r4.90a Chapter S.4
> +- nvidia,enable-power-down : Enables power down of respective controller and
> +    corresponding PLLs if they are not shared by any other entity
> +- "nvidia,pex-wake" : Add PEX_WAKE gpio number to provide wake support.
> +- "nvidia,plat-gpios" : Add gpio number that needs to be configured before
> +   system goes for enumeration. There could be platforms where enabling 3.3V and
> +   12V power supplies are done through GPIOs, in which case, list of all such
> +   GPIOs can be specified through this property.

These should be split out to their specific function.

> +- "nvidia,aspm-cmrt" : Common Mode Restore time for proper operation of ASPM to
> +   be specified in microseconds
> +- "nvidia,aspm-pwr-on-t" : Power On time for proper operation of ASPM to be
> +   specified in microseconds
> +- "nvidia,aspm-l0s-entrance-latency" : ASPM L0s entrance latency to be specified
> +   in microseconds

properties with units need unit suffixes as defined in 
property-units.txt.

> +
> +Examples:
> +=========
> +
> +Tegra194:
> +--------
> +
> +SoC DTSI:
> +
> +	pcie@14180000 {
> +		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
> +		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
> +		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
> +		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
> +		       0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K)  */
> +		reg-names = "appl", "config", "atu_dma";
> +
> +		status = "disabled";
> +
> +		#address-cells = <3>;
> +		#size-cells = <2>;
> +		device_type = "pci";
> +		num-lanes = <8>;
> +		linux,pci-domain = <0>;
> +
> +		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
> +		clock-names = "core_clk";
> +
> +		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
> +			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
> +		reset-names = "core_apb_rst", "core_rst";
> +
> +		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,	/* controller interrupt */
> +			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;	/* MSI interrupt */
> +		interrupt-names = "intr", "msi";
> +
> +		#interrupt-cells = <1>;
> +		interrupt-map-mask = <0 0 0 0>;
> +		interrupt-map = <0 0 0 0 &gic 0 72 0x04>;
> +
> +		nvidia,bpmp = <&bpmp>;
> +
> +		nvidia,max-speed = <4>;
> +		nvidia,disable-aspm-states = <0xf>;
> +		nvidia,controller-id = <&bpmp 0x0>;
> +		nvidia,aux-clk-freq = <0x13>;
> +		nvidia,preset-init = <0x5>;
> +		nvidia,aspm-cmrt = <0x3C>;
> +		nvidia,aspm-pwr-on-t = <0x14>;
> +		nvidia,aspm-l0s-entrance-latency = <0x3>;
> +
> +		bus-range = <0x0 0xff>;
> +		ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000      /* downstream I/O (1MB) */
> +			  0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000      /* non-prefetchable memory (30MB) */
> +			  0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>;  /* prefetchable memory (16GB) */
> +
> +		nvidia,cfg-link-cap-l1sub = <0x1c4>;
> +		nvidia,cap-pl16g-status = <0x174>;
> +		nvidia,cap-pl16g-cap-off = <0x188>;
> +		nvidia,event-cntr-ctrl = <0x1d8>;
> +		nvidia,event-cntr-data = <0x1dc>;
> +		nvidia,dl-feature-cap = <0x30c>;
> +	};
> +
> +Board DTS:
> +
> +	pcie@14180000 {
> +		status = "okay";
> +
> +		vddio-pex-ctl-supply = <&vdd_1v8ao>;
> +
> +		phys = <&p2u_2>,
> +		       <&p2u_3>,
> +		       <&p2u_4>,
> +		       <&p2u_5>;
> +		phy-names = "pcie-p2u-0", "pcie-p2u-1", "pcie-p2u-2",
> +			    "pcie-p2u-3";
> +	};
> diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> new file mode 100644
> index 000000000000..cc0de8e8e8db
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> @@ -0,0 +1,34 @@
> +NVIDIA Tegra194 P2U binding
> +
> +Tegra194 has two PHY bricks, namely HSIO (High Speed IO) and NVHS (NVIDIA High
> +Speed), interfacing with 12 and 8 P2U instances respectively.
> +A P2U instance is a glue logic between Synopsys DesignWare Core PCIe IP's PIPE
> +interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
> +lane.
> +
> +Required properties:
> +- compatible: For Tegra19x, must contain "nvidia,tegra194-phy-p2u".
> +- reg: Should be the physical address space and length of each P2U
> +       instance.
> +- reg-names: Must include the entry "base".
> +
> +Required properties for PHY port node:
> +- #phy-cells: Defined by generic PHY bindings.  Must be 0.
> +
> +Refer to phy/phy-bindings.txt for the generic PHY binding properties.
> +
> +Example:
> +
> +hsio-p2u {
> +	compatible = "simple-bus";
> +	#address-cells = <2>;
> +	#size-cells = <2>;
> +	ranges;
> +	p2u_0: p2u@03e10000 {
> +		compatible = "nvidia,tegra194-phy-p2u";
> +		reg = <0x0 0x03e10000 0x0 0x00010000>;
> +		reg-names = "base";
> +
> +		#phy-cells = <0>;
> +	};
> +}
> -- 
> 2.7.4
>
Vidya Sagar April 1, 2019, 10:01 a.m. UTC | #5
On 3/28/2019 6:45 PM, Thierry Reding wrote:
> On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
>> Add support for Tegra194 PCIe controllers. These controllers are based
>> on Synopsys DesignWare core IP.
>>
>> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
>> ---
>>   .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
>>   .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
>>   2 files changed, 243 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>   create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>>
>> diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>> new file mode 100644
>> index 000000000000..31527283a0cd
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>> @@ -0,0 +1,209 @@
>> +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
>> +
>> +This PCIe host controller is based on the Synopsys DesignWare PCIe IP
>> +and thus inherits all the common properties defined in designware-pcie.txt.
>> +
>> +Required properties:
>> +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
>> +- device_type: Must be "pci"
>> +- reg: A list of physical base address and length for each set of controller
>> +  registers. Must contain an entry for each entry in the reg-names property.
>> +- reg-names: Must include the following entries:
>> +  "appl": Controller's application logic registers
>> +  "window1": This is the aperture of controller available under 4GB boundary
>> +             (i.e. within 32-bit space). This aperture is typically used for
>> +             accessing config space of root port itself and also the connected
>> +             endpoints (by appropriately programming internal Address
>> +             Translation Unit's (iATU) out bound region) and also to map
>> +             prefetchable/non-prefetchable BARs.
>> +  "config": As per the definition in designware-pcie.txt
> 
> I see that you set this to a 256 KiB region for all controllers. Since
> each function can have up to 4 KiB of extended configuration space, that
> means you have space to address:
> 
>      256 KiB = 4 KiB * 8 functions * 8 devices
> 
> Each bus can have up to 32 devices (including the root port) and there
> can be 256 busses, so I wonder how this is supposed to work. How does
> the mapping work for configuration space? Does the controller allow
> moving this 256 KiB window around so that more devices' configuration
> space can be accessed?
We are not using ECAM here. Instead, we pick a 4 KiB region from this 256 KiB
region and program the PCIe iATU (internal Address Translation Unit) with the
B:D:F of the configuration space of interest, so that the respective config
space becomes visible in that 4 KiB window. Reserving 256 KiB of space is a
hardware requirement (even though we use only 4 KiB to access the configuration
space of any downstream B:D:F).

> 
>> +  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
>> +             Translation Unit) registers of the PCIe core are made available
>> +             for SW access.
>> +  "dbi": The aperture where root port's own configuration registers are
>> +         available
> 
> This is slightly confusing because you already said in the description
> of "window1" that it is used to access the configuration space of the
> root port itself.
> 
> Is the root port configuration space available via the regular
> configuration space registers?
Root port configuration space is hidden by default and the 'dbi' property tells
us where we would like to *view* it. For this, we use a portion of the window1
aperture as the 'dbi' base to *view* the root port's config space.
Basically, window1 and window2 are umbrella entries (which I added based on a
suggestion from Stephen Warren <swarren@nvidia.com>) to give a complete picture
of the number of apertures available and what they are used for. Windows 1 and
2 as such are not used by the driver directly.

> 
>> +  "window2": This is the larger (compared to window1) aperture available above
>> +             4GB boundary (i.e. in 64-bit space). This is typically used for
>> +             mapping prefetchable/non-prefetchable BARs of endpoints
>> +- interrupts: A list of interrupt outputs of the controller. Must contain an
>> +  entry for each entry in the interrupt-names property.
>> +- interrupt-names: Must include the following entries:
>> +  "intr": The Tegra interrupt that is asserted for controller interrupts
>> +  "msi": The Tegra interrupt that is asserted when an MSI is received
>> +- bus-range: Range of bus numbers associated with this controller
>> +- #address-cells: Address representation for root ports (must be 3)
>> +  - cell 0 specifies the bus and device numbers of the root port:
>> +    [23:16]: bus number
>> +    [15:11]: device number
>> +  - cell 1 denotes the upper 32 address bits and should be 0
>> +  - cell 2 contains the lower 32 address bits and is used to translate to the
>> +    CPU address space
>> +- #size-cells: Size representation for root ports (must be 2)
>> +- ranges: Describes the translation of addresses for root ports and standard
>> +  PCI regions. The entries must be 7 cells each, where the first three cells
>> +  correspond to the address as described for the #address-cells property
>> +  above, the fourth and fifth cells are for the physical CPU address to
>> +  translate to and the sixth and seventh cells are as described for the
>> +  #size-cells property above.
>> +  - Entries set up the mapping for the standard I/O, memory and
>> +    prefetchable PCI regions. The first cell determines the type of region
>> +    that is set up:
>> +    - 0x81000000: I/O memory region
>> +    - 0x82000000: non-prefetchable memory region
>> +    - 0xc2000000: prefetchable memory region
>> +  Please refer to the standard PCI bus binding document for a more detailed
>> +  explanation.
>> +- #interrupt-cells: Size representation for interrupts (must be 1)
>> +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
>> +  Please refer to the standard PCI bus binding document for a more detailed
>> +  explanation.
>> +- clocks: Must contain an entry for each entry in clock-names.
>> +  See ../clocks/clock-bindings.txt for details.
>> +- clock-names: Must include the following entries:
>> +  - core_clk
> 
> It's redundant to name a clock _clk. Is this already required by the
> standard Designware bindings or is this new?
This is a new entry and not a standard DesignWare binding. I'll remove _clk
from the name in the next patch series.

> 
>> +- resets: Must contain an entry for each entry in reset-names.
>> +  See ../reset/reset.txt for details.
>> +- reset-names: Must include the following entries:
>> +  - core_apb_rst
>> +  - core_rst
> 
> Same comment as for clock-names.
I'll take care of it in the next patch series.

> 
>> +- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
>> +- phy-names: Must include an entry for each active lane.
>> +  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes
> 
> I'd leave away the "pcie-" prefix since the surrounding context already
> makes it clear that this is for PCIe.
I'll take care of it in the next patch series.

> 
>> +- Controller dependent register offsets
>> +  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
>> +      0x168 - FPGA
>> +      0x1a8 - C1, C2 and C3
>> +      0x1c4 - C4
>> +      0x1d8 - C0 and C5
>> +  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
>> +      0x16c - FPGA
>> +      0x1ac - C1, C2 and C3
>> +      0x1c8 - C4
>> +      0x1dc - C0 and C5
>> +- nvidia,controller-id : Controller specific ID
>> +      0x0 - C0
>> +      0x1 - C1
>> +      0x2 - C2
>> +      0x3 - C3
>> +      0x4 - C4
>> +      0x5 - C5
> 
> It's redundant to have both a controller ID and parameterized register
> offsets based on that controller ID. I would recommend keeping the
> controller ID and then moving the register offsets to the driver and
> decide based on the controller ID.
Ok. I'll take care of it in the next patch series.

> 
>> +- vddio-pex-ctl-supply: Regulator supply for PCIe sideband signals
>> +
>> +Optional properties:
>> +- nvidia,max-speed: limits the controller's maximum speed to this value.
>> +    1 - Gen-1 (2.5 GT/s)
>> +    2 - Gen-2 (5 GT/s)
>> +    3 - Gen-3 (8 GT/s)
>> +    4 - Gen-4 (16 GT/s)
>> +- nvidia,init-speed: limits the controller's initial speed to this value.
>> +    1 - Gen-1 (2.5 GT/s)
>> +    2 - Gen-2 (5 GT/s)
>> +    3 - Gen-3 (8 GT/s)
>> +    4 - Gen-4 (16 GT/s)
>> +- nvidia,disable-aspm-states : controls advertisement of ASPM states
>> +    bit-0 to '1' : disables advertisement of ASPM-L0s
>> +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
>> +                 advertisement of ASPM-L1.1 and ASPM-L1.2
>> +    bit-2 to '1' : disables advertisement of ASPM-L1.1
>> +    bit-3 to '1' : disables advertisement of ASPM-L1.2
> 
> These seem more like configuration options rather than hardware
> description.
Yes. Since T194-based platforms like Jetson Xavier are going into the open
market, we provide these configuration options, and hence they are optional.

> 
>> +- nvidia,disable-clock-request : gives a hint to the driver that there is no
>> +    CLKREQ signal routing on the board
>> +- nvidia,update-fc-fixup : needed to improve performance when a platform is
>> +    designed in such a way that it satisfies at least one of the following conditions
>> +    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
>> +    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
>> +       a) speed is Gen-2 and MPS is 256B
>> +       b) speed is >= Gen-3 with any MPS
> 
> If we know these conditions, can we not determine that the fixup is
> needed at runtime?
Not really. The programming that takes place based on these flags needs to
happen before PCIe link-up, and if we were to determine these conditions at run
time, we could do that only after the link is up. So, to avoid this
chicken-and-egg situation, these are passed as DT options.

> 
>> +- nvidia,cdm-check : Enables CDM checking. For more information, refer to the
>> +    Synopsys DesignWare Cores PCI Express Controller Databook r4.90a Chapter S.4
> 
> Why should this be configurable through device tree?
This is a hardware safety feature and can be enabled if required, so I made it
an optional feature that can be controlled through DT.

> 
>> +- nvidia,enable-power-down : Enables power down of respective controller and
>> +    corresponding PLLs if they are not shared by any other entity
> 
> Wouldn't we want this to be the default? Why keep things powered up if
> they are not needed?
There could be platforms (automotive-based) where powering down the controllers
is not required, hence a flag is needed to control it.

> 
>> +- "nvidia,pex-wake" : Add PEX_WAKE gpio number to provide wake support.
>> +- "nvidia,plat-gpios" : Add gpio number that needs to be configured before
>> +   system goes for enumeration. There could be platforms where enabling 3.3V and
>> +   12V power supplies are done through GPIOs, in which case, list of all such
>> +   GPIOs can be specified through this property.
> 
> For power supplies we usually use the regulator bindings. Are there any
> other cases where we'd need this?
Enabling power supplies is just one example; there could be platforms where
some GPIOs must be programmed (to configure muxes on the PCB, etc.) before
enumeration. All such GPIOs can be passed through this DT option.
> 
>> +- "nvidia,aspm-cmrt" : Common Mode Restore time for proper operation of ASPM to
>> +   be specified in microseconds
>> +- "nvidia,aspm-pwr-on-t" : Power On time for proper operation of ASPM to be
>> +   specified in microseconds
>> +- "nvidia,aspm-l0s-entrance-latency" : ASPM L0s entrance latency to be specified
>> +   in microseconds
>> +
>> +Examples:
>> +=========
>> +
>> +Tegra194:
>> +--------
>> +
>> +SoC DTSI:
>> +
>> +	pcie@14180000 {
>> +		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
> 
> It doesn't seem to me like claiming compatibility with "snps,dw-pcie" is
> correct. There's a bunch of NVIDIA- or Tegra-specific properties below
> and code in the driver. Would this device be able to function if no
> driver was binding against the "nvidia,tegra194-pcie" compatible string?
> Would it work if you left that out? I don't think so, so we should also
> not list it here.
It is required for the DesignWare-specific code to work properly, as specified
by the ../designware-pcie.txt file.

> 
>> +		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
>> +		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
>> +		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
>> +		       0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K)  */
>> +		reg-names = "appl", "config", "atu_dma";
>> +
>> +		status = "disabled";
>> +
>> +		#address-cells = <3>;
>> +		#size-cells = <2>;
>> +		device_type = "pci";
>> +		num-lanes = <8>;
>> +		linux,pci-domain = <0>;
>> +
>> +		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
>> +		clock-names = "core_clk";
>> +
>> +		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
>> +			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
>> +		reset-names = "core_apb_rst", "core_rst";
>> +
>> +		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,	/* controller interrupt */
>> +			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;	/* MSI interrupt */
>> +		interrupt-names = "intr", "msi";
>> +
>> +		#interrupt-cells = <1>;
>> +		interrupt-map-mask = <0 0 0 0>;
>> +		interrupt-map = <0 0 0 0 &gic 0 72 0x04>;
>> +
>> +		nvidia,bpmp = <&bpmp>;
>> +
>> +		nvidia,max-speed = <4>;
>> +		nvidia,disable-aspm-states = <0xf>;
>> +		nvidia,controller-id = <&bpmp 0x0>;
> 
> Why is there a reference to the BPMP in this property?
Ultimately the controller ID is passed to the BPMP firmware, which requires a
BPMP handle that is derived from this BPMP phandle.

> 
>> +		nvidia,aux-clk-freq = <0x13>;
>> +		nvidia,preset-init = <0x5>;
> 
> aux-clk-freq and preset-init are not defined in the binding above.
Ok. I'll take care of it in the next patch series.

> 
>> +		nvidia,aspm-cmrt = <0x3C>;
>> +		nvidia,aspm-pwr-on-t = <0x14>;
>> +		nvidia,aspm-l0s-entrance-latency = <0x3>;
> 
> These should be in decimal notation to make them easier to deal with. I
> don't usually read time in hexadecimal.
Ok. I'll take care of it in the next patch series.

> 
>> +
>> +		bus-range = <0x0 0xff>;
>> +		ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000      /* downstream I/O (1MB) */
>> +			  0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000      /* non-prefetchable memory (30MB) */
>> +			  0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>;  /* prefetchable memory (16GB) */
>> +
>> +		nvidia,cfg-link-cap-l1sub = <0x1c4>;
>> +		nvidia,cap-pl16g-status = <0x174>;
>> +		nvidia,cap-pl16g-cap-off = <0x188>;
>> +		nvidia,event-cntr-ctrl = <0x1d8>;
>> +		nvidia,event-cntr-data = <0x1dc>;
>> +		nvidia,dl-feature-cap = <0x30c>;
> 
> These are not defined in the binding above.
Ok. I'll take care of it in the next patch series.

> 
>> +	};
>> +
>> +Board DTS:
>> +
>> +	pcie@14180000 {
>> +		status = "okay";
>> +
>> +		vddio-pex-ctl-supply = <&vdd_1v8ao>;
>> +
>> +		phys = <&p2u_2>,
>> +		       <&p2u_3>,
>> +		       <&p2u_4>,
>> +		       <&p2u_5>;
>> +		phy-names = "pcie-p2u-0", "pcie-p2u-1", "pcie-p2u-2",
>> +			    "pcie-p2u-3";
>> +	};
>> diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> 
> Might be better to split this into a separate patch.
Done.

> 
>> new file mode 100644
>> index 000000000000..cc0de8e8e8db
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>> @@ -0,0 +1,34 @@
>> +NVIDIA Tegra194 P2U binding
>> +
>> +Tegra194 has two PHY bricks, namely HSIO (High Speed IO) and NVHS (NVIDIA High
>> +Speed), interfacing with 12 and 8 P2U instances respectively.
>> +A P2U instance is a glue logic between Synopsys DesignWare Core PCIe IP's PIPE
>> +interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
>> +lane.
>> +
>> +Required properties:
>> +- compatible: For Tegra19x, must contain "nvidia,tegra194-phy-p2u".
> 
> Isn't the "phy-" implied by "p2u"? The name of the hardware block is
> "Tegra194 P2U", so that "phy-" seems gratuitous to me.
Done.

> 
>> +- reg: Should be the physical address space and length of each P2U
>> +       instance.
>> +- reg-names: Must include the entry "base".
> 
> "base" is a bad name. Each of these entries will be a "base" of the
> given region. The name should specify what region it is the base of.
I'll change it to "reg_base"

> 
> Thierry
> 
>> +
>> +Required properties for PHY port node:
>> +- #phy-cells: Defined by generic PHY bindings.  Must be 0.
>> +
>> +Refer to phy/phy-bindings.txt for the generic PHY binding properties.
>> +
>> +Example:
>> +
>> +hsio-p2u {
>> +	compatible = "simple-bus";
>> +	#address-cells = <2>;
>> +	#size-cells = <2>;
>> +	ranges;
>> +	p2u_0: p2u@03e10000 {
>> +		compatible = "nvidia,tegra194-phy-p2u";
>> +		reg = <0x0 0x03e10000 0x0 0x00010000>;
>> +		reg-names = "base";
>> +
>> +		#phy-cells = <0>;
>> +	};
>> +}
>> -- 
>> 2.7.4
Vidya Sagar April 1, 2019, 11:18 a.m. UTC | #6
On 3/31/2019 12:12 PM, Rob Herring wrote:
> On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
>> Add support for Tegra194 PCIe controllers. These controllers are based
>> on Synopsys DesignWare core IP.
>>
>> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
>> ---
>>   .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
>>   .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
>>   2 files changed, 243 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>   create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>>
>> diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>> new file mode 100644
>> index 000000000000..31527283a0cd
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>> @@ -0,0 +1,209 @@
>> +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
>> +
>> +This PCIe host controller is based on the Synopsys DesignWare PCIe IP
>> +and thus inherits all the common properties defined in designware-pcie.txt.
>> +
>> +Required properties:
>> +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
>> +- device_type: Must be "pci"
>> +- reg: A list of physical base address and length for each set of controller
>> +  registers. Must contain an entry for each entry in the reg-names property.
>> +- reg-names: Must include the following entries:
>> +  "appl": Controller's application logic registers
>> +  "window1": This is the aperture of controller available under 4GB boundary
>> +             (i.e. within 32-bit space). This aperture is typically used for
>> +             accessing config space of root port itself and also the connected
>> +             endpoints (by appropriately programming internal Address
>> +             Translation Unit's (iATU) out bound region) and also to map
>> +             prefetchable/non-prefetchable BARs.
> 
> This is usually represented in 'ranges' for BARs.
I added window1 and window2 as the umbrella apertures that each PCIe controller
has and described how each aperture *can* be used. This is an overview, and as
such these two entries are not directly used by the driver.
The 'ranges' property gives the information on how exactly the window1 and
window2 apertures are used.
Thierry Reding also gave a few comments about these entries. If this is
confusing, I'm OK with removing them as well. Please let me know.

> 
>> +  "config": As per the definition in designware-pcie.txt
>> +  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
>> +             Translation Unit) registers of the PCIe core are made available
>> +             for SW access.
>> +  "dbi": The aperture where root port's own configuration registers are
>> +         available
>> +  "window2": This is the larger (compared to window1) aperture available above
>> +             4GB boundary (i.e. in 64-bit space). This is typically used for
>> +             mapping prefetchable/non-prefetchable BARs of endpoints
>> +- interrupts: A list of interrupt outputs of the controller. Must contain an
>> +  entry for each entry in the interrupt-names property.
>> +- interrupt-names: Must include the following entries:
>> +  "intr": The Tegra interrupt that is asserted for controller interrupts
>> +  "msi": The Tegra interrupt that is asserted when an MSI is received
>> +- bus-range: Range of bus numbers associated with this controller
>> +- #address-cells: Address representation for root ports (must be 3)
>> +  - cell 0 specifies the bus and device numbers of the root port:
>> +    [23:16]: bus number
>> +    [15:11]: device number
>> +  - cell 1 denotes the upper 32 address bits and should be 0
>> +  - cell 2 contains the lower 32 address bits and is used to translate to the
>> +    CPU address space
>> +- #size-cells: Size representation for root ports (must be 2)
>> +- ranges: Describes the translation of addresses for root ports and standard
>> +  PCI regions. The entries must be 7 cells each, where the first three cells
>> +  correspond to the address as described for the #address-cells property
>> +  above, the fourth and fifth cells are for the physical CPU address to
>> +  translate to and the sixth and seventh cells are as described for the
>> +  #size-cells property above.
>> +  - Entries set up the mapping for the standard I/O, memory and
>> +    prefetchable PCI regions. The first cell determines the type of region
>> +    that is set up:
>> +    - 0x81000000: I/O memory region
>> +    - 0x82000000: non-prefetchable memory region
>> +    - 0xc2000000: prefetchable memory region
>> +  Please refer to the standard PCI bus binding document for a more detailed
>> +  explanation.
>> +- #interrupt-cells: Size representation for interrupts (must be 1)
>> +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
>> +  Please refer to the standard PCI bus binding document for a more detailed
>> +  explanation.
>> +- clocks: Must contain an entry for each entry in clock-names.
>> +  See ../clocks/clock-bindings.txt for details.
>> +- clock-names: Must include the following entries:
>> +  - core_clk
>> +- resets: Must contain an entry for each entry in reset-names.
>> +  See ../reset/reset.txt for details.
>> +- reset-names: Must include the following entries:
>> +  - core_apb_rst
>> +  - core_rst
>> +- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
>> +- phy-names: Must include an entry for each active lane.
>> +  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes
>> +- Controller dependent register offsets
>> +  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
>> +      0x168 - FPGA
>> +      0x1a8 - C1, C2 and C3
>> +      0x1c4 - C4
>> +      0x1d8 - C0 and C5
>> +  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
>> +      0x16c - FPGA
>> +      0x1ac - C1, C2 and C3
>> +      0x1c8 - C4
>> +      0x1dc - C0 and C5
>> +- nvidia,controller-id : Controller specific ID
>> +      0x0 - C0
>> +      0x1 - C1
>> +      0x2 - C2
>> +      0x3 - C3
>> +      0x4 - C4
>> +      0x5 - C5
>> +- vddio-pex-ctl-supply: Regulator supply for PCIe sideband signals
>> +
>> +Optional properties:
>> +- nvidia,max-speed: limits the controller's maximum speed to this value.
>> +    1 - Gen-1 (2.5 GT/s)
>> +    2 - Gen-2 (5 GT/s)
>> +    3 - Gen-3 (8 GT/s)
>> +    4 - Gen-4 (16 GT/s)
>> +- nvidia,init-speed: limits the controller's initial speed to this value.
>> +    1 - Gen-1 (2.5 GT/s)
>> +    2 - Gen-2 (5 GT/s)
>> +    3 - Gen-3 (8 GT/s)
>> +    4 - Gen-4 (16 GT/s)
> 
> Don't we have standard speed properties?
Not that I'm aware of. If you come across any, please do let me know and
I'll change these.

> 
> Why do we need 2 values?
max-speed configures the controller to advertise the speed mentioned through
this flag, whereas init-speed brings the link up at this speed; software can
then take the link to a different speed by retraining it. This gives
developers flexibility depending on the platform.

> 
>> +- nvidia,disable-aspm-states : controls advertisement of ASPM states
>> +    bit-0 to '1' : disables advertisement of ASPM-L0s
>> +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
>> +                 advertisement of ASPM-L1.1 and ASPM-L1.2
>> +    bit-2 to '1' : disables advertisement of ASPM-L1.1
>> +    bit-3 to '1' : disables advertisement of ASPM-L1.2
> 
> Seems like these too should be common.
This flag controls the advertisement of different ASPM states by the root port.
Again, I'm not aware of any common method for this.
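To spell out the intended semantics of the bitmask, here is a quick sketch (the
helper and constant names are mine, not anything from the binding or driver):

```python
# Bit positions as documented for nvidia,disable-aspm-states.
ASPM_L0S, ASPM_L1, ASPM_L1_1, ASPM_L1_2 = 0x1, 0x2, 0x4, 0x8

def disabled_aspm_states(mask):
    """Return the set of ASPM states whose advertisement the mask disables."""
    disabled = set()
    if mask & ASPM_L0S:
        disabled.add("L0s")
    if mask & ASPM_L1:
        # Disabling L1 also disables the L1.1 and L1.2 substates.
        disabled |= {"L1", "L1.1", "L1.2"}
    if mask & ASPM_L1_1:
        disabled.add("L1.1")
    if mask & ASPM_L1_2:
        disabled.add("L1.2")
    return disabled
```

So e.g. <0x2> already takes out all of L1/L1.1/L1.2, and <0xf> disables
advertisement of everything.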
  
> 
>> +- nvidia,disable-clock-request : gives a hint to driver that there is no
>> +    CLKREQ signal routing on board
>> +- nvidia,update-fc-fixup : needed to improve performance when a platform is designed
>> +    in such a way that it satisfies at least one of the following conditions:
>> +    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
>> +    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
> 
> What is Cx?
Cx refers to the controller with ID x (e.g. C0 is the controller with ID 0).

> 
>> +       a) speed is Gen-2 and MPS is 256B
>> +       b) speed is >= Gen-3 with any MPS
>> +- nvidia,cdm-check : Enables CDM checking. For more information, refer to the
>> +    Synopsys DesignWare Cores PCI Express Controller Databook r4.90a Chapter S.4
>> +- nvidia,enable-power-down : Enables power down of respective controller and
>> +    corresponding PLLs if they are not shared by any other entity
>> +- "nvidia,pex-wake" : Add PEX_WAKE gpio number to provide wake support.
>> +- "nvidia,plat-gpios" : Add gpio number that needs to be configured before
>> +   system goes for enumeration. There could be platforms where enabling 3.3V and
>> +   12V power supplies are done through GPIOs, in which case, list of all such
>> +   GPIOs can be specified through this property.
> 
> These should be split out to their specific function.
Enabling power rails is just one example; depending on the platform, there could be
on-board muxes which are controlled through GPIOs, and all such platform-specific
configuration can be handled through this property.

> 
>> +- "nvidia,aspm-cmrt" : Common Mode Restore time for proper operation of ASPM to
>> +   be specified in microseconds
>> +- "nvidia,aspm-pwr-on-t" : Power On time for proper operation of ASPM to be
>> +   specified in microseconds
>> +- "nvidia,aspm-l0s-entrance-latency" : ASPM L0s entrance latency to be specified
>> +   in microseconds
> 
> properties with units need unit suffixes as defined in
> property-units.txt.
Done. I'll take care of this in the next patch series.

> 
>> +
>> +Examples:
>> +=========
>> +
>> +Tegra194:
>> +--------
>> +
>> +SoC DTSI:
>> +
>> +	pcie@14180000 {
>> +		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
>> +		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
>> +		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
>> +		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
>> +		       0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K)  */
>> +		reg-names = "appl", "config", "atu_dma";
>> +
>> +		status = "disabled";
>> +
>> +		#address-cells = <3>;
>> +		#size-cells = <2>;
>> +		device_type = "pci";
>> +		num-lanes = <8>;
>> +		linux,pci-domain = <0>;
>> +
>> +		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
>> +		clock-names = "core_clk";
>> +
>> +		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
>> +			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
>> +		reset-names = "core_apb_rst", "core_rst";
>> +
>> +		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,	/* controller interrupt */
>> +			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;	/* MSI interrupt */
>> +		interrupt-names = "intr", "msi";
>> +
>> +		#interrupt-cells = <1>;
>> +		interrupt-map-mask = <0 0 0 0>;
>> +		interrupt-map = <0 0 0 0 &gic 0 72 0x04>;
>> +
>> +		nvidia,bpmp = <&bpmp>;
>> +
>> +		nvidia,max-speed = <4>;
>> +		nvidia,disable-aspm-states = <0xf>;
>> +		nvidia,controller-id = <&bpmp 0x0>;
>> +		nvidia,aux-clk-freq = <0x13>;
>> +		nvidia,preset-init = <0x5>;
>> +		nvidia,aspm-cmrt = <0x3C>;
>> +		nvidia,aspm-pwr-on-t = <0x14>;
>> +		nvidia,aspm-l0s-entrance-latency = <0x3>;
>> +
>> +		bus-range = <0x0 0xff>;
>> +		ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000      /* downstream I/O (1MB) */
>> +			  0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000      /* non-prefetchable memory (30MB) */
>> +			  0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>;  /* prefetchable memory (16GB) */
>> +
>> +		nvidia,cfg-link-cap-l1sub = <0x1c4>;
>> +		nvidia,cap-pl16g-status = <0x174>;
>> +		nvidia,cap-pl16g-cap-off = <0x188>;
>> +		nvidia,event-cntr-ctrl = <0x1d8>;
>> +		nvidia,event-cntr-data = <0x1dc>;
>> +		nvidia,dl-feature-cap = <0x30c>;
>> +	};
>> +
>> +Board DTS:
>> +
>> +	pcie@14180000 {
>> +		status = "okay";
>> +
>> +		vddio-pex-ctl-supply = <&vdd_1v8ao>;
>> +
>> +		phys = <&p2u_2>,
>> +		       <&p2u_3>,
>> +		       <&p2u_4>,
>> +		       <&p2u_5>;
>> +		phy-names = "pcie-p2u-0", "pcie-p2u-1", "pcie-p2u-2",
>> +			    "pcie-p2u-3";
>> +	};
>> diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>> new file mode 100644
>> index 000000000000..cc0de8e8e8db
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>> @@ -0,0 +1,34 @@
>> +NVIDIA Tegra194 P2U binding
>> +
>> +Tegra194 has two PHY bricks namely HSIO (High Speed IO) and NVHS (NVIDIA High
>> +Speed) each interfacing with 12 and 8 P2U instances respectively.
>> +A P2U instance is glue logic between the Synopsys DesignWare Core PCIe IP's PIPE
>> +interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
>> +lane.
>> +
>> +Required properties:
>> +- compatible: For Tegra19x, must contain "nvidia,tegra194-phy-p2u".
>> +- reg: Should be the physical address space and length of each respective P2U
>> +       instance.
>> +- reg-names: Must include the entry "base".
>> +
>> +Required properties for PHY port node:
>> +- #phy-cells: Defined by generic PHY bindings.  Must be 0.
>> +
>> +Refer to phy/phy-bindings.txt for the generic PHY binding properties.
>> +
>> +Example:
>> +
>> +hsio-p2u {
>> +	compatible = "simple-bus";
>> +	#address-cells = <2>;
>> +	#size-cells = <2>;
>> +	ranges;
>> +	p2u_0: p2u@03e10000 {
>> +		compatible = "nvidia,tegra194-phy-p2u";
>> +		reg = <0x0 0x03e10000 0x0 0x00010000>;
>> +		reg-names = "base";
>> +
>> +		#phy-cells = <0>;
>> +	};
>> +};
>> -- 
>> 2.7.4
>>
>
Thierry Reding April 1, 2019, 2:31 p.m. UTC | #7
On Mon, Apr 01, 2019 at 04:48:42PM +0530, Vidya Sagar wrote:
> On 3/31/2019 12:12 PM, Rob Herring wrote:
> > On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
> > > Add support for Tegra194 PCIe controllers. These controllers are based
> > > on Synopsys DesignWare core IP.
> > > 
> > > Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> > > ---
> > >   .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
> > >   .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
> > >   2 files changed, 243 insertions(+)
> > >   create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> > >   create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> > > 
> > > diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> > > new file mode 100644
> > > index 000000000000..31527283a0cd
> > > --- /dev/null
> > > +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> > > @@ -0,0 +1,209 @@
> > > +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
> > > +
> > > +This PCIe host controller is based on the Synopsys DesignWare PCIe IP
> > > +and thus inherits all the common properties defined in designware-pcie.txt.
> > > +
> > > +Required properties:
> > > +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
> > > +- device_type: Must be "pci"
> > > +- reg: A list of physical base address and length for each set of controller
> > > +  registers. Must contain an entry for each entry in the reg-names property.
> > > +- reg-names: Must include the following entries:
> > > +  "appl": Controller's application logic registers
> > > +  "window1": This is the aperture of controller available under 4GB boundary
> > > +             (i.e. within 32-bit space). This aperture is typically used for
> > > +             accessing config space of root port itself and also the connected
> > > +             endpoints (by appropriately programming internal Address
> > > +             Translation Unit's (iATU) out bound region) and also to map
> > > +             prefetchable/non-prefetchable BARs.
> > 
> > This is usually represented in 'ranges' for BARs.
> I added window1 and window2 as the umbrella apertures that each PCIe controller has
> and gave a description of how each aperture *can* be used. This is an overview and as
> such these two entries are not directly used by the driver.
> The 'ranges' property tells us how exactly the window1 and window2 apertures
> are used.
> Thierry Reding also gave a few comments about these entries. If this is confusing,
> I'm ok to remove them as well. Please let me know.

If all you want to do is document how these are used, it may be better
to enhance the device tree bindings for the ranges property if it does
not describe this fully enough yet, or add comments in the DT nodes to
clarify.
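For reference, the first cell of each ranges entry packs the fields below (per
the standard PCI bus binding; this decoder is purely illustrative, not kernel
code):

```python
def decode_pci_hi(cell):
    """Decode the high (first) cell of a PCI child address per the PCI bus binding:
    space code in bits [25:24], prefetchable flag in bit 30, then bus/device/function."""
    return {
        "space": {0: "config", 1: "io", 2: "mem32", 3: "mem64"}[(cell >> 24) & 0x3],
        "prefetchable": bool(cell & (1 << 30)),
        "bus": (cell >> 16) & 0xFF,
        "device": (cell >> 11) & 0x1F,
        "function": (cell >> 8) & 0x7,
    }
```

which is why 0x81000000 denotes I/O space, 0x82000000 non-prefetchable 32-bit
memory, and 0xc2000000 prefetchable memory.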

> > > +  "config": As per the definition in designware-pcie.txt
> > > +  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
> > > +             Translation Unit) registers of the PCIe core are made available
> > > +             for SW access.
> > > +  "dbi": The aperture where root port's own configuration registers are
> > > +         available
> > > +  "window2": This is the larger (compared to window1) aperture available above
> > > +             4GB boundary (i.e. in 64-bit space). This is typically used for
> > > +             mapping prefetchable/non-prefetchable BARs of endpoints
> > > +- interrupts: A list of interrupt outputs of the controller. Must contain an
> > > +  entry for each entry in the interrupt-names property.
> > > +- interrupt-names: Must include the following entries:
> > > +  "intr": The Tegra interrupt that is asserted for controller interrupts
> > > +  "msi": The Tegra interrupt that is asserted when an MSI is received
> > > +- bus-range: Range of bus numbers associated with this controller
> > > +- #address-cells: Address representation for root ports (must be 3)
> > > +  - cell 0 specifies the bus and device numbers of the root port:
> > > +    [23:16]: bus number
> > > +    [15:11]: device number
> > > +  - cell 1 denotes the upper 32 address bits and should be 0
> > > +  - cell 2 contains the lower 32 address bits and is used to translate to the
> > > +    CPU address space
> > > +- #size-cells: Size representation for root ports (must be 2)
> > > +- ranges: Describes the translation of addresses for root ports and standard
> > > +  PCI regions. The entries must be 7 cells each, where the first three cells
> > > +  correspond to the address as described for the #address-cells property
> > > +  above, the fourth and fifth cells are for the physical CPU address to
> > > +  translate to and the sixth and seventh cells are as described for the
> > > +  #size-cells property above.
> > > +  - Entries setup the mapping for the standard I/O, memory and
> > > +    prefetchable PCI regions. The first cell determines the type of region
> > > +    that is setup:
> > > +    - 0x81000000: I/O memory region
> > > +    - 0x82000000: non-prefetchable memory region
> > > +    - 0xc2000000: prefetchable memory region
> > > +  Please refer to the standard PCI bus binding document for a more detailed
> > > +  explanation.
> > > +- #interrupt-cells: Size representation for interrupts (must be 1)
> > > +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
> > > +  Please refer to the standard PCI bus binding document for a more detailed
> > > +  explanation.
> > > +- clocks: Must contain an entry for each entry in clock-names.
> > > +  See ../clocks/clock-bindings.txt for details.
> > > +- clock-names: Must include the following entries:
> > > +  - core_clk
> > > +- resets: Must contain an entry for each entry in reset-names.
> > > +  See ../reset/reset.txt for details.
> > > +- reset-names: Must include the following entries:
> > > +  - core_apb_rst
> > > +  - core_rst
> > > +- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
> > > +- phy-names: Must include an entry for each active lane.
> > > +  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes
> > > +- Controller dependent register offsets
> > > +  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
> > > +      0x168 - FPGA
> > > +      0x1a8 - C1, C2 and C3
> > > +      0x1c4 - C4
> > > +      0x1d8 - C0 and C5
> > > +  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
> > > +      0x16c - FPGA
> > > +      0x1ac - C1, C2 and C3
> > > +      0x1c8 - C4
> > > +      0x1dc - C0 and C5
> > > +- nvidia,controller-id : Controller specific ID
> > > +      0x0 - C0
> > > +      0x1 - C1
> > > +      0x2 - C2
> > > +      0x3 - C3
> > > +      0x4 - C4
> > > +      0x5 - C5
> > > +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
> > > +
> > > +Optional properties:
> > > +- nvidia,max-speed: limits controllers max speed to this value.
> > > +    1 - Gen-1 (2.5 GT/s)
> > > +    2 - Gen-2 (5 GT/s)
> > > +    3 - Gen-3 (8 GT/s)
> > > +    4 - Gen-4 (16 GT/s)
> > > +- nvidia,init-speed: limits controllers init speed to this value.
> > > +    1 - Gen-1 (2.5 GT/s)
> > > +    2 - Gen-2 (5 GT/s)
> > > +    3 - Gen-3 (8 GT/s)
> > > +    4 - Gen-4 (16 GT/s)
> > 
> > Don't we have standard speed properties?
> Not that I'm aware of. If you come across any, please do let me know and
> I'll change these.

See Documentation/devicetree/bindings/pci/pci.txt, max-link-speed.
There's a standard of_pci_get_max_link_speed() property that reads it
from device tree.
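i.e. the node could carry something like this instead (illustrative fragment):

```dts
pcie@14180000 {
	/* standard property from pci.txt, read via of_pci_get_max_link_speed() */
	max-link-speed = <4>;	/* limit to Gen-4 */
};
```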

> > Why do we need 2 values?
> max-speed configures the controller to advertise the speed mentioned through
> this flag, whereas, init-speed gets the link up at this speed and software
> can further take the link speed to a different speed by retraining the link.
> This is to give flexibility to developers depending on the platform.

This seems to me like overcomplicating things. Couldn't we do something
like start in the slowest mode by default and then upgrade if endpoints
support higher speeds?

I'm assuming that the maximum speed is already fixed by the IP hardware
instantiation, so why would we want to limit it additionally? Similarly,
what's the use-case for setting the initial link speed to something
other than the lowest speed?

> > > +- nvidia,disable-aspm-states : controls advertisement of ASPM states
> > > +    bit-0 to '1' : disables advertisement of ASPM-L0s
> > > +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
> > > +                 advertisement of ASPM-L1.1 and ASPM-L1.2
> > > +    bit-2 to '1' : disables advertisement of ASPM-L1.1
> > > +    bit-3 to '1' : disables advertisement of ASPM-L1.2
> > 
> > Seems like these too should be common.
> This flag controls the advertisement of different ASPM states by the root port.
> Again, I'm not aware of any common method for this.

rockchip-pcie-host.txt documents an "aspm-no-l0s" property that prevents
the root complex from advertising L0s. Sounds like maybe following a
similar scheme would be best for consistency. I think we'll also want
these to be non-NVIDIA specific, so drop the "nvidia," prefix and maybe
document them in pci.txt so that they can be more broadly used.

> > > +- nvidia,disable-clock-request : gives a hint to driver that there is no
> > > +    CLKREQ signal routing on board
> > > +- nvidia,update-fc-fixup : needed to improve performance when a platform is designed
> > > +    in such a way that it satisfies at least one of the following conditions:
> > > +    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
> > > +    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
> > 
> > What is Cx?
> Cx is the Controller with its ID.
> 
> > 
> > > +       a) speed is Gen-2 and MPS is 256B
> > > +       b) speed is >= Gen-3 with any MPS
> > > +- nvidia,cdm-check : Enables CDM checking. For more information, refer to the
> > > +    Synopsys DesignWare Cores PCI Express Controller Databook r4.90a Chapter S.4
> > > +- nvidia,enable-power-down : Enables power down of respective controller and
> > > +    corresponding PLLs if they are not shared by any other entity
> > > +- "nvidia,pex-wake" : Add PEX_WAKE gpio number to provide wake support.
> > > +- "nvidia,plat-gpios" : Add gpio number that needs to be configured before
> > > +   system goes for enumeration. There could be platforms where enabling 3.3V and
> > > +   12V power supplies are done through GPIOs, in which case, list of all such
> > > +   GPIOs can be specified through this property.
> > 
> > These should be split out to their specific function.
> Enabling Power rails is just an example and depending on the platform, there could be
> some on-board muxes which are controlled through GPIOs and all such platform specific
> configuration can be handled through this flag.

Doing this via a "generic" GPIO binding is bound to break at some point.
What if at some point one of those muxes needs additional power, perhaps
controlled through an I2C regulator? Or if the mux requires multiple
GPIOs for the correct configuration? How do you allow the mux to be
reconfigured to one of the other options?

If all you have is a generic GPIO consumer of GPIOs there's not enough
context for the driver to do anything with these GPIOs other than
perhaps setting them to a specific value at probe time. In that case you
could achieve the same thing using a gpio-hog.
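i.e. something along these lines (illustrative only; the GPIO controller node,
line and names here are made up):

```dts
gpio@2200000 {
	pex-3v3-enable-hog {
		gpio-hog;
		gpios = <TEGRA194_MAIN_GPIO(Z, 2) GPIO_ACTIVE_HIGH>;
		output-high;
		line-name = "pex-3v3-enable";
	};
};
```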

I think we need to identify what the various uses for these can be and
then find the right bindings (or come up with new ones) to properly
describe the actual hardware. Otherwise we're going to paint ourselves
into a corner.

Are there any use-cases besides regulators and muxes? For regulators we
already have fixed and GPIO regulators, and for muxes there are a number
of bindings defined in Documentation/devicetree/bindings/mux.

Thierry
Thierry Reding April 1, 2019, 3:07 p.m. UTC | #8
On Mon, Apr 01, 2019 at 03:31:54PM +0530, Vidya Sagar wrote:
> On 3/28/2019 6:45 PM, Thierry Reding wrote:
> > On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
> > > Add support for Tegra194 PCIe controllers. These controllers are based
> > > on Synopsys DesignWare core IP.
> > > 
> > > Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> > > ---
> > >   .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
> > >   .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
> > >   2 files changed, 243 insertions(+)
> > >   create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> > >   create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> > > 
> > > diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> > > new file mode 100644
> > > index 000000000000..31527283a0cd
> > > --- /dev/null
> > > +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> > > @@ -0,0 +1,209 @@
> > > +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
> > > +
> > > +This PCIe host controller is based on the Synopsys DesignWare PCIe IP
> > > +and thus inherits all the common properties defined in designware-pcie.txt.
> > > +
> > > +Required properties:
> > > +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
> > > +- device_type: Must be "pci"
> > > +- reg: A list of physical base address and length for each set of controller
> > > +  registers. Must contain an entry for each entry in the reg-names property.
> > > +- reg-names: Must include the following entries:
> > > +  "appl": Controller's application logic registers
> > > +  "window1": This is the aperture of controller available under 4GB boundary
> > > +             (i.e. within 32-bit space). This aperture is typically used for
> > > +             accessing config space of root port itself and also the connected
> > > +             endpoints (by appropriately programming internal Address
> > > +             Translation Unit's (iATU) out bound region) and also to map
> > > +             prefetchable/non-prefetchable BARs.
> > > +  "config": As per the definition in designware-pcie.txt
> > 
> > I see that you set this to a 256 KiB region for all controllers. Since
> > each function can have up to 4 KiB of extended configuration space, that
> > means you have space to address:
> > 
> >      256 KiB = 4 KiB * 8 functions * 8 devices
> > 
> > Each bus can have up to 32 devices (including the root port) and there
> > can be 256 busses, so I wonder how this is supposed to work. How does
> > the mapping work for configuration space? Does the controller allow
> > moving this 256 KiB window around so that more devices' configuration
> > space can be accessed?
> We are not using ECAM here. Instead, we pick a 4 KB region from this 256 KB
> region and program the iATU (internal Address Translation Unit) of the PCIe IP
> with the B:D:F of the configuration space of interest, so that the respective
> config space becomes visible in that 4 KB window. It is a hardware requirement
> to reserve 256 KB of space (though we use only 4 KB to access the configuration
> space of any downstream B:D:F).

Okay, sounds good. I'm wondering if we should maybe note here that
window1 needs to be a 256 KiB window if that's what the hardware
requires.
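For context, the arithmetic can be sketched as follows (using the conventional
DesignWare CFG target encoding; this is an illustration, not the actual driver
code):

```python
WINDOW1_SIZE = 256 * 1024  # hardware-required reservation
CFG_SIZE = 4 * 1024        # one function's extended config space

def iatu_cfg_target(bus, dev, fn):
    """Encode B:D:F into the outbound iATU target address for config TLPs
    (conventional DesignWare layout: bus[31:24], device[23:19], function[18:16])."""
    assert bus < 256 and dev < 32 and fn < 8
    return (bus << 24) | (dev << 19) | (fn << 16)

# Only one 4 KiB config space is visible at a time; the iATU is reprogrammed
# per B:D:F rather than mapping a full ECAM region.
```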

> > > +  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
> > > +             Translation Unit) registers of the PCIe core are made available
> > > +             for SW access.
> > > +  "dbi": The aperture where root port's own configuration registers are
> > > +         available
> > 
> > This is slightly confusing because you already said in the description
> > of "window1" that it is used to access the configuration space of the
> > root port itself.
> > 
> > Is the root port configuration space available via the regular
> > configuration space registers?
> The root port's configuration space is hidden by default, and the 'dbi' property
> tells us where we would like to *view* it. For this, we use a portion of the
> window-1 aperture as the 'dbi' base to *view* the root port's config space.
> Basically, window-1 and window-2 are umbrella entries (which I added based on a
> suggestion from Stephen Warren <swarren@nvidia.com>) to give a complete picture
> of the number of apertures available and what they are used for. Windows 1 & 2
> as such are not used by the driver directly.

So I'm not exactly sure I understand how this works. Does the "dbi"
entry contain a physical address and size of the aperture that we want
to map into a subregion of "window-1"? Is this part of a region where
similar subregions exist for all of the controllers? Could the offset
into such a region be derived from the controller ID?

> > > +  "window2": This is the larger (compared to window1) aperture available above
> > > +             4GB boundary (i.e. in 64-bit space). This is typically used for
> > > +             mapping prefetchable/non-prefetchable BARs of endpoints
> > > +- interrupts: A list of interrupt outputs of the controller. Must contain an
> > > +  entry for each entry in the interrupt-names property.
> > > +- interrupt-names: Must include the following entries:
> > > +  "intr": The Tegra interrupt that is asserted for controller interrupts
> > > +  "msi": The Tegra interrupt that is asserted when an MSI is received
> > > +- bus-range: Range of bus numbers associated with this controller
> > > +- #address-cells: Address representation for root ports (must be 3)
> > > +  - cell 0 specifies the bus and device numbers of the root port:
> > > +    [23:16]: bus number
> > > +    [15:11]: device number
> > > +  - cell 1 denotes the upper 32 address bits and should be 0
> > > +  - cell 2 contains the lower 32 address bits and is used to translate to the
> > > +    CPU address space
> > > +- #size-cells: Size representation for root ports (must be 2)
> > > +- ranges: Describes the translation of addresses for root ports and standard
> > > +  PCI regions. The entries must be 7 cells each, where the first three cells
> > > +  correspond to the address as described for the #address-cells property
> > > +  above, the fourth and fifth cells are for the physical CPU address to
> > > +  translate to and the sixth and seventh cells are as described for the
> > > +  #size-cells property above.
> > > +  - Entries setup the mapping for the standard I/O, memory and
> > > +    prefetchable PCI regions. The first cell determines the type of region
> > > +    that is setup:
> > > +    - 0x81000000: I/O memory region
> > > +    - 0x82000000: non-prefetchable memory region
> > > +    - 0xc2000000: prefetchable memory region
> > > +  Please refer to the standard PCI bus binding document for a more detailed
> > > +  explanation.
> > > +- #interrupt-cells: Size representation for interrupts (must be 1)
> > > +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
> > > +  Please refer to the standard PCI bus binding document for a more detailed
> > > +  explanation.
> > > +- clocks: Must contain an entry for each entry in clock-names.
> > > +  See ../clocks/clock-bindings.txt for details.
> > > +- clock-names: Must include the following entries:
> > > +  - core_clk
> > 
> > It's redundant to name a clock _clk. Is this already required by the
> > standard DesignWare bindings or is this new?
> This is a new entry and not a standard Designware binding. I'll remove _clk
> from the name in the next patch series.
> 
> > 
> > > +- resets: Must contain an entry for each entry in reset-names.
> > > +  See ../reset/reset.txt for details.
> > > +- reset-names: Must include the following entries:
> > > +  - core_apb_rst
> > > +  - core_rst
> > 
> > Same comment as for clock-names.
> I'll take care of it in the next patch series.
> 
> > 
> > > +- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
> > > +- phy-names: Must include an entry for each active lane.
> > > +  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes
> > 
> > I'd leave away the "pcie-" prefix since the surrounding context already
> > makes it clear that this is for PCIe.
> I'll take care of it in the next patch series.
> 
> > 
> > > +- Controller dependent register offsets
> > > +  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
> > > +      0x168 - FPGA
> > > +      0x1a8 - C1, C2 and C3
> > > +      0x1c4 - C4
> > > +      0x1d8 - C0 and C5
> > > +  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
> > > +      0x16c - FPGA
> > > +      0x1ac - C1, C2 and C3
> > > +      0x1c8 - C4
> > > +      0x1dc - C0 and C5
> > > +- nvidia,controller-id : Controller specific ID
> > > +      0x0 - C0
> > > +      0x1 - C1
> > > +      0x2 - C2
> > > +      0x3 - C3
> > > +      0x4 - C4
> > > +      0x5 - C5
> > 
> > It's redundant to have both a controller ID and parameterized register
> > offsets based on that controller ID. I would recommend keeping the
> > controller ID and then moving the register offsets to the driver and
> > decide based on the controller ID.
> Ok. I'll take care of it in the next patch series.
> 
> > 
> > > +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
> > > +
> > > +Optional properties:
> > > +- nvidia,max-speed: limits controllers max speed to this value.
> > > +    1 - Gen-1 (2.5 GT/s)
> > > +    2 - Gen-2 (5 GT/s)
> > > +    3 - Gen-3 (8 GT/s)
> > > +    4 - Gen-4 (16 GT/s)
> > > +- nvidia,init-speed: limits controllers init speed to this value.
> > > +    1 - Gen-1 (2.5 GT/s)
> > > +    2 - Gen-2 (5 GT/s)
> > > +    3 - Gen-3 (8 GT/s)
> > > +    4 - Gen-4 (16 GT/s)
> > > +- nvidia,disable-aspm-states : controls advertisement of ASPM states
> > > +    bit-0 to '1' : disables advertisement of ASPM-L0s
> > > +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
> > > +                 advertisement of ASPM-L1.1 and ASPM-L1.2
> > > +    bit-2 to '1' : disables advertisement of ASPM-L1.1
> > > +    bit-3 to '1' : disables advertisement of ASPM-L1.2
> > 
> > These seem more like configuration options rather than hardware
> > description.
> Yes. Since T194-based platforms like Jetson Xavier are going into the open
> market, we are providing these configuration options, and hence they are
> optional.

Under what circumstances would we want to disable certain ASPM states?
My understanding is that PCI device drivers can already disable
individual ASPM states if they don't support them, so why would we ever
want to disable advertisement of certain ASPM states?

> > > +- nvidia,disable-clock-request : gives a hint to driver that there is no
> > > +    CLKREQ signal routing on board

Sounds like this could be useful for designs other than Tegra, so maybe
remove the "nvidia," prefix? The name also doesn't match the description
very well. "disable" kind of implies that we want to disable this
feature despite it being available. However, what we really want to
express here is that there's no CLKREQ signal on a design at all. So
perhaps it would be better to invert this and add a property named
"supports-clock-request" on boards where we have a CLKREQ signal.

> > > +- nvidia,update-fc-fixup : needed to improve performance when a platform is designed
> > > +    in such a way that it satisfies at least one of the following conditions:
> > > +    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
> > > +    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
> > > +       a) speed is Gen-2 and MPS is 256B
> > > +       b) speed is >= Gen-3 with any MPS
> > 
> > If we know these conditions, can we not determine that the fixup is
> > needed at runtime?
> Not really. The programming that should take place based on these flags needs to
> happen before PCIe link-up, and if we were to determine these conditions at run
> time, we could do so only after the link is up. So, to avoid this chicken-and-egg
> situation, these are passed as DT options.

Might be worth explaining what FC is in this context. Also, perhaps
explain how and why setting this would improve performance. You're also
not explicit here what the type of the property is. From the context it
sounds like it's just a boolean, but you may want to spell that out.

> > > +- nvidia,cdm-check : Enables CDM checking. For more information, refer to the
> > > +    Synopsys DesignWare Cores PCI Express Controller Databook r4.90a Chapter S.4

If this is documented in the DesignWare documentation, why not make this
a generic property that applies to all DesignWare instantiations? Might
also be worth giving a one or two sentence description of what this is
so that people don't have to go look at the databook.

> > Why should this be configurable through device tree?
> This is a hardware feature for safety and can be enabled if required. So I made
> it an optional feature that can be controlled through DT.
> 
> > 
> > > +- nvidia,enable-power-down : Enables power down of respective controller and
> > > +    corresponding PLLs if they are not shared by any other entity
> > 
> > Wouldn't we want this to be the default? Why keep things powered up if
> > they are not needed?
> There could be platforms (automotive-based) where it is not required to power
> down the controllers, hence a flag is needed to control powering down of the
> controllers.

Is it harmful to power down the controllers on such platforms? It
strikes me as odd to leave something enabled if it isn't needed,
independent of the platform.

> > > +- "nvidia,pex-wake" : Add PEX_WAKE gpio number to provide wake support.
> > > +- "nvidia,plat-gpios" : Add gpio number that needs to be configured before
> > > +   system goes for enumeration. There could be platforms where enabling 3.3V and
> > > +   12V power supplies are done through GPIOs, in which case, list of all such
> > > +   GPIOs can be specified through this property.
> > 
> > For power supplies we usually use the regulator bindings. Are there any
> > other cases where we'd need this?
> Enabling power supplies is just one example, but there could be platforms where some
> GPIOs must be programmed (to configure muxes properly on the PCB, etc.) before
> enumeration. All such GPIOs can be passed through this DT option.

As explained in the other subthread, I think it's better to model these
properly to make sure we have the flexibility that we need. One mux may
be controlled by a GPIO, another may be connected to I2C.

> > > +- "nvidia,aspm-cmrt" : Common Mode Restore time for proper operation of ASPM to
> > > +   be specified in microseconds
> > > +- "nvidia,aspm-pwr-on-t" : Power On time for proper operation of ASPM to be
> > > +   specified in microseconds
> > > +- "nvidia,aspm-l0s-entrance-latency" : ASPM L0s entrance latency to be specified
> > > +   in microseconds
> > > +
> > > +Examples:
> > > +=========
> > > +
> > > +Tegra194:
> > > +--------
> > > +
> > > +SoC DTSI:
> > > +
> > > +	pcie@14180000 {
> > > +		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
> > 
> > It doesn't seem to me like claiming compatibility with "snps,dw-pcie" is
> > correct. There's a bunch of NVIDIA- or Tegra-specific properties below
> > and code in the driver. Would this device be able to function if no
> > driver was binding against the "nvidia,tegra194-pcie" compatible string?
> > Would it work if you left that out? I don't think so, so we should also
> > not list it here.
> It is required for the DesignWare-specific code to work properly. It is specified
> in the ../designware-pcie.txt file

That sounds like a bug to me. Why does the driver need that? I mean the
Tegra instantiation clearly isn't going to work if the driver matches on
that compatible string, so by definition it is not compatible.

Rob, was this intentional? Seems like all other users of the DesignWare
PCIe core use the same scheme, so perhaps I'm missing something?

> > > +		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
> > > +		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
> > > +		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
> > > +		       0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K)  */
> > > +		reg-names = "appl", "config", "atu_dma";
> > > +
> > > +		status = "disabled";
> > > +
> > > +		#address-cells = <3>;
> > > +		#size-cells = <2>;
> > > +		device_type = "pci";
> > > +		num-lanes = <8>;
> > > +		linux,pci-domain = <0>;
> > > +
> > > +		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
> > > +		clock-names = "core_clk";
> > > +
> > > +		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
> > > +			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
> > > +		reset-names = "core_apb_rst", "core_rst";
> > > +
> > > +		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,	/* controller interrupt */
> > > +			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;	/* MSI interrupt */
> > > +		interrupt-names = "intr", "msi";
> > > +
> > > +		#interrupt-cells = <1>;
> > > +		interrupt-map-mask = <0 0 0 0>;
> > > +		interrupt-map = <0 0 0 0 &gic 0 72 0x04>;
> > > +
> > > +		nvidia,bpmp = <&bpmp>;
> > > +
> > > +		nvidia,max-speed = <4>;
> > > +		nvidia,disable-aspm-states = <0xf>;
> > > +		nvidia,controller-id = <&bpmp 0x0>;
> > 
> > Why is there a reference to the BPMP in this property?
> Ultimately the controller ID is passed to the BPMP firmware, and that call requires a
> BPMP handle, which is derived from this BPMP phandle.

The binding doesn't say that the nvidia,controller-id is a (phandle, ID)
pair. Also, you already have the nvidia,bpmp property that contains the
phandle, although you don't describe that property in the binding above.
I think you need to either get rid of the nvidia,bpmp property or drop
the &bpmp phandle from the nvidia,controller-id property.

My preference is the latter because the controller ID is really
independent of the BPMP firmware, even if it may be used as part of a
call to the BPMP firmware.
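Following that preference, the result would look something like this (a sketch, not the final binding):

```dts
pcie@14180000 {
	/* ... */
	/* BPMP phandle kept in its own property */
	nvidia,bpmp = <&bpmp>;

	/* Plain controller ID with no phandle: 0x0 = C0 ... 0x5 = C5 */
	nvidia,controller-id = <0x0>;
};
```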

> > > +		nvidia,aux-clk-freq = <0x13>;
> > > +		nvidia,preset-init = <0x5>;
> > 
> > aux-clk-freq and preset-init are not defined in the binding above.
> Ok. I'll take care of it in the next patch series
> 
> > 
> > > +		nvidia,aspm-cmrt = <0x3C>;
> > > +		nvidia,aspm-pwr-on-t = <0x14>;
> > > +		nvidia,aspm-l0s-entrance-latency = <0x3>;
> > 
> > These should be in decimal notation to make them easier to deal with. I
> > don't usually read time in hexadecimal.
> Ok. I'll take care of it in the next patch series
> 
> > 
> > > +
> > > +		bus-range = <0x0 0xff>;
> > > +		ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000      /* downstream I/O (1MB) */
> > > +			  0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000      /* non-prefetchable memory (30MB) */
> > > +			  0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>;  /* prefetchable memory (16GB) */
> > > +
> > > +		nvidia,cfg-link-cap-l1sub = <0x1c4>;
> > > +		nvidia,cap-pl16g-status = <0x174>;
> > > +		nvidia,cap-pl16g-cap-off = <0x188>;
> > > +		nvidia,event-cntr-ctrl = <0x1d8>;
> > > +		nvidia,event-cntr-data = <0x1dc>;
> > > +		nvidia,dl-feature-cap = <0x30c>;
> > 
> > These are not defined in the binding above.
> Ok. I'll take care of it in the next patch series
> 
> > 
> > > +	};
> > > +
> > > +Board DTS:
> > > +
> > > +	pcie@14180000 {
> > > +		status = "okay";
> > > +
> > > +		vddio-pex-ctl-supply = <&vdd_1v8ao>;
> > > +
> > > +		phys = <&p2u_2>,
> > > +		       <&p2u_3>,
> > > +		       <&p2u_4>,
> > > +		       <&p2u_5>;
> > > +		phy-names = "pcie-p2u-0", "pcie-p2u-1", "pcie-p2u-2",
> > > +			    "pcie-p2u-3";
> > > +	};
> > > diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> > 
> > Might be better to split this into a separate patch.
> Done.
> 
> > 
> > > new file mode 100644
> > > index 000000000000..cc0de8e8e8db
> > > --- /dev/null
> > > +++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> > > @@ -0,0 +1,34 @@
> > > +NVIDIA Tegra194 P2U binding
> > > +
> > > +Tegra194 has two PHY bricks namely HSIO (High Speed IO) and NVHS (NVIDIA High
> > > +Speed) each interfacing with 12 and 8 P2U instances respectively.
> > > +A P2U instance is a glue logic between Synopsys DesignWare Core PCIe IP's PIPE
> > > +interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
> > > +lane.
> > > +
> > > +Required properties:
> > > +- compatible: For Tegra19x, must contain "nvidia,tegra194-phy-p2u".
> > 
> > Isn't the "phy-" implied by "p2u"? The name of the hardware block is
> > "Tegra194 P2U", so that "phy-" seems gratuitous to me.
> Done.
> 
> > 
> > > +- reg: Should be the physical address space and length of respective each P2U
> > > +       instance.
> > > +- reg-names: Must include the entry "base".
> > 
> > "base" is a bad name. Each of these entries will be a "base" of the
> > given region. The name should specify what region it is the base of.
> I'll change it to "reg_base"

Each of these entries will contain a "base" address for "registers" of
some sort. I'm thinking more along the lines of "ctl" if they are
control registers for the P2U, or perhaps just "p2u" if there is no
better name.
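A sketch of the P2U node with that naming applied ("ctl" is one of the proposed names, not a settled choice):

```dts
p2u_0: p2u@3e10000 {
	compatible = "nvidia,tegra194-p2u";
	reg = <0x0 0x03e10000 0x0 0x00010000>;
	/* "ctl": control registers of this P2U instance */
	reg-names = "ctl";

	#phy-cells = <0>;
};
```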

Thierry

> > > +Required properties for PHY port node:
> > > +- #phy-cells: Defined by generic PHY bindings.  Must be 0.
> > > +
> > > +Refer to phy/phy-bindings.txt for the generic PHY binding properties.
> > > +
> > > +Example:
> > > +
> > > +hsio-p2u {
> > > +	compatible = "simple-bus";
> > > +	#address-cells = <2>;
> > > +	#size-cells = <2>;
> > > +	ranges;
> > > +	p2u_0: p2u@03e10000 {
> > > +		compatible = "nvidia,tegra194-phy-p2u";
> > > +		reg = <0x0 0x03e10000 0x0 0x00010000>;
> > > +		reg-names = "base";
> > > +
> > > +		#phy-cells = <0>;
> > > +	};
> > > +}
> > > -- 
> > > 2.7.4
>
Vidya Sagar April 2, 2019, 9:16 a.m. UTC | #9
On 4/1/2019 8:01 PM, Thierry Reding wrote:
> On Mon, Apr 01, 2019 at 04:48:42PM +0530, Vidya Sagar wrote:
>> On 3/31/2019 12:12 PM, Rob Herring wrote:
>>> On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
>>>> Add support for Tegra194 PCIe controllers. These controllers are based
>>>> on Synopsys DesignWare core IP.
>>>>
>>>> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
>>>> ---
>>>>    .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
>>>>    .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
>>>>    2 files changed, 243 insertions(+)
>>>>    create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>>>    create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>>>>
>>>> diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>>> new file mode 100644
>>>> index 000000000000..31527283a0cd
>>>> --- /dev/null
>>>> +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>>> @@ -0,0 +1,209 @@
>>>> +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
>>>> +
>>>> +This PCIe host controller is based on the Synopsis Designware PCIe IP
>>>> +and thus inherits all the common properties defined in designware-pcie.txt.
>>>> +
>>>> +Required properties:
>>>> +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
>>>> +- device_type: Must be "pci"
>>>> +- reg: A list of physical base address and length for each set of controller
>>>> +  registers. Must contain an entry for each entry in the reg-names property.
>>>> +- reg-names: Must include the following entries:
>>>> +  "appl": Controller's application logic registers
>>>> +  "window1": This is the aperture of controller available under 4GB boundary
>>>> +             (i.e. within 32-bit space). This aperture is typically used for
>>>> +             accessing config space of root port itself and also the connected
>>>> +             endpoints (by appropriately programming internal Address
>>>> +             Translation Unit's (iATU) out bound region) and also to map
>>>> +             prefetchable/non-prefetchable BARs.
>>>
>>> This is usually represented in 'ranges' for BARs.
>> I added window1 and window2 as the umbrella apertures that each PCIe controller has
>> and gave a description of how each aperture *can* be used. This is an overview and as
>> such these two entries are not directly used by the driver.
>> 'ranges' property gives us the information as to how exactly are window1 and window2
>> apertures used.
>> Thierry Reding also gave a few comments about these entries. If this is confusing,
>> I'm ok to remove them as well. Please let me know.
> 
> If all you want to do is document how these are used, it may be better
> to enhance the device tree bindings for the ranges property if it does
> not describe this fully enough yet, or add comments in the DT nodes to
> clarify.
It looks like having window1 and window2 is causing confusion here. I'll remove
them in my next patch.

> 
>>>> +  "config": As per the definition in designware-pcie.txt
>>>> +  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
>>>> +             Translation Unit) registers of the PCIe core are made available
>>>> +             for SW access.
>>>> +  "dbi": The aperture where root port's own configuration registers are
>>>> +         available
>>>> +  "window2": This is the larger (compared to window1) aperture available above
>>>> +             4GB boundary (i.e. in 64-bit space). This is typically used for
>>>> +             mapping prefetchable/non-prefetchable BARs of endpoints
>>>> +- interrupts: A list of interrupt outputs of the controller. Must contain an
>>>> +  entry for each entry in the interrupt-names property.
>>>> +- interrupt-names: Must include the following entries:
>>>> +  "intr": The Tegra interrupt that is asserted for controller interrupts
>>>> +  "msi": The Tegra interrupt that is asserted when an MSI is received
>>>> +- bus-range: Range of bus numbers associated with this controller
>>>> +- #address-cells: Address representation for root ports (must be 3)
>>>> +  - cell 0 specifies the bus and device numbers of the root port:
>>>> +    [23:16]: bus number
>>>> +    [15:11]: device number
>>>> +  - cell 1 denotes the upper 32 address bits and should be 0
>>>> +  - cell 2 contains the lower 32 address bits and is used to translate to the
>>>> +    CPU address space
>>>> +- #size-cells: Size representation for root ports (must be 2)
>>>> +- ranges: Describes the translation of addresses for root ports and standard
>>>> +  PCI regions. The entries must be 7 cells each, where the first three cells
>>>> +  correspond to the address as described for the #address-cells property
>>>> +  above, the fourth and fifth cells are for the physical CPU address to
>>>> +  translate to and the sixth and seventh cells are as described for the
>>>> +  #size-cells property above.
>>>> +  - Entries setup the mapping for the standard I/O, memory and
>>>> +    prefetchable PCI regions. The first cell determines the type of region
>>>> +    that is setup:
>>>> +    - 0x81000000: I/O memory region
>>>> +    - 0x82000000: non-prefetchable memory region
>>>> +    - 0xc2000000: prefetchable memory region
>>>> +  Please refer to the standard PCI bus binding document for a more detailed
>>>> +  explanation.
>>>> +- #interrupt-cells: Size representation for interrupts (must be 1)
>>>> +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
>>>> +  Please refer to the standard PCI bus binding document for a more detailed
>>>> +  explanation.
>>>> +- clocks: Must contain an entry for each entry in clock-names.
>>>> +  See ../clocks/clock-bindings.txt for details.
>>>> +- clock-names: Must include the following entries:
>>>> +  - core_clk
>>>> +- resets: Must contain an entry for each entry in reset-names.
>>>> +  See ../reset/reset.txt for details.
>>>> +- reset-names: Must include the following entries:
>>>> +  - core_apb_rst
>>>> +  - core_rst
>>>> +- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
>>>> +- phy-names: Must include an entry for each active lane.
>>>> +  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes
>>>> +- Controller dependent register offsets
>>>> +  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
>>>> +      0x168 - FPGA
>>>> +      0x1a8 - C1, C2 and C3
>>>> +      0x1c4 - C4
>>>> +      0x1d8 - C0 and C5
>>>> +  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
>>>> +      0x16c - FPGA
>>>> +      0x1ac - C1, C2 and C3
>>>> +      0x1c8 - C4
>>>> +      0x1dc - C0 and C5
>>>> +- nvidia,controller-id : Controller specific ID
>>>> +      0x0 - C0
>>>> +      0x1 - C1
>>>> +      0x2 - C2
>>>> +      0x3 - C3
>>>> +      0x4 - C4
>>>> +      0x5 - C5
>>>> +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
>>>> +
>>>> +Optional properties:
>>>> +- nvidia,max-speed: limits controllers max speed to this value.
>>>> +    1 - Gen-1 (2.5 GT/s)
>>>> +    2 - Gen-2 (5 GT/s)
>>>> +    3 - Gen-3 (8 GT/s)
>>>> +    4 - Gen-4 (16 GT/s)
>>>> +- nvidia,init-speed: limits controllers init speed to this value.
>>>> +    1 - Gen-1 (2. 5 GT/s)
>>>> +    2 - Gen-2 (5 GT/s)
>>>> +    3 - Gen-3 (8 GT/s)
>>>> +    4 - Gen-4 (16 GT/s)
>>>
>>> Don't we have standard speed properties?
>> Not that I'm aware of. If you come across any, please do let me know and
>> I'll change these.
> 
> See Documentation/devicetree/bindings/pci/pci.txt, max-link-speed.
> There's a standard of_pci_get_max_link_speed() property that reads it
> from device tree.
Thanks for the pointer. I'll switch to this in the next patch.
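With the standard property from Documentation/devicetree/bindings/pci/pci.txt, the node would carry something like this (sketch; the value matches the example above):

```dts
pcie@14180000 {
	/* ... */
	/* Standard property, parsed by of_pci_get_max_link_speed():
	 * 4 = Gen-4 (16 GT/s)
	 */
	max-link-speed = <4>;
};
```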

> 
>>> Why do we need 2 values?
>> max-speed configures the controller to advertise the speed specified through
>> this flag, whereas init-speed brings the link up at this speed; software
>> can then take the link to a different speed by retraining the link.
>> This is to give flexibility to developers depending on the platform.
> 
> This seems to me like overcomplicating things. Couldn't we do something
> like start in the slowest mode by default and then upgrade if endpoints
> support higher speeds?
> 
> I'm assuming that the maximum speed is already fixed by the IP hardware
> instantiation, so why would we want to limit it additionally? Similarly,
> what's the use-case for setting the initial link speed to something
> other than the lowest speed?
You are right that the maximum speed supported by hardware is fixed, and the
max-link-speed DT option gives the flexibility to limit a controller to the desired
speed on a given platform. As mentioned in the documentation for max-link-speed,
this is a strategy to avoid unnecessary operation at unsupported link speeds.
There is no real use-case as such for setting the initial link speed, but it is
there to give flexibility (for debugging) to bring the link up at a certain speed
and then take it to a higher speed at a later point in time. Please note that the
hardware already has the capability to take the link to the maximum speed agreed
on by both upstream and downstream ports. 'nvidia,init-speed' is only to give more
flexibility while debugging. I'm OK to remove it if it is not adding much value here.

> 
>>>> +- nvidia,disable-aspm-states : controls advertisement of ASPM states
>>>> +    bit-0 to '1' : disables advertisement of ASPM-L0s
>>>> +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
>>>> +                 advertisement of ASPM-L1.1 and ASPM-L1.2
>>>> +    bit-2 to '1' : disables advertisement of ASPM-L1.1
>>>> +    bit-3 to '1' : disables advertisement of ASPM-L1.2
>>>
>>> Seems like these too should be common.
>> This flag controls the advertisement of different ASPM states by root port.
>> Again, I'm not aware of any common method for this.
> 
> rockchip-pcie-host.txt documents an "aspm-no-l0s" property that prevents
> the root complex from advertising L0s. Sounds like maybe following a
> similar scheme would be best for consistency. I think we'll also want
> these to be non-NVIDIA specific, so drop the "nvidia," prefix and maybe
> document them in pci.txt so that they can be more broadly used.
Since we have the ASPM L0s, L1, L1.1 and L1.2 states, I prefer to have just one entry
with different bit positions indicating which particular state should not be
advertised by the root port. Do you see any particular advantage in having four
different options? If having one option is fine, I'll remove "nvidia," and document
it in pci.txt.
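For comparison, the two forms under discussion would look like this (both are sketches; the per-state boolean names other than aspm-no-l0s are hypothetical extensions of the rockchip style, not existing properties):

```dts
/* Option A: one bitmask property, as in this patch.
 * bit 0 = L0s, bit 1 = L1 (implies L1.1/L1.2), bit 2 = L1.1, bit 3 = L1.2.
 * 0xf disables advertisement of all four states.
 */
pcie@14180000 {
	nvidia,disable-aspm-states = <0xf>;
};

/* Option B: generic per-state booleans (no vendor prefix),
 * following the rockchip "aspm-no-l0s" naming scheme.
 */
pcie@14180000 {
	aspm-no-l0s;
	aspm-no-l1;	/* hypothetical companion property */
};
```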

> 
>>>> +- nvidia,disable-clock-request : gives a hint to driver that there is no
>>>> +    CLKREQ signal routing on board
>>>> +- nvidia,update-fc-fixup : needs it to improve perf when a platform is designed
>>>> +    in such a way that it satisfies at least one of the following conditions
>>>> +    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
>>>> +    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
>>>
>>> What is Cx?
>> Cx refers to the controller with ID x.
>>
>>>
>>>> +       a) speed is Gen-2 and MPS is 256B
>>>> +       b) speed is >= Gen-3 with any MPS
>>>> +- nvidia,cdm-check : Enables CDM checking. For more information, refer to the
>>>> +    Synopsys DesignWare Cores PCI Express Controller Databook r4.90a Chapter S.4
>>>> +- nvidia,enable-power-down : Enables power down of respective controller and
>>>> +    corresponding PLLs if they are not shared by any other entity
>>>> +- "nvidia,pex-wake" : Add PEX_WAKE gpio number to provide wake support.
>>>> +- "nvidia,plat-gpios" : Add gpio number that needs to be configured before
>>>> +   system goes for enumeration. There could be platforms where enabling 3.3V and
>>>> +   12V power supplies are done through GPIOs, in which case, list of all such
>>>> +   GPIOs can be specified through this property.
>>>
>>> These should be split out to their specific function.
>> Enabling power rails is just one example; depending on the platform, there could be
>> on-board muxes controlled through GPIOs, and all such platform-specific
>> configuration can be handled through this flag.
> 
> Doing this via a "generic" GPIO binding is bound to break at some point.
> What if at some point one of those muxes needs additional power, perhaps
> controlled through an I2C regulator? Or if the mux requires multiple
> GPIOs for the correct configuration? How do you allow the mux to be
> reconfigured to one of the other options?
> 
> If all you have is a generic GPIO consumer of GPIOs there's not enough
> context for the driver to do anything with these GPIOs other than
> perhaps setting them to a specific value at probe time. In that case you
> could achieve the same thing using a gpio-hog.
> 
> I think we need to identify what the various uses for these can be and
> then find the right bindings (or come up with new ones) to properly
> describe the actual hardware. Otherwise we're going to paint ourselves
> into a corner.
> 
> Are there any use-cases besides regulators and muxes? For regulators we
> already have fixed and GPIO regulators, and for muxes there are a number
> of bindings defined in Documentation/devicetree/bindings/mux.
We don't have any other use cases apart from regulators and muxes, and I agree with
your comment that we should use the regulator/mux frameworks rather than this generic
GPIO configuration. I'll make changes for this in the next patch series.

> 
> Thierry
>
Vidya Sagar April 2, 2019, 11:41 a.m. UTC | #10
On 4/1/2019 8:37 PM, Thierry Reding wrote:
> On Mon, Apr 01, 2019 at 03:31:54PM +0530, Vidya Sagar wrote:
>> On 3/28/2019 6:45 PM, Thierry Reding wrote:
>>> On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
>>>> Add support for Tegra194 PCIe controllers. These controllers are based
>>>> on Synopsys DesignWare core IP.
>>>>
>>>> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
>>>> ---
>>>>    .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
>>>>    .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
>>>>    2 files changed, 243 insertions(+)
>>>>    create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>>>    create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>>>>
>>>> diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>>> new file mode 100644
>>>> index 000000000000..31527283a0cd
>>>> --- /dev/null
>>>> +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>>> @@ -0,0 +1,209 @@
>>>> +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
>>>> +
>>>> +This PCIe host controller is based on the Synopsis Designware PCIe IP
>>>> +and thus inherits all the common properties defined in designware-pcie.txt.
>>>> +
>>>> +Required properties:
>>>> +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
>>>> +- device_type: Must be "pci"
>>>> +- reg: A list of physical base address and length for each set of controller
>>>> +  registers. Must contain an entry for each entry in the reg-names property.
>>>> +- reg-names: Must include the following entries:
>>>> +  "appl": Controller's application logic registers
>>>> +  "window1": This is the aperture of controller available under 4GB boundary
>>>> +             (i.e. within 32-bit space). This aperture is typically used for
>>>> +             accessing config space of root port itself and also the connected
>>>> +             endpoints (by appropriately programming internal Address
>>>> +             Translation Unit's (iATU) out bound region) and also to map
>>>> +             prefetchable/non-prefetchable BARs.
>>>> +  "config": As per the definition in designware-pcie.txt
>>>
>>> I see that you set this to a 256 KiB region for all controllers. Since
>>> each function can have up to 4 KiB of extended configuration space, that
>>> means you have space to address:
>>>
>>>       256 KiB = 4 KiB * 8 functions * 8 devices
>>>
>>> Each bus can have up to 32 devices (including the root port) and there
>>> can be 256 busses, so I wonder how this is supposed to work. How does
>>> the mapping work for configuration space? Does the controller allow
>>> moving this 256 KiB window around so that more devices' configuration
>>> space can be accessed?
>> We are not using ECAM here; instead, we pick a 4 KB region from this 256 KB region
>> and program the iATU (internal Address Translation Unit) of the PCIe IP with the B:D:F
>> of the configuration space of interest so that the respective config space becomes
>> visible in that 4 KB window. It is a hardware requirement to reserve 256 KB of space
>> (though we use only 4 KB to access the configuration space of any downstream B:D:F)
> 
> Okay, sounds good. I'm wondering if we should maybe note here that
> window1 needs to be a 256 KiB window if that's what the hardware
> requires.
I'll be removing window1 and window2 as they seem to cause unnecessary confusion

> 
>>>> +  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
>>>> +             Translation Unit) registers of the PCIe core are made available
>>>> +             for SW access.
>>>> +  "dbi": The aperture where root port's own configuration registers are
>>>> +         available
>>>
>>> This is slightly confusing because you already said in the description
>>> of "window1" that it is used to access the configuration space of the
>>> root port itself.
>>>
>>> Is the root port configuration space available via the regular
>>> configuration space registers?
>> Root port configuration space is hidden by default and the 'dbi' property tells us
>> where we would like to *view* it. For this, we use a portion of the window1 aperture
>> as the 'dbi' base to *view* the config space of the root port.
>> Basically, window1 and window2 are the umbrella entries (which I added based on a
>> suggestion from Stephen Warren <swarren@nvidia.com>) to give a complete picture of the
>> number of apertures available and what they are used for. Windows 1 & 2 as such
>> are not used by the driver directly.
> 
> So I'm not exactly sure I understand how this works. Does the "dbi"
> entry contain a physical address and size of the aperture that we want
> to map into a subregion of "window-1"? Is this part of a region where
> similar subregions exist for all of the controllers? Could the offset
> into such a region be derived from the controller ID?
The DBI region is not available to SW immediately after power-on. The address where we
would like to see 'dbi' needs to be programmed into one of the APPL registers. Since
window1 is one of the apertures (under the 4 GB boundary) available to each controller
(one window1 aperture per controller), we reserve some portion of window1 to view the
DBI registers. Provided 'window1' is available in DT, 'dbi' can also be derived at run
time. I added it explicitly to give more clarity on where it is being reserved (just
like how window2 aperture usage is explicitly described through 'ranges'). If the
correct approach is to have only 'window1' and derive 'dbi' in the code, I'll change
it to work that way. Please let me know.
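If window1/window2 are dropped as discussed, the reg list would shrink to the regions the driver actually uses, with 'dbi' listed explicitly. A sketch based on the addresses from the example above (the DBI base address here is invented for illustration, since it is carved out of the old window1 at a platform-chosen offset):

```dts
pcie@14180000 {
	reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
	       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
	       0x00 0x38040000 0x0 0x00040000   /* iATU_DMA reg space (256K)  */
	       0x00 0x38080000 0x0 0x00040000>; /* DBI reg space (illustrative address) */
	reg-names = "appl", "config", "atu_dma", "dbi";
};
```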

> 
>>>> +  "window2": This is the larger (compared to window1) aperture available above
>>>> +             4GB boundary (i.e. in 64-bit space). This is typically used for
>>>> +             mapping prefetchable/non-prefetchable BARs of endpoints
>>>> +- interrupts: A list of interrupt outputs of the controller. Must contain an
>>>> +  entry for each entry in the interrupt-names property.
>>>> +- interrupt-names: Must include the following entries:
>>>> +  "intr": The Tegra interrupt that is asserted for controller interrupts
>>>> +  "msi": The Tegra interrupt that is asserted when an MSI is received
>>>> +- bus-range: Range of bus numbers associated with this controller
>>>> +- #address-cells: Address representation for root ports (must be 3)
>>>> +  - cell 0 specifies the bus and device numbers of the root port:
>>>> +    [23:16]: bus number
>>>> +    [15:11]: device number
>>>> +  - cell 1 denotes the upper 32 address bits and should be 0
>>>> +  - cell 2 contains the lower 32 address bits and is used to translate to the
>>>> +    CPU address space
>>>> +- #size-cells: Size representation for root ports (must be 2)
>>>> +- ranges: Describes the translation of addresses for root ports and standard
>>>> +  PCI regions. The entries must be 7 cells each, where the first three cells
>>>> +  correspond to the address as described for the #address-cells property
>>>> +  above, the fourth and fifth cells are for the physical CPU address to
>>>> +  translate to and the sixth and seventh cells are as described for the
>>>> +  #size-cells property above.
>>>> +  - Entries setup the mapping for the standard I/O, memory and
>>>> +    prefetchable PCI regions. The first cell determines the type of region
>>>> +    that is setup:
>>>> +    - 0x81000000: I/O memory region
>>>> +    - 0x82000000: non-prefetchable memory region
>>>> +    - 0xc2000000: prefetchable memory region
>>>> +  Please refer to the standard PCI bus binding document for a more detailed
>>>> +  explanation.
>>>> +- #interrupt-cells: Size representation for interrupts (must be 1)
>>>> +- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
>>>> +  Please refer to the standard PCI bus binding document for a more detailed
>>>> +  explanation.
>>>> +- clocks: Must contain an entry for each entry in clock-names.
>>>> +  See ../clocks/clock-bindings.txt for details.
>>>> +- clock-names: Must include the following entries:
>>>> +  - core_clk
>>>
>>> It's redundant to name a clock _clk. Is this already required by the
>>> standard Designware bindings or is this new?
>> This is a new entry, not a standard DesignWare binding. I'll remove _clk
>> from the name in the next patch series.
>>
>>>
>>>> +- resets: Must contain an entry for each entry in reset-names.
>>>> +  See ../reset/reset.txt for details.
>>>> +- reset-names: Must include the following entries:
>>>> +  - core_apb_rst
>>>> +  - core_rst
>>>
>>> Same comment as for clock-names.
>> I'll take care of it in the next patch series
>>
>>>
>>>> +- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
>>>> +- phy-names: Must include an entry for each active lane.
>>>> +  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes
>>>
>>> I'd leave away the "pcie-" prefix since the surrounding context already
>>> makes it clear that this is for PCIe.
>> I'll take care of it in the next patch series
>>
>>>
>>>> +- Controller dependent register offsets
>>>> +  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
>>>> +      0x168 - FPGA
>>>> +      0x1a8 - C1, C2 and C3
>>>> +      0x1c4 - C4
>>>> +      0x1d8 - C0 and C5
>>>> +  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
>>>> +      0x16c - FPGA
>>>> +      0x1ac - C1, C2 and C3
>>>> +      0x1c8 - C4
>>>> +      0x1dc - C0 and C5
>>>> +- nvidia,controller-id : Controller specific ID
>>>> +      0x0 - C0
>>>> +      0x1 - C1
>>>> +      0x2 - C2
>>>> +      0x3 - C3
>>>> +      0x4 - C4
>>>> +      0x5 - C5
>>>
>>> It's redundant to have both a controller ID and parameterized register
>>> offsets based on that controller ID. I would recommend keeping the
>>> controller ID and then moving the register offsets to the driver and
>>> decide based on the controller ID.
>> Ok. I'll take care of it in the next patch series
>>
>>>
>>>> +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
>>>> +
>>>> +Optional properties:
>>>> +- nvidia,max-speed: limits controllers max speed to this value.
>>>> +    1 - Gen-1 (2.5 GT/s)
>>>> +    2 - Gen-2 (5 GT/s)
>>>> +    3 - Gen-3 (8 GT/s)
>>>> +    4 - Gen-4 (16 GT/s)
>>>> +- nvidia,init-speed: limits controllers init speed to this value.
>>>> +    1 - Gen-1 (2.5 GT/s)
>>>> +    2 - Gen-2 (5 GT/s)
>>>> +    3 - Gen-3 (8 GT/s)
>>>> +    4 - Gen-4 (16 GT/s)
>>>> +- nvidia,disable-aspm-states : controls advertisement of ASPM states
>>>> +    bit-0 to '1' : disables advertisement of ASPM-L0s
>>>> +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
>>>> +                 advertisement of ASPM-L1.1 and ASPM-L1.2
>>>> +    bit-2 to '1' : disables advertisement of ASPM-L1.1
>>>> +    bit-3 to '1' : disables advertisement of ASPM-L1.2
>>>
>>> These seem more like configuration options rather than hardware
>>> description.
>> Yes. Since platforms like Jetson Xavier based on T194 are going to go into the
>> open market, we are providing these configuration options; hence they are
>> optional.
> 
> Under what circumstances would we want to disable certain ASPM states?
> My understanding is that PCI device drivers can already disable
> individual ASPM states if they don't support them, so why would we ever
> want to disable advertisement of certain ASPM states?
Well, this is provided to give more flexibility while debugging. Given that there is going
to be only one config for different platforms in the future, ASPM config may be enabled by
default, and having this DT option would give more controlled enablement of ASPM states by
controlling which states the root port advertises.
  
> 
>>>> +- nvidia,disable-clock-request : gives a hint to driver that there is no
>>>> +    CLKREQ signal routing on board
> 
> Sounds like this could be useful for designs other than Tegra, so maybe
> remove the "nvidia," prefix? The name also doesn't match the description
> very well. "disable" kind of implies that we want to disable this
> feature despite it being available. However, what we really want to
> express here is that there's no CLKREQ signal on a design at all. So
> perhaps it would be better to invert this and add a property named
> "supports-clock-request" on boards where we have a CLKREQ signal.
Done. I'll add this to pci.txt documentation.

> 
>>>> +- nvidia,update-fc-fixup : needs it to improve perf when a platform is designed
>>>> +    in such a way that it satisfies at least one of the following conditions
>>>> +    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
>>>> +    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
>>>> +       a) speed is Gen-2 and MPS is 256B
>>>> +       b) speed is >= Gen-3 with any MPS
>>>
>>> If we know these conditions, can we not determine that the fixup is
>>> needed at runtime?
>> Not really. The programming that should take place based on these flags needs to
>> happen before the PCIe link is up, and if we were to determine these conditions at
>> run time, we could do that only after the link is up. So, to avoid this
>> chicken-and-egg situation, they are passed as DT options.
> 
> Might be worth explaining what FC is in this context. Also, perhaps
> explain how and why setting this would improve performance. You're also
> not explicit here what the type of the property is. From the context it
> sounds like it's just a boolean, but you may want to spell that out.
Done.

> 
>>>> +- nvidia,cdm-check : Enables CDM checking. For more information, refer to the Synopsys
>>>> +    DesignWare Cores PCI Express Controller Databook r4.90a Chapter S.4
> 
> If this is documented in the DesignWare documentation, why not make this
> a generic property that applies to all DesignWare instantiations? Might
> also be worth giving a one or two sentence description of what this is
> so that people don't have to go look at the databook.
Done.

> 
>>> Why should this be configurable through device tree?
>> This is a hardware feature for safety and can be enabled if required. So, I made it
>> as an optional feature that can be controlled through DT.
>>
>>>
>>>> +- nvidia,enable-power-down : Enables power down of respective controller and
>>>> +    corresponding PLLs if they are not shared by any other entity
>>>
>>> Wouldn't we want this to be the default? Why keep things powered up if
>>> they are not needed?
>> There could be platforms (automotive based) where it is not required to power down
>> controllers, hence a flag is needed to control powering down of controllers.
> 
> Is it harmful to power down the controllers on such platforms? It
> strikes me as odd to leave something enabled if it isn't needed,
> independent of the platform.
It is not harmful as such; this is just for flexibility. Also, this might be required for
the hot-plug feature.
Are you saying that we should have controllers powered down by default and a flag
to stop that from happening? i.e. something like 'nvidia,disable-power-down'?

> 
>>>> +- "nvidia,pex-wake" : Add PEX_WAKE gpio number to provide wake support.
>>>> +- "nvidia,plat-gpios" : Add gpio number that needs to be configured before
>>>> +   system goes for enumeration. There could be platforms where enabling 3.3V and
>>>> +   12V power supplies are done through GPIOs, in which case, list of all such
>>>> +   GPIOs can be specified through this property.
>>>
>>> For power supplies we usually use the regulator bindings. Are there any
>>> other cases where we'd need this?
>> Enabling power supplies is just one example, but there could be platforms where
>> programming of some GPIOs should happen (to configure muxes properly on PCB etc...)
>> before going for enumeration. All such GPIOs can be passed through this DT option.
> 
> As explained in the other subthread, I think it's better to model these
> properly to make sure we have the flexibility that we need. One mux may
> be controlled by a GPIO, another may be connected to I2C.
Done. I'll take care of this in the next patch series.

> 
>>>> +- "nvidia,aspm-cmrt" : Common Mode Restore time for proper operation of ASPM to
>>>> +   be specified in microseconds
>>>> +- "nvidia,aspm-pwr-on-t" : Power On time for proper operation of ASPM to be
>>>> +   specified in microseconds
>>>> +- "nvidia,aspm-l0s-entrance-latency" : ASPM L0s entrance latency to be specified
>>>> +   in microseconds
>>>> +
>>>> +Examples:
>>>> +=========
>>>> +
>>>> +Tegra194:
>>>> +--------
>>>> +
>>>> +SoC DTSI:
>>>> +
>>>> +	pcie@14180000 {
>>>> +		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
>>>
>>> It doesn't seem to me like claiming compatibility with "snps,dw-pcie" is
>>> correct. There's a bunch of NVIDIA- or Tegra-specific properties below
>>> and code in the driver. Would this device be able to function if no
>>> driver was binding against the "nvidia,tegra194-pcie" compatible string?
>>> Would it work if you left that out? I don't think so, so we should also
>>> not list it here.
>> It is required for the DesignWare-specific code to work properly. It is specified
>> in the ../designware-pcie.txt file.
> 
> That sounds like a bug to me. Why does the driver need that? I mean the
> Tegra instantiation clearly isn't going to work if the driver matches on
> that compatible string, so by definition it is not compatible.
> 
> Rob, was this intentional? Seems like all other users of the DesignWare
> PCIe core use the same scheme, so perhaps I'm missing something?
This is the standard usage procedure across all DesignWare-based implementations.
Probably Rob can give more info on this.

> 
>>>> +		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
>>>> +		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
>>>> +		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
>>>> +		       0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K)  */
>>>> +		reg-names = "appl", "config", "atu_dma";
>>>> +
>>>> +		status = "disabled";
>>>> +
>>>> +		#address-cells = <3>;
>>>> +		#size-cells = <2>;
>>>> +		device_type = "pci";
>>>> +		num-lanes = <8>;
>>>> +		linux,pci-domain = <0>;
>>>> +
>>>> +		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
>>>> +		clock-names = "core_clk";
>>>> +
>>>> +		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
>>>> +			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
>>>> +		reset-names = "core_apb_rst", "core_rst";
>>>> +
>>>> +		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,	/* controller interrupt */
>>>> +			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;	/* MSI interrupt */
>>>> +		interrupt-names = "intr", "msi";
>>>> +
>>>> +		#interrupt-cells = <1>;
>>>> +		interrupt-map-mask = <0 0 0 0>;
>>>> +		interrupt-map = <0 0 0 0 &gic 0 72 0x04>;
>>>> +
>>>> +		nvidia,bpmp = <&bpmp>;
>>>> +
>>>> +		nvidia,max-speed = <4>;
>>>> +		nvidia,disable-aspm-states = <0xf>;
>>>> +		nvidia,controller-id = <&bpmp 0x0>;
>>>
>>> Why is there a reference to the BPMP in this property?
>> Ultimately the controller ID is passed to the BPMP firmware, and a BPMP handle is
>> required for that, which gets derived from this BPMP phandle.
> 
> The binding doesn't say that the nvidia,controller-id is a (phandle, ID)
> pair. Also, you already have the nvidia,bpmp property that contains the
> phandle, although you don't describe that property in the binding above.
> I think you need to either get rid of the nvidia,bpmp property or drop
> the &bpmp phandle from the nvidia,controller-id property.
> 
> My preference is the latter because the controller ID is really
> independent of the BPMP firmware, even if it may be used as part of a
> call to the BPMP firmware.
Done. I'll drop phandle from controller-id property.

> 
>>>> +		nvidia,aux-clk-freq = <0x13>;
>>>> +		nvidia,preset-init = <0x5>;
>>>
>>> aux-clk-freq and preset-init are not defined in the binding above.
>> Ok. I'll take care of it in the next patch series
>>
>>>
>>>> +		nvidia,aspm-cmrt = <0x3C>;
>>>> +		nvidia,aspm-pwr-on-t = <0x14>;
>>>> +		nvidia,aspm-l0s-entrance-latency = <0x3>;
>>>
>>> These should be in decimal notation to make them easier to deal with. I
>>> don't usually read time in hexadecimal.
>> Ok. I'll take care of it in the next patch series
>>
>>>
>>>> +
>>>> +		bus-range = <0x0 0xff>;
>>>> +		ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000      /* downstream I/O (1MB) */
>>>> +			  0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000      /* non-prefetchable memory (30MB) */
>>>> +			  0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>;  /* prefetchable memory (16GB) */
>>>> +
>>>> +		nvidia,cfg-link-cap-l1sub = <0x1c4>;
>>>> +		nvidia,cap-pl16g-status = <0x174>;
>>>> +		nvidia,cap-pl16g-cap-off = <0x188>;
>>>> +		nvidia,event-cntr-ctrl = <0x1d8>;
>>>> +		nvidia,event-cntr-data = <0x1dc>;
>>>> +		nvidia,dl-feature-cap = <0x30c>;
>>>
>>> These are not defined in the binding above.
>> Ok. I'll take care of it in the next patch series
>>
>>>
>>>> +	};
>>>> +
>>>> +Board DTS:
>>>> +
>>>> +	pcie@14180000 {
>>>> +		status = "okay";
>>>> +
>>>> +		vddio-pex-ctl-supply = <&vdd_1v8ao>;
>>>> +
>>>> +		phys = <&p2u_2>,
>>>> +		       <&p2u_3>,
>>>> +		       <&p2u_4>,
>>>> +		       <&p2u_5>;
>>>> +		phy-names = "pcie-p2u-0", "pcie-p2u-1", "pcie-p2u-2",
>>>> +			    "pcie-p2u-3";
>>>> +	};
>>>> diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>>>
>>> Might be better to split this into a separate patch.
>> Done.
>>
>>>
>>>> new file mode 100644
>>>> index 000000000000..cc0de8e8e8db
>>>> --- /dev/null
>>>> +++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>>>> @@ -0,0 +1,34 @@
>>>> +NVIDIA Tegra194 P2U binding
>>>> +
>>>> +Tegra194 has two PHY bricks namely HSIO (High Speed IO) and NVHS (NVIDIA High
>>>> +Speed) each interfacing with 12 and 8 P2U instances respectively.
>>>> +A P2U instance is a glue logic between Synopsys DesignWare Core PCIe IP's PIPE
>>>> +interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
>>>> +lane.
>>>> +
>>>> +Required properties:
>>>> +- compatible: For Tegra19x, must contain "nvidia,tegra194-phy-p2u".
>>>
>>> Isn't the "phy-" implied by "p2u"? The name of the hardware block is
>>> "Tegra194 P2U", so that "phy-" seems gratuitous to me.
>> Done.
>>
>>>
>>>> +- reg: Should be the physical address space and length of each respective P2U
>>>> +       instance.
>>>> +- reg-names: Must include the entry "base".
>>>
>>> "base" is a bad name. Each of these entries will be a "base" of the
>>> given region. The name should specify what region it is the base of.
>> I'll change it to "reg_base"
> 
> Each of these entries will contain a "base" address for "registers" of
> some sort. I'm thinking more along the lines of "ctl" if they are
> control registers for the P2U, or perhaps just "p2u" if there is no
> better name.
Done. I'll go with 'ctl'.

> 
> Thierry
> 
>>>> +Required properties for PHY port node:
>>>> +- #phy-cells: Defined by generic PHY bindings.  Must be 0.
>>>> +
>>>> +Refer to phy/phy-bindings.txt for the generic PHY binding properties.
>>>> +
>>>> +Example:
>>>> +
>>>> +hsio-p2u {
>>>> +	compatible = "simple-bus";
>>>> +	#address-cells = <2>;
>>>> +	#size-cells = <2>;
>>>> +	ranges;
>>>> +	p2u_0: p2u@03e10000 {
>>>> +		compatible = "nvidia,tegra194-phy-p2u";
>>>> +		reg = <0x0 0x03e10000 0x0 0x00010000>;
>>>> +		reg-names = "base";
>>>> +
>>>> +		#phy-cells = <0>;
>>>> +	};
>>>> +}
>>>> -- 
>>>> 2.7.4
>>
Thierry Reding April 2, 2019, 2:20 p.m. UTC | #11
On Tue, Apr 02, 2019 at 02:46:27PM +0530, Vidya Sagar wrote:
> On 4/1/2019 8:01 PM, Thierry Reding wrote:
> > On Mon, Apr 01, 2019 at 04:48:42PM +0530, Vidya Sagar wrote:
> > > On 3/31/2019 12:12 PM, Rob Herring wrote:
> > > > On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
[...]
> > > > > +Optional properties:
> > > > > +- nvidia,max-speed: limits controllers max speed to this value.
> > > > > +    1 - Gen-1 (2.5 GT/s)
> > > > > +    2 - Gen-2 (5 GT/s)
> > > > > +    3 - Gen-3 (8 GT/s)
> > > > > +    4 - Gen-4 (16 GT/s)
> > > > > +- nvidia,init-speed: limits controllers init speed to this value.
> > > > > +    1 - Gen-1 (2.5 GT/s)
> > > > > +    2 - Gen-2 (5 GT/s)
> > > > > +    3 - Gen-3 (8 GT/s)
> > > > > +    4 - Gen-4 (16 GT/s)
> > > > 
> > > > Don't we have standard speed properties?
> > > Not that I'm aware of. If you come across any, please do let me know and
> > > I'll change these.
> > 
> > See Documentation/devicetree/bindings/pci/pci.txt, max-link-speed.
> > There's a standard of_pci_get_max_link_speed() property that reads it
> > from device tree.
> Thanks for the pointer. I'll switch to this in the next patch.
> 
> > 
> > > > Why do we need 2 values?
> > > max-speed configures the controller to advertise the speed mentioned through
> > > this flag, whereas, init-speed gets the link up at this speed and software
> > > can further take the link speed to a different speed by retraining the link.
> > > This is to give flexibility to developers depending on the platform.
> > 
> > This seems to me like overcomplicating things. Couldn't we do something
> > like start in the slowest mode by default and then upgrade if endpoints
> > support higher speeds?
> > 
> > I'm assuming that the maximum speed is already fixed by the IP hardware
> > instantiation, so why would we want to limit it additionally? Similarly,
> > what's the use-case for setting the initial link speed to something
> > other than the lowest speed?
> You are right that the maximum speed supported by the hardware is fixed; the
> max-link-speed DT option gives the flexibility to limit it to the desired speed
> for a controller on a given platform. As mentioned in the documentation for max-link-speed,
> this is a strategy to avoid unnecessary operation at unsupported link speeds.
> There is no real use-case as such even for setting the initial link speed; it is
> there to give flexibility (for any debugging) to bring the link up at a certain speed
> and then take it to a higher speed at a later point in time. Please note that the hardware
> as such already has the capability to take the link to the maximum speed agreed by both
> upstream and downstream ports. 'nvidia,init-speed' is only to give more flexibility
> while debugging. I'm OK to remove it if it is not adding much value here.

If this is primarily used for debugging or troubleshooting, maybe making
it a module parameter is a better choice?

I can see how max-link-speed might be good in certain situations where a
board layout may mandate that a link speed slower than the one supported
by the hardware is used, but I can't imagine a case where the initial
link speed would have to be limited based on the hardware design.

Rob, do you know of any other cases where something like this is being
used?

> > > > > +- nvidia,disable-aspm-states : controls advertisement of ASPM states
> > > > > +    bit-0 to '1' : disables advertisement of ASPM-L0s
> > > > > +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
> > > > > +                 advertisement of ASPM-L1.1 and ASPM-L1.2
> > > > > +    bit-2 to '1' : disables advertisement of ASPM-L1.1
> > > > > +    bit-3 to '1' : disables advertisement of ASPM-L1.2
> > > > 
> > > > Seems like these too should be common.
> > > This flag controls the advertisement of different ASPM states by root port.
> > > Again, I'm not aware of any common method for this.
> > 
> > rockchip-pcie-host.txt documents an "aspm-no-l0s" property that prevents
> > the root complex from advertising L0s. Sounds like maybe following a
> > similar scheme would be best for consistency. I think we'll also want
> > these to be non-NVIDIA specific, so drop the "nvidia," prefix and maybe
> > document them in pci.txt so that they can be more broadly used.
> Since we have ASPM-L0s, L1, L1.1 and L1.2 states, I prefer to have just one entry
> with different bit positions indicating which particular state should not be
> advertised by the root port. Do you see any particular advantage in having 4 different options?
> If having one option is fine, I'll remove "nvidia," and document it in pci.txt.

I don't care strongly either way. It's really up to Rob to decide.

Thierry
Thierry Reding April 2, 2019, 2:35 p.m. UTC | #12
On Tue, Apr 02, 2019 at 05:11:25PM +0530, Vidya Sagar wrote:
> On 4/1/2019 8:37 PM, Thierry Reding wrote:
> > On Mon, Apr 01, 2019 at 03:31:54PM +0530, Vidya Sagar wrote:
> > > On 3/28/2019 6:45 PM, Thierry Reding wrote:
> > > > On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
> > > > > Add support for Tegra194 PCIe controllers. These controllers are based
> > > > > on Synopsys DesignWare core IP.
> > > > > 
> > > > > Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
> > > > > ---
> > > > >    .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
> > > > >    .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
> > > > >    2 files changed, 243 insertions(+)
> > > > >    create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> > > > >    create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
> > > > > 
> > > > > diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> > > > > new file mode 100644
> > > > > index 000000000000..31527283a0cd
> > > > > --- /dev/null
> > > > > +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
> > > > > @@ -0,0 +1,209 @@
> > > > > +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
> > > > > +
> > > > > +This PCIe host controller is based on the Synopsys DesignWare PCIe IP
> > > > > +and thus inherits all the common properties defined in designware-pcie.txt.
> > > > > +
> > > > > +Required properties:
> > > > > +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
> > > > > +- device_type: Must be "pci"
> > > > > +- reg: A list of physical base address and length for each set of controller
> > > > > +  registers. Must contain an entry for each entry in the reg-names property.
> > > > > +- reg-names: Must include the following entries:
> > > > > +  "appl": Controller's application logic registers
> > > > > +  "window1": This is the aperture of controller available under 4GB boundary
> > > > > +             (i.e. within 32-bit space). This aperture is typically used for
> > > > > +             accessing config space of root port itself and also the connected
> > > > > +             endpoints (by appropriately programming internal Address
> > > > > +             Translation Unit's (iATU) out bound region) and also to map
> > > > > +             prefetchable/non-prefetchable BARs.
> > > > > +  "config": As per the definition in designware-pcie.txt
> > > > 
> > > > I see that you set this to a 256 KiB region for all controllers. Since
> > > > each function can have up to 4 KiB of extended configuration space, that
> > > > means you have space to address:
> > > > 
> > > >       256 KiB = 4 KiB * 8 functions * 8 devices
> > > > 
> > > > Each bus can have up to 32 devices (including the root port) and there
> > > > can be 256 busses, so I wonder how this is supposed to work. How does
> > > > the mapping work for configuration space? Does the controller allow
> > > > moving this 256 KiB window around so that more devices' configuration
> > > > space can be accessed?
> > > We are not using ECAM here; instead we pick a 4 KB region from this 256 KB region
> > > and program the iATU (internal Address Translation Unit) of PCIe with the B:D:F of
> > > the configuration space that is of interest, to be able to view the respective
> > > config space in that 4 KB space. It is a hardware requirement to reserve 256 KB of
> > > space (though we use only 4 KB to access the configuration space of any downstream B:D:F).
> > 
> > Okay, sounds good. I'm wondering if we should maybe note here that
> > window1 needs to be a 256 KiB window if that's what the hardware
> > requires.
> I'll be removing window1 and window2 as they seem to cause unnecessary confusion
> 
> > 
> > > > > +  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
> > > > > +             Translation Unit) registers of the PCIe core are made available
> > > > > +             for SW access.
> > > > > +  "dbi": The aperture where root port's own configuration registers are
> > > > > +         available
> > > > 
> > > > This is slightly confusing because you already said in the description
> > > > of "window1" that it is used to access the configuration space of the
> > > > root port itself.
> > > > 
> > > > Is the root port configuration space available via the regular
> > > > configuration space registers?
> > > Root port configuration space is hidden by default and 'dbi' property tells us
> > > where we would like to *view* it. For this, we use a portion of window-1 aperture
> > > and use it as 'dbi' base to *view* the config space of root port.
> > > Basically Window-1 and window-2 are the umbrella entries (which I added based on
> > > suggestion from Stephen Warren <swarren@nvidia.com> ) to give a complete picture of
> > > number of apertures available and what they are used for. The windows 1 & 2 as such
> > > are not used by the driver directly.
> > 
> > So I'm not exactly sure I understand how this works. Does the "dbi"
> > entry contain a physical address and size of the aperture that we want
> > to map into a subregion of "window-1"? Is this part of a region where
> > similar subregions exist for all of the controllers? Could the offset
> > into such a region be derived from the controller ID?
> The DBI region is not available to SW immediately after power-on. The address where we
> would like to see 'dbi' needs to be programmed in one of the APPL registers. Since window1
> is one of the apertures (under the 4 GB boundary) available for each controller (one window1
> aperture per controller), we are reserving some portion of window1 to view the DBI registers.
> Provided 'window1' is available in DT, 'dbi' can also be derived at run time. I added it
> explicitly to give more clarity on where it is being reserved (just like how window2
> aperture usage is explicitly mentioned through 'ranges'). If the correct approach
> is to have only 'window1' and derive 'dbi' in the code, I'll change it to that way.
> Please let me know.

If there are no requirements other than being in window1, then I
think it's fine to drop "dbi" here. From what you say elsewhere the
window1 aperture is 256 KiB and we can partition it as we like, right?
If so, I don't think there's a need to expose it in a more fine-grained
way.

One exception may be if other values in DT depend on the value. In that
case it would be good to have them all in DT so that they can be
validated.

Having this information in two places technically would require us to
check that the data is consistent. If we allocate from window1 at
runtime, things become much easier.

[...]
> > > > > +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
> > > > > +
> > > > > +Optional properties:
> > > > > +- nvidia,max-speed: limits controllers max speed to this value.
> > > > > +    1 - Gen-1 (2.5 GT/s)
> > > > > +    2 - Gen-2 (5 GT/s)
> > > > > +    3 - Gen-3 (8 GT/s)
> > > > > +    4 - Gen-4 (16 GT/s)
> > > > > +- nvidia,init-speed: limits controllers init speed to this value.
> > > > > +    1 - Gen-1 (2.5 GT/s)
> > > > > +    2 - Gen-2 (5 GT/s)
> > > > > +    3 - Gen-3 (8 GT/s)
> > > > > +    4 - Gen-4 (16 GT/s)
> > > > > +- nvidia,disable-aspm-states : controls advertisement of ASPM states
> > > > > +    bit-0 to '1' : disables advertisement of ASPM-L0s
> > > > > +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
> > > > > +                 advertisement of ASPM-L1.1 and ASPM-L1.2
> > > > > +    bit-2 to '1' : disables advertisement of ASPM-L1.1
> > > > > +    bit-3 to '1' : disables advertisement of ASPM-L1.2
> > > > 
> > > > These seem more like configuration options rather than hardware
> > > > description.
> > > Yes. Since platforms like Jetson Xavier based on T194 are going to go into the
> > > open market, we are providing these configuration options; hence they are
> > > optional.
> > 
> > Under what circumstances would we want to disable certain ASPM states?
> > My understanding is that PCI device drivers can already disable
> > individual ASPM states if they don't support them, so why would we ever
> > want to disable advertisement of certain ASPM states?
> Well, this is provided to give more flexibility while debugging. Given that there is going
> to be only one config for different platforms in the future, ASPM config may be enabled by
> default, and having this DT option would give more controlled enablement of ASPM states by
> controlling which states the root port advertises.

Again, I think if this is primarily for debugging purposes I think there
are better ways. If you need to modify and reflash the DTB in order to
debug, that's suboptimal. Ideally you'd want something that you can just
enable at runtime (via a module parameter, or a kernel command-line
option, for example). That will allow you to do some debugging without
having to rebuild and reflash.

> > > > > +- nvidia,enable-power-down : Enables power down of respective controller and
> > > > > +    corresponding PLLs if they are not shared by any other entity
> > > > 
> > > > Wouldn't we want this to be the default? Why keep things powered up if
> > > > they are not needed?
> > > There could be platforms (automotive based), where it is not required to power down
> > > controllers and hence needed a flag to control powering down of controllers
> > 
> > Is it harmful to power down the controllers on such platforms? It
> > strikes me as odd to leave something enabled if it isn't needed,
> > independent of the platform.
> It is not harmful as such; this is just for flexibility. Also, this might be required for
> the hot-plug feature.
> Are you saying that we should have controllers powered down by default and a flag
> to stop that from happening? i.e. something like 'nvidia,disable-power-down'?

Yeah, I think by default we should always power down hardware if it
isn't needed. But I don't see why we would need to disable power down.
What's the use-case? You say this "might be required for hotplug", so
it sounds like this is not something that has currently been tested? I
prefer not to have bindings that are based on guesses, because that is
likely to result in bindings that don't actually meet the real
requirements and that becomes nasty to maintain in the long run. So if
we don't currently have a case where we need to keep the controller
powered on, let's just disable them by default and drop this from the
bindings. We can always add this back in a backwards-compatible manner
if it turns out that we do need it. At that point we'll also have more
data about the context, so we can design better bindings.

> > > > > +Tegra194:
> > > > > +--------
> > > > > +
> > > > > +SoC DTSI:
> > > > > +
> > > > > +	pcie@14180000 {
> > > > > +		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
> > > > 
> > > > It doesn't seem to me like claiming compatibility with "snps,dw-pcie" is
> > > > correct. There's a bunch of NVIDIA- or Tegra-specific properties below
> > > > and code in the driver. Would this device be able to function if no
> > > > driver was binding against the "nvidia,tegra194-pcie" compatible string?
> > > > Would it work if you left that out? I don't think so, so we should also
> > > > not list it here.
> > > It is required for the DesignWare-specific code to work properly. It is specified
> > > in the ../designware-pcie.txt file.
> > 
> > That sounds like a bug to me. Why does the driver need that? I mean the
> > Tegra instantiation clearly isn't going to work if the driver matches on
> > that compatible string, so by definition it is not compatible.
> > 
> > Rob, was this intentional? Seems like all other users of the DesignWare
> > PCIe core use the same scheme, so perhaps I'm missing something?
> This is the standard usage procedure across all DesignWare-based implementations.
> Probably Rob can give more info on this.

If everyone does this, I suppose it's fine. Maybe Rob can confirm that
this is really the way it was meant to be, or whether we should take
action now before propagating this any further.

Thierry
Bjorn Helgaas April 2, 2019, 7:21 p.m. UTC | #13
On Mon, Apr 01, 2019 at 05:07:40PM +0200, Thierry Reding wrote:
> On Mon, Apr 01, 2019 at 03:31:54PM +0530, Vidya Sagar wrote:
> > On 3/28/2019 6:45 PM, Thierry Reding wrote:
> > > On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
> > > > Add support for Tegra194 PCIe controllers. These controllers are based
> > > > on Synopsys DesignWare core IP.

> > > > +- nvidia,disable-aspm-states : controls advertisement of ASPM states
> > > > +    bit-0 to '1' : disables advertisement of ASPM-L0s
> > > > +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
> > > > +                 advertisement of ASPM-L1.1 and ASPM-L1.2
> > > > +    bit-2 to '1' : disables advertisement of ASPM-L1.1
> > > > +    bit-3 to '1' : disables advertisement of ASPM-L1.2
> > > 
> > > These seem more like configuration options rather than hardware
> > > description.
> > Yes. Since platforms like Jetson Xavier based on T194 are going to go into the
> > open market, we are providing these configuration options; hence they are
> > optional.
> 
> Under what circumstances would we want to disable certain ASPM states?
> My understanding is that PCI device drivers can already disable
> individual ASPM states if they don't support them, so why would we ever
> want to disable advertisement of certain ASPM states?

The PCI core (not individual device drivers) configures ASPM, and if both
ends of the link implement the spec correctly, there is nothing the device
driver needs to do.

There *are* a few drivers that fiddle with ASPM-related things.  I think
this is because either the core isn't doing things correctly and we haven't
bothered to fix the root cause, or the device itself is broken and we need
to disable something the hardware claims to support.

Bjorn
Vidya Sagar April 3, 2019, 5:29 a.m. UTC | #14
On 4/2/2019 7:50 PM, Thierry Reding wrote:
> On Tue, Apr 02, 2019 at 02:46:27PM +0530, Vidya Sagar wrote:
>> On 4/1/2019 8:01 PM, Thierry Reding wrote:
>>> On Mon, Apr 01, 2019 at 04:48:42PM +0530, Vidya Sagar wrote:
>>>> On 3/31/2019 12:12 PM, Rob Herring wrote:
>>>>> On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
> [...]
>>>>>> +Optional properties:
>>>>>> +- nvidia,max-speed: limits controllers max speed to this value.
>>>>>> +    1 - Gen-1 (2.5 GT/s)
>>>>>> +    2 - Gen-2 (5 GT/s)
>>>>>> +    3 - Gen-3 (8 GT/s)
>>>>>> +    4 - Gen-4 (16 GT/s)
>>>>>> +- nvidia,init-speed: limits controllers init speed to this value.
>>>>>> +    1 - Gen-1 (2.5 GT/s)
>>>>>> +    2 - Gen-2 (5 GT/s)
>>>>>> +    3 - Gen-3 (8 GT/s)
>>>>>> +    4 - Gen-4 (16 GT/s)
>>>>>
>>>>> Don't we have standard speed properties?
>>>> Not that I'm aware of. If you come across any, please do let me know and
>>>> I'll change these.
>>>
>>> See Documentation/devicetree/bindings/pci/pci.txt, max-link-speed.
>>> There's a standard of_pci_get_max_link_speed() property that reads it
>>> from device tree.
>> Thanks for the pointer. I'll switch to this in the next patch.
>>
>>>
>>>>> Why do we need 2 values?
>>>> max-speed configures the controller to advertise the speed mentioned through
>>>> this flag, whereas, init-speed gets the link up at this speed and software
>>>> can further take the link speed to a different speed by retraining the link.
>>>> This is to give flexibility to developers depending on the platform.
>>>
>>> This seems to me like overcomplicating things. Couldn't we do something
>>> like start in the slowest mode by default and then upgrade if endpoints
>>> support higher speeds?
>>>
>>> I'm assuming that the maximum speed is already fixed by the IP hardware
>>> instantiation, so why would we want to limit it additionally? Similarly,
>>> what's the use-case for setting the initial link speed to something
>>> other than the lowest speed?
>> You are right that maximum speed supported by hardware is fixed and through
>> max-link-speed DT option, flexibility is given to limit it to the desired speed
>> for a controller on a given platform. As mentioned in the documentation for max-link-speed,
>> this is a strategy to avoid unnecessary operation for unsupported link speed.
>> There is no real use-case as such even for setting the initial link speed, but it is
>> there to give flexibility (for any debugging) to get the link up at a certain speed
>> and then take it to a higher speed at a later point of time. Please note that, hardware
>> as such already has the capability to take the link to maximum speed agreed by both
>> upstream and downstream ports. 'nvidia,init-speed' is only to give more flexibility
>> while debugging. I'm OK to remove it if this is not adding much value here.
> 
> If this is primarily used for debugging or troubleshooting, maybe making
> it a module parameter is a better choice?
> 
> I can see how max-link-speed might be good in certain situations where a
> board layout may mandate that a link speed slower than the one supported
> by the hardware is used, but I can't imagine a case where the initial
> link speed would have to be limited based on the hardware design.
> 
> Rob, do you know of any other cases where something like this is being
> used?
Fair enough. I'll make max-link-speed an optional DT parameter and leave it to
Rob to decide whether 'nvidia,init-speed' is OK to have or not.

> 
>>>>>> +- nvidia,disable-aspm-states : controls advertisement of ASPM states
>>>>>> +    bit-0 to '1' : disables advertisement of ASPM-L0s
>>>>>> +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
>>>>>> +                 advertisement of ASPM-L1.1 and ASPM-L1.2
>>>>>> +    bit-2 to '1' : disables advertisement of ASPM-L1.1
>>>>>> +    bit-3 to '1' : disables advertisement of ASPM-L1.2
>>>>>
>>>>> Seems like these too should be common.
>>>> This flag controls the advertisement of different ASPM states by root port.
>>>> Again, I'm not aware of any common method for this.
>>>
>>> rockchip-pcie-host.txt documents an "aspm-no-l0s" property that prevents
>>> the root complex from advertising L0s. Sounds like maybe following a
>>> similar scheme would be best for consistency. I think we'll also want
>>> these to be non-NVIDIA specific, so drop the "nvidia," prefix and maybe
>>> document them in pci.txt so that they can be more broadly used.
>> Since we have ASPM-L0s, L1, L1.1 and L1.2 states, I prefer to have just one entry
>> with different bit positions indicating which particular state should not be
>> advertised by root port. Do you see any particular advantage to have 4 different options?
>> If having one options is fine, I'll remove "nvidia," and document it in pci.txt.
> 
> I don't care strongly either way. It's really up to Rob to decide.
Rob, please let us know your comments on this also.

> 
> Thierry
>
Vidya Sagar April 3, 2019, 6:22 a.m. UTC | #15
On 4/2/2019 8:05 PM, Thierry Reding wrote:
> On Tue, Apr 02, 2019 at 05:11:25PM +0530, Vidya Sagar wrote:
>> On 4/1/2019 8:37 PM, Thierry Reding wrote:
>>> On Mon, Apr 01, 2019 at 03:31:54PM +0530, Vidya Sagar wrote:
>>>> On 3/28/2019 6:45 PM, Thierry Reding wrote:
>>>>> On Tue, Mar 26, 2019 at 08:43:22PM +0530, Vidya Sagar wrote:
>>>>>> Add support for Tegra194 PCIe controllers. These controllers are based
>>>>>> on Synopsys DesignWare core IP.
>>>>>>
>>>>>> Signed-off-by: Vidya Sagar <vidyas@nvidia.com>
>>>>>> ---
>>>>>>     .../bindings/pci/nvidia,tegra194-pcie.txt          | 209 +++++++++++++++++++++
>>>>>>     .../devicetree/bindings/phy/phy-tegra194-p2u.txt   |  34 ++++
>>>>>>     2 files changed, 243 insertions(+)
>>>>>>     create mode 100644 Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>>>>>     create mode 100644 Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
>>>>>>
>>>>>> diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>>>>> new file mode 100644
>>>>>> index 000000000000..31527283a0cd
>>>>>> --- /dev/null
>>>>>> +++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
>>>>>> @@ -0,0 +1,209 @@
>>>>>> +NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
>>>>>> +
>>>>>> +This PCIe host controller is based on the Synopsys DesignWare PCIe IP
>>>>>> +and thus inherits all the common properties defined in designware-pcie.txt.
>>>>>> +
>>>>>> +Required properties:
>>>>>> +- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
>>>>>> +- device_type: Must be "pci"
>>>>>> +- reg: A list of physical base address and length for each set of controller
>>>>>> +  registers. Must contain an entry for each entry in the reg-names property.
>>>>>> +- reg-names: Must include the following entries:
>>>>>> +  "appl": Controller's application logic registers
>>>>>> +  "window1": This is the aperture of controller available under 4GB boundary
>>>>>> +             (i.e. within 32-bit space). This aperture is typically used for
>>>>>> +             accessing config space of root port itself and also the connected
>>>>>> +             endpoints (by appropriately programming internal Address
>>>>>> +             Translation Unit's (iATU) out bound region) and also to map
>>>>>> +             prefetchable/non-prefetchable BARs.
>>>>>> +  "config": As per the definition in designware-pcie.txt
>>>>>
>>>>> I see that you set this to a 256 KiB region for all controllers. Since
>>>>> each function can have up to 4 KiB of extended configuration space, that
>>>>> means you have space to address:
>>>>>
>>>>>        256 KiB = 4 KiB * 8 functions * 8 devices
>>>>>
>>>>> Each bus can have up to 32 devices (including the root port) and there
>>>>> can be 256 busses, so I wonder how this is supposed to work. How does
>>>>> the mapping work for configuration space? Does the controller allow
>>>>> moving this 256 KiB window around so that more devices' configuration
>>>>> space can be accessed?
>>>> We are not using ECAM here instead only pick 4KB region from this 256 KB region
>>>> and program iATU (internal Address Translation Unit) of PCIe with the B:D:F of
>>>> the configuration space that is of interest to be able to view the respective
>>>> config space in that 4KB space. It is a hardware requirement to reserve 256KB of
>>>> space (though we use only 4K to access configuration space of any downstream B:D:F)
>>>
>>> Okay, sounds good. I'm wondering if we should maybe note here that
>>> window1 needs to be a 256 KiB window if that's what the hardware
>>> requires.
>> I'll be removing window1 and window2 as they seem to cause unnecessary confusion
>>
>>>
>>>>>> +  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
>>>>>> +             Translation Unit) registers of the PCIe core are made available
>>>>>> +             for SW access.
>>>>>> +  "dbi": The aperture where root port's own configuration registers are
>>>>>> +         available
>>>>>
>>>>> This is slightly confusing because you already said in the description
>>>>> of "window1" that it is used to access the configuration space of the
>>>>> root port itself.
>>>>>
>>>>> Is the root port configuration space available via the regular
>>>>> configuration space registers?
>>>> Root port configuration space is hidden by default and 'dbi' property tells us
>>>> where we would like to *view* it. For this, we use a portion of window-1 aperture
>>>> and use it as 'dbi' base to *view* the config space of root port.
>>>> Basically Window-1 and window-2 are the umbrella entries (which I added based on
>>>> suggestion from Stephen Warren <swarren@nvidia.com> ) to give a complete picture of
>>>> number of apertures available and what they are used for. The windows 1 & 2 as such
>>>> are not used by the driver directly.
>>>
>>> So I'm not exactly sure I understand how this works. Does the "dbi"
>>> entry contain a physical address and size of the aperture that we want
>>> to map into a subregion of "window-1"? Is this part of a region where
>>> similar subregions exist for all of the controllers? Could the offset
>>> into such a region be derived from the controller ID?
>> DBI region is not available for SW immediately after power on. Address where we would
>> like to see 'dbi' needs to be programmed in one of the APPL registers. Since window1
>> is one of the apertures (under 4GB boundary) available for each controller (one window1
>> aperture per controller), we are reserving some portion of window1 to view DBI registers.
>> Provided 'window1' is available in DT, 'dbi' can be derived run time also. I added it
>> explicitly to so give more clarity on where it is being reserved (just like how window2
>> aperture usage is explicitly mentioned through 'ranges'). If the correct approach
>> is to have only 'window1' and derive 'dbi' in the code, I'll change it to that way.
>> Please let me know.
> 
> If there are no other requirements other than being in window1, then I
> think it's fine to drop "dbi" here. From what you say elsewhere the
> window1 aperture is 256 KiB and we can partition it as we like, right?
> If so, I don't think there's a need to expose it in a more fine-grained
> way.
window1's size is actually 32MB and we are partitioning it for different purposes:
'dbi' (to view the root port's own config space), 'config' (to view the config
space of downstream devices) and 'atu_dma' (to view the iATU and DMA registers).

> 
> One exception may be if other values in DT depend on the value. In that
> case it would be good to have them all in DT so that they can be
> validated.
Unfortunately we do have one exception: the DesignWare sub-system expects the
'config' property as a mandatory property. Other implementations might have that
space fixed, but in Tegra it can fit anywhere in window1 (for that matter, in
window2 also). So, since 'config' has to be there to be compliant with the
DesignWare core DT expectations, I think it is better to have the others ('dbi'
& 'atu_dma') also fixed in DT and drop 'window1' and 'window2' from DT
altogether. (How 'window2' is being used is already present in the 'ranges'
property.) I hope this should be fine.

> 
> Having this information in two places technically would require us to
> check that the data is consistent. If we allocate from window1 at
> runtime, things become much easier.
> 
> [...]
>>>>>> +- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
>>>>>> +
>>>>>> +Optional properties:
>>>>>> +- nvidia,max-speed: limits controllers max speed to this value.
>>>>>> +    1 - Gen-1 (2.5 GT/s)
>>>>>> +    2 - Gen-2 (5 GT/s)
>>>>>> +    3 - Gen-3 (8 GT/s)
>>>>>> +    4 - Gen-4 (16 GT/s)
>>>>>> +- nvidia,init-speed: limits controllers init speed to this value.
>>>>>> +    1 - Gen-1 (2.5 GT/s)
>>>>>> +    2 - Gen-2 (5 GT/s)
>>>>>> +    3 - Gen-3 (8 GT/s)
>>>>>> +    4 - Gen-4 (16 GT/s)
>>>>>> +- nvidia,disable-aspm-states : controls advertisement of ASPM states
>>>>>> +    bit-0 to '1' : disables advertisement of ASPM-L0s
>>>>>> +    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
>>>>>> +                 advertisement of ASPM-L1.1 and ASPM-L1.2
>>>>>> +    bit-2 to '1' : disables advertisement of ASPM-L1.1
>>>>>> +    bit-3 to '1' : disables advertisement of ASPM-L1.2
>>>>>
>>>>> These seem more like configuration options rather than hardware
>>>>> description.
>>>> Yes. Since the platforms like Jetson-Xavier based on T194 are going to go in
>>>> open market, we are providing these configuration options and hence they are
>>>> optional
>>>
>>> Under what circumstances would we want to disable certain ASPM states?
>>> My understanding is that PCI device drivers can already disable
>>> individual ASPM states if they don't support them, so why would we ever
>>> want to disable advertisement of certain ASPM states?
>> Well, this is given to give more flexibility while debugging and given that there is going
>> to be only one config for different platforms in future, it may be possible to have ASPM
>> config enabled by default and having this DT option would give more controlled enablement
>> of ASPM states by controlling the advertisement of ASPM states by root port.
> 
> Again, I think if this is primarily for debugging purposes I think there
> are better ways. If you need to modify and reflash the DTB in order to
> debug, that's suboptimal. Ideally you'd want something that you can just
> enable at runtime (via a module parameter, or a kernel command-line
> option, for example). That will allow you to do some debugging without
> having to rebuild and reflash.
Agree that it is good to have controls at run-time. I'll make a note to look
into that in the near future. Currently we already have a method to enable or
disable ASPM as a whole (i.e. all states), but there is no method to
selectively enable/disable individual states.
Apart from debugging, another use case would be a new platform whose POR (Plan
Of Record) is to have only certain ASPM states advertised by the root port; in
that case, having this DT entry comes in handy.

> 
>>>>>> +- nvidia,enable-power-down : Enables power down of respective controller and
>>>>>> +    corresponding PLLs if they are not shared by any other entity
>>>>>
>>>>> Wouldn't we want this to be the default? Why keep things powered up if
>>>>> they are not needed?
>>>> There could be platforms (automotive based), where it is not required to power down
>>>> controllers and hence needed a flag to control powering down of controllers
>>>
>>> Is it harmful to power down the controllers on such platforms? It
>>> strikes me as odd to leave something enabled if it isn't needed,
>>> independent of the platform.
>> It is not harmful as such. This is just a flexibility. Also, this might be required for
>> hot-plug feature.
>> Are you saying that we should have controller getting powered down as default and a flag
>> to stop that happening? i.e. something like 'nvidia,disable-power-down' ?
> 
> Yeah, I think by default we should always power down hardware if it
> isn't needed. But I don't see why we would need to disable power down.
> What's the use-case? You say this "might be required for hotplug", so
> it sounds like this is not something that has currently been tested? I
> prefer not to have bindings that are based on guesses, because that is
> likely to result in bindings that don't actually meet the real
> requirements and that becomes nasty to maintain in the long run. So if
> we don't currently have a case where we need to keep the controller
> powered on, let's just disable them by default and drop this from the
> bindings. We can always add this back in a backwards-compatible manner
> if it turns out that we do need it. At that point we'll also have more
> data about the context, so we can design better bindings.
At this point, I also don't have strong reasons to back it up. I'll drop
this for now and see if there comes any strong reason to have this support
back in future.

> 
>>>>>> +Tegra194:
>>>>>> +--------
>>>>>> +
>>>>>> +SoC DTSI:
>>>>>> +
>>>>>> +	pcie@14180000 {
>>>>>> +		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
>>>>>
>>>>> It doesn't seem to me like claiming compatibility with "snps,dw-pcie" is
>>>>> correct. There's a bunch of NVIDIA- or Tegra-specific properties below
>>>>> and code in the driver. Would this device be able to function if no
>>>>> driver was binding against the "nvidia,tegra194-pcie" compatible string?
>>>>> Would it work if you left that out? I don't think so, so we should also
>>>>> not list it here.
>>>> It is required for designware specific code to work properly. It is specified
>>>> by ../designware-pcie.txt file
>>>
>>> That sounds like a bug to me. Why does the driver need that? I mean the
>>> Tegra instantiation clearly isn't going to work if the driver matches on
>>> that compatible string, so by definition it is not compatible.
>>>
>>> Rob, was this intentional? Seems like all other users of the DesignWare
>>> PCIe core use the same scheme, so perhaps I'm missing something?
>> This is the standard usage procedure across all Designware based implementations.
>> Probably Rob can give more info on this.
> 
> If everyone does this, I suppose it's fine. Maybe Rob can confirm that
> this is really the way it was meant to be, or whether we should take
> action now before propagating this any further.
> 
> Thierry
>
diff mbox series

Patch

diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
new file mode 100644
index 000000000000..31527283a0cd
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
@@ -0,0 +1,209 @@ 
+NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
+
+This PCIe host controller is based on the Synopsys DesignWare PCIe IP
+and thus inherits all the common properties defined in designware-pcie.txt.
+
+Required properties:
+- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
+- device_type: Must be "pci"
+- reg: A list of physical base address and length for each set of controller
+  registers. Must contain an entry for each entry in the reg-names property.
+- reg-names: Must include the following entries:
+  "appl": Controller's application logic registers
+  "window1": This is the aperture of controller available under 4GB boundary
+             (i.e. within 32-bit space). This aperture is typically used for
+             accessing config space of root port itself and also the connected
+             endpoints (by appropriately programming internal Address
+             Translation Unit's (iATU) outbound region) and also to map
+             prefetchable/non-prefetchable BARs.
+  "config": As per the definition in designware-pcie.txt
+  "atu_dma": iATU and DMA register. This is where the iATU (internal Address
+             Translation Unit) registers of the PCIe core are made available
+             for SW access.
+  "dbi": The aperture where root port's own configuration registers are
+         available
+  "window2": This is the larger (compared to window1) aperture available above
+             4GB boundary (i.e. in 64-bit space). This is typically used for
+             mapping prefetchable/non-prefetchable BARs of endpoints
+- interrupts: A list of interrupt outputs of the controller. Must contain an
+  entry for each entry in the interrupt-names property.
+- interrupt-names: Must include the following entries:
+  "intr": The Tegra interrupt that is asserted for controller interrupts
+  "msi": The Tegra interrupt that is asserted when an MSI is received
+- bus-range: Range of bus numbers associated with this controller
+- #address-cells: Address representation for root ports (must be 3)
+  - cell 0 specifies the bus and device numbers of the root port:
+    [23:16]: bus number
+    [15:11]: device number
+  - cell 1 denotes the upper 32 address bits and should be 0
+  - cell 2 contains the lower 32 address bits and is used to translate to the
+    CPU address space
+- #size-cells: Size representation for root ports (must be 2)
+- ranges: Describes the translation of addresses for root ports and standard
+  PCI regions. The entries must be 7 cells each, where the first three cells
+  correspond to the address as described for the #address-cells property
+  above, the fourth and fifth cells are for the physical CPU address to
+  translate to and the sixth and seventh cells are as described for the
+  #size-cells property above.
+  - Entries setup the mapping for the standard I/O, memory and
+    prefetchable PCI regions. The first cell determines the type of region
+    that is setup:
+    - 0x81000000: I/O memory region
+    - 0x82000000: non-prefetchable memory region
+    - 0xc2000000: prefetchable memory region
+  Please refer to the standard PCI bus binding document for a more detailed
+  explanation.
+- #interrupt-cells: Size representation for interrupts (must be 1)
+- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
+  Please refer to the standard PCI bus binding document for a more detailed
+  explanation.
+- clocks: Must contain an entry for each entry in clock-names.
+  See ../clocks/clock-bindings.txt for details.
+- clock-names: Must include the following entries:
+  - core_clk
+- resets: Must contain an entry for each entry in reset-names.
+  See ../reset/reset.txt for details.
+- reset-names: Must include the following entries:
+  - core_apb_rst
+  - core_rst
+- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
+- phy-names: Must include an entry for each active lane.
+  "pcie-p2u-N": where N ranges from 0 to one less than the total number of lanes
+- Controller dependent register offsets
+  - nvidia,event-cntr-ctrl: EVENT_COUNTER_CONTROL reg offset
+      0x168 - FPGA
+      0x1a8 - C1, C2 and C3
+      0x1c4 - C4
+      0x1d8 - C0 and C5
+  - nvidia,event-cntr-data: EVENT_COUNTER_DATA reg offset
+      0x16c - FPGA
+      0x1ac - C1, C2 and C3
+      0x1c8 - C4
+      0x1dc - C0 and C5
+- nvidia,controller-id : Controller specific ID
+      0x0 - C0
+      0x1 - C1
+      0x2 - C2
+      0x3 - C3
+      0x4 - C4
+      0x5 - C5
+- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
+
+Optional properties:
+- nvidia,max-speed: limits the controller's max speed to this value.
+    1 - Gen-1 (2.5 GT/s)
+    2 - Gen-2 (5 GT/s)
+    3 - Gen-3 (8 GT/s)
+    4 - Gen-4 (16 GT/s)
+- nvidia,init-speed: limits the controller's init speed to this value.
+    1 - Gen-1 (2.5 GT/s)
+    2 - Gen-2 (5 GT/s)
+    3 - Gen-3 (8 GT/s)
+    4 - Gen-4 (16 GT/s)
+- nvidia,disable-aspm-states : controls advertisement of ASPM states
+    bit-0 to '1' : disables advertisement of ASPM-L0s
+    bit-1 to '1' : disables advertisement of ASPM-L1. This also disables
+                 advertisement of ASPM-L1.1 and ASPM-L1.2
+    bit-2 to '1' : disables advertisement of ASPM-L1.1
+    bit-3 to '1' : disables advertisement of ASPM-L1.2
+- nvidia,disable-clock-request : gives a hint to the driver that there is no
+    CLKREQ signal routed on the board
+- nvidia,update-fc-fixup : needed to improve performance when a platform is
+    designed in such a way that it satisfies at least one of the following conditions
+    1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
+    2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
+       a) speed is Gen-2 and MPS is 256B
+       b) speed is >= Gen-3 with any MPS
+- nvidia,cdm-check : Enables CDM checking. For more information, refer to the
+    Synopsys DesignWare Cores PCI Express Controller Databook r4.90a, Chapter S.4
+- nvidia,enable-power-down : Enables power down of respective controller and
+    corresponding PLLs if they are not shared by any other entity
+- "nvidia,pex-wake" : PEX_WAKE GPIO number, to provide wake support.
+- "nvidia,plat-gpios" : List of GPIOs that need to be configured before the
+   system goes for enumeration. On platforms where the 3.3V and 12V power
+   supplies are enabled through GPIOs, all such GPIOs can be specified through
+   this property.
+- "nvidia,aspm-cmrt" : Common Mode Restore time for proper operation of ASPM to
+   be specified in microseconds
+- "nvidia,aspm-pwr-on-t" : Power On time for proper operation of ASPM to be
+   specified in microseconds
+- "nvidia,aspm-l0s-entrance-latency" : ASPM L0s entrance latency to be specified
+   in microseconds
+
+Examples:
+=========
+
+Tegra194:
+--------
+
+SoC DTSI:
+
+	pcie@14180000 {
+		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
+		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
+		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
+		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
+		       0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K)  */
+		reg-names = "appl", "config", "atu_dma";
+
+		status = "disabled";
+
+		#address-cells = <3>;
+		#size-cells = <2>;
+		device_type = "pci";
+		num-lanes = <8>;
+		linux,pci-domain = <0>;
+
+		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
+		clock-names = "core_clk";
+
+		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
+			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
+		reset-names = "core_apb_rst", "core_rst";
+
+		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,	/* controller interrupt */
+			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;	/* MSI interrupt */
+		interrupt-names = "intr", "msi";
+
+		#interrupt-cells = <1>;
+		interrupt-map-mask = <0 0 0 0>;
+		interrupt-map = <0 0 0 0 &gic 0 72 0x04>;
+
+		nvidia,bpmp = <&bpmp>;
+
+		nvidia,max-speed = <4>;
+		nvidia,disable-aspm-states = <0xf>;
+		nvidia,controller-id = <&bpmp 0x0>;
+		nvidia,aux-clk-freq = <0x13>;
+		nvidia,preset-init = <0x5>;
+		nvidia,aspm-cmrt = <0x3C>;
+		nvidia,aspm-pwr-on-t = <0x14>;
+		nvidia,aspm-l0s-entrance-latency = <0x3>;
+
+		bus-range = <0x0 0xff>;
+		ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000      /* downstream I/O (1MB) */
+			  0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000      /* non-prefetchable memory (30MB) */
+			  0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>;  /* prefetchable memory (16GB) */
+
+		nvidia,cfg-link-cap-l1sub = <0x1c4>;
+		nvidia,cap-pl16g-status = <0x174>;
+		nvidia,cap-pl16g-cap-off = <0x188>;
+		nvidia,event-cntr-ctrl = <0x1d8>;
+		nvidia,event-cntr-data = <0x1dc>;
+		nvidia,dl-feature-cap = <0x30c>;
+	};
+
+Board DTS:
+
+	pcie@14180000 {
+		status = "okay";
+
+		vddio-pex-ctl-supply = <&vdd_1v8ao>;
+
+		phys = <&p2u_2>,
+		       <&p2u_3>,
+		       <&p2u_4>,
+		       <&p2u_5>;
+		phy-names = "pcie-p2u-0", "pcie-p2u-1", "pcie-p2u-2",
+			    "pcie-p2u-3";
+	};
diff --git a/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
new file mode 100644
index 000000000000..cc0de8e8e8db
--- /dev/null
+++ b/Documentation/devicetree/bindings/phy/phy-tegra194-p2u.txt
@@ -0,0 +1,34 @@ 
+NVIDIA Tegra194 P2U binding
+
+Tegra194 has two PHY bricks namely HSIO (High Speed IO) and NVHS (NVIDIA High
+Speed) each interfacing with 12 and 8 P2U instances respectively.
+A P2U instance is glue logic between the Synopsys DesignWare Core PCIe IP's PIPE
+interface and PHY of HSIO/NVHS bricks. Each P2U instance represents one PCIe
+lane.
+
+Required properties:
+- compatible: For Tegra19x, must contain "nvidia,tegra194-phy-p2u".
+- reg: Should be the physical base address and length of each P2U
+       instance.
+- reg-names: Must include the entry "base".
+
+Required properties for PHY port node:
+- #phy-cells: Defined by generic PHY bindings.  Must be 0.
+
+Refer to phy/phy-bindings.txt for the generic PHY binding properties.
+
+Example:
+
+hsio-p2u {
+	compatible = "simple-bus";
+	#address-cells = <2>;
+	#size-cells = <2>;
+	ranges;
+	p2u_0: p2u@03e10000 {
+		compatible = "nvidia,tegra194-phy-p2u";
+		reg = <0x0 0x03e10000 0x0 0x00010000>;
+		reg-names = "base";
+
+		#phy-cells = <0>;
+	};
+}