
[v2,0/9] Tegra234 Memory interconnect support

Message ID 20230220140559.28289-1-sumitg@nvidia.com

Message

Sumit Gupta Feb. 20, 2023, 2:05 p.m. UTC
This patch series adds memory interconnect support for the Tegra234 SoC.
It is used to dynamically scale the DRAM frequency according to the
bandwidth requests from different Memory Controller (MC) clients.
MC clients use the ICC framework's icc_set_bw() API to dynamically
request DRAM bandwidth (BW). Based on the path, a request is routed
from the MC to the EMC driver. The MC driver passes the request
information, i.e. the client ID, type, and frequency request, to the
BPMP-FW, which sets the final DRAM frequency considering all existing
requests.

MC and EMC are the ICC providers. The nodes in the path for a request are:
     Client[1-n] -> MC -> EMC -> EMEM/DRAM
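
For illustration, below is a minimal sketch of how an MC client could
vote for DRAM bandwidth through this path. The path name "dram" and the
helper function are hypothetical, not taken from this series:

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/interconnect.h>

  /* Hedged sketch: one bandwidth vote from a hypothetical MC client.
   * "dram" names an "interconnects" entry in the client's DT node.
   */
  static int client_vote_dram_bw(struct device *dev, u32 avg_kbps)
  {
          struct icc_path *path;
          int err;

          /* Resolves the Client -> MC -> EMC -> EMEM path from DT. */
          path = of_icc_get(dev, "dram");
          if (IS_ERR(path))
                  return PTR_ERR(path);

          /* avg_bw and peak_bw are expressed in kBps by the ICC core. */
          err = icc_set_bw(path, avg_kbps, 0);

          icc_put(path);  /* a real client would keep the path around */
          return err;
  }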

The patch series also adds interconnect support in the following client
drivers:
1) CPUFREQ driver, for scaling bandwidth with CPU frequency. For this,
   a per-cluster OPP table is added and used in the CPUFREQ driver to
   request the minimum BW corresponding to the given CPU frequency in
   the OPP table of the given cluster (see the sketch after this list).
2) PCIe driver, to request the BW required for the different operating
   modes.
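
As a rough illustration of the CPUFREQ flow, here is a minimal sketch
assuming the per-cluster OPP entries carry bandwidth values (e.g. via
an opp-peak-kBps property); this is not the exact code from patch 7:

  #include <linux/err.h>
  #include <linux/pm_opp.h>

  /* Hedged sketch: map a target CPU frequency to its OPP and apply it;
   * the OPP core then forwards the table's bandwidth value to the
   * interconnect framework.
   */
  static int set_cluster_bw_for_freq(struct device *cpu_dev,
                                     unsigned long freq_hz)
  {
          struct dev_pm_opp *opp;
          int err;

          opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
          if (IS_ERR(opp))
                  return PTR_ERR(opp);

          err = dev_pm_opp_set_opp(cpu_dev, opp);
          dev_pm_opp_put(opp);

          return err;
  }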

---
v1[1] -> v2:
- moved BW setting to tegra234_mc_icc_set() from EMC driver.
- moved sw clients to the 'tegra_mc_clients' table.
- point 'node->data' to the entry within 'tegra_mc_clients'.
- removed 'struct tegra_icc_node' and get client info using 'node->data'.
- changed error handling in and around tegra_emc_interconnect_init().
- moved 'tegra-icc.h' from 'include/soc/tegra' to 'include/linux'.
- added interconnect support to the PCIe driver in 'Patch 9'.
- merged 'Patch 9 & 10' from [1] to get num_channels and use it.
- merged 'Patch 2 & 3' from [1] to add ISO and NISO clients.
- added Krzysztof's 'Acked-by' from 'Patch 05/10' of [1].
- removed 'Patch 7' from [1] as it is merged now.

Sumit Gupta (9):
  firmware: tegra: add function to get BPMP data
  memory: tegra: add interconnect support for DRAM scaling in Tegra234
  memory: tegra: add mc clients for Tegra234
  memory: tegra: add software mc clients in Tegra234
  dt-bindings: tegra: add icc ids for dummy MC clients
  arm64: tegra: Add cpu OPP tables and interconnects property
  cpufreq: tegra194: add OPP support and set bandwidth
  memory: tegra: make cpu cluster bw request a multiple of mc channels
  PCI: tegra194: add interconnect support in Tegra234

 arch/arm64/boot/dts/nvidia/tegra234.dtsi   | 276 ++++++++++
 drivers/cpufreq/tegra194-cpufreq.c         | 152 +++++-
 drivers/firmware/tegra/bpmp.c              |  38 ++
 drivers/memory/tegra/mc.c                  |  24 +
 drivers/memory/tegra/mc.h                  |   1 +
 drivers/memory/tegra/tegra186-emc.c        | 117 ++++
 drivers/memory/tegra/tegra234.c            | 593 ++++++++++++++++++++-
 drivers/pci/controller/dwc/pcie-tegra194.c |  40 +-
 include/dt-bindings/memory/tegra234-mc.h   |   5 +
 include/linux/tegra-icc.h                  |  65 +++
 include/soc/tegra/bpmp.h                   |   5 +
 include/soc/tegra/mc.h                     |   6 +
 12 files changed, 1300 insertions(+), 22 deletions(-)
 create mode 100644 include/linux/tegra-icc.h

[1] https://lore.kernel.org/lkml/20221220160240.27494-1-sumitg@nvidia.com/T/

Comments

Krzysztof Kozlowski March 6, 2023, 3:05 p.m. UTC | #1
On 20/02/2023 15:05, Sumit Gupta wrote:
> [...]

No dependencies or ordering written, so I am free to take memory
controller bits, I assume.

Best regards,
Krzysztof
Krzysztof Kozlowski March 6, 2023, 3:07 p.m. UTC | #2
On 06/03/2023 16:05, Krzysztof Kozlowski wrote:
> [...]
> 
> No dependencies or ordering written, so I am free to take memory
> controller bits, I assume.

And not.. NAK, since you decided to ignore my comments. Really, we do
not have time for such useless ping pong.

Best regards,
Krzysztof
Sumit Gupta March 6, 2023, 8:19 p.m. UTC | #3
On 06/03/23 20:35, Krzysztof Kozlowski wrote:
> [...]
> 
> No dependencies or ordering written, so I am free to take memory
> controller bits, I assume.

Apologies for not mentioning the ordering in the cover letter. The
patches are divided into the groups below.
  Patch [9]: Memory Interconnect support in PCI (MC client)
  Patch [4-8]: Memory Interconnect support in CPUFREQ (MC client)
  Patch [1-3]: Memory Interconnect base support

Both sets of Memory Controller (MC) client patches depend on the
'Memory Interconnect base support' patches [1-3].

Thanks,
Sumit
Sumit Gupta March 6, 2023, 8:43 p.m. UTC | #4
On 06/03/23 20:37, Krzysztof Kozlowski wrote:
> On 06/03/2023 16:05, Krzysztof Kozlowski wrote:
>> [...]
>>
>> No dependencies or ordering written, so I am free to take memory
>> controller bits, I assume.
> 
> And not.. NAK, since you decided to ignore my comments. Really, we do
> not have time for such useless ping pong.

Hi Krzysztof,

I tried to address the comments given during the review of v1 in v2.
I am sorry if I missed any suggestion; please let me know and I will
incorporate it.

Thanks,
Sumit
Krzysztof Kozlowski March 7, 2023, 9:21 a.m. UTC | #5
On 06/03/2023 21:43, Sumit Gupta wrote:
> [...]
> 
> Hi Krzysztof,
> 
> I tried to address the comments given during the review of v1 in v2.
> I am sorry if I missed any suggestion; please let me know and I will
> incorporate it.
> 

I never got any feedback and my first glance suggested nothing changed.
Let me check again.

Best regards,
Krzysztof