[RFC,0/3] Unify CPU topology across ARM64 & RISC-V

Message ID 1541728209-3224-1-git-send-email-atish.patra@wdc.com

Message

Atish Patra Nov. 9, 2018, 1:50 a.m. UTC
The cpu-map DT entry in ARM64 can describe the CPU topology in a
much better way than the other existing approaches. RISC-V can
easily adopt this binding to represent its own CPU topology.
Thus, both the cpu-map DT binding and the topology parsing code
can be moved to a common location so that RISC-V, or any other
architecture, can leverage them.
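
For illustration, a minimal cpu-map fragment for a hypothetical
two-cluster, four-hart RISC-V system could look as follows (the node
names and hart layout are invented for this example; the complete
description lives in the moved binding document):

    cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu-map {
            cluster0 {
                core0 {
                    cpu = <&cpu0>;
                };
                core1 {
                    cpu = <&cpu1>;
                };
            };

            cluster1 {
                core0 {
                    cpu = <&cpu2>;
                };
                core1 {
                    cpu = <&cpu3>;
                };
            };
        };

        cpu0: cpu@0 {
            device_type = "cpu";
            compatible = "riscv";
            reg = <0>;
        };

        /* cpu1..cpu3 follow the same pattern with reg = <1>..<3> */
    };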

The relevant discussion regarding unifying cpu topology can be
found in [1].

arch_topology seems to be the perfect place for the common code.
I have not introduced any functional changes in the moved code.
The only downside of this approach is that the capacity code will
be executed for RISC-V as well, but it will exit immediately when
it cannot find the appropriate DT node. If that overhead is
considered too much, we can always compile out the capacity
related functions under a separate config option for the
architectures that do not support them.
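
For reference, the capacity parsing looks for the capacity-dmips-mhz
property in the cpu nodes, which RISC-V device trees do not carry
today. A purely illustrative ARM64-style fragment (values are
arbitrary and not taken from any real platform) shows what it expects:

    cpu0: cpu@0 {
        device_type = "cpu";
        compatible = "arm,cortex-a53";
        reg = <0x0>;
        enable-method = "psci";
        capacity-dmips-mhz = <578>;
    };

    cpu4: cpu@100 {
        device_type = "cpu";
        compatible = "arm,cortex-a72";
        reg = <0x100>;
        enable-method = "psci";
        capacity-dmips-mhz = <1024>;
    };

Since the property is absent on RISC-V device trees, the capacity
parsing simply bails out on the first lookup.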

The patches have been tested on RISC-V and compile-tested on
ARM64.

The socket changes[2] can be merged on top of this series or vice
versa.

[1] https://lkml.org/lkml/2018/11/6/19
[2] https://lkml.org/lkml/2018/11/7/918

Atish Patra (3):
  dt-binding: cpu-topology: Move cpu-map to a common binding.
  cpu-topology: Move cpu topology code to common code.
  RISC-V: Parse cpu topology during boot.

 Documentation/devicetree/bindings/arm/topology.txt | 475 -------------------
 .../devicetree/bindings/cpu/cpu-topology.txt       | 526 +++++++++++++++++++++
 arch/arm64/include/asm/topology.h                  |  23 +-
 arch/arm64/kernel/topology.c                       | 305 +-----------
 arch/riscv/Kconfig                                 |   1 +
 arch/riscv/kernel/smpboot.c                        |   6 +-
 drivers/base/arch_topology.c                       | 303 ++++++++++++
 include/linux/arch_topology.h                      |  23 +
 include/linux/topology.h                           |   1 +
 9 files changed, 864 insertions(+), 799 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/arm/topology.txt
 create mode 100644 Documentation/devicetree/bindings/cpu/cpu-topology.txt

Comments

Jeffrey Hugo Nov. 15, 2018, 6:31 p.m. UTC | #1
On 11/8/2018 6:50 PM, Atish Patra wrote:
> [...]

I was interested in testing these on QDF2400, an ARM64 platform, since 
this series touches core ARM64 code and I'd hate to see a regression. 
However, I can't figure out what baseline to use to apply these. 
Different patches cause different conflicts against a variety of baselines I 
attempted.

What are these intended to apply to?

Also, you might want to run them through checkpatch next time.  There 
are several whitespace errors.
Atish Patra Nov. 19, 2018, 5:46 p.m. UTC | #2
On 11/15/18 10:31 AM, Jeffrey Hugo wrote:
> On 11/8/2018 6:50 PM, Atish Patra wrote:
>> [...]
> 
> I was interested in testing these on QDF2400, an ARM64 platform, since
> this series touches core ARM64 code and I'd hate to see a regression.
> However, I can't figure out what baseline to use to apply these.
> Different patches cause different conflicts against a variety of baselines I
> attempted.
> 
> What are these intended to apply to?
> 
I had rebased them on top of 4.20-rc1.

> Also, you might want to run them through checkpatch next time.  There
> are several whitespace errors.
> 
Sorry, I missed a couple of them.
Thanks for trying to test the patches. I will send the next version as 
Rob suggested. Please test that one.


Regards,
Atish
Sudeep Holla Nov. 20, 2018, 11:11 a.m. UTC | #3
On Thu, Nov 15, 2018 at 11:31:33AM -0700, Jeffrey Hugo wrote:

[...]

>
> I was interested in testing these on QDF2400, an ARM64 platform, since this
> series touches core ARM64 code and I'd hate to see a regression. However, I
> can't figure out what baseline to use to apply these. Different patches
> cause different conflicts against a variety of baselines I attempted.
>

Good to know that we can test DT configuration on QDF2400. I always assumed
it's ACPI only.

> What are these intended to apply to?
>

The series alone may not get the package/socket ids correct on QDF2400.
I have not yet added support for that, as I wanted to get initial
feedback on the DT bindings first. The movement of the DT binding and
the corresponding code should not regress anything, and you should be
able to validate just that part.
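
To illustrate what the socket series adds on top of this, cpu-map
grows an extra socket level above the clusters. A rough sketch, with
the layout invented purely for illustration:

    cpu-map {
        socket0 {
            cluster0 {
                core0 {
                    cpu = <&cpu0>;
                };
                core1 {
                    cpu = <&cpu1>;
                };
            };
        };

        socket1 {
            cluster0 {
                core0 {
                    cpu = <&cpu2>;
                };
                core1 {
                    cpu = <&cpu3>;
                };
            };
        };
    };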

--
Regards,
Sudeep
Jeffrey Hugo Nov. 20, 2018, 3:28 p.m. UTC | #4
On 11/20/2018 4:11 AM, Sudeep Holla wrote:
> On Thu, Nov 15, 2018 at 11:31:33AM -0700, Jeffrey Hugo wrote:
> 
> [...]
> 
>>
>> I was interested in testing these on QDF2400, an ARM64 platform, since this
>> series touches core ARM64 code and I'd hate to see a regression. However, I
>> can't figure out what baseline to use to apply these. Different patches
>> cause different conflicts against a variety of baselines I attempted.
>>
> 
> Good to know that we can test DT configuration on QDF2400. I always assumed
> it's ACPI only.

It is ACPI-only in the production configuration.  I suppose we could 
hack things up to do a basic DT sanity check, but I expect it would be 
nasty and non-trivial.

> 
>> What are these intended to apply to?
>>
> 
> The series alone may not get the package/socket ids correct on QDF2400.
> I have not yet added support for that, as I wanted to get initial
> feedback on the DT bindings first. The movement of the DT binding and
> the corresponding code should not regress anything, and you should be
> able to validate just that part.
> 

At a cursory glance, it looks like some of the reorganized code would 
also be used in the ACPI path (things that are common between DT and 
ACPI).  I do not expect problems, but I still feel it's prudent to do a 
sanity check on actual hardware.