
[v6,3/5] hw/arm/virt: Add cpu-map to device tree

Message ID 20210824122016.144364-4-wangyanan55@huawei.com
State New
Series hw/arm/virt: Introduce cpu topology support

Commit Message

wangyanan (Y) Aug. 24, 2021, 12:20 p.m. UTC
From: Andrew Jones <drjones@redhat.com>

Support device tree CPU topology descriptions.

In accordance with the Devicetree Specification, the Linux doc
"arm/cpus.yaml" requires that the cpus and cpu nodes be present
in the DT. We already meet that requirement by generating
/cpus/cpu@* nodes for the members within ms->smp.cpus. Accordingly,
we should also create subnodes in cpu-map for the present CPUs,
each of which refers to a unique cpu node.
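
For illustration only (the compatible string depends on the CPU
model and the phandle value is allocated at run time by
qemu_fdt_alloc_phandle()), a present cpu node with the phandle
added by this patch looks roughly like:

    cpu@0 {
        device_type = "cpu";
        compatible = "arm,cortex-a57";
        reg = <0>;
        phandle = <0x8001>;
    };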

The Linux doc "cpu/cpu-topology.txt" states that the hierarchy
of CPUs in an SMP system is defined through four entities:
socket, cluster, core and thread. It also requires that a socket
node's child nodes be one or more cluster nodes. Given that we
currently only have socket/core/thread information, we assume a
single cluster child node in each socket node when creating
cpu-map.
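
As a sketch of the expected result, assuming for example
"-smp 4,sockets=1,cores=2,threads=2" (labels such as &cpu0 stand
in for the phandles of the corresponding /cpus/cpu@* nodes), the
generated cpu-map is shaped like:

    cpu-map {
        socket0 {
            cluster0 {
                core0 {
                    thread0 { cpu = <&cpu0>; };
                    thread1 { cpu = <&cpu1>; };
                };
                core1 {
                    thread0 { cpu = <&cpu2>; };
                    thread1 { cpu = <&cpu3>; };
                };
            };
        };
    };

The indices follow the arithmetic in the patch: socket = cpu /
(cores * threads), core = (cpu / threads) % cores, thread =
cpu % threads. With threads == 1 the thread level is omitted and
each core node carries the cpu reference directly.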

Signed-off-by: Andrew Jones <drjones@redhat.com>
Co-developed-by: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
---
 hw/arm/virt.c | 70 +++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 60 insertions(+), 10 deletions(-)

Comments

wangyanan (Y) Sept. 2, 2021, 11:20 a.m. UTC | #1
Hello Peter,

Is the solution in this patch OK for you? I would appreciate some
feedback. :)

Thanks,
Yanan

Patch

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 82f2eba6bd..bdcf7435f0 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -350,20 +350,21 @@  static void fdt_add_cpu_nodes(const VirtMachineState *vms)
     int cpu;
     int addr_cells = 1;
     const MachineState *ms = MACHINE(vms);
+    const VirtMachineClass *vmc = VIRT_MACHINE_GET_CLASS(vms);
     int smp_cpus = ms->smp.cpus;
 
     /*
-     * From Documentation/devicetree/bindings/arm/cpus.txt
-     *  On ARM v8 64-bit systems value should be set to 2,
-     *  that corresponds to the MPIDR_EL1 register size.
-     *  If MPIDR_EL1[63:32] value is equal to 0 on all CPUs
-     *  in the system, #address-cells can be set to 1, since
-     *  MPIDR_EL1[63:32] bits are not used for CPUs
-     *  identification.
+     * See Linux Documentation/devicetree/bindings/arm/cpus.yaml
+     * On ARM v8 64-bit systems value should be set to 2,
+     * that corresponds to the MPIDR_EL1 register size.
+     * If MPIDR_EL1[63:32] value is equal to 0 on all CPUs
+     * in the system, #address-cells can be set to 1, since
+     * MPIDR_EL1[63:32] bits are not used for CPUs
+     * identification.
      *
-     *  Here we actually don't know whether our system is 32- or 64-bit one.
-     *  The simplest way to go is to examine affinity IDs of all our CPUs. If
-     *  at least one of them has Aff3 populated, we set #address-cells to 2.
+     * Here we actually don't know whether our system is 32- or 64-bit one.
+     * The simplest way to go is to examine affinity IDs of all our CPUs. If
+     * at least one of them has Aff3 populated, we set #address-cells to 2.
      */
     for (cpu = 0; cpu < smp_cpus; cpu++) {
         ARMCPU *armcpu = ARM_CPU(qemu_get_cpu(cpu));
@@ -406,8 +407,57 @@  static void fdt_add_cpu_nodes(const VirtMachineState *vms)
                 ms->possible_cpus->cpus[cs->cpu_index].props.node_id);
         }
 
+        if (!vmc->no_cpu_topology) {
+            qemu_fdt_setprop_cell(ms->fdt, nodename, "phandle",
+                                  qemu_fdt_alloc_phandle(ms->fdt));
+        }
+
         g_free(nodename);
     }
+
+    if (!vmc->no_cpu_topology) {
+        /*
+         * Add vCPU topology description through fdt node cpu-map.
+         *
+         * See Linux Documentation/devicetree/bindings/cpu/cpu-topology.txt
+         * In a SMP system, the hierarchy of CPUs can be defined through
+         * four entities that are used to describe the layout of CPUs in
+         * the system: socket/cluster/core/thread.
+         *
+         * A socket node represents the boundary of system physical package
+         * and its child nodes must be one or more cluster nodes. A system
+         * can contain several layers of clustering within a single physical
+         * package and cluster nodes can be contained in parent cluster nodes.
+         *
+         * Given that cluster is not yet supported in the vCPU topology,
+         * we currently generate one cluster node within each socket node
+         * by default.
+         */
+        qemu_fdt_add_subnode(ms->fdt, "/cpus/cpu-map");
+
+        for (cpu = smp_cpus - 1; cpu >= 0; cpu--) {
+            char *cpu_path = g_strdup_printf("/cpus/cpu@%d", cpu);
+            char *map_path;
+
+            if (ms->smp.threads > 1) {
+                map_path = g_strdup_printf(
+                    "/cpus/cpu-map/socket%d/cluster0/core%d/thread%d",
+                    cpu / (ms->smp.cores * ms->smp.threads),
+                    (cpu / ms->smp.threads) % ms->smp.cores,
+                    cpu % ms->smp.threads);
+            } else {
+                map_path = g_strdup_printf(
+                    "/cpus/cpu-map/socket%d/cluster0/core%d",
+                    cpu / ms->smp.cores,
+                    cpu % ms->smp.cores);
+            }
+            qemu_fdt_add_path(ms->fdt, map_path);
+            qemu_fdt_setprop_phandle(ms->fdt, map_path, "cpu", cpu_path);
+
+            g_free(map_path);
+            g_free(cpu_path);
+        }
+    }
 }
 
 static void fdt_add_its_gic_node(VirtMachineState *vms)