
[qom-cpu,31/59] monitor: Simplify do_info_numa()

Message ID 1370805206-26574-32-git-send-email-afaerber@suse.de
State New

Commit Message

Andreas Färber June 9, 2013, 7:12 p.m. UTC
Use new qemu_for_each_cpu().

Signed-off-by: Andreas Färber <afaerber@suse.de>
---
 monitor.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

Comments

Markus Armbruster June 10, 2013, 7:20 a.m. UTC | #1
Andreas Färber <afaerber@suse.de> writes:

> Use new qemu_for_each_cpu().
>
> Signed-off-by: Andreas Färber <afaerber@suse.de>
> ---
>  monitor.c | 27 +++++++++++++++++++--------
>  1 file changed, 19 insertions(+), 8 deletions(-)
>
> diff --git a/monitor.c b/monitor.c
> index 9be515c..f37bf3d 100644
> --- a/monitor.c
> +++ b/monitor.c
> @@ -1803,21 +1803,32 @@ static void do_info_mtree(Monitor *mon, const QDict *qdict)
>      mtree_info((fprintf_function)monitor_printf, mon);
>  }
>  
> +typedef struct DoInfoNUMAData {
> +    Monitor *mon;
> +    int numa_node;
> +} DoInfoNUMAData;
> +
> +static void do_info_numa_one(CPUState *cpu, void *data)
> +{
> +    DoInfoNUMAData *s = data;
> +
> +    if (cpu->numa_node == s->numa_node) {
> +        monitor_printf(s->mon, " %d", cpu->cpu_index);
> +    }
> +}
> +
>  static void do_info_numa(Monitor *mon, const QDict *qdict)
>  {
>      int i;
> -    CPUArchState *env;
> -    CPUState *cpu;
> +    DoInfoNUMAData s = {
> +        .mon = mon,
> +    };
>  
>      monitor_printf(mon, "%d nodes\n", nb_numa_nodes);
>      for (i = 0; i < nb_numa_nodes; i++) {
>          monitor_printf(mon, "node %d cpus:", i);
> -        for (env = first_cpu; env != NULL; env = env->next_cpu) {
> -            cpu = ENV_GET_CPU(env);
> -            if (cpu->numa_node == i) {
> -                monitor_printf(mon, " %d", cpu->cpu_index);
> -            }
> -        }
> +        s.numa_node = i;
> +        qemu_for_each_cpu(do_info_numa_one, &s);
>          monitor_printf(mon, "\n");
>          monitor_printf(mon, "node %d size: %" PRId64 " MB\n", i,
>              node_mem[i] >> 20);

This again demonstrates the relative clunkiness of higher order
functions in C.  Control flow jumps back and forth in the source
(lambda, how I miss you), you need an extra type, and you need to go
around the type system.

In my experience, loops are a much more natural fit for C.

Would it be possible to have a function cpu_next(CPUState *)?  So you
can simply do:

        for (cpu = cpu_next(NULL); cpu; cpu = cpu_next(cpu)) {
            if (cpu->numa_node == i) {
                monitor_printf(mon, " %d", cpu->cpu_index);
            }
        }

Simple and type safe.

Precedent: bdrv_next().
Andreas Färber June 10, 2013, 9:23 p.m. UTC | #2
On 10.06.2013 09:20, Markus Armbruster wrote:
> Andreas Färber <afaerber@suse.de> writes:
> 
>> Use new qemu_for_each_cpu().
>>
>> Signed-off-by: Andreas Färber <afaerber@suse.de>
>> ---
>>  monitor.c | 27 +++++++++++++++++++--------
>>  1 file changed, 19 insertions(+), 8 deletions(-)
>>
>> diff --git a/monitor.c b/monitor.c
>> index 9be515c..f37bf3d 100644
>> --- a/monitor.c
>> +++ b/monitor.c
>> @@ -1803,21 +1803,32 @@ static void do_info_mtree(Monitor *mon, const QDict *qdict)
>>      mtree_info((fprintf_function)monitor_printf, mon);
>>  }
>>  
>> +typedef struct DoInfoNUMAData {
>> +    Monitor *mon;
>> +    int numa_node;
>> +} DoInfoNUMAData;
>> +
>> +static void do_info_numa_one(CPUState *cpu, void *data)
>> +{
>> +    DoInfoNUMAData *s = data;
>> +
>> +    if (cpu->numa_node == s->numa_node) {
>> +        monitor_printf(s->mon, " %d", cpu->cpu_index);
>> +    }
>> +}
>> +
>>  static void do_info_numa(Monitor *mon, const QDict *qdict)
>>  {
>>      int i;
>> -    CPUArchState *env;
>> -    CPUState *cpu;
>> +    DoInfoNUMAData s = {
>> +        .mon = mon,
>> +    };
>>  
>>      monitor_printf(mon, "%d nodes\n", nb_numa_nodes);
>>      for (i = 0; i < nb_numa_nodes; i++) {
>>          monitor_printf(mon, "node %d cpus:", i);
>> -        for (env = first_cpu; env != NULL; env = env->next_cpu) {
>> -            cpu = ENV_GET_CPU(env);
>> -            if (cpu->numa_node == i) {
>> -                monitor_printf(mon, " %d", cpu->cpu_index);
>> -            }
>> -        }
>> +        s.numa_node = i;
>> +        qemu_for_each_cpu(do_info_numa_one, &s);
>>          monitor_printf(mon, "\n");
>>          monitor_printf(mon, "node %d size: %" PRId64 " MB\n", i,
>>              node_mem[i] >> 20);
> 
> This again demonstrates the relative clunkiness of higher order
> functions in C.  Control flow jumps back and forth in the source
> (lambda, how I miss you), you need an extra type, and you need to go
> around the type system.
> 
> In my experience, loops are a much more natural fit for C.
> 
> Would it be possible to have a function cpu_next(CPUState *)?  So you
> can simply do:
> 
>         for (cpu = cpu_next(NULL); cpu; cpu = cpu_next(cpu)) {
>             if (cpu->numa_node == i) {
>                 monitor_printf(mon, " %d", cpu->cpu_index);
>             }
>         }
> 
> Simple and type safe.
> 
> Precedent: bdrv_next().

I can see your point, but I don't like the suggested alternative.

Are you generally opposed to qemu_for_each_cpu() for iteration?
http://git.qemu.org/?p=qemu.git;a=commit;h=d6b9e0d60cc511eca210834428bb74508cff3d33

Or is it just the need to pass in more than one value via struct, as
done in this patch among others, that you dislike? I.e., are you okay
with patch 03/59 where each loop iteration is independent?

Before this function came up, I had a macro in mind to abstract the
loop. Then however I thought such a loop would be duplicating the
qemu/queue.h macros we already have with their _SAFE_ variants, so I was
hoping to replace the CPU_COMMON next_cpu field with one of those macros
in CPUState.

Unfortunately at least two places did CPUArchState** pointer magic, so
that I couldn't just change first_cpu and leave next_cpu for later.

Now, I don't see a huge benefit of doing
for (cpu = cpu_next(NULL); cpu != NULL; cpu = cpu_next(cpu)) {...}
over
for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {...}
with the latter being much clearer to read IMO.

If having patch 57/59 larger is not a concern (which getting rid of
open-coded CPU loops tried to address), then I can just convert the
loops in place alongside first_cpu / next_cpu.
Then still the question remains of whether you'd want to inline and drop
qemu_for_each_cpu() afterwards, or whether we can establish some rules
of thumb when to use which?

Regards,
Andreas
Markus Armbruster June 11, 2013, 7:41 a.m. UTC | #3
Andreas Färber <afaerber@suse.de> writes:

> On 10.06.2013 09:20, Markus Armbruster wrote:
>> Andreas Färber <afaerber@suse.de> writes:
>> 
>>> Use new qemu_for_each_cpu().
>>>
>>> Signed-off-by: Andreas Färber <afaerber@suse.de>
>>> ---
>>>  monitor.c | 27 +++++++++++++++++++--------
>>>  1 file changed, 19 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/monitor.c b/monitor.c
>>> index 9be515c..f37bf3d 100644
>>> --- a/monitor.c
>>> +++ b/monitor.c
>>> @@ -1803,21 +1803,32 @@ static void do_info_mtree(Monitor *mon, const QDict *qdict)
>>>      mtree_info((fprintf_function)monitor_printf, mon);
>>>  }
>>>  
>>> +typedef struct DoInfoNUMAData {
>>> +    Monitor *mon;
>>> +    int numa_node;
>>> +} DoInfoNUMAData;
>>> +
>>> +static void do_info_numa_one(CPUState *cpu, void *data)
>>> +{
>>> +    DoInfoNUMAData *s = data;
>>> +
>>> +    if (cpu->numa_node == s->numa_node) {
>>> +        monitor_printf(s->mon, " %d", cpu->cpu_index);
>>> +    }
>>> +}
>>> +
>>>  static void do_info_numa(Monitor *mon, const QDict *qdict)
>>>  {
>>>      int i;
>>> -    CPUArchState *env;
>>> -    CPUState *cpu;
>>> +    DoInfoNUMAData s = {
>>> +        .mon = mon,
>>> +    };
>>>  
>>>      monitor_printf(mon, "%d nodes\n", nb_numa_nodes);
>>>      for (i = 0; i < nb_numa_nodes; i++) {
>>>          monitor_printf(mon, "node %d cpus:", i);
>>> -        for (env = first_cpu; env != NULL; env = env->next_cpu) {
>>> -            cpu = ENV_GET_CPU(env);
>>> -            if (cpu->numa_node == i) {
>>> -                monitor_printf(mon, " %d", cpu->cpu_index);
>>> -            }
>>> -        }
>>> +        s.numa_node = i;
>>> +        qemu_for_each_cpu(do_info_numa_one, &s);
>>>          monitor_printf(mon, "\n");
>>>          monitor_printf(mon, "node %d size: %" PRId64 " MB\n", i,
>>>              node_mem[i] >> 20);
>> 
>> This again demonstrates the relative clunkiness of higher order
>> functions in C.  Control flow jumps back and forth in the source
>> (lambda, how I miss you), you need an extra type, and you need to go
>> around the type system.
>> 
>> In my experience, loops are a much more natural fit for C.
>> 
>> Would it be possible to have a function cpu_next(CPUState *)?  So you
>> can simply do:
>> 
>>         for (cpu = cpu_next(NULL); cpu; cpu = cpu_next(cpu)) {
>>             if (cpu->numa_node == i) {
>>                 monitor_printf(mon, " %d", cpu->cpu_index);
>>             }
>>         }
>> 
>> Simple and type safe.
>> 
>> Precedent: bdrv_next().
>
> I can see your point, but I don't like the suggested alternative.
>
> Are you generally opposed to qemu_for_each_cpu() for iteration?
> http://git.qemu.org/?p=qemu.git;a=commit;h=d6b9e0d60cc511eca210834428bb74508cff3d33
>
> Or is it just the need to pass in more than one value via struct, as
> done in this patch among others, that you dislike? I.e., are you okay
> with patch 03/59 where each loop iteration is independent?

"Dislike" is more accurate than "oppose".

I dislike the notational overhead and weak typing of higher order
functions in C in general.  The additional type merely adds a little to
the notational overhead.

Higher order functions can still be sensible in C when the iterator has
complex state.  For instance, when the iterator is most easily expressed
as a recursive function.

> Before this function came up, I had a macro in mind to abstract the
> loop. Then however I thought such a loop would be duplicating the
> qemu/queue.h macros we already have with their _SAFE_ variants, so I was
> hoping to replace the CPU_COMMON next_cpu field with one of those macros
> in CPUState.
>
> Unfortunately at least two places did CPUArchState** pointer magic, so
> that I couldn't just change first_cpu and leave next_cpu for later.

I agree that we'd better not reinvent queue.h.

> Now, I don't see a huge benefit of doing
> for (cpu = cpu_next(NULL); cpu != NULL; cpu = cpu_next(cpu)) {...}
> over
> for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {...}
> with the latter being much clearer to read IMO.

You're right.  The iterator is too trivial to warrant encapsulation.
But making the encapsulation less trivial by using a higher order
function doesn't change that one bit :)

What can change it is a desire to make the type of first_cpu abstract,
because then you can't dereference cpu->next_cpu, or a desire to give
first_cpu internal linkage.  That's what motivated bdrv_next().

> If having patch 57/59 larger is not a concern (which getting rid of
> open-coded CPU loops tried to address), then I can just convert the
> loops in place alongside first_cpu / next_cpu.
> Then still the question remains of whether you'd want to inline and drop
> qemu_for_each_cpu() afterwards, or whether we can establish some rules
> of thumb when to use which?

To be honest, I see no value in qemu_for_each_cpu().

At a glance, PATCH 57 looks fairly mechanical to me.  Is it?  Would it
remain so if it refrains from converting simple loops to
qemu_for_each_cpu()?
Andreas Färber June 16, 2013, 4:41 p.m. UTC | #4
On 11.06.2013 09:41, Markus Armbruster wrote:
> Andreas Färber <afaerber@suse.de> writes:
>> If having patch 57/59 larger is not a concern (which getting rid of
>> open-coded CPU loops tried to address), then I can just convert the
>> loops in place alongside first_cpu / next_cpu.
>> Then still the question remains of whether you'd want to inline and drop
>> qemu_for_each_cpu() afterwards, or whether we can establish some rules
>> of thumb when to use which?
> 
> To be honest, I see no value in qemu_for_each_cpu().
> 
> At a glance, PATCH 57 looks fairly mechanical to me.  Is it?  Would it
> remain so if it refrains from converting simple loops to
> qemu_for_each_cpu()?

v2 does that now.

Andreas
Michael S. Tsirkin June 16, 2013, 8:31 p.m. UTC | #5
On Sun, Jun 16, 2013 at 06:41:24PM +0200, Andreas Färber wrote:
> On 11.06.2013 09:41, Markus Armbruster wrote:
> > Andreas Färber <afaerber@suse.de> writes:
> >> If having patch 57/59 larger is not a concern (which getting rid of
> >> open-coded CPU loops tried to address), then I can just convert the
> >> loops in place alongside first_cpu / next_cpu.
> >> Then still the question remains of whether you'd want to inline and drop
> >> qemu_for_each_cpu() afterwards, or whether we can establish some rules
> >> of thumb when to use which?
> > 
> > To be honest, I see no value in qemu_for_each_cpu().
> > 
> > At a glance, PATCH 57 looks fairly mechanical to me.  Is it?  Would it
> > remain so if it refrains from converting simple loops to
> > qemu_for_each_cpu()?
> 
> v2 does that now.
> 
> Andreas

I think it's a bad idea to have everyone looking at CPU state
also care about the data structure used to keep all CPUs.
If we switch to another data structure, what then? Rework it again?

And I think it *would* be a good idea to rework it -
there's a singly linked list there manually put together
using the next_cpu pointer.
We have macros for list manipulation; it should use those.


> -- 
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg

Patch

diff --git a/monitor.c b/monitor.c
index 9be515c..f37bf3d 100644
--- a/monitor.c
+++ b/monitor.c
@@ -1803,21 +1803,32 @@  static void do_info_mtree(Monitor *mon, const QDict *qdict)
     mtree_info((fprintf_function)monitor_printf, mon);
 }
 
+typedef struct DoInfoNUMAData {
+    Monitor *mon;
+    int numa_node;
+} DoInfoNUMAData;
+
+static void do_info_numa_one(CPUState *cpu, void *data)
+{
+    DoInfoNUMAData *s = data;
+
+    if (cpu->numa_node == s->numa_node) {
+        monitor_printf(s->mon, " %d", cpu->cpu_index);
+    }
+}
+
 static void do_info_numa(Monitor *mon, const QDict *qdict)
 {
     int i;
-    CPUArchState *env;
-    CPUState *cpu;
+    DoInfoNUMAData s = {
+        .mon = mon,
+    };
 
     monitor_printf(mon, "%d nodes\n", nb_numa_nodes);
     for (i = 0; i < nb_numa_nodes; i++) {
         monitor_printf(mon, "node %d cpus:", i);
-        for (env = first_cpu; env != NULL; env = env->next_cpu) {
-            cpu = ENV_GET_CPU(env);
-            if (cpu->numa_node == i) {
-                monitor_printf(mon, " %d", cpu->cpu_index);
-            }
-        }
+        s.numa_node = i;
+        qemu_for_each_cpu(do_info_numa_one, &s);
         monitor_printf(mon, "\n");
         monitor_printf(mon, "node %d size: %" PRId64 " MB\n", i,
             node_mem[i] >> 20);