Patchwork Re: [PATCH 0/4] Improve -icount, fix it with iothread

Submitter Paolo Bonzini
Date Feb. 25, 2011, 7:33 p.m.
Message ID <4D680424.8050305@redhat.com>
Permalink /patch/84567/
State New
Headers show

Comments

Paolo Bonzini - Feb. 25, 2011, 7:33 p.m.
On 02/23/2011 12:39 PM, Jan Kiszka wrote:
> You should try to trace the event flow in qemu, either via strace, via
> the built-in tracer (which likely requires a bit more tracepoints), or
> via a system-level tracer (ftrace / kernelshark).

The apparent problem is that 25% of cycles are spent locking and
unlocking the mutex.  But the real problem is that 90% of the time is
spent on something other than executing guest code.

QEMU exits _a lot_ due to the vm_clock timers.  The deadlines are rarely more
than a few ms ahead, and at 1 MIPS that leaves room for executing a few
thousand instructions for each context switch.  The iothread overhead
is what makes the situation so bad, because it takes a lot more time to
execute those instructions.

We do so many (almost) useless passes through cpu_exec_all that even
microoptimization helps; for example, the patch below is enough to cut
the time spent in all_cpu_threads_idle from 9% to 4.5% (not unexpected: the
number of calls is halved).  But it shouldn't be that high anyway, so
I'm not proposing the patch formally.

Additionally, the fact that execution is 99.99% lockstep means you cannot
really overlap any work between the I/O and VCPU threads.

I did find a couple of inaccuracies in my patches, though; fixing them
already cuts 50% of the time.

> Did my patches contribute a bit to overhead reduction? They specifically
> target the costly vcpu/iothread switches in TCG mode (caused by TCGs
> excessive lock-holding times).

Yes, they cut 15%.

Paolo

Patch

--- a/cpus.c
+++ b/cpus.c
@@ -767,10 +767,6 @@  static void qemu_wait_io_event_common(CPUState *env)
 {
     CPUState *env;
 
-    while (all_cpu_threads_idle()) {
-        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
-    }
-
     qemu_mutex_unlock(&qemu_global_mutex);
 
     /*
@@ -1110,7 +1111,15 @@  bool cpu_exec_all(void)
         }
     }
     exit_request = 0;
+
+#ifdef CONFIG_IOTHREAD
+    while (all_cpu_threads_idle()) {
+       qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
+    }
+    return true;
+#else
     return !all_cpu_threads_idle();
+#endif
 }
 
 void set_numa_modes(void)