Patchwork [v3,1/4] really fix -icount in the iothread case

Submitter Paolo Bonzini
Date April 12, 2011, 8:44 a.m.
Message ID <1302597850-10708-2-git-send-email-pbonzini@redhat.com>
Permalink /patch/90747/
State New

Comments

Paolo Bonzini - April 12, 2011, 8:44 a.m.
The correct fix for -icount is to consider the biggest difference
between iothread and non-iothread modes.  In the traditional model,
CPUs run _before_ the iothread calls select (or WaitForMultipleObjects
for Win32).  In the iothread model, CPUs run while the iothread
isn't holding the mutex, i.e. _during_ those same calls.

So, the iothread should always block as long as possible to let
the CPUs run smoothly---the timeout might as well be infinite---and
either the OS or the CPU thread itself will let the iothread know
when something happens.  At this point, the iothread wakes up and
interrupts the CPU.

This is exactly the approach that this patch takes: when cpu_exec_all
returns in -icount mode, and it is because a vm_clock deadline has
been met, it wakes up the iothread to process the timers.  This is
really the "bulk" of fixing icount.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 cpus.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

Patch

diff --git a/cpus.c b/cpus.c
index 41bec7c..c72fbb7 100644
--- a/cpus.c
+++ b/cpus.c
@@ -830,6 +830,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     while (1) {
         cpu_exec_all();
+        if (use_icount && qemu_next_deadline() <= 0) {
+            qemu_notify_event();
+        }
         qemu_tcg_wait_io_event();
     }