Patchwork [3/3] really fix -icount in the iothread case

Submitter Paolo Bonzini
Date March 5, 2011, 5:14 p.m.
Message ID <1299345255-577-4-git-send-email-pbonzini@redhat.com>
Permalink /patch/85537/
State New

The correct fix for -icount is obvious once you consider the biggest
difference between iothread and non-iothread modes.  In the traditional
model, CPUs run _before_ the iothread calls select.  In the iothread
model, CPUs run while the iothread isn't holding the mutex, i.e. _during_
those same calls.
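
The contrast between the two models can be sketched in pseudocode
(illustrative structure only; the function names loosely mirror QEMU's
but this is not its actual main loop code):

```c
/* Pseudocode, not actual QEMU code. */

/* Traditional model: CPUs run before the iothread calls select(). */
while (running) {
    cpu_exec_all();               /* guest code runs here, mutex held */
    main_loop_wait(timeout);      /* select() with a computed timeout */
}

/* iothread model: CPUs run in their own thread, precisely while the
 * iothread has dropped the global mutex and sits inside select(). */
while (running) {
    qemu_mutex_unlock_iothread();
    main_loop_wait(infinite);     /* block until OS or CPU thread wakes us */
    qemu_mutex_lock_iothread();
}
```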

So, the iothread should always block as long as possible to let
the CPUs run smoothly---the timeout might as well be infinite---and
either the OS or the CPU thread itself will let the iothread know
when something happens.  At this point, the iothread will wake up and
interrupt execution of the emulated CPU(s).

This is exactly the approach that this patch takes.  When a vm_clock
deadline is met, tcg_cpu_exec will stop executing instructions
(count == 0) so that cpu_exec_all will return very soon.  The TCG
thread should then check if this is the case and, if so, wake up
the iothread to process the timers.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 cpus.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/cpus.c b/cpus.c
index 0f33945..a953bac 100644
--- a/cpus.c
+++ b/cpus.c
@@ -861,6 +861,9 @@  static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     while (1) {
         cpu_exec_all();
+        if (use_icount && qemu_next_deadline() <= 0) {
+            qemu_notify_event();
+        }
         qemu_tcg_wait_io_event();
     }