
fix halt emulation with icount and CONFIG_IOTHREAD

Message ID 20110215175410.GA13487@amt.cnet
State New

Commit Message

Marcelo Tosatti Feb. 15, 2011, 5:54 p.m. UTC
Note: to be applied to uq/master.

In icount mode, halt emulation should take into account the nearest event when sleeping.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Reported-and-tested-by: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
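The idea behind the fix can be sketched as follows. Instead of a fixed sleep, the wait is bounded by the delta to the nearest pending timer event. This is an illustrative helper, not QEMU's actual qemu_calculate_timeout(); the function name, units, and clamp value are assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not QEMU code): bound an idle sleep by the delta to
 * the nearest pending timer deadline, clamped to a maximum wait.  A fixed
 * 1000 ms sleep can overshoot the next event in icount mode; computing the
 * delta wakes the CPU thread exactly when the event is due. */
static int64_t timeout_to_nearest_event(int64_t now_ms,
                                        int64_t nearest_deadline_ms,
                                        int64_t max_wait_ms)
{
    int64_t delta = nearest_deadline_ms - now_ms;
    if (delta < 0) {
        return 0;                /* event already due: do not sleep */
    }
    return delta < max_wait_ms ? delta : max_wait_ms;
}
```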

Comments

Jan Kiszka Feb. 15, 2011, 6:58 p.m. UTC | #1
On 2011-02-15 18:54, Marcelo Tosatti wrote:
> 
> Note: to be applied to uq/master.
> 
> In icount mode, halt emulation should take into account the nearest event when sleeping.
> 
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> Reported-and-tested-by: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
> 
> diff --git a/cpus.c b/cpus.c
> index 468544c..21c3eba 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -770,7 +770,7 @@ static void qemu_tcg_wait_io_event(void)
>      CPUState *env;
>  
>      while (all_cpu_threads_idle()) {
> -        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
> +        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, qemu_calculate_timeout());

checkpatch.pl would complain here.

More important: Paolo was proposing patches to eliminate all those fishy
cond_wait timeouts. That's probably the better way to go. The timeouts
only paper over missing signaling.

>      }
>  
>      qemu_mutex_unlock(&qemu_global_mutex);
> diff --git a/vl.c b/vl.c
> index b436952..8ba7e9d 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -1335,7 +1335,7 @@ void main_loop_wait(int nonblocking)
>      if (nonblocking)
>          timeout = 0;
>      else {
> -        timeout = qemu_calculate_timeout();
> +        timeout = 1000;
>          qemu_bh_update_timeout(&timeout);
>      }
>  

Isn't this path also relevant for !IOTHREAD? What's the impact of this
change for that configuration?

Jan
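The signaling pattern Jan alludes to can be sketched like this: if every code path that makes work available for an idle CPU also signals the condition variable, the waiter needs no timeout at all. All names below are illustrative, not QEMU's actual implementation:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Sketch of timeout-free halt waiting (illustrative names).  Correctness
 * relies on kick_cpu() being called from every path that creates work;
 * a timed wait would only mask a missing kick_cpu() call. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t halt_cond = PTHREAD_COND_INITIALIZER;
static bool cpu_has_work;

static void wait_for_work(void)
{
    pthread_mutex_lock(&lock);
    while (!cpu_has_work) {
        /* No timeout: the waiter sleeps until explicitly signaled. */
        pthread_cond_wait(&halt_cond, &lock);
    }
    pthread_mutex_unlock(&lock);
}

static void kick_cpu(void)
{
    pthread_mutex_lock(&lock);
    cpu_has_work = true;
    pthread_cond_broadcast(&halt_cond);
    pthread_mutex_unlock(&lock);
}
```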
Marcelo Tosatti Feb. 15, 2011, 8:04 p.m. UTC | #2
On Tue, Feb 15, 2011 at 07:58:53PM +0100, Jan Kiszka wrote:
> On 2011-02-15 18:54, Marcelo Tosatti wrote:
> > 
> > Note: to be applied to uq/master.
> > 
> > In icount mode, halt emulation should take into account the nearest event when sleeping.
> > 
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> > Reported-and-tested-by: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
> > 
> > diff --git a/cpus.c b/cpus.c
> > index 468544c..21c3eba 100644
> > --- a/cpus.c
> > +++ b/cpus.c
> > @@ -770,7 +770,7 @@ static void qemu_tcg_wait_io_event(void)
> >      CPUState *env;
> >  
> >      while (all_cpu_threads_idle()) {
> > -        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
> > +        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, qemu_calculate_timeout());
> 
> checkpatch.pl would complain here.
> 
> More important: Paolo was proposing patches to eliminate all those fishy
> cond_wait timeouts. That's probably the better way to go. The timeouts
> only paper over missing signaling.
>
> >      }
> >  
> >      qemu_mutex_unlock(&qemu_global_mutex);
> > diff --git a/vl.c b/vl.c
> > index b436952..8ba7e9d 100644
> > --- a/vl.c
> > +++ b/vl.c
> > @@ -1335,7 +1335,7 @@ void main_loop_wait(int nonblocking)
> >      if (nonblocking)
> >          timeout = 0;
> >      else {
> > -        timeout = qemu_calculate_timeout();
> > +        timeout = 1000;
> >          qemu_bh_update_timeout(&timeout);
> >      }
> >  
> 
> Isn't this path also relevant for !IOTHREAD? What's the impact of this
> change for that configuration?

Timeout changes from 5s to 1s.
Jan Kiszka Feb. 15, 2011, 8:33 p.m. UTC | #3
On 2011-02-15 21:04, Marcelo Tosatti wrote:
> On Tue, Feb 15, 2011 at 07:58:53PM +0100, Jan Kiszka wrote:
>> On 2011-02-15 18:54, Marcelo Tosatti wrote:
>>>
>>> Note: to be applied to uq/master.
>>>
>>> In icount mode, halt emulation should take into account the nearest event when sleeping.
>>>
>>> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>>> Reported-and-tested-by: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
>>>
>>> diff --git a/cpus.c b/cpus.c
>>> index 468544c..21c3eba 100644
>>> --- a/cpus.c
>>> +++ b/cpus.c
>>> @@ -770,7 +770,7 @@ static void qemu_tcg_wait_io_event(void)
>>>      CPUState *env;
>>>  
>>>      while (all_cpu_threads_idle()) {
>>> -        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
>>> +        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, qemu_calculate_timeout());
>>
>> checkpatch.pl would complain here.
>>
>> More important: Paolo was proposing patches to eliminate all those fishy
>> cond_wait timeouts. That's probably the better way to go. The timeouts
>> only paper over missing signaling.
>>
>>>      }
>>>  
>>>      qemu_mutex_unlock(&qemu_global_mutex);
>>> diff --git a/vl.c b/vl.c
>>> index b436952..8ba7e9d 100644
>>> --- a/vl.c
>>> +++ b/vl.c
>>> @@ -1335,7 +1335,7 @@ void main_loop_wait(int nonblocking)
>>>      if (nonblocking)
>>>          timeout = 0;
>>>      else {
>>> -        timeout = qemu_calculate_timeout();
>>> +        timeout = 1000;
>>>          qemu_bh_update_timeout(&timeout);
>>>      }
>>>  
>>
>> Isn't this path also relevant for !IOTHREAD? What's the impact of this
>> change for that configuration?
> 
> Timeout changes from 5s to 1s.
> 

... if (!vm_running).

This patch does have side effects on !IOTHREAD. I doubt the above hunk
can be correct.

What kind of timeout is qemu_calculate_timeout returning?

Jan
Marcelo Tosatti Feb. 15, 2011, 8:55 p.m. UTC | #4
On Tue, Feb 15, 2011 at 09:33:00PM +0100, Jan Kiszka wrote:
> On 2011-02-15 21:04, Marcelo Tosatti wrote:
> > On Tue, Feb 15, 2011 at 07:58:53PM +0100, Jan Kiszka wrote:
> >> On 2011-02-15 18:54, Marcelo Tosatti wrote:
> >>>
> >>> Note: to be applied to uq/master.
> >>>
> >>> In icount mode, halt emulation should take into account the nearest event when sleeping.
> >>>
> >>> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> >>> Reported-and-tested-by: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
> >>>
> >>> diff --git a/cpus.c b/cpus.c
> >>> index 468544c..21c3eba 100644
> >>> --- a/cpus.c
> >>> +++ b/cpus.c
> >>> @@ -770,7 +770,7 @@ static void qemu_tcg_wait_io_event(void)
> >>>      CPUState *env;
> >>>  
> >>>      while (all_cpu_threads_idle()) {
> >>> -        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
> >>> +        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, qemu_calculate_timeout());
> >>
> >> checkpatch.pl would complain here.
> >>
> >> More important: Paolo was proposing patches to eliminate all those fishy
> >> cond_wait timeouts. That's probably the better way to go. The timeouts
> >> only paper over missing signaling.

With icount, VM_TIMER timeouts are converted to realtime. This is what I
understand qemu_calculate_timeout does.

Otherwise, yes, timeouts are papering over missing signaling.
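The conversion described here can be sketched roughly as follows. With -icount N, the virtual clock advances 2^N ns per executed instruction, so virtual-clock deadlines and instruction budgets are related by a shift. The function names and the simplification are assumptions for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of the icount time scaling (not QEMU's actual code):
 * with a fixed icount shift N, each executed instruction accounts for
 * 2^N ns of virtual time, so deadlines convert between the two domains
 * by shifting. */
static int64_t icount_to_ns(int64_t insns, int icount_shift)
{
    return insns << icount_shift;    /* instructions -> virtual ns */
}

static int64_t ns_to_icount(int64_t ns, int icount_shift)
{
    return ns >> icount_shift;       /* virtual ns -> instructions */
}
```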

> >>
> >>>      }
> >>>  
> >>>      qemu_mutex_unlock(&qemu_global_mutex);
> >>> diff --git a/vl.c b/vl.c
> >>> index b436952..8ba7e9d 100644
> >>> --- a/vl.c
> >>> +++ b/vl.c
> >>> @@ -1335,7 +1335,7 @@ void main_loop_wait(int nonblocking)
> >>>      if (nonblocking)
> >>>          timeout = 0;
> >>>      else {
> >>> -        timeout = qemu_calculate_timeout();
> >>> +        timeout = 1000;
> >>>          qemu_bh_update_timeout(&timeout);
> >>>      }
> >>>  
> >>
> >> Isn't this path also relevant for !IOTHREAD? What's the impact of this
> >> change for that configuration?
> > 
> > Timeout changes from 5s to 1s.
> > 
> 
> ... if (!vm_running).
> 
> This patch does have side effects on !IOTHREAD. I doubt the above hunk
> can be correct.
> 
> What kind of timeout is qemu_calculate_timeout returning?

You're right.

Patch

diff --git a/cpus.c b/cpus.c
index 468544c..21c3eba 100644
--- a/cpus.c
+++ b/cpus.c
@@ -770,7 +770,7 @@ static void qemu_tcg_wait_io_event(void)
     CPUState *env;
 
     while (all_cpu_threads_idle()) {
-        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
+        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, qemu_calculate_timeout());
     }
 
     qemu_mutex_unlock(&qemu_global_mutex);
diff --git a/vl.c b/vl.c
index b436952..8ba7e9d 100644
--- a/vl.c
+++ b/vl.c
@@ -1335,7 +1335,7 @@ void main_loop_wait(int nonblocking)
     if (nonblocking)
         timeout = 0;
     else {
-        timeout = qemu_calculate_timeout();
+        timeout = 1000;
         qemu_bh_update_timeout(&timeout);
     }