qemu_rearm_alarm_timer: do not call rearm if the next deadline is INT64_MAX

Message ID alpine.DEB.2.00.1205291426350.26786@kaball-desktop
State New

Commit Message

Stefano Stabellini May 29, 2012, 1:35 p.m. UTC
qemu_rearm_alarm_timer partially duplicates the code in
qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
If it calls qemu_next_alarm_deadline, it always rearms the timer even if
the next deadline is INT64_MAX.

This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
the duplicated code, always calling qemu_next_alarm_deadline and only
rearming the timer if the deadline is less than INT64_MAX.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
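[Editorial note: the patched control flow can be sketched as a small standalone C program. The struct, callback, and deadline function below are simplified stand-ins for the real qemu-timer.c definitions, not QEMU's actual types; only the guard logic mirrors the patch.]

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the qemu-timer.c types (not QEMU's real definitions). */
struct qemu_alarm_timer;
typedef void (*rearm_fn)(struct qemu_alarm_timer *t, int64_t delta_ns);

struct qemu_alarm_timer {
    rearm_fn rearm;
    int64_t last_armed_ns;      /* instrumentation for this sketch only */
};

/* In QEMU, qemu_next_alarm_deadline() returns INT64_MAX when no timer on
 * any clock is active; modeled here with a settable variable. */
static int64_t next_deadline_ns = INT64_MAX;

static int64_t qemu_next_alarm_deadline(void)
{
    return next_deadline_ns;
}

/* The patched logic: always query the deadline, rearm only if it is finite. */
static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
{
    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
    if (nearest_delta_ns < INT64_MAX) {
        t->rearm(t, nearest_delta_ns);
    }
}

/* Test callback: records the delta it was armed with. */
static void record_rearm(struct qemu_alarm_timer *t, int64_t delta_ns)
{
    t->last_armed_ns = delta_ns;
}
```

With no active timers the rearm callback is never invoked, which is exactly the case that previously armed the timer with INT64_MAX.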

Comments

Stefan Weil May 29, 2012, 5:09 p.m. UTC | #1
Am 29.05.2012 15:35, schrieb Stefano Stabellini:
> qemu_rearm_alarm_timer partially duplicates the code in
> qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
> If it calls qemu_next_alarm_deadline, it always rearms the timer even if
> the next deadline is INT64_MAX.
>
> This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
> the duplicated code, always calling qemu_next_alarm_deadline and only
> rearming the timer if the deadline is less than INT64_MAX.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> diff --git a/qemu-timer.c b/qemu-timer.c
> index de98977..81ff824 100644
> --- a/qemu-timer.c
> +++ b/qemu-timer.c
> @@ -112,14 +112,10 @@ static int64_t qemu_next_alarm_deadline(void)
>
>   static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
>   {
> -    int64_t nearest_delta_ns;
> -    if (!rt_clock->active_timers &&
> -        !vm_clock->active_timers &&
> -        !host_clock->active_timers) {
> -        return;
> +    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
> +    if (nearest_delta_ns < INT64_MAX) {
> +        t->rearm(t, nearest_delta_ns);
>       }
> -    nearest_delta_ns = qemu_next_alarm_deadline();
> -    t->rearm(t, nearest_delta_ns);
>   }
>
>   /* TODO: MIN_TIMER_REARM_NS should be optimized */

Reviewed-by: Stefan Weil <sw@weilnetz.de>

This patch clearly improves the current code and fixes
an abort on Darwin (reported by Andreas Färber) and maybe
other hosts. Therefore I changed the subject and suggest
to consider this patch for QEMU 1.1.

There remain issues which can be fixed after 1.1:

nearest_delta_ns also gets negative values (rtdelta < 0,
maybe because the expiration time already expired).
I did not check whether all different timers handle
a negative time gracefully.

nearest_delta_ns should also be limited to INT32_MAX
seconds, because some timers assign the seconds
to a long (see setitimer) or UINT value. On 32 bit
Linux and on all variants of Windows, long is less
or equal INT32_MAX. If we limit nearest_delta_ns
to 1000000 seconds (or some other limit which allows
ULONG milliseconds), we could further simplify the code
because most timers would no longer have to test the
upper limit.
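[Editorial note: a clamp along the lines Stefan proposes could look like the sketch below. The function name is illustrative, the 1000000-second cap is taken from the discussion above, and whether Darwin needs an even lower limit is unverified here.]

```c
#include <assert.h>
#include <stdint.h>

/* Proposed cap: 1000000 seconds (~11.5 days), expressed in nanoseconds.
 * 1000000 s = 1e9 ms, which still fits a 32-bit unsigned long of
 * milliseconds (limit 4294967295 ms), so per-timer upper-bound checks
 * would become unnecessary. */
#define MAX_DELTA_NS (1000000LL * 1000000000LL)

static int64_t clamp_delta_ns(int64_t delta_ns)
{
    if (delta_ns < 0) {
        return 0;               /* deadline already expired: fire at once */
    }
    if (delta_ns > MAX_DELTA_NS) {
        return MAX_DELTA_NS;    /* cap so backends need no own upper limit */
    }
    return delta_ns;
}
```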

Regards,
Stefan W.
Stefano Stabellini May 29, 2012, 5:23 p.m. UTC | #2
On Tue, 29 May 2012, Stefan Weil wrote:
> Am 29.05.2012 15:35, schrieb Stefano Stabellini:
> > qemu_rearm_alarm_timer partially duplicates the code in
> > qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
> > If it calls qemu_next_alarm_deadline, it always rearms the timer even if
> > the next deadline is INT64_MAX.
> >
> > This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
> > the duplicated code, always calling qemu_next_alarm_deadline and only
> > rearming the timer if the deadline is less than INT64_MAX.
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >
> > diff --git a/qemu-timer.c b/qemu-timer.c
> > index de98977..81ff824 100644
> > --- a/qemu-timer.c
> > +++ b/qemu-timer.c
> > @@ -112,14 +112,10 @@ static int64_t qemu_next_alarm_deadline(void)
> >
> >   static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
> >   {
> > -    int64_t nearest_delta_ns;
> > -    if (!rt_clock->active_timers &&
> > -        !vm_clock->active_timers &&
> > -        !host_clock->active_timers) {
> > -        return;
> > +    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
> > +    if (nearest_delta_ns < INT64_MAX) {
> > +        t->rearm(t, nearest_delta_ns);
> >       }
> > -    nearest_delta_ns = qemu_next_alarm_deadline();
> > -    t->rearm(t, nearest_delta_ns);
> >   }
> >
> >   /* TODO: MIN_TIMER_REARM_NS should be optimized */
> 
> Reviewed-by: Stefan Weil <sw@weilnetz.de>

thanks

> This patch clearly improves the current code and fixes
> an abort on Darwin (reported by Andreas Färber) and maybe
> other hosts. Therefore I changed the subject and suggest
> to consider this patch for QEMU 1.1.
> 
> There remain issues which can be fixed after 1.1:
> 
> nearest_delta_ns also gets negative values (rtdelta < 0,
> maybe because the expiration time already expired).
> I did not check whether all different timers handle
> a negative time gracefully.
> 
> nearest_delta_ns should also be limited to INT32_MAX
> seconds, because some timers assign the seconds
> to a long (see setitimer) or UINT value. On 32 bit
> Linux and on all variants of Windows, long is less
> or equal INT32_MAX. If we limit nearest_delta_ns
> to 1000000 seconds (or some other limit which allows
> ULONG milliseconds), we could further simplify the code
> because most timers would no longer have to test the
> upper limit.

If that's the issue we could limit nearest_delta_ns to LONG_MAX.

However I got the feeling that Darwin has an undocumented limit
for tv_sec, lower than INT32_MAX.
Stefan Weil May 29, 2012, 5:46 p.m. UTC | #3
Am 29.05.2012 19:23, schrieb Stefano Stabellini:
> On Tue, 29 May 2012, Stefan Weil wrote:
>> Am 29.05.2012 15:35, schrieb Stefano Stabellini:
>>> qemu_rearm_alarm_timer partially duplicates the code in
>>> qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
>>> If it calls qemu_next_alarm_deadline, it always rearms the timer even if
>>> the next deadline is INT64_MAX.
>>>
>>> This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
>>> the duplicated code, always calling qemu_next_alarm_deadline and only
>>> rearming the timer if the deadline is less than INT64_MAX.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>>
>>> diff --git a/qemu-timer.c b/qemu-timer.c
>>> index de98977..81ff824 100644
>>> --- a/qemu-timer.c
>>> +++ b/qemu-timer.c
>>> @@ -112,14 +112,10 @@ static int64_t qemu_next_alarm_deadline(void)
>>>
>>>    static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
>>>    {
>>> -    int64_t nearest_delta_ns;
>>> -    if (!rt_clock->active_timers &&
>>> -        !vm_clock->active_timers &&
>>> -        !host_clock->active_timers) {
>>> -        return;
>>> +    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
>>> +    if (nearest_delta_ns < INT64_MAX) {
>>> +        t->rearm(t, nearest_delta_ns);
>>>        }
>>> -    nearest_delta_ns = qemu_next_alarm_deadline();
>>> -    t->rearm(t, nearest_delta_ns);
>>>    }
>>>
>>>    /* TODO: MIN_TIMER_REARM_NS should be optimized */
>>
>> Reviewed-by: Stefan Weil <sw@weilnetz.de>
>
> thanks
>
>> This patch clearly improves the current code and fixes
>> an abort on Darwin (reported by Andreas Färber) and maybe
>> other hosts. Therefore I changed the subject and suggest
>> to consider this patch for QEMU 1.1.
>>
>> There remain issues which can be fixed after 1.1:
>>
>> nearest_delta_ns also gets negative values (rtdelta < 0,
>> maybe because the expiration time already expired).
>> I did not check whether all different timers handle
>> a negative time gracefully.
>>
>> nearest_delta_ns should also be limited to INT32_MAX
>> seconds, because some timers assign the seconds
>> to a long (see setitimer) or UINT value. On 32 bit
>> Linux and on all variants of Windows, long is less
>> or equal INT32_MAX. If we limit nearest_delta_ns
>> to 1000000 seconds (or some other limit which allows
>> ULONG milliseconds), we could further simplify the code
>> because most timers would no longer have to test the
>> upper limit.
>
> If that's the issue we could limit nearest_delta_ns to LONG_MAX.
>
> However I got the feeling that Darwin has an undocumented limit
> for tv_sec, lower than INT32_MAX.

Yes, we could set the upper limit to LONG_MAX seconds for some
timers, but I did not want to have a dependency of the
upper limit on sizeof(long). The function win32_rearm_timer
only allows 4294967 seconds. Is there any reason why we
should allow timers which expire after more than 11 days
(1000000 seconds are about 11 days)? If there is none,
1000000 seconds would be a good upper limit for most timers
(maybe also for Darwin). mm_rearm_timer is the only timer
which then still needs its own upper limit.
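[Editorial note: the arithmetic behind these figures can be checked directly. As a sketch (the names are illustrative, not QEMU's): a UINT of milliseconds caps out at UINT32_MAX / 1000 = 4294967 seconds, matching the win32_rearm_timer limit quoted above, while the proposed 1000000-second cap converts to 1e9 ms, comfortably below it, and is indeed about 11 days.]

```c
#include <assert.h>
#include <stdint.h>

/* win32_rearm_timer's limit and the proposed generic cap, in seconds. */
#define WIN32_LIMIT_S   4294967LL   /* UINT32_MAX milliseconds, in whole seconds */
#define PROPOSED_CAP_S  1000000LL

static int64_t seconds_to_ms(int64_t s)
{
    return s * 1000;
}

static int64_t seconds_to_days(int64_t s)
{
    return s / 86400;           /* 86400 seconds per day, truncated */
}
```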

Regards,
Stefan W.
Stefano Stabellini May 29, 2012, 6:13 p.m. UTC | #4
On Tue, 29 May 2012, Stefan Weil wrote:
> Yes, we could set the upper limit to LONG_MAX seconds for some
> timers, but I did not want to have a dependency of the
> upper limit on sizeof(long). The function win32_rearm_timer
> only allows 4294967 seconds. Is there any reason why we
> should allow timers which expire after more than 11 days
> (1000000 seconds are about 11 days)? If there is none,
> 1000000 seconds would be a good upper limit for most timers
> (maybe also for Darwin). mm_rearm_timer is the only timer
> which then still needs its own upper limit.

If 1000000 works for Darwin, then I think it is a good idea.
Paolo Bonzini May 29, 2012, 9:07 p.m. UTC | #5
Il 29/05/2012 19:09, Stefan Weil ha scritto:
> Am 29.05.2012 15:35, schrieb Stefano Stabellini:
>> qemu_rearm_alarm_timer partially duplicates the code in
>> qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
>> If it calls qemu_next_alarm_deadline, it always rearms the timer even if
>> the next deadline is INT64_MAX.
>>
>> This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
>> the duplicated code, always calling qemu_next_alarm_deadline and only
>> rearming the timer if the deadline is less than INT64_MAX.
>>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>
>> diff --git a/qemu-timer.c b/qemu-timer.c
>> index de98977..81ff824 100644
>> --- a/qemu-timer.c
>> +++ b/qemu-timer.c
>> @@ -112,14 +112,10 @@ static int64_t qemu_next_alarm_deadline(void)
>>
>>   static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
>>   {
>> -    int64_t nearest_delta_ns;
>> -    if (!rt_clock->active_timers &&
>> -        !vm_clock->active_timers &&
>> -        !host_clock->active_timers) {
>> -        return;
>> +    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
>> +    if (nearest_delta_ns < INT64_MAX) {
>> +        t->rearm(t, nearest_delta_ns);
>>       }
>> -    nearest_delta_ns = qemu_next_alarm_deadline();
>> -    t->rearm(t, nearest_delta_ns);
>>   }
>>
>>   /* TODO: MIN_TIMER_REARM_NS should be optimized */
> 
> Reviewed-by: Stefan Weil <sw@weilnetz.de>
> 
> This patch clearly improves the current code and fixes
> an abort on Darwin (reported by Andreas Färber) and maybe
> other hosts. Therefore I changed the subject and suggest
> to consider this patch for QEMU 1.1.

Only with a Tested-by from Andreas.

Paolo
Anthony Liguori May 30, 2012, 2:40 a.m. UTC | #6
On 05/30/2012 05:07 AM, Paolo Bonzini wrote:
> Il 29/05/2012 19:09, Stefan Weil ha scritto:
>> Am 29.05.2012 15:35, schrieb Stefano Stabellini:
>>> qemu_rearm_alarm_timer partially duplicates the code in
>>> qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
>>> If it calls qemu_next_alarm_deadline, it always rearms the timer even if
>>> the next deadline is INT64_MAX.
>>>
>>> This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
>>> the duplicated code, always calling qemu_next_alarm_deadline and only
>>> rearming the timer if the deadline is less than INT64_MAX.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>>
>>> diff --git a/qemu-timer.c b/qemu-timer.c
>>> index de98977..81ff824 100644
>>> --- a/qemu-timer.c
>>> +++ b/qemu-timer.c
>>> @@ -112,14 +112,10 @@ static int64_t qemu_next_alarm_deadline(void)
>>>
>>>    static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
>>>    {
>>> -    int64_t nearest_delta_ns;
>>> -    if (!rt_clock->active_timers &&
>>> -        !vm_clock->active_timers &&
>>> -        !host_clock->active_timers) {
>>> -        return;
>>> +    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
>>> +    if (nearest_delta_ns < INT64_MAX) {
>>> +        t->rearm(t, nearest_delta_ns);
>>>        }
>>> -    nearest_delta_ns = qemu_next_alarm_deadline();
>>> -    t->rearm(t, nearest_delta_ns);
>>>    }
>>>
>>>    /* TODO: MIN_TIMER_REARM_NS should be optimized */
>>
>> Reviewed-by: Stefan Weil <sw@weilnetz.de>
>>
>> This patch clearly improves the current code and fixes
>> an abort on Darwin (reported by Andreas Färber) and maybe
>> other hosts. Therefore I changed the subject and suggest
>> to consider this patch for QEMU 1.1.

Not for 1.1.0.  I'm just a few hours away from pushing the 1.1.0-rc4 tag and I 
don't plan on doing any updates before GA unless something critical emerges.

Regards,

Anthony Liguori

>
> Only with a Tested-by from Andreas.
>
> Paolo
>
Stefano Stabellini June 11, 2012, 9:55 a.m. UTC | #7
On Wed, 30 May 2012, Anthony Liguori wrote:
> >> Reviewed-by: Stefan Weil <sw@weilnetz.de>
> >>
> >> This patch clearly improves the current code and fixes
> >> an abort on Darwin (reported by Andreas Färber) and maybe
> >> other hosts. Therefore I changed the subject and suggest
> >> to consider this patch for QEMU 1.1.
> 
> Not for 1.1.0.  I'm just a few hours away from pushing the 1.1.0-rc4 tag and I 
> don't plan on doing any updates before GA unless something critical emerges.

Anthony,
are you going to pick this one up for 1.1.1? Do you need me to do
anything?
Andreas Färber June 11, 2012, 10:12 a.m. UTC | #8
Am 11.06.2012 11:55, schrieb Stefano Stabellini:
> On Wed, 30 May 2012, Anthony Liguori wrote:
>>>> Reviewed-by: Stefan Weil <sw@weilnetz.de>
>>>>
>>>> This patch clearly improves the current code and fixes
>>>> an abort on Darwin (reported by Andreas Färber) and maybe
>>>> other hosts. Therefore I changed the subject and suggest
>>>> to consider this patch for QEMU 1.1.
>>
>> Not for 1.1.0.  I'm just a few hours away from pushing the 1.1.0-rc4 tag and I 
>> don't plan on doing any updates before GA unless something critical emerges.
> 
> Anthony,
> are you going to pick this one up for 1.1.1? Do you need me to do
> anything?

I think I still need to test this one... thanks for the reminder. ;)

Andreas
Andreas Färber June 12, 2012, 8:24 a.m. UTC | #9
Am 29.05.2012 15:35, schrieb Stefano Stabellini:
> qemu_rearm_alarm_timer partially duplicates the code in
> qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
> If it calls qemu_next_alarm_deadline, it always rearms the timer even if
> the next deadline is INT64_MAX.
> 
> This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
> the duplicated code, always calling qemu_next_alarm_deadline and only
> rearming the timer if the deadline is less than INT64_MAX.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Tested-by: Andreas Färber <andreas.faerber@web.de>

This resolves the assertion I had previously reported.

The check-qtest-i386 qemu-system-i386 process now hangs at ~98% CPU,
just as with my INT64_MAX hack before. How would I best debug this qtest
scenario, and what should I be looking for? Since my 1.1 patch this is
no longer going through any Cocoa event handling, so the only causes I
can think of are timers and signals...

Andreas
Andreas Färber June 12, 2012, 8:35 a.m. UTC | #10
Am 12.06.2012 10:24, schrieb Andreas Färber:
> Am 29.05.2012 15:35, schrieb Stefano Stabellini:
> The check-qtest-i386 qemu-system-i386 process now hangs at ~98% CPU,
> just as with my INT64_MAX hack before. How would I best debug this qtest
> scenario, and what should I be looking for? Since my 1.1 patch this is
> no longer going through any Cocoa event handling, so the only causes I
> can think of are timers and signals...

Might this shed any light?

Analysis of sampling qemu-system-i386 (pid 19531) every 1 millisecond
Call graph:
    2337 Thread_2503
      2337 0xffc
        2337 start
          2337 main
            2337 qemu_main
              2337 main_loop_wait
                2337 qemu_iohandler_poll
                  2337 tcp_chr_read
                    2337 qtest_read
                      2337 memory_region_iorange_write
                        2337 rtc_change_mon_event
                          2337 monitor_protocol_event
                            2337 monitor_json_emitter
                              2337 monitor_puts
                                2337 monitor_flush
                                  2177 write
                                    2177 write
                                  92 send_all
                                    81 cerror
                                      57 malloc_zone_malloc
                                        35 __error
                                          35 __error
                                        17 dyld_stub___error
                                          17 dyld_stub___error
                                        5 cthread_set_errno_self
                                          5 cthread_set_errno_self
                                      24 cerror
                                    11 send_all
                                  36 dyld_stub_write
                                    36 dyld_stub_write
                                  24 dyld_stub___error
                                    24 dyld_stub___error
                                  6 cerror
                                    6 cerror
                                  2 __error
                                    2 __error
    2337 Thread_2603
      2337 _pthread_start
        2337 sigwait_compat
          2337 sigwait
            2337 __sigwait
              2337 __sigwait
    2337 Thread_2703
      2337 _pthread_start
        2337 qemu_dummy_cpu_thread_fn
          2337 sigwait
            2337 __sigwait
              2337 __sigwait

rtc-test is still blocked by the system() call apparently, and gtester
is polling in its main loop.

Andreas
Stefano Stabellini June 12, 2012, 12:37 p.m. UTC | #11
On Tue, 12 Jun 2012, Andreas Färber wrote:
> Am 12.06.2012 10:24, schrieb Andreas Färber:
> > Am 29.05.2012 15:35, schrieb Stefano Stabellini:
> > The check-qtest-i386 qemu-system-i386 process now hangs at ~98% CPU,

Does this mean that increasing the timeout caused a busy loop somewhere
in the test? But if we set the max timeout value to INT32_MAX doesn't
happen?


> > just as with my INT64_MAX hack before. How would I best debug this qtest
> > scenario, and what should I be looking for? Since my 1.1 patch this is
> > no longer going through any Cocoa event handling, so the only causes I
> > can think of are timers and signals...
> 
> Might this shed any light?
> 
> Analysis of sampling qemu-system-i386 (pid 19531) every 1 millisecond

So I take that the call graph below repeats itself every 1m?


> Call graph:
>     2337 Thread_2503
>       2337 0xffc
>         2337 start
>           2337 main
>             2337 qemu_main
>               2337 main_loop_wait
>                 2337 qemu_iohandler_poll
>                   2337 tcp_chr_read
>                     2337 qtest_read
>                       2337 memory_region_iorange_write
>                         2337 rtc_change_mon_event
>                           2337 monitor_protocol_event
>                             2337 monitor_json_emitter
>                               2337 monitor_puts
>                                 2337 monitor_flush
>                                   2177 write
>                                     2177 write
>                                   92 send_all
>                                     81 cerror
>                                       57 malloc_zone_malloc
>                                         35 __error
>                                           35 __error
>                                         17 dyld_stub___error
>                                           17 dyld_stub___error
>                                         5 cthread_set_errno_self
>                                           5 cthread_set_errno_self
>                                       24 cerror
>                                     11 send_all
>                                   36 dyld_stub_write
>                                     36 dyld_stub_write
>                                   24 dyld_stub___error
>                                     24 dyld_stub___error
>                                   6 cerror
>                                     6 cerror
>                                   2 __error
>                                     2 __error

What is the cause of these errors?


>     2337 Thread_2603
>       2337 _pthread_start
>         2337 sigwait_compat
>           2337 sigwait
>             2337 __sigwait
>               2337 __sigwait
>     2337 Thread_2703
>       2337 _pthread_start
>         2337 qemu_dummy_cpu_thread_fn
>           2337 sigwait
>             2337 __sigwait
>               2337 __sigwait
> 
> rtc-test is still blocked by the system() call apparently, and gtester
> is polling in its main loop.

Which system call?
Andreas Färber June 12, 2012, 12:58 p.m. UTC | #12
Am 12.06.2012 14:37, schrieb Stefano Stabellini:
> On Tue, 12 Jun 2012, Andreas Färber wrote:
>> Am 12.06.2012 10:24, schrieb Andreas Färber:
>>> Am 29.05.2012 15:35, schrieb Stefano Stabellini:
>>> The check-qtest-i386 qemu-system-i386 process now hangs at ~98% CPU,
> 
> Does this mean that increasing the timeout caused a busy loop somewhere
> in the test? But if we set the max timeout value to INT32_MAX doesn't
> happen?

Note that this is solely about qtest, which I never saw working
(probably didn't try before). Regular system emulation seemed to work
just fine.

Where would I try INT32_MAX for comparison?

>>> just as with my INT64_MAX hack before. How would I best debug this qtest
>>> scenario, and what should I be looking for? Since my 1.1 patch this is
>>> no longer going through any Cocoa event handling, so the only causes I
>>> can think of are timers and signals...
>>
>> Might this shed any light?
>>
>> Analysis of sampling qemu-system-i386 (pid 19531) every 1 millisecond
> 
> So I take that the call graph below repeats itself every 1m?

Copy&paste from Mac OS X v10.5.8 process analysis.

>> Call graph:
>>     2337 Thread_2503
>>       2337 0xffc
>>         2337 start
>>           2337 main
>>             2337 qemu_main
>>               2337 main_loop_wait
>>                 2337 qemu_iohandler_poll
>>                   2337 tcp_chr_read
>>                     2337 qtest_read
>>                       2337 memory_region_iorange_write
>>                         2337 rtc_change_mon_event
>>                           2337 monitor_protocol_event
>>                             2337 monitor_json_emitter
>>                               2337 monitor_puts
>>                                 2337 monitor_flush
>>                                   2177 write
>>                                     2177 write
>>                                   92 send_all
>>                                     81 cerror
>>                                       57 malloc_zone_malloc
>>                                         35 __error
>>                                           35 __error
>>                                         17 dyld_stub___error
>>                                           17 dyld_stub___error
>>                                         5 cthread_set_errno_self
>>                                           5 cthread_set_errno_self
>>                                       24 cerror
>>                                     11 send_all
>>                                   36 dyld_stub_write
>>                                     36 dyld_stub_write
>>                                   24 dyld_stub___error
>>                                     24 dyld_stub___error
>>                                   6 cerror
>>                                     6 cerror
>>                                   2 __error
>>                                     2 __error
> 
> What is the cause of these errors?

Dunno... It looks weird that qtest_read() would be calling
memory_region_iorange_write().

>>     2337 Thread_2603
>>       2337 _pthread_start
>>         2337 sigwait_compat
>>           2337 sigwait
>>             2337 __sigwait
>>               2337 __sigwait
>>     2337 Thread_2703
>>       2337 _pthread_start
>>         2337 qemu_dummy_cpu_thread_fn
>>           2337 sigwait
>>             2337 __sigwait
>>               2337 __sigwait
>>
>> rtc-test is still blocked by the system() call apparently, and gtester
>> is polling in its main loop.
> 
> Which system call?

Was summarizing the two other processes' analysis report call graphs.
`git grep "system("` makes this one likely:

tests/libqtest.c:        ret = system(command);

I'm still lacking substantial understanding of how qtest actually
works... My impression is that the libqtest code is being used in the
*-test executables, launching a regular QEMU process put into qtest mode
via -machine accel=qtest and communicating via the -qtest socket.

If that is so, then my guess about the above error is that the monitor
socket is not being drained...?

Andreas
Andreas Färber July 27, 2012, 5 p.m. UTC | #13
Am 12.06.2012 10:24, schrieb Andreas Färber:
> Am 29.05.2012 15:35, schrieb Stefano Stabellini:
>> qemu_rearm_alarm_timer partially duplicates the code in
>> qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
>> If it calls qemu_next_alarm_deadline, it always rearms the timer even if
>> the next deadline is INT64_MAX.
>>
>> This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
>> the duplicated code, always calling qemu_next_alarm_deadline and only
>> rearming the timer if the deadline is less than INT64_MAX.
>>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Tested-by: Andreas Färber <andreas.faerber@web.de>

Ping! Can the patch please be applied? Note: Patchwork apparently got
confused by the later follow-up inline patches - only the original patch
is needed.

Also cc'ing qemu-stable for stable-1.1.

Andreas
Andreas Färber Aug. 9, 2012, 3:35 p.m. UTC | #14
Am 27.07.2012 19:00, schrieb Andreas Färber:
> Am 12.06.2012 10:24, schrieb Andreas Färber:
>> Am 29.05.2012 15:35, schrieb Stefano Stabellini:
>>> qemu_rearm_alarm_timer partially duplicates the code in
>>> qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
>>> If it calls qemu_next_alarm_deadline, it always rearms the timer even if
>>> the next deadline is INT64_MAX.
>>>
>>> This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
>>> the duplicated code, always calling qemu_next_alarm_deadline and only
>>> rearming the timer if the deadline is less than INT64_MAX.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>
>> Tested-by: Andreas Färber <andreas.faerber@web.de>
> 
> Ping! Can the patch please be applied? Note: Patchwork apparently got
> confused by the later follow-up inline patches - only the original patch
> is needed.

Ping^2! This didn't make it into 1.1, would be nice to have 1.2 working.

Correct Patchwork link is: http://patchwork.ozlabs.org/patch/161749/

/-F

> 
> Also cc'ing qemu-stable for stable-1.1.
> 
> Andreas
>
Blue Swirl Aug. 9, 2012, 7:58 p.m. UTC | #15
Thanks, applied.

On Thu, Aug 9, 2012 at 3:35 PM, Andreas Färber <andreas.faerber@web.de> wrote:
> Am 27.07.2012 19:00, schrieb Andreas Färber:
>> Am 12.06.2012 10:24, schrieb Andreas Färber:
>>> Am 29.05.2012 15:35, schrieb Stefano Stabellini:
>>>> qemu_rearm_alarm_timer partially duplicates the code in
>>>> qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
>>>> If it calls qemu_next_alarm_deadline, it always rearms the timer even if
>>>> the next deadline is INT64_MAX.
>>>>
>>>> This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
>>>> the duplicated code, always calling qemu_next_alarm_deadline and only
>>>> rearming the timer if the deadline is less than INT64_MAX.
>>>>
>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>>
>>> Tested-by: Andreas Färber <andreas.faerber@web.de>
>>
>> Ping! Can the patch please be applied? Note: Patchwork apparently got
>> confused by the later follow-up inline patches - only the original patch
>> is needed.
>
> Ping^2! This didn't make it into 1.1, would be nice to have 1.2 working.
>
> Correct Patchwork link is: http://patchwork.ozlabs.org/patch/161749/
>
> /-F
>
>>
>> Also cc'ing qemu-stable for stable-1.1.
>>
>> Andreas
>>
>
Patch

diff --git a/qemu-timer.c b/qemu-timer.c
index de98977..81ff824 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -112,14 +112,10 @@  static int64_t qemu_next_alarm_deadline(void)
 
 static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
 {
-    int64_t nearest_delta_ns;
-    if (!rt_clock->active_timers &&
-        !vm_clock->active_timers &&
-        !host_clock->active_timers) {
-        return;
+    int64_t nearest_delta_ns = qemu_next_alarm_deadline();
+    if (nearest_delta_ns < INT64_MAX) {
+        t->rearm(t, nearest_delta_ns);
     }
-    nearest_delta_ns = qemu_next_alarm_deadline();
-    t->rearm(t, nearest_delta_ns);
 }
 
 /* TODO: MIN_TIMER_REARM_NS should be optimized */