| Message ID | 20100621230607.GA19203@amt.cnet |
| --- | --- |
| State | New |
Marcelo Tosatti wrote:
> On Mon, Jun 21, 2010 at 10:58:32PM +0200, Jan Kiszka wrote:
>> Jan Kiszka wrote:
>>> Marcelo Tosatti wrote:
>>>> Clear exit_request when iothread grabs the global lock.
>>>>
>>>> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>>>>
>>>> diff --git a/cpu-exec.c b/cpu-exec.c
>>>> index 026980a..74cb973 100644
>>>> --- a/cpu-exec.c
>>>> +++ b/cpu-exec.c
>>>> @@ -236,10 +236,8 @@ int cpu_exec(CPUState *env1)
>>>>      asm("");
>>>>      env = env1;
>>>>  
>>>> -    if (exit_request) {
>>>> +    if (exit_request)
>>>>          env->exit_request = 1;
>>>> -        exit_request = 0;
>>>> -    }
>>> Coding style...
>>>
>>>>  #if defined(TARGET_I386)
>>>>      if (!kvm_enabled()) {
>>>> diff --git a/cpus.c b/cpus.c
>>>> index fcd0f09..ef1ab22 100644
>>>> --- a/cpus.c
>>>> +++ b/cpus.c
>>>> @@ -598,6 +598,7 @@ void qemu_mutex_lock_iothread(void)
>>>>          }
>>>>          qemu_mutex_unlock(&qemu_fair_mutex);
>>>>      }
>>>> +    exit_request = 0;
>>>>  }
>>>>  
>>>>  void qemu_mutex_unlock_iothread(void)
>>>>
>>> I looked into this a bit as well, and that's what I also have in my
>>> queue.
>>>
>>> But things are still widely broken: pause_all_vcpus and run_on_cpu, as
>>> there is no guarantee that all VCPUs regularly call into
>>> qemu_wait_io_event. Also breakpoints don't work, and not only in SMP mode.
>
> This fixes pause for me:

Partially. It caused regressions in the SMP scheduling without the early
loop exit in my patch. I will break up my changes later today and post
them as a series.
> diff --git a/cpu-exec.c b/cpu-exec.c
> index c776605..0149da5 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -238,7 +238,6 @@ int cpu_exec(CPUState *env1)
>  
>      if (exit_request) {
>          env->exit_request = 1;
> -        exit_request = 0;
>      }
>  
>  #if defined(TARGET_I386)
> diff --git a/cpus.c b/cpus.c
> index 826886c..14f7cfc 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -403,6 +403,8 @@ static void qemu_wait_io_event_common(CPUState *env)
>  
>  static void qemu_wait_io_event(CPUState *env)
>  {
> +    CPUState *e;
> +
>      while (!tcg_has_work())
>          qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
>  
> @@ -417,7 +419,9 @@ static void qemu_wait_io_event(CPUState *env)
>      qemu_mutex_unlock(&qemu_fair_mutex);
>  
>      qemu_mutex_lock(&qemu_global_mutex);
> -    qemu_wait_io_event_common(env);
> +
> +    for (e = first_cpu; e != NULL; e = e->next_cpu)
> +        qemu_wait_io_event_common(e);
>  }
>  
>  static void qemu_kvm_eat_signal(CPUState *env, int timeout)
> @@ -614,6 +618,7 @@ void qemu_mutex_lock_iothread(void)
>          }
>          qemu_mutex_unlock(&qemu_fair_mutex);
>      }
> +    exit_request = 0;
>  }
>  
>  void qemu_mutex_unlock_iothread(void)
>
> Perhaps there is a similar problem with debugging (round robin
> in tcg_cpu_exec fails when there is a timer pending, and the
> iothread is not processing pending timers).

Frankly, I still can't explain the round-robin logic. What complicates
the situation is that it currently has to work in both threading modes.
I really think we need a proper time-slicing model, maybe with an early
yield if the guest runs some pause instruction, i.e. spins on a lock.

Jan
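The qemu_wait_io_event change quoted above boils down to walking the whole CPU list so that every vCPU's pending stop request gets acknowledged, not only the one whose context happens to be current in the single TCG thread. The idea can be sketched in isolation; the struct and function names below are illustrative stand-ins, not the real QEMU declarations:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for QEMU's CPUState list; fields reduced to the
 * stop/stopped handshake that pause_all_vcpus() relies on. */
typedef struct CPUState {
    int stop;                 /* set by pause_all_vcpus() */
    int stopped;              /* acknowledged: the pause took effect */
    struct CPUState *next_cpu;
} CPUState;

static CPUState *first_cpu;

/* What qemu_wait_io_event_common() does for one vCPU, reduced to the
 * relevant handshake. */
static void wait_io_event_common(CPUState *env)
{
    if (env->stop) {
        env->stop = 0;
        env->stopped = 1;
    }
}

/* The patched behavior: instead of servicing only the current vCPU,
 * walk the whole list, so a pause request cannot hang waiting on a
 * vCPU that never runs itself. */
static void wait_io_event_all(void)
{
    CPUState *e;

    for (e = first_cpu; e != NULL; e = e->next_cpu)
        wait_io_event_common(e);
}
```

With one vCPU blocked in the halt loop, this lets it acknowledge the stop request on behalf of its siblings, which is why pause works again under this change.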
Jan Kiszka wrote:
> Marcelo Tosatti wrote:
>> On Mon, Jun 21, 2010 at 10:58:32PM +0200, Jan Kiszka wrote:
>>> Jan Kiszka wrote:
>>>> Marcelo Tosatti wrote:
>>>>> Clear exit_request when iothread grabs the global lock.
[...]
>>>> But things are still widely broken: pause_all_vcpus and run_on_cpu, as
>>>> there is no guarantee that all VCPUs regularly call into
>>>> qemu_wait_io_event. Also breakpoints don't work, and not only in SMP mode.
>>
>> This fixes pause for me:
>
> Partially. It caused regressions in the SMP scheduling without the early
> loop exit in my patch. I will break up my changes later today and post
> them as a series.

After fixing the APIC/IOAPIC fallout, the series is almost done.
Unfortunately, host and guest debugging is totally broken for
CONFIG_IOTHREAD (I also noticed that [1] is still not merged). I will
try to fix this first, as it may require some more refactoring.

Jan

[1] http://thread.gmane.org/gmane.comp.emulators.kvm.devel/52718
On 06/23/2010 02:42 AM, Jan Kiszka wrote:
> Jan Kiszka wrote:
>> Marcelo Tosatti wrote:
>>> On Mon, Jun 21, 2010 at 10:58:32PM +0200, Jan Kiszka wrote:
>>>> Jan Kiszka wrote:
>>>>> Marcelo Tosatti wrote:
>>>>>> Clear exit_request when iothread grabs the global lock.
[...]
>> Partially. It caused regressions on the SMP scheduling without the early
>> loop exit in my patch. I will break up my changes later today and post
>> them as series.
>
> After fixing the APIC/IOAPIC fallouts, the series is almost done.
> Unfortunately, host&guest debugging is totally broken for
> CONFIG_IOTHREAD (I also noticed that [1] is still not merged).

Did it not get applied to uq/master, or has there just not been a merge
request yet?
Regards,

Anthony Liguori

> I will
> try to fix this first as it may require some more refactorings.
>
> Jan
>
> [1] http://thread.gmane.org/gmane.comp.emulators.kvm.devel/52718
diff --git a/cpu-exec.c b/cpu-exec.c
index c776605..0149da5 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -238,7 +238,6 @@ int cpu_exec(CPUState *env1)
 
     if (exit_request) {
         env->exit_request = 1;
-        exit_request = 0;
     }
 
 #if defined(TARGET_I386)
diff --git a/cpus.c b/cpus.c
index 826886c..14f7cfc 100644
--- a/cpus.c
+++ b/cpus.c
@@ -403,6 +403,8 @@ static void qemu_wait_io_event_common(CPUState *env)
 
 static void qemu_wait_io_event(CPUState *env)
 {
+    CPUState *e;
+
     while (!tcg_has_work())
         qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
 
@@ -417,7 +419,9 @@ static void qemu_wait_io_event(CPUState *env)
     qemu_mutex_unlock(&qemu_fair_mutex);
 
     qemu_mutex_lock(&qemu_global_mutex);
-    qemu_wait_io_event_common(env);
+
+    for (e = first_cpu; e != NULL; e = e->next_cpu)
+        qemu_wait_io_event_common(e);
 }
 
 static void qemu_kvm_eat_signal(CPUState *env, int timeout)
@@ -614,6 +618,7 @@ void qemu_mutex_lock_iothread(void)
         }
         qemu_mutex_unlock(&qemu_fair_mutex);
     }
+    exit_request = 0;
 }
 
 void qemu_mutex_unlock_iothread(void)
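The net effect of the patch above is a protocol: vCPUs only mirror the global exit_request into their per-CPU flag and never clear the global, and the global is cleared solely at the point where the iothread lock is (re)acquired, since that is exactly when the request has been satisfied. A minimal sketch of that protocol, with illustrative names and simplified locking rather than the actual QEMU implementation:

```c
#include <assert.h>
#include <pthread.h>

/* Illustrative stand-ins for QEMU's globals; not the real declarations. */
static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile int exit_request;      /* global: "drop the lock, everyone" */
static volatile int env_exit_request;  /* per-vCPU mirror (env->exit_request) */

/* vCPU side (cpu_exec entry): mirror the global flag but never clear it
 * here.  Clearing it in cpu_exec() let one vCPU consume the request
 * before the other vCPUs had seen it, which is what the patch removes. */
static void cpu_exec_entry(void)
{
    if (exit_request)
        env_exit_request = 1;
}

/* iothread side: the request is satisfied exactly when the global lock
 * is acquired, so this is the one safe place to clear it -- the hunk
 * added to qemu_mutex_lock_iothread(). */
static void mutex_lock_iothread(void)
{
    pthread_mutex_lock(&iothread_lock);
    exit_request = 0;
}
```

Under this scheme every vCPU that enters cpu_exec while the request is pending picks it up, and the flag only disappears once the requester actually holds the lock.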