
[1/9] main-loop: use qemu_mutex_lock_iothread consistently

Message ID 1434646046-27150-2-git-send-email-pbonzini@redhat.com
State New

Commit Message

Paolo Bonzini June 18, 2015, 4:47 p.m. UTC
The next patch will require the BQL to be always taken with
qemu_mutex_lock_iothread(), while right now this isn't the case.

Outside TCG mode this is not a problem.  In TCG mode, we need to be
careful and avoid the "prod out of compiled code" step if already
in a VCPU thread.  This is easily done with a check on current_cpu,
i.e. qemu_in_vcpu_thread().

Hopefully, multithreaded TCG will get rid of the whole logic to kick
VCPUs whenever an I/O event occurs!

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 cpus.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
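The condition this patch adds to qemu_mutex_lock_iothread() can be distilled into a small standalone predicate. This is a hypothetical sketch for illustration, not QEMU code: `lock_directly` and its boolean parameters are invented names standing in for `tcg_enabled()`, `qemu_in_vcpu_thread()`, and the `first_cpu && first_cpu->thread` check.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical distillation of the new condition: the BQL can be
 * taken directly, skipping the "prod out of compiled code" kick,
 * when TCG is disabled, when the caller is already a VCPU thread,
 * or when no VCPU thread has been created yet. */
static bool lock_directly(bool tcg_enabled, bool in_vcpu_thread,
                          bool first_cpu_started)
{
    return !tcg_enabled || in_vcpu_thread || !first_cpu_started;
}
```

Only the remaining case, an iothread-side caller while TCG VCPUs are running, needs to kick the VCPU out of the translated-code loop before the lock can be handed over.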

Comments

fred.konrad@greensocs.com June 23, 2015, 1:49 p.m. UTC | #1
On 18/06/2015 18:47, Paolo Bonzini wrote:
> The next patch will require the BQL to be always taken with
> qemu_mutex_lock_iothread(), while right now this isn't the case.
>
> Outside TCG mode this is not a problem.  In TCG mode, we need to be
> careful and avoid the "prod out of compiled code" step if already
> in a VCPU thread.  This is easily done with a check on current_cpu,
> i.e. qemu_in_vcpu_thread().
>
> Hopefully, multithreaded TCG will get rid of the whole logic to kick
> VCPUs whenever an I/O event occurs!
Hopefully :), this means dropping the iothread mutex as soon as possible and
removing the iothread_requesting_mutex I guess..

Fred

>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   cpus.c | 13 ++++++++-----
>   1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/cpus.c b/cpus.c
> index de6469f..2e807f9 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -924,7 +924,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>       CPUState *cpu = arg;
>       int r;
>   
> -    qemu_mutex_lock(&qemu_global_mutex);
> +    qemu_mutex_lock_iothread();
>       qemu_thread_get_self(cpu->thread);
>       cpu->thread_id = qemu_get_thread_id();
>       cpu->can_do_io = 1;
> @@ -1004,10 +1004,10 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>   {
>       CPUState *cpu = arg;
>   
> +    qemu_mutex_lock_iothread();
>       qemu_tcg_init_cpu_signals();
>       qemu_thread_get_self(cpu->thread);
>   
> -    qemu_mutex_lock(&qemu_global_mutex);
>       CPU_FOREACH(cpu) {
>           cpu->thread_id = qemu_get_thread_id();
>           cpu->created = true;
> @@ -1118,7 +1118,11 @@ bool qemu_in_vcpu_thread(void)
>   
>   void qemu_mutex_lock_iothread(void)
>   {
>       atomic_inc(&iothread_requesting_mutex);
> -    if (!tcg_enabled() || !first_cpu || !first_cpu->thread) {
> +    /* In the simple case there is no need to bump the VCPU thread out of
> +     * TCG code execution.
> +     */
> +    if (!tcg_enabled() || qemu_in_vcpu_thread() ||
> +        !first_cpu || !first_cpu->thread) {
>           qemu_mutex_lock(&qemu_global_mutex);
>           atomic_dec(&iothread_requesting_mutex);
>       } else {
Paolo Bonzini June 23, 2015, 1:56 p.m. UTC | #2
On 23/06/2015 15:49, Frederic Konrad wrote:
>>
>> Hopefully, multithreaded TCG will get rid of the whole logic to kick
>> VCPUs whenever an I/O event occurs!
> Hopefully :), this means dropping the iothread mutex as soon as possible
> and removing the iothread_requesting_mutex I guess..

Yes---running most of cpu_exec outside the BQL, like KVM.  io_read and
io_write would have to get and release the lock if necessary.

cpu_resume_from_signal also might have to release the lock.

Paolo
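The direction discussed above, running the VCPU loop outside the BQL and taking the lock only around device access, can be sketched with a plain pthread mutex standing in for the BQL. All names here (`bql`, `device_reg`, `io_read`, `io_write`) are hypothetical stand-ins, not the eventual QEMU API.

```c
#include <assert.h>
#include <pthread.h>

/* Sketch: the lock protects only the device state, so the hot
 * execution loop can run without holding it, KVM-style. */
static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;
static int device_reg;          /* state guarded by the "BQL" */

static void io_write(int val)
{
    pthread_mutex_lock(&bql);   /* lock held only for the access */
    device_reg = val;
    pthread_mutex_unlock(&bql);
}

static int io_read(void)
{
    pthread_mutex_lock(&bql);
    int val = device_reg;
    pthread_mutex_unlock(&bql);
    return val;
}
```

In this scheme the cost of the lock is paid per I/O access rather than per executed block, which is why the kick-the-VCPU machinery in qemu_mutex_lock_iothread() could eventually go away.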
fred.konrad@greensocs.com June 23, 2015, 2:18 p.m. UTC | #3
On 23/06/2015 15:56, Paolo Bonzini wrote:
>
> On 23/06/2015 15:49, Frederic Konrad wrote:
>>> Hopefully, multithreaded TCG will get rid of the whole logic to kick
>>> VCPUs whenever an I/O event occurs!
>> Hopefully :), this means dropping the iothread mutex as soon as possible
>> and removing the iothread_requesting_mutex I guess..
> Yes---running most of cpu_exec outside the BQL, like KVM.  io_read and
> io_write would have to get and release the lock if necessary.
>
> cpu_resume_from_signal also might have to release the lock.
>
> Paolo

Ok good,
Can you add me in CC to this series so I can see when it's pulled etc 
please.

Thanks,
Fred

Patch

diff --git a/cpus.c b/cpus.c
index de6469f..2e807f9 100644
--- a/cpus.c
+++ b/cpus.c
@@ -924,7 +924,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
     CPUState *cpu = arg;
     int r;
 
-    qemu_mutex_lock(&qemu_global_mutex);
+    qemu_mutex_lock_iothread();
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     cpu->can_do_io = 1;
@@ -1004,10 +1004,10 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 {
     CPUState *cpu = arg;
 
+    qemu_mutex_lock_iothread();
     qemu_tcg_init_cpu_signals();
     qemu_thread_get_self(cpu->thread);
 
-    qemu_mutex_lock(&qemu_global_mutex);
     CPU_FOREACH(cpu) {
         cpu->thread_id = qemu_get_thread_id();
         cpu->created = true;
@@ -1118,7 +1118,11 @@ bool qemu_in_vcpu_thread(void)
 
 void qemu_mutex_lock_iothread(void)
 {
     atomic_inc(&iothread_requesting_mutex);
-    if (!tcg_enabled() || !first_cpu || !first_cpu->thread) {
+    /* In the simple case there is no need to bump the VCPU thread out of
+     * TCG code execution.
+     */
+    if (!tcg_enabled() || qemu_in_vcpu_thread() ||
+        !first_cpu || !first_cpu->thread) {
         qemu_mutex_lock(&qemu_global_mutex);
         atomic_dec(&iothread_requesting_mutex);
    } else {