Message ID: 1435163110-2724-2-git-send-email-pbonzini@redhat.com
State:      New
On Wed, 06/24 18:25, Paolo Bonzini wrote:
> The next patch will require the BQL to be always taken with
> qemu_mutex_lock_iothread(), while right now this isn't the case.
>
> Outside TCG mode this is not a problem. In TCG mode, we need to be
> careful and avoid the "prod out of compiled code" step if already
> in a VCPU thread. This is easily done with a check on current_cpu,
> i.e. qemu_in_vcpu_thread().
>
> Hopefully, multithreaded TCG will get rid of the whole logic to kick
> VCPUs whenever an I/O event occurs!
>
> Cc: Frederic Konrad <fred.konrad@greensocs.com>
> Message-Id: <1434646046-27150-2-git-send-email-pbonzini@redhat.com>

Why is this "Message-Id:" included in the commit message if it's not final?

> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  cpus.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/cpus.c b/cpus.c
> index b85fb5f..02cca5d 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -953,7 +953,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>      CPUState *cpu = arg;
>      int r;
>
> -    qemu_mutex_lock(&qemu_global_mutex);
> +    qemu_mutex_lock_iothread();
>      qemu_thread_get_self(cpu->thread);
>      cpu->thread_id = qemu_get_thread_id();
>      cpu->can_do_io = 1;
> @@ -1033,10 +1033,10 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>  {
>      CPUState *cpu = arg;
>
> +    qemu_mutex_lock_iothread();
>      qemu_tcg_init_cpu_signals();
>      qemu_thread_get_self(cpu->thread);
>
> -    qemu_mutex_lock(&qemu_global_mutex);
>      CPU_FOREACH(cpu) {
>          cpu->thread_id = qemu_get_thread_id();
>          cpu->created = true;
> @@ -1148,7 +1148,11 @@ bool qemu_in_vcpu_thread(void)
>  void qemu_mutex_lock_iothread(void)
>  {
>      atomic_inc(&iothread_requesting_mutex);
> -    if (!tcg_enabled() || !first_cpu || !first_cpu->thread) {
> +    /* In the simple case there is no need to bump the VCPU thread out of
> +     * TCG code execution.
> +     */
> +    if (!tcg_enabled() || qemu_in_vcpu_thread() ||
> +        !first_cpu || !first_cpu->thread) {

This looks like a separate change from the above
"qemu_mutex_lock(&qemu_global_mutex)" conversion. Why do they belong to the
same patch?

Fam

>          qemu_mutex_lock(&qemu_global_mutex);
>          atomic_dec(&iothread_requesting_mutex);
>      } else {
> --
> 1.8.3.1
On 25/06/2015 05:39, Fam Zheng wrote:
> On Wed, 06/24 18:25, Paolo Bonzini wrote:
>> The next patch will require the BQL to be always taken with
>> qemu_mutex_lock_iothread(), while right now this isn't the case.
>>
>> Outside TCG mode this is not a problem. In TCG mode, we need to be
>> careful and avoid the "prod out of compiled code" step if already
>> in a VCPU thread. This is easily done with a check on current_cpu,
>> i.e. qemu_in_vcpu_thread().
>>
>> Hopefully, multithreaded TCG will get rid of the whole logic to kick
>> VCPUs whenever an I/O event occurs!
>>
>> Cc: Frederic Konrad <fred.konrad@greensocs.com>
>> Message-Id: <1434646046-27150-2-git-send-email-pbonzini@redhat.com>
>
> Why is this "Message-Id:" included in the commit message if it's not final?
>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>  cpus.c | 10 +++++++---
>>  1 file changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/cpus.c b/cpus.c
>> index b85fb5f..02cca5d 100644
>> --- a/cpus.c
>> +++ b/cpus.c
>> @@ -953,7 +953,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>>      CPUState *cpu = arg;
>>      int r;
>>
>> -    qemu_mutex_lock(&qemu_global_mutex);
>> +    qemu_mutex_lock_iothread();
>>      qemu_thread_get_self(cpu->thread);
>>      cpu->thread_id = qemu_get_thread_id();
>>      cpu->can_do_io = 1;
>> @@ -1033,10 +1033,10 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>>  {
>>      CPUState *cpu = arg;
>>
>> +    qemu_mutex_lock_iothread();
>>      qemu_tcg_init_cpu_signals();
>>      qemu_thread_get_self(cpu->thread);
>>
>> -    qemu_mutex_lock(&qemu_global_mutex);
>>      CPU_FOREACH(cpu) {
>>          cpu->thread_id = qemu_get_thread_id();
>>          cpu->created = true;
>> @@ -1148,7 +1148,11 @@ bool qemu_in_vcpu_thread(void)
>>  void qemu_mutex_lock_iothread(void)
>>  {
>>      atomic_inc(&iothread_requesting_mutex);
>> -    if (!tcg_enabled() || !first_cpu || !first_cpu->thread) {
>> +    /* In the simple case there is no need to bump the VCPU thread out of
>> +     * TCG code execution.
>> +     */
>> +    if (!tcg_enabled() || qemu_in_vcpu_thread() ||
>> +        !first_cpu || !first_cpu->thread) {
>
> This looks like a separate change from the above
> "qemu_mutex_lock(&qemu_global_mutex)" conversion. Why do they belong to the
> same patch?

Previously, a VCPU thread would never call qemu_mutex_lock_iothread().
Now, it can. While the change is only necessary once a call is introduced
with current_cpu != NULL (that's patch 5), it should be in this patch or
an earlier one, because this is the patch that changes the invariant.
I put it in the same patch because the patch is already very small.

Paolo

> Fam
>
>>          qemu_mutex_lock(&qemu_global_mutex);
>>          atomic_dec(&iothread_requesting_mutex);
>>      } else {
>> --
>> 1.8.3.1
diff --git a/cpus.c b/cpus.c
index b85fb5f..02cca5d 100644
--- a/cpus.c
+++ b/cpus.c
@@ -953,7 +953,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
     CPUState *cpu = arg;
     int r;
 
-    qemu_mutex_lock(&qemu_global_mutex);
+    qemu_mutex_lock_iothread();
     qemu_thread_get_self(cpu->thread);
     cpu->thread_id = qemu_get_thread_id();
     cpu->can_do_io = 1;
@@ -1033,10 +1033,10 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 {
     CPUState *cpu = arg;
 
+    qemu_mutex_lock_iothread();
     qemu_tcg_init_cpu_signals();
     qemu_thread_get_self(cpu->thread);
 
-    qemu_mutex_lock(&qemu_global_mutex);
     CPU_FOREACH(cpu) {
         cpu->thread_id = qemu_get_thread_id();
         cpu->created = true;
@@ -1148,7 +1148,11 @@ bool qemu_in_vcpu_thread(void)
 void qemu_mutex_lock_iothread(void)
 {
     atomic_inc(&iothread_requesting_mutex);
-    if (!tcg_enabled() || !first_cpu || !first_cpu->thread) {
+    /* In the simple case there is no need to bump the VCPU thread out of
+     * TCG code execution.
+     */
+    if (!tcg_enabled() || qemu_in_vcpu_thread() ||
+        !first_cpu || !first_cpu->thread) {
         qemu_mutex_lock(&qemu_global_mutex);
         atomic_dec(&iothread_requesting_mutex);
     } else {
The next patch will require the BQL to be always taken with
qemu_mutex_lock_iothread(), while right now this isn't the case.

Outside TCG mode this is not a problem. In TCG mode, we need to be
careful and avoid the "prod out of compiled code" step if already
in a VCPU thread. This is easily done with a check on current_cpu,
i.e. qemu_in_vcpu_thread().

Hopefully, multithreaded TCG will get rid of the whole logic to kick
VCPUs whenever an I/O event occurs!

Cc: Frederic Konrad <fred.konrad@greensocs.com>
Message-Id: <1434646046-27150-2-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 cpus.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)