Patchwork [4/7] Get rid of QemuMutex and teach its callers about GStaticMutex

Submitter Anthony Liguori
Date Jan. 24, 2011, 9 p.m.
Message ID <1295902845-29807-5-git-send-email-aliguori@us.ibm.com>
Permalink /patch/80246/
State New

Comments

Anthony Liguori - Jan. 24, 2011, 9 p.m.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Jan Kiszka - Jan. 24, 2011, 10:24 p.m.
On 2011-01-24 22:00, Anthony Liguori wrote:
> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
> 
> diff --git a/cpus.c b/cpus.c
> index 9cf7e6e..0f8e33b 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -321,8 +321,8 @@ void vm_stop(int reason)
>  
>  #include "qemu-thread.h"
>  
> -QemuMutex qemu_global_mutex;
> -static QemuMutex qemu_fair_mutex;
> +GStaticMutex qemu_global_mutex;
> +static GStaticMutex qemu_fair_mutex;
>  
>  static QemuThread io_thread;
>  
> @@ -416,9 +416,9 @@ int qemu_init_main_loop(void)
>      qemu_cond_init(&qemu_system_cond);
>      qemu_cond_init(&qemu_pause_cond);
>      qemu_cond_init(&qemu_work_cond);
> -    qemu_mutex_init(&qemu_fair_mutex);
> -    qemu_mutex_init(&qemu_global_mutex);
> -    qemu_mutex_lock(&qemu_global_mutex);
> +    g_static_mutex_init(&qemu_fair_mutex);
> +    g_static_mutex_init(&qemu_global_mutex);
> +    g_static_mutex_lock(&qemu_global_mutex);
>  

Just replacing our own abstraction with glib's looks like a step in the
wrong direction. From a first glance at that library and its semantics
it has at least two major drawbacks:

 - Error handling of things like g_mutex_lock or g_cond_wait is, well,
   very "simplistic". Once we start to use more sophisticated locking,
   more bugs will occur here, and we will need more support than glib is
   able to provide (or can you control error handling elsewhere?).

 - GMutex is not powerful enough for optional things like PI mutexes -
   which is required once we want to schedule parts of qemu with RT
   priorities (I did it, it works surprisingly well).

The same concerns apply to other abstractions glib provides for
threading and synchronization. One may work around them, but that will
break abstractions again.

Glib seems to fit the standard use case quite comfortably, but it fails
in more advanced scenarios that qemu is already usable for (lacking only
a few additional lines of code).

In short: we need full POSIX where available.
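To make the PI point concrete: priority inheritance can only be requested through the POSIX mutex attribute interface, which the GMutex API does not expose. A minimal sketch (the helper name `mutex_init_pi` is illustrative, not from the patch or QEMU):

```c
#define _GNU_SOURCE
#include <pthread.h>

/* Illustrative helper: initialize a mutex with priority inheritance
 * enabled, something GMutex/GStaticMutex cannot express.  Returns 0 on
 * success or a pthread error code on failure. */
static int mutex_init_pi(pthread_mutex_t *mutex)
{
    pthread_mutexattr_t attr;
    int err;

    err = pthread_mutexattr_init(&attr);
    if (err) {
        return err;
    }
    /* PTHREAD_PRIO_INHERIT: a low-priority holder temporarily inherits
     * the priority of the highest-priority waiter, avoiding priority
     * inversion when parts of qemu run with RT priorities. */
    err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (!err) {
        err = pthread_mutex_init(mutex, &attr);
    }
    pthread_mutexattr_destroy(&attr);
    return err;
}
```

This is exactly the kind of initialization that has to happen behind an abstraction like qemu_mutex_init(); once callers talk to g_static_mutex_* directly, there is no single place left to hook it in.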

Jan
Anthony Liguori - Jan. 25, 2011, 12:02 a.m.
On 01/24/2011 04:24 PM, Jan Kiszka wrote:
> On 2011-01-24 22:00, Anthony Liguori wrote:
>    
>> Signed-off-by: Anthony Liguori<aliguori@us.ibm.com>
>>
>> diff --git a/cpus.c b/cpus.c
>> index 9cf7e6e..0f8e33b 100644
>> --- a/cpus.c
>> +++ b/cpus.c
>> @@ -321,8 +321,8 @@ void vm_stop(int reason)
>>
>>   #include "qemu-thread.h"
>>
>> -QemuMutex qemu_global_mutex;
>> -static QemuMutex qemu_fair_mutex;
>> +GStaticMutex qemu_global_mutex;
>> +static GStaticMutex qemu_fair_mutex;
>>
>>   static QemuThread io_thread;
>>
>> @@ -416,9 +416,9 @@ int qemu_init_main_loop(void)
>>       qemu_cond_init(&qemu_system_cond);
>>       qemu_cond_init(&qemu_pause_cond);
>>       qemu_cond_init(&qemu_work_cond);
>> -    qemu_mutex_init(&qemu_fair_mutex);
>> -    qemu_mutex_init(&qemu_global_mutex);
>> -    qemu_mutex_lock(&qemu_global_mutex);
>> +    g_static_mutex_init(&qemu_fair_mutex);
>> +    g_static_mutex_init(&qemu_global_mutex);
>> +    g_static_mutex_lock(&qemu_global_mutex);
>>
>>      
> Just replacing our own abstraction with glib's looks like a step in the
> wrong direction. From a first glance at that library and its semantics
> it has at least two major drawbacks:
>
>   - Error handling of things like g_mutex_lock or g_cond_wait is, well,
>     very "simplistic". Once we start to use more sophisticated locking,
>     more bugs will occur here, and we will need more support than glib is
>     able to provide (or can you control error handling elsewhere?).
>
>   - GMutex is not powerful enough for optional things like PI mutexes -
>     which is required once we want to schedule parts of qemu with RT
>     priorities (I did it, it works surprisingly well).
>    

One of the nice design characteristics of glib/gobject/gtk is that it 
coexists well with other APIs.

Nothing stops you from using a pthread mutex directly if you really need 
to.  It makes the code less portable, but sometimes that's the price that 
has to be paid for functionality.
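As a sketch of that escape hatch (the helper name is illustrative, not from the patch): code that needs more than GMutex offers can declare its own POSIX mutex, here an error-checking one that reports misuse GMutex would silently accept:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <errno.h>

/* Illustrative helper: an error-checking mutex detects relocking by
 * the owning thread (EDEADLK) and unlocking by a non-owner (EPERM),
 * diagnostics that plain GMutex does not provide. */
static int mutex_init_errorcheck(pthread_mutex_t *mutex)
{
    pthread_mutexattr_t attr;
    int err;

    err = pthread_mutexattr_init(&attr);
    if (err) {
        return err;
    }
    err = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    if (!err) {
        err = pthread_mutex_init(mutex, &attr);
    }
    pthread_mutexattr_destroy(&attr);
    return err;
}
```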

> The same concerns apply to other abstractions glib provides for
> threading and synchronization. One may work around them, but that will
> break abstractions again.
>
> Glib seems to fit the standard use case quite comfortably, but it fails
> in more advanced scenarios that qemu is already usable for (lacking only
> a few additional lines of code).
>
> In short: we need full POSIX where available.
>    

If our use of threading and locking in QEMU ever becomes so advanced 
that the glib API is not enough and we find ourselves constantly calling 
into the pthreads API directly, then that's a wonderful problem to have.

Regards,

Anthony Liguori

> Jan
>
>
Jan Kiszka - Jan. 25, 2011, 7:39 a.m.
On 2011-01-25 01:02, Anthony Liguori wrote:
> On 01/24/2011 04:24 PM, Jan Kiszka wrote:
>> On 2011-01-24 22:00, Anthony Liguori wrote:
>>   
>>> Signed-off-by: Anthony Liguori<aliguori@us.ibm.com>
>>>
>>> diff --git a/cpus.c b/cpus.c
>>> index 9cf7e6e..0f8e33b 100644
>>> --- a/cpus.c
>>> +++ b/cpus.c
>>> @@ -321,8 +321,8 @@ void vm_stop(int reason)
>>>
>>>   #include "qemu-thread.h"
>>>
>>> -QemuMutex qemu_global_mutex;
>>> -static QemuMutex qemu_fair_mutex;
>>> +GStaticMutex qemu_global_mutex;
>>> +static GStaticMutex qemu_fair_mutex;
>>>
>>>   static QemuThread io_thread;
>>>
>>> @@ -416,9 +416,9 @@ int qemu_init_main_loop(void)
>>>       qemu_cond_init(&qemu_system_cond);
>>>       qemu_cond_init(&qemu_pause_cond);
>>>       qemu_cond_init(&qemu_work_cond);
>>> -    qemu_mutex_init(&qemu_fair_mutex);
>>> -    qemu_mutex_init(&qemu_global_mutex);
>>> -    qemu_mutex_lock(&qemu_global_mutex);
>>> +    g_static_mutex_init(&qemu_fair_mutex);
>>> +    g_static_mutex_init(&qemu_global_mutex);
>>> +    g_static_mutex_lock(&qemu_global_mutex);
>>>
>>>      
>> Just replacing our own abstraction with glib's looks like a step in the
>> wrong direction. From a first glance at that library and its semantics
>> it has at least two major drawbacks:
>>
>>   - Error handling of things like g_mutex_lock or g_cond_wait is, well,
>>     very "simplistic". Once we start to use more sophisticated locking,
>>     more bugs will occur here, and we will need more support than glib is
>>     able to provide (or can you control error handling elsewhere?).
>>
>>   - GMutex is not powerful enough for optional things like PI mutexes -
>>     which is required once we want to schedule parts of qemu with RT
>>     priorities (I did it, it works surprisingly well).
>>    
> 
> One of the nice design characteristics of glib/gobject/gtk is that it
> coexists well with other APIs.
> 
> Nothing stops you from using a pthread mutex directly if you really need
> to.  It makes the code less portable, but sometimes that's the price that
> has to be paid for functionality.

I'm not talking about adding new PI mutexes; I'm talking about
effectively reverting this patch, because I need to initialize the
existing mutexes with PI enabled in their attributes. The alternative is
replacing g_mutex_* with wrappers again just to add real error handling.
Really, glib is too primitive here.
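The wrappers Jan mentions would look roughly like this (a sketch assuming the goal is loud failure rather than glib's silent one; the names are hypothetical):

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of a lock wrapper with real error handling: unlike
 * g_mutex_lock(), which returns void and cannot report failure, this
 * checks the pthread return code and aborts with a diagnostic instead
 * of continuing with corrupted lock state. */
static void checked_mutex_lock(pthread_mutex_t *mutex)
{
    int err = pthread_mutex_lock(mutex);
    if (err) {
        fprintf(stderr, "pthread_mutex_lock failed: %s\n", strerror(err));
        abort();
    }
}

static void checked_mutex_unlock(pthread_mutex_t *mutex)
{
    int err = pthread_mutex_unlock(mutex);
    if (err) {
        fprintf(stderr, "pthread_mutex_unlock failed: %s\n", strerror(err));
        abort();
    }
}
```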

Jan

Patch

diff --git a/cpus.c b/cpus.c
index 9cf7e6e..0f8e33b 100644
--- a/cpus.c
+++ b/cpus.c
@@ -321,8 +321,8 @@  void vm_stop(int reason)
 
 #include "qemu-thread.h"
 
-QemuMutex qemu_global_mutex;
-static QemuMutex qemu_fair_mutex;
+GStaticMutex qemu_global_mutex;
+static GStaticMutex qemu_fair_mutex;
 
 static QemuThread io_thread;
 
@@ -416,9 +416,9 @@  int qemu_init_main_loop(void)
     qemu_cond_init(&qemu_system_cond);
     qemu_cond_init(&qemu_pause_cond);
     qemu_cond_init(&qemu_work_cond);
-    qemu_mutex_init(&qemu_fair_mutex);
-    qemu_mutex_init(&qemu_global_mutex);
-    qemu_mutex_lock(&qemu_global_mutex);
+    g_static_mutex_init(&qemu_fair_mutex);
+    g_static_mutex_init(&qemu_global_mutex);
+    g_static_mutex_lock(&qemu_global_mutex);
 
     qemu_thread_self(&io_thread);
 
@@ -454,7 +454,8 @@  void run_on_cpu(CPUState *env, void (*func)(void *data), void *data)
     while (!wi.done) {
         CPUState *self_env = cpu_single_env;
 
-        qemu_cond_wait(&qemu_work_cond, &qemu_global_mutex);
+        qemu_cond_wait(&qemu_work_cond,
+                       g_static_mutex_get_mutex(&qemu_global_mutex));
         cpu_single_env = self_env;
     }
 }
@@ -490,19 +491,20 @@  static void qemu_tcg_wait_io_event(void)
     CPUState *env;
 
     while (!any_cpu_has_work())
-        qemu_cond_timedwait(tcg_halt_cond, &qemu_global_mutex, 1000);
+        qemu_cond_timedwait(tcg_halt_cond,
+                            g_static_mutex_get_mutex(&qemu_global_mutex), 1000);
 
-    qemu_mutex_unlock(&qemu_global_mutex);
+    g_static_mutex_unlock(&qemu_global_mutex);
 
     /*
      * Users of qemu_global_mutex can be starved, having no chance
      * to acquire it since this path will get to it first.
      * So use another lock to provide fairness.
      */
-    qemu_mutex_lock(&qemu_fair_mutex);
-    qemu_mutex_unlock(&qemu_fair_mutex);
+    g_static_mutex_lock(&qemu_fair_mutex);
+    g_static_mutex_unlock(&qemu_fair_mutex);
 
-    qemu_mutex_lock(&qemu_global_mutex);
+    g_static_mutex_lock(&qemu_global_mutex);
 
     for (env = first_cpu; env != NULL; env = env->next_cpu) {
         qemu_wait_io_event_common(env);
@@ -551,12 +553,12 @@  static void qemu_kvm_eat_signal(CPUState *env, int timeout)
     sigaddset(&waitset, SIGBUS);
 
     do {
-        qemu_mutex_unlock(&qemu_global_mutex);
+        g_static_mutex_unlock(&qemu_global_mutex);
 
         r = sigtimedwait(&waitset, &siginfo, &ts);
         e = errno;
 
-        qemu_mutex_lock(&qemu_global_mutex);
+        g_static_mutex_lock(&qemu_global_mutex);
 
         if (r == -1 && !(e == EAGAIN || e == EINTR)) {
             fprintf(stderr, "sigtimedwait: %s\n", strerror(e));
@@ -585,7 +587,8 @@  static void qemu_kvm_eat_signal(CPUState *env, int timeout)
 static void qemu_kvm_wait_io_event(CPUState *env)
 {
     while (!cpu_has_work(env))
-        qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
+        qemu_cond_timedwait(env->halt_cond,
+                            g_static_mutex_get_mutex(&qemu_global_mutex), 1000);
 
     qemu_kvm_eat_signal(env, 0);
     qemu_wait_io_event_common(env);
@@ -597,7 +600,7 @@  static void *kvm_cpu_thread_fn(void *arg)
 {
     CPUState *env = arg;
 
-    qemu_mutex_lock(&qemu_global_mutex);
+    g_static_mutex_lock(&qemu_global_mutex);
     qemu_thread_self(env->thread);
     if (kvm_enabled())
         kvm_init_vcpu(env);
@@ -610,7 +613,8 @@  static void *kvm_cpu_thread_fn(void *arg)
 
     /* and wait for machine initialization */
     while (!qemu_system_ready)
-        qemu_cond_timedwait(&qemu_system_cond, &qemu_global_mutex, 100);
+        qemu_cond_timedwait(&qemu_system_cond,
+                            g_static_mutex_get_mutex(&qemu_global_mutex), 100);
 
     while (1) {
         if (cpu_can_run(env))
@@ -629,14 +633,15 @@  static void *tcg_cpu_thread_fn(void *arg)
     qemu_thread_self(env->thread);
 
     /* signal CPU creation */
-    qemu_mutex_lock(&qemu_global_mutex);
+    g_static_mutex_lock(&qemu_global_mutex);
     for (env = first_cpu; env != NULL; env = env->next_cpu)
         env->created = 1;
     qemu_cond_signal(&qemu_cpu_cond);
 
     /* and wait for machine initialization */
     while (!qemu_system_ready)
-        qemu_cond_timedwait(&qemu_system_cond, &qemu_global_mutex, 100);
+        qemu_cond_timedwait(&qemu_system_cond,
+                            g_static_mutex_get_mutex(&qemu_global_mutex), 100);
 
     while (1) {
         cpu_exec_all();
@@ -737,22 +742,22 @@  static sigset_t block_io_signals(void)
 void qemu_mutex_lock_iothread(void)
 {
     if (kvm_enabled()) {
-        qemu_mutex_lock(&qemu_fair_mutex);
-        qemu_mutex_lock(&qemu_global_mutex);
-        qemu_mutex_unlock(&qemu_fair_mutex);
+        g_static_mutex_lock(&qemu_fair_mutex);
+        g_static_mutex_lock(&qemu_global_mutex);
+        g_static_mutex_unlock(&qemu_fair_mutex);
     } else {
-        qemu_mutex_lock(&qemu_fair_mutex);
-        if (qemu_mutex_trylock(&qemu_global_mutex)) {
+        g_static_mutex_lock(&qemu_fair_mutex);
+        if (g_static_mutex_trylock(&qemu_global_mutex)) {
             qemu_thread_signal(tcg_cpu_thread, SIG_IPI);
-            qemu_mutex_lock(&qemu_global_mutex);
+            g_static_mutex_lock(&qemu_global_mutex);
         }
-        qemu_mutex_unlock(&qemu_fair_mutex);
+        g_static_mutex_unlock(&qemu_fair_mutex);
     }
 }
 
 void qemu_mutex_unlock_iothread(void)
 {
-    qemu_mutex_unlock(&qemu_global_mutex);
+    g_static_mutex_unlock(&qemu_global_mutex);
 }
 
 static int all_vcpus_paused(void)
@@ -779,7 +784,8 @@  void pause_all_vcpus(void)
     }
 
     while (!all_vcpus_paused()) {
-        qemu_cond_timedwait(&qemu_pause_cond, &qemu_global_mutex, 100);
+        qemu_cond_timedwait(&qemu_pause_cond,
+                            g_static_mutex_get_mutex(&qemu_global_mutex), 100);
         penv = first_cpu;
         while (penv) {
             qemu_cpu_kick(penv);
@@ -810,7 +816,9 @@  static void tcg_init_vcpu(void *_env)
         qemu_cond_init(env->halt_cond);
         qemu_thread_create(env->thread, tcg_cpu_thread_fn, env);
         while (env->created == 0)
-            qemu_cond_timedwait(&qemu_cpu_cond, &qemu_global_mutex, 100);
+            qemu_cond_timedwait(&qemu_cpu_cond,
+                                g_static_mutex_get_mutex(&qemu_global_mutex),
+                                100);
         tcg_cpu_thread = env->thread;
         tcg_halt_cond = env->halt_cond;
     } else {
@@ -826,7 +834,8 @@  static void kvm_start_vcpu(CPUState *env)
     qemu_cond_init(env->halt_cond);
     qemu_thread_create(env->thread, kvm_cpu_thread_fn, env);
     while (env->created == 0)
-        qemu_cond_timedwait(&qemu_cpu_cond, &qemu_global_mutex, 100);
+        qemu_cond_timedwait(&qemu_cpu_cond,
+                            g_static_mutex_get_mutex(&qemu_global_mutex), 100);
 }
 
 void qemu_init_vcpu(void *_env)
diff --git a/qemu-thread.c b/qemu-thread.c
index 2c521ab..df17eb4 100644
--- a/qemu-thread.c
+++ b/qemu-thread.c
@@ -14,31 +14,6 @@ 
 #include "qemu-common.h"
 #include "qemu-thread.h"
 
-void qemu_mutex_init(QemuMutex *mutex)
-{
-    g_static_mutex_init(&mutex->lock);
-}
-
-void qemu_mutex_destroy(QemuMutex *mutex)
-{
-    g_static_mutex_free(&mutex->lock);
-}
-
-void qemu_mutex_lock(QemuMutex *mutex)
-{
-    g_static_mutex_lock(&mutex->lock);
-}
-
-int qemu_mutex_trylock(QemuMutex *mutex)
-{
-    return g_static_mutex_trylock(&mutex->lock);
-}
-
-void qemu_mutex_unlock(QemuMutex *mutex)
-{
-    g_static_mutex_unlock(&mutex->lock);
-}
-
 void qemu_cond_init(QemuCond *cond)
 {
     cond->cond = g_cond_new();
@@ -59,12 +34,12 @@  void qemu_cond_broadcast(QemuCond *cond)
     g_cond_broadcast(cond->cond);
 }
 
-void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
+void qemu_cond_wait(QemuCond *cond, GMutex *mutex)
 {
-    g_cond_wait(cond->cond, g_static_mutex_get_mutex(&mutex->lock));
+    g_cond_wait(cond->cond, mutex);
 }
 
-int qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex, uint64_t msecs)
+int qemu_cond_timedwait(QemuCond *cond, GMutex *mutex, uint64_t msecs)
 {
     GTimeVal abs_time;
 
@@ -73,8 +48,7 @@  int qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex, uint64_t msecs)
     g_get_current_time(&abs_time);
     g_time_val_add(&abs_time, msecs * 1000); /* MSEC to USEC */
 
-    return g_cond_timed_wait(cond->cond,
-                             g_static_mutex_get_mutex(&mutex->lock), &abs_time);
+    return g_cond_timed_wait(cond->cond, mutex, &abs_time);
 }
 
 struct trampoline_data
@@ -82,7 +56,7 @@  struct trampoline_data
     QemuThread *thread;
     void *(*startfn)(void *);
     void *opaque;
-    QemuMutex lock;
+    GStaticMutex lock;
 };
 
 static gpointer thread_trampoline(gpointer data)
@@ -91,7 +65,7 @@  static gpointer thread_trampoline(gpointer data)
     gpointer retval;
 
     td->thread->tid = pthread_self();
-    qemu_mutex_unlock(&td->lock);
+    g_static_mutex_unlock(&td->lock);
 
     retval = td->startfn(td->opaque);
     qemu_free(td);
@@ -109,10 +83,10 @@  void qemu_thread_create(QemuThread *thread,
     td->startfn = start_routine;
     td->opaque = arg;
     td->thread = thread;
-    qemu_mutex_init(&td->lock);
+    g_static_mutex_init(&td->lock);
 
     /* on behalf of the new thread */
-    qemu_mutex_lock(&td->lock);
+    g_static_mutex_lock(&td->lock);
 
     sigfillset(&set);
     pthread_sigmask(SIG_SETMASK, &set, &old);
@@ -122,11 +96,11 @@  void qemu_thread_create(QemuThread *thread,
     /* we're transfering ownership of this lock to the thread so we no
      * longer hold it here */
 
-    qemu_mutex_lock(&td->lock);
+    g_static_mutex_lock(&td->lock);
     /* validate tid */
-    qemu_mutex_unlock(&td->lock);
+    g_static_mutex_unlock(&td->lock);
 
-    qemu_mutex_destroy(&td->lock);
+    g_static_mutex_free(&td->lock);
 }
 
 void qemu_thread_signal(QemuThread *thread, int sig)
diff --git a/qemu-thread.h b/qemu-thread.h
index dc22a60..dec6848 100644
--- a/qemu-thread.h
+++ b/qemu-thread.h
@@ -3,10 +3,6 @@ 
 #include <glib.h>
 #include <pthread.h>
 
-struct QemuMutex {
-    GStaticMutex lock;
-};
-
 struct QemuCond {
     GCond *cond;
 };
@@ -16,23 +12,15 @@  struct QemuThread {
     pthread_t tid;
 };
 
-typedef struct QemuMutex QemuMutex;
 typedef struct QemuCond QemuCond;
 typedef struct QemuThread QemuThread;
 
-void qemu_mutex_init(QemuMutex *mutex);
-void qemu_mutex_destroy(QemuMutex *mutex);
-void qemu_mutex_lock(QemuMutex *mutex);
-int qemu_mutex_trylock(QemuMutex *mutex);
-int qemu_mutex_timedlock(QemuMutex *mutex, uint64_t msecs);
-void qemu_mutex_unlock(QemuMutex *mutex);
-
 void qemu_cond_init(QemuCond *cond);
 void qemu_cond_destroy(QemuCond *cond);
 void qemu_cond_signal(QemuCond *cond);
 void qemu_cond_broadcast(QemuCond *cond);
-void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex);
-int qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex, uint64_t msecs);
+void qemu_cond_wait(QemuCond *cond, GMutex *mutex);
+int qemu_cond_timedwait(QemuCond *cond, GMutex *mutex, uint64_t msecs);
 
 void qemu_thread_create(QemuThread *thread,
                        void *(*start_routine)(void*),
diff --git a/ui/vnc-jobs-async.c b/ui/vnc-jobs-async.c
index 6e9cf08..48f567e 100644
--- a/ui/vnc-jobs-async.c
+++ b/ui/vnc-jobs-async.c
@@ -50,7 +50,7 @@ 
 
 struct VncJobQueue {
     QemuCond cond;
-    QemuMutex mutex;
+    GStaticMutex mutex;
     QemuThread thread;
     Buffer buffer;
     bool exit;
@@ -67,12 +67,12 @@  static VncJobQueue *queue;
 
 static void vnc_lock_queue(VncJobQueue *queue)
 {
-    qemu_mutex_lock(&queue->mutex);
+    g_static_mutex_lock(&queue->mutex);
 }
 
 static void vnc_unlock_queue(VncJobQueue *queue)
 {
-    qemu_mutex_unlock(&queue->mutex);
+    g_static_mutex_unlock(&queue->mutex);
 }
 
 VncJob *vnc_job_new(VncState *vs)
@@ -152,7 +152,7 @@  void vnc_jobs_join(VncState *vs)
 {
     vnc_lock_queue(queue);
     while (vnc_has_job_locked(vs)) {
-        qemu_cond_wait(&queue->cond, &queue->mutex);
+        qemu_cond_wait(&queue->cond, g_static_mutex_get_mutex(&queue->mutex));
     }
     vnc_unlock_queue(queue);
 }
@@ -195,7 +195,7 @@  static int vnc_worker_thread_loop(VncJobQueue *queue)
 
     vnc_lock_queue(queue);
     while (QTAILQ_EMPTY(&queue->jobs) && !queue->exit) {
-        qemu_cond_wait(&queue->cond, &queue->mutex);
+        qemu_cond_wait(&queue->cond, g_static_mutex_get_mutex(&queue->mutex));
     }
     /* Here job can only be NULL if queue->exit is true */
     job = QTAILQ_FIRST(&queue->jobs);
@@ -275,7 +275,7 @@  static VncJobQueue *vnc_queue_init(void)
     VncJobQueue *queue = qemu_mallocz(sizeof(VncJobQueue));
 
     qemu_cond_init(&queue->cond);
-    qemu_mutex_init(&queue->mutex);
+    g_static_mutex_init(&queue->mutex);
     QTAILQ_INIT(&queue->jobs);
     return queue;
 }
@@ -283,7 +283,7 @@  static VncJobQueue *vnc_queue_init(void)
 static void vnc_queue_clear(VncJobQueue *q)
 {
     qemu_cond_destroy(&queue->cond);
-    qemu_mutex_destroy(&queue->mutex);
+    g_static_mutex_free(&queue->mutex);
     buffer_free(&queue->buffer);
     qemu_free(q);
     queue = NULL; /* Unset global queue */
diff --git a/ui/vnc-jobs.h b/ui/vnc-jobs.h
index b8dab81..f4cc262 100644
--- a/ui/vnc-jobs.h
+++ b/ui/vnc-jobs.h
@@ -50,7 +50,7 @@  void vnc_stop_worker_thread(void);
 static inline int vnc_trylock_display(VncDisplay *vd)
 {
 #ifdef CONFIG_VNC_THREAD
-    return qemu_mutex_trylock(&vd->mutex);
+    return g_static_mutex_trylock(&vd->mutex);
 #else
     return 0;
 #endif
@@ -59,28 +59,28 @@  static inline int vnc_trylock_display(VncDisplay *vd)
 static inline void vnc_lock_display(VncDisplay *vd)
 {
 #ifdef CONFIG_VNC_THREAD
-    qemu_mutex_lock(&vd->mutex);
+    g_static_mutex_lock(&vd->mutex);
 #endif
 }
 
 static inline void vnc_unlock_display(VncDisplay *vd)
 {
 #ifdef CONFIG_VNC_THREAD
-    qemu_mutex_unlock(&vd->mutex);
+    g_static_mutex_unlock(&vd->mutex);
 #endif
 }
 
 static inline void vnc_lock_output(VncState *vs)
 {
 #ifdef CONFIG_VNC_THREAD
-    qemu_mutex_lock(&vs->output_mutex);
+    g_static_mutex_lock(&vs->output_mutex);
 #endif
 }
 
 static inline void vnc_unlock_output(VncState *vs)
 {
 #ifdef CONFIG_VNC_THREAD
-    qemu_mutex_unlock(&vs->output_mutex);
+    g_static_mutex_unlock(&vs->output_mutex);
 #endif
 }
 
diff --git a/ui/vnc.c b/ui/vnc.c
index 495d6d6..4efd684 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -1046,7 +1046,7 @@  static void vnc_disconnect_finish(VncState *vs)
     vnc_unlock_output(vs);
 
 #ifdef CONFIG_VNC_THREAD
-    qemu_mutex_destroy(&vs->output_mutex);
+    g_static_mutex_free(&vs->output_mutex);
 #endif
     qemu_free(vs);
 }
@@ -2386,7 +2386,7 @@  static void vnc_connect(VncDisplay *vd, int csock)
     vs->as.endianness = 0;
 
 #ifdef CONFIG_VNC_THREAD
-    qemu_mutex_init(&vs->output_mutex);
+    g_static_mutex_init(&vs->output_mutex);
 #endif
 
     QTAILQ_INSERT_HEAD(&vd->clients, vs, next);
@@ -2448,7 +2448,7 @@  void vnc_display_init(DisplayState *ds)
         exit(1);
 
 #ifdef CONFIG_VNC_THREAD
-    qemu_mutex_init(&vs->mutex);
+    g_static_mutex_init(&vs->mutex);
     vnc_start_worker_thread();
 #endif
 
diff --git a/ui/vnc.h b/ui/vnc.h
index 4f895be..5c6a676 100644
--- a/ui/vnc.h
+++ b/ui/vnc.h
@@ -108,7 +108,7 @@  struct VncDisplay
     kbd_layout_t *kbd_layout;
     int lock_key_sync;
 #ifdef CONFIG_VNC_THREAD
-    QemuMutex mutex;
+    GStaticMutex mutex;
 #endif
 
     QEMUCursor *cursor;
@@ -244,7 +244,7 @@  struct VncState
 #ifndef CONFIG_VNC_THREAD
     VncJob job;
 #else
-    QemuMutex output_mutex;
+    GStaticMutex output_mutex;
 #endif
 
     /* Encoding specific, if you add something here, don't forget to