
[07/12] qemu-thread: add QemuSemaphore

Message ID 1342435377-25897-8-git-send-email-pbonzini@redhat.com
State New

Commit Message

Paolo Bonzini July 16, 2012, 10:42 a.m. UTC
The new thread pool will use semaphores instead of condition
variables, because QemuCond does not have qemu_cond_timedwait.
(I also like it more this way, since semaphores model well the
producer-consumer problem).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 qemu-thread-posix.c |   74 +++++++++++++++++++++++++++++++++++++++++++++++++++
 qemu-thread-posix.h |    5 ++++
 qemu-thread-win32.c |   35 ++++++++++++++++++++++++
 qemu-thread-win32.h |    4 +++
 qemu-thread.h       |    7 +++++
 5 files changed, 125 insertions(+)

Comments

Jan Kiszka July 16, 2012, noon UTC | #1
On 2012-07-16 12:42, Paolo Bonzini wrote:
> The new thread pool will use semaphores instead of condition
> variables, because QemuCond does not have qemu_cond_timedwait.

I'll post an updated patch (according to last round's review comments)
that adds this service for POSIX. I bet you'll find a way to extend it
to Win32 if that is required. ;)

> (I also like it more this way, since semaphores model well the
> producer-consumer problem).

Let's not introduce another synchronization mechanism unless there is a
real need. Semaphores tend to be misused for things they don't fit, so
better keep them out of reach.

Also, if you do producer-consumer this way, you need a down() for every
entity you dequeue. In contrast, you only interact with condition
variables if the consumer queue is empty - fewer atomic ops.

Jan
Paolo Bonzini July 16, 2012, 1:20 p.m. UTC | #2
Il 16/07/2012 14:00, Jan Kiszka ha scritto:
> On 2012-07-16 12:42, Paolo Bonzini wrote:
>> The new thread pool will use semaphores instead of condition
>> variables, because QemuCond does not have qemu_cond_timedwait.
> 
> I'll post an updated patch (according to last round's review comments)
> that adds this service for POSIX. I bet you'll find a way to extend it
> to Win32 if that is required. ;)

I can do that (or just use pthreads-win32), but only at the cost of
making cond_wait() slower and more complex.

>> (I also like it more this way, since semaphores model well the
>> producer-consumer problem).
> 
> Let's not introduce another synchronization mechanism unless there is a
> real need. Semaphores tend to be misused for things they don't fit, so
> better keep them out of reach.

That's what patch review is for...

> Also, if you do producer-consumer this way, you need a down() for every
> entity you dequeue. In contrast, you only interact with condition
> variables if the consumer queue is empty - fewer atomic ops.

It doesn't really matter.  You want the thread pool to service requests
as fast as possible, which means you'll always have at least one free
thread waiting on the semaphore or cv.  So, with either semaphores or
cvs, the slow path is actually the normal case.

Paolo
Jan Kiszka July 16, 2012, 1:34 p.m. UTC | #3
On 2012-07-16 15:20, Paolo Bonzini wrote:
> Il 16/07/2012 14:00, Jan Kiszka ha scritto:
>> On 2012-07-16 12:42, Paolo Bonzini wrote:
>>> The new thread pool will use semaphores instead of condition
>>> variables, because QemuCond does not have qemu_cond_timedwait.
>>
>> I'll post an updated patch (according to last round's review comments)
>> that adds this service for POSIX. I bet you'll find a way to extend it
>> to Win32 if that is required. ;)
> 
> I can do that (or just use pthreads-win32), but only at the cost of
> making cond_wait() slower and more complex.

Why will it affect cond_wait? WaitForSingleObject can time out as well.

> 
>>> (I also like it more this way, since semaphores model well the
>>> producer-consumer problem).
>>
>> Let's not introduce another synchronization mechanism unless there is a
>> real need. Semaphores tend to be misused for things they don't fit, so
>> better keep them out of reach.
> 
> That's what patch review is for...

But even better is avoiding the temptation.

Jan
Paolo Bonzini July 16, 2012, 1:35 p.m. UTC | #4
Il 16/07/2012 15:34, Jan Kiszka ha scritto:
> On 2012-07-16 15:20, Paolo Bonzini wrote:
>> Il 16/07/2012 14:00, Jan Kiszka ha scritto:
>>> On 2012-07-16 12:42, Paolo Bonzini wrote:
>>>> The new thread pool will use semaphores instead of condition
>>>> variables, because QemuCond does not have qemu_cond_timedwait.
>>>
>>> I'll post an updated patch (according to last round's review comments)
>>> that adds this service for POSIX. I bet you'll find a way to extend it
>>> to Win32 if that is required. ;)
>>
>> I can do that (or just use pthreads-win32), but only at the cost of
>> making cond_wait() slower and more complex.
> 
> Why will it affect cond_wait? WaitForSingleObject can time out as well.

qemu_cond_wait only uses WaitForSingleObject with INFINITE timeout, and
the algorithm relies on that.

Paolo
Jan Kiszka July 16, 2012, 1:53 p.m. UTC | #5
On 2012-07-16 15:35, Paolo Bonzini wrote:
> Il 16/07/2012 15:34, Jan Kiszka ha scritto:
>> On 2012-07-16 15:20, Paolo Bonzini wrote:
>>> Il 16/07/2012 14:00, Jan Kiszka ha scritto:
>>>> On 2012-07-16 12:42, Paolo Bonzini wrote:
>>>>> The new thread pool will use semaphores instead of condition
>>>>> variables, because QemuCond does not have qemu_cond_timedwait.
>>>>
>>>> I'll post an updated patch (according to last round's review comments)
>>>> that adds this service for POSIX. I bet you'll find a way to extend it
>>>> to Win32 if that is required. ;)
>>>
>>> I can do that (or just use pthreads-win32), but only at the cost of
>>> making cond_wait() slower and more complex.
>>
>> Why will it affect cond_wait? WaitForSingleObject can time out as well.
> 
> qemu_cond_wait only uses WaitForSingleObject with INFINITE timeout, and
> the algorithm relies on that.

I see. But this doesn't look awfully complex. Just move the waker
signaling from within cond_wait under the mutex as well, maybe add
another flag recording whether someone is really waiting, and that's
it. The costs should be hard to measure, even in lines of code.

Jan
Paolo Bonzini July 16, 2012, 2:03 p.m. UTC | #6
Il 16/07/2012 15:53, Jan Kiszka ha scritto:
>> > 
>> > qemu_cond_wait only uses WaitForSingleObject with INFINITE timeout, and
>> > the algorithm relies on that.
> I see. But this doesn't look awfully complex. Just move the waker
> signaling from within cond_wait under the mutex as well, maybe add
> another flag recording whether someone is really waiting, and that's
> it. The costs should be hard to measure, even in lines of code.

There is still a race after WaitForSingleObject times out.  You need to
reacquire the mutex before decreasing the number of waiters, and during
that window somebody can broadcast the condition variable.

I'm not saying it's impossible, just that it's hard and I dislike
qemu_cond_timedwait as much as you dislike semaphores. :)

Paolo
Jan Kiszka July 16, 2012, 2:09 p.m. UTC | #7
On 2012-07-16 16:03, Paolo Bonzini wrote:
> Il 16/07/2012 15:53, Jan Kiszka ha scritto:
>>>>
>>>> qemu_cond_wait only uses WaitForSingleObject with INFINITE timeout, and
>>>> the algorithm relies on that.
>> I see. But this doesn't look awfully complex. Just move the waker
>> signaling from within cond_wait under the mutex as well, maybe add
>> another flag recording whether someone is really waiting, and that's
>> it. The costs should be hard to measure, even in lines of code.
> 
> There is still a race after WaitForSingleObject times out.  You need to
> reacquire the mutex before decreasing the number of waiters, and during
> that window somebody can broadcast the condition variable.

...and that's why you check what needs to be done to handle this race
after grabbing the mutex. IOW, replicate the state information that the
Windows semaphore contains into the emulated condition variable object.

> 
> I'm not saying it's impossible, just that it's hard and I dislike
> qemu_cond_timedwait as much as you dislike semaphores. :)

As you cannot eliminate our use cases for condition variables, let's
focus on the existing synchronization patterns instead of introducing
new ones + new mechanisms.

Jan
Paolo Bonzini July 16, 2012, 2:20 p.m. UTC | #8
Il 16/07/2012 16:09, Jan Kiszka ha scritto:
> On 2012-07-16 16:03, Paolo Bonzini wrote:
>> Il 16/07/2012 15:53, Jan Kiszka ha scritto:
>>>>>
>>>>> qemu_cond_wait only uses WaitForSingleObject with INFINITE timeout, and
>>>>> the algorithm relies on that.
>>> I see. But this doesn't look awfully complex. Just move the waker
>>> signaling from within cond_wait under the mutex as well, maybe add
>>> another flag recording whether someone is really waiting, and that's
>>> it. The costs should be hard to measure, even in lines of code.
>>
>> There is still a race after WaitForSingleObject times out.  You need to
>> reacquire the mutex before decreasing the number of waiters, and during
>> that window somebody can broadcast the condition variable.
> 
> ...and that's why you check what needs to be done to handle this race
> after grabbing the mutex. IOW, replicate the state information that the
> Windows semaphore contains into the emulated condition variable object.

It is already there (cv->waiters), but it is accessed atomically.  To do
what you suggest I would need to add a mutex.

>> I'm not saying it's impossible, just that it's hard and I dislike
>> qemu_cond_timedwait as much as you dislike semaphores. :)
> 
> As you cannot eliminate our use cases for condition variables, let's
> focus on the existing synchronization patterns instead of introducing
> new ones + new mechanisms.

I'm not introducing <fancy new primitive of the day>... semaphores have
the advantage of better support under Windows (at least on older
versions; Microsoft saw the light starting with Vista - yes, I know the
oxymoron - and added condvars).

Paolo

Patch

diff --git a/qemu-thread-posix.c b/qemu-thread-posix.c
index 9e1b5fb..251fef0 100644
--- a/qemu-thread-posix.c
+++ b/qemu-thread-posix.c
@@ -17,6 +17,9 @@ 
 #include <signal.h>
 #include <stdint.h>
 #include <string.h>
+#include <limits.h>
+#include <unistd.h>
+#include <sys/time.h>
 #include "qemu-thread.h"
 
 static void error_exit(int err, const char *msg)
@@ -115,6 +118,77 @@  void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
         error_exit(err, __func__);
 }
 
+void qemu_sem_init(QemuSemaphore *sem, int init)
+{
+    int rc;
+
+    rc = sem_init(&sem->sem, 0, init);
+    if (rc < 0) {
+        error_exit(errno, __func__);
+    }
+}
+
+void qemu_sem_destroy(QemuSemaphore *sem)
+{
+    int rc;
+
+    rc = sem_destroy(&sem->sem);
+    if (rc < 0) {
+        error_exit(errno, __func__);
+    }
+}
+
+void qemu_sem_post(QemuSemaphore *sem)
+{
+    int rc;
+
+    rc = sem_post(&sem->sem);
+    if (rc < 0) {
+        error_exit(errno, __func__);
+    }
+}
+
+int qemu_sem_timedwait(QemuSemaphore *sem, int ms)
+{
+    int rc;
+
+    if (ms <= 0) {
+        /* This is cheaper than sem_timedwait.  */
+        rc = sem_trywait(&sem->sem);
+        if (rc == -1 && errno == EAGAIN) {
+            return -1;
+        }
+    } else {
+        struct timeval tv;
+        struct timespec ts;
+        gettimeofday(&tv, NULL);
+        ts.tv_nsec = tv.tv_usec * 1000 + (ms % 1000) * 1000000;
+        ts.tv_sec = tv.tv_sec + ms / 1000;
+        if (ts.tv_nsec >= 1000000000) {
+            ts.tv_sec++;
+            ts.tv_nsec -= 1000000000;
+        }
+        rc = sem_timedwait(&sem->sem, &ts);
+        if (rc == -1 && errno == ETIMEDOUT) {
+            return -1;
+        }
+    }
+    if (rc < 0) {
+        error_exit(errno, __func__);
+    }
+    return 0;
+}
+
+void qemu_sem_wait(QemuSemaphore *sem)
+{
+    int rc;
+
+    rc = sem_wait(&sem->sem);
+    if (rc < 0) {
+        error_exit(errno, __func__);
+    }
+}
+
 void qemu_thread_create(QemuThread *thread,
                        void *(*start_routine)(void*),
                        void *arg, int mode)
diff --git a/qemu-thread-posix.h b/qemu-thread-posix.h
index ee4618e..2542c15 100644
--- a/qemu-thread-posix.h
+++ b/qemu-thread-posix.h
@@ -1,6 +1,7 @@ 
 #ifndef __QEMU_THREAD_POSIX_H
 #define __QEMU_THREAD_POSIX_H 1
 #include "pthread.h"
+#include <semaphore.h>
 
 struct QemuMutex {
     pthread_mutex_t lock;
@@ -10,6 +11,10 @@  struct QemuCond {
     pthread_cond_t cond;
 };
 
+struct QemuSemaphore {
+    sem_t sem;
+};
+
 struct QemuThread {
     pthread_t thread;
 };
diff --git a/qemu-thread-win32.c b/qemu-thread-win32.c
index 3524c8b..78602d2 100644
--- a/qemu-thread-win32.c
+++ b/qemu-thread-win32.c
@@ -192,6 +192,41 @@  void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
     qemu_mutex_lock(mutex);
 }
 
+void qemu_sem_init(QemuSemaphore *sem, int init)
+{
+    /* Initial count init, maximum count LONG_MAX.  */
+    sem->sema = CreateSemaphore(NULL, init, LONG_MAX, NULL);
+}
+
+void qemu_sem_destroy(QemuSemaphore *sem)
+{
+    CloseHandle(sem->sema);
+}
+
+void qemu_sem_post(QemuSemaphore *sem)
+{
+    ReleaseSemaphore(sem->sema, 1, NULL);
+}
+
+int qemu_sem_timedwait(QemuSemaphore *sem, int ms)
+{
+    int rc = WaitForSingleObject(sem->sema, ms);
+    if (rc == WAIT_OBJECT_0) {
+        return 0;
+    }
+    if (rc != WAIT_TIMEOUT) {
+        error_exit(GetLastError(), __func__);
+    }
+    return -1;
+}
+
+void qemu_sem_wait(QemuSemaphore *sem)
+{
+    if (WaitForSingleObject(sem->sema, INFINITE) != WAIT_OBJECT_0) {
+        error_exit(GetLastError(), __func__);
+    }
+}
+
 struct QemuThreadData {
     /* Passed to win32_start_routine.  */
     void             *(*start_routine)(void *);
diff --git a/qemu-thread-win32.h b/qemu-thread-win32.h
index b9d1be8..13adb95 100644
--- a/qemu-thread-win32.h
+++ b/qemu-thread-win32.h
@@ -13,6 +13,10 @@  struct QemuCond {
     HANDLE continue_event;
 };
 
+struct QemuSemaphore {
+    HANDLE sema;
+};
+
 typedef struct QemuThreadData QemuThreadData;
 struct QemuThread {
     QemuThreadData *data;
diff --git a/qemu-thread.h b/qemu-thread.h
index a78a8f2..c3f960e 100644
--- a/qemu-thread.h
+++ b/qemu-thread.h
@@ -5,6 +5,7 @@ 
 
 typedef struct QemuMutex QemuMutex;
 typedef struct QemuCond QemuCond;
+typedef struct QemuSemaphore QemuSemaphore;
 typedef struct QemuThread QemuThread;
 
 #ifdef _WIN32
@@ -37,6 +38,12 @@  void qemu_cond_signal(QemuCond *cond);
 void qemu_cond_broadcast(QemuCond *cond);
 void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex);
 
+void qemu_sem_init(QemuSemaphore *sem, int init);
+void qemu_sem_post(QemuSemaphore *sem);
+void qemu_sem_wait(QemuSemaphore *sem);
+int qemu_sem_timedwait(QemuSemaphore *sem, int ms);
+void qemu_sem_destroy(QemuSemaphore *sem);
+
 void qemu_thread_create(QemuThread *thread,
                         void *(*start_routine)(void *),
                         void *arg, int mode);