From patchwork Fri Jun 28 18:26:32 2013
X-Patchwork-Id: 255513
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Date: Fri, 28 Jun 2013 20:26:32 +0200
Message-Id: <1372444009-11544-14-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1372444009-11544-1-git-send-email-pbonzini@redhat.com>
References: <1372444009-11544-1-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.1.4
Subject: [Qemu-devel] [PATCH 13/30] qemu-thread: report RCU quiescent states

Most threads will use mutexes and other sleeping synchronization
primitives (condition variables, semaphores, events) periodically.
For these threads, the synchronization primitives are natural places
to report a quiescent state (possibly an extended one).
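In sketch form, the pattern applied throughout this patch brackets each
blocking wait with the extended-quiescent-state calls from the RCU API
(simplified here from the qemu-thread-posix.c hunk below; error handling
elided):

    /* Simplified sketch, not the exact hunk: a thread that blocks
     * cannot be inside an RCU read-side critical section, so the wait
     * itself is reported as an extended quiescent state.
     */
    void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
    {
        rcu_thread_offline();   /* RCU stops waiting for this thread */
        pthread_cond_wait(&cond->cond, &mutex->lock);
        rcu_thread_online();    /* thread may take RCU references again */
    }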
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 docs/rcu.txt             | 33 ++++++++++++++++++++++++++++++++-
 util/qemu-thread-posix.c | 30 ++++++++++++++++++++++++++----
 util/qemu-thread-win32.c | 16 +++++++++++++++-
 util/rcu.c               |  3 ---
 4 files changed, 73 insertions(+), 9 deletions(-)

diff --git a/docs/rcu.txt b/docs/rcu.txt
index a3510b9..6c4a852 100644
--- a/docs/rcu.txt
+++ b/docs/rcu.txt
@@ -168,6 +168,35 @@ of "quiescent states", i.e. points where no RCU read-side critical
 section can be active.  All threads created with qemu_thread_create
 participate in the RCU mechanism and need to annotate such points.
 
+Luckily, in most cases no manual annotation is needed, because waiting
+on condition variables (qemu_cond_wait), semaphores (qemu_sem_wait,
+qemu_sem_timedwait) or events (qemu_event_wait) implicitly marks the thread
+as quiescent for the whole duration of the wait.  (There is an exception
+for semaphore waits with a zero timeout).
+
+Manual annotation is still needed in the following cases:
+
+- threads that spend their sleeping time in the kernel, for example
+  in a call to select(), poll(), sigwait() or WaitForMultipleObjects().
+  The QEMU I/O thread is an example of this case.  When running under
+  KVM, VCPUs are also in a quiescent state while running the guest.
+
+- threads that perform a lot of I/O.  In QEMU, the workers used for
+  aio=thread are an example of this case (see aio_worker in block/raw-*).
+
+- threads that run continuously until they exit.  The migration thread
+  is an example of this case.
+
+Regarding the second case, note that the workers run in the QEMU thread
+pool.  The thread pool uses semaphores for synchronization, hence it does
+report quiescent states periodically.  However, in some cases (e.g. NFS
+mounted with the "hard" option) the workers can take an arbitrarily long
+amount of time.  When this happens, synchronize_rcu() will not exit and
+call_rcu() callbacks will be delayed arbitrarily.  It is therefore a
+good idea to mark I/O system calls as quiescence points in the worker
+functions.
+
+
 Marking quiescent states is done with the following three APIs:
 
     void rcu_quiescent_state(void);
@@ -229,7 +258,9 @@ DIFFERENCES WITH LINUX
   type of the callback's argument to be the type of the first argument.
   call_rcu1 is the same as Linux's call_rcu.
 
-- Quiescent points must be marked explicitly in the thread.
+- Quiescent points must be marked explicitly unless the thread uses
+  condvars/semaphores/events for synchronization.  Note that mutexes
+  do not report quiescent points (see the first item above).
 
 
 RCU PATTERNS
diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
index 2df3382..21190be 100644
--- a/util/qemu-thread-posix.c
+++ b/util/qemu-thread-posix.c
@@ -119,7 +119,9 @@ void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
 {
     int err;
 
+    rcu_thread_offline();
     err = pthread_cond_wait(&cond->cond, &mutex->lock);
+    rcu_thread_online();
     if (err)
         error_exit(err, __func__);
 }
@@ -212,6 +214,10 @@ int qemu_sem_timedwait(QemuSemaphore *sem, int ms)
     int rc;
     struct timespec ts;
 
+    if (ms) {
+        rcu_thread_offline();
+    }
+
 #if defined(__APPLE__) || defined(__NetBSD__)
     compute_abs_deadline(&ts, ms);
     pthread_mutex_lock(&sem->lock);
@@ -227,7 +233,10 @@ int qemu_sem_timedwait(QemuSemaphore *sem, int ms)
         }
     }
     pthread_mutex_unlock(&sem->lock);
-    return (rc == ETIMEDOUT ? -1 : 0);
+    if (rc == ETIMEDOUT) {
+        rc = -1;
+    }
+
 #else
     if (ms <= 0) {
         /* This is cheaper than sem_timedwait.  */
@@ -235,7 +244,7 @@ int qemu_sem_timedwait(QemuSemaphore *sem, int ms)
             rc = sem_trywait(&sem->sem);
         } while (rc == -1 && errno == EINTR);
         if (rc == -1 && errno == EAGAIN) {
-            return -1;
+            goto out;
         }
     } else {
         compute_abs_deadline(&ts, ms);
@@ -243,18 +252,25 @@ int qemu_sem_timedwait(QemuSemaphore *sem, int ms)
             rc = sem_timedwait(&sem->sem, &ts);
         } while (rc == -1 && errno == EINTR);
         if (rc == -1 && errno == ETIMEDOUT) {
-            return -1;
+            goto out;
         }
     }
     if (rc < 0) {
         error_exit(errno, __func__);
     }
-    return 0;
 #endif
+
+out:
+    if (ms) {
+        rcu_thread_online();
+    }
+    return rc;
 }
 
 void qemu_sem_wait(QemuSemaphore *sem)
 {
+    rcu_thread_offline();
+
 #if defined(__APPLE__) || defined(__NetBSD__)
     pthread_mutex_lock(&sem->lock);
     --sem->count;
@@ -272,6 +288,8 @@ void qemu_sem_wait(QemuSemaphore *sem)
         error_exit(errno, __func__);
     }
 #endif
+
+    rcu_thread_online();
 }
 
 #ifdef __linux__
@@ -380,7 +398,11 @@ void qemu_event_wait(QemuEvent *ev)
                 return;
             }
         }
+        rcu_thread_offline();
         futex_wait(ev, EV_BUSY);
+        rcu_thread_online();
+    } else {
+        rcu_quiescent_state();
     }
 }
 
diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
index 18978be..9c14cf1 100644
--- a/util/qemu-thread-win32.c
+++ b/util/qemu-thread-win32.c
@@ -12,6 +12,7 @@
  */
 #include "qemu-common.h"
 #include "qemu/thread.h"
+#include "qemu/rcu.h"
 #include <process.h>
 #include <assert.h>
 #include <limits.h>
@@ -187,7 +188,9 @@ void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
      * leaving mutex unlocked before we wait on semaphore.
      */
     qemu_mutex_unlock(mutex);
+    rcu_thread_offline();
     WaitForSingleObject(cond->sema, INFINITE);
+    rcu_thread_online();
 
     /* Now waiters must rendez-vous with the signaling thread and
      * let it continue.  For cond_broadcast this has heavy contention
@@ -227,7 +230,16 @@ void qemu_sem_post(QemuSemaphore *sem)
 
 int qemu_sem_timedwait(QemuSemaphore *sem, int ms)
 {
-    int rc = WaitForSingleObject(sem->sema, ms);
+    int rc;
+
+    if (ms) {
+        rcu_thread_offline();
+    }
+    rc = WaitForSingleObject(sem->sema, ms);
+    if (ms) {
+        rcu_thread_online();
+    }
+
     if (rc == WAIT_OBJECT_0) {
         return 0;
     }
@@ -267,7 +279,9 @@ void qemu_event_reset(QemuEvent *ev)
 
 void qemu_event_wait(QemuEvent *ev)
 {
+    rcu_thread_offline();
     WaitForSingleObject(ev->event, INFINITE);
+    rcu_thread_online();
 }
 
 struct QemuThreadData {
diff --git a/util/rcu.c b/util/rcu.c
index f1c5736..654f3bb 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -232,9 +232,6 @@ static void *call_rcu_thread(void *opaque)
 {
     struct rcu_head *node;
 
-    /* This thread is just a writer.  */
-    rcu_thread_offline();
-
     for (;;) {
         int tries = 0;
         int n = atomic_read(&rcu_call_count);
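
For the cases that the docs/rcu.txt hunk above says still need manual
annotation (threads that sleep in the kernel or perform long I/O), the
annotation would look something like the sketch below; wait_for_fd is a
hypothetical helper for illustration, not part of this patch:

    #include <poll.h>
    #include "qemu/rcu.h"

    /* Hypothetical example (not part of this patch) of manually marking
     * a kernel sleep as an extended quiescent state, as docs/rcu.txt
     * advises for threads that block in select()/poll() rather than in
     * QEMU's synchronization primitives.
     */
    static void wait_for_fd(int fd)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        rcu_thread_offline();   /* no RCU read-side sections while blocked */
        poll(&pfd, 1, -1);      /* may sleep for an arbitrarily long time */
        rcu_thread_online();    /* re-register with the RCU machinery */
    }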