From patchwork Wed Aug 12 13:37:04 2015
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 506620
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Wed, 12 Aug 2015 15:37:04 +0200
Message-Id: <1439386633-18933-12-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1439386633-18933-1-git-send-email-pbonzini@redhat.com>
References: <1439386633-18933-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PULL 11/20] rcu: Allow calling rcu_(un)register_thread() during synchronize_rcu()

From: Wen Congyang

If rcu_(un)register_thread() is called while synchronize_rcu() is
running, it has to wait for synchronize_rcu() to finish.  But while
synchronize_rcu() is blocked waiting for an event, it is actually safe
to let other threads modify the registry list.

rcu_gp_lock is also used to ensure that synchronize_rcu() is not
executed by more than one thread at a time.  Add a new mutex,
rcu_sync_lock, for that purpose, and rename rcu_gp_lock to
rcu_registry_lock.  Release rcu_registry_lock while synchronize_rcu()
waits for events.
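As an illustration of the scenario described above, here is a minimal
sketch (not part of the patch): a hypothetical test, assumed to be built
inside the QEMU tree in the style of tests/rcutorture.c, in which one
thread registers and unregisters itself while the main thread is inside
synchronize_rcu().  The reader_fn thread and the harness are made-up
names; only the RCU and thread APIs are real.

#include <stdio.h>
#include "qemu/rcu.h"
#include "qemu/thread.h"

/* A thread that registers itself as an RCU reader, runs a read-side
 * critical section, and unregisters.  With this patch, registration and
 * unregistration only take rcu_registry_lock, so they no longer have to
 * wait for a concurrent synchronize_rcu() to finish its grace period.
 */
static void *reader_fn(void *arg)
{
    (void)arg;

    rcu_register_thread();

    rcu_read_lock();
    /* ... read RCU-protected data here ... */
    rcu_read_unlock();            /* lets a pending grace period complete */

    rcu_unregister_thread();
    return NULL;
}

int main(void)
{
    QemuThread reader;

    qemu_thread_create(&reader, "rcu-reader", reader_fn, NULL,
                       QEMU_THREAD_JOINABLE);

    /* Holds rcu_sync_lock for its whole duration, but drops
     * rcu_registry_lock while waiting in qemu_event_wait(), so the
     * thread above can register and unregister concurrently.
     * (The calling thread does not itself need to be a registered
     * reader just to call synchronize_rcu().)
     */
    synchronize_rcu();

    qemu_thread_join(&reader);
    printf("done\n");
    return 0;
}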
Signed-off-by: Wen Congyang
Message-Id: <55B59652.4090503@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini
---
 util/rcu.c | 48 +++++++++++++++++++++++++++++++++++-------------
 1 file changed, 35 insertions(+), 13 deletions(-)

diff --git a/util/rcu.c b/util/rcu.c
index cdcad67..8ba304d 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -47,7 +47,8 @@ unsigned long rcu_gp_ctr = RCU_GP_LOCKED;
 
 QemuEvent rcu_gp_event;
-static QemuMutex rcu_gp_lock;
+static QemuMutex rcu_registry_lock;
+static QemuMutex rcu_sync_lock;
 
 /*
  * Check whether a quiescent state was crossed between the beginning of
@@ -66,7 +67,7 @@ static inline int rcu_gp_ongoing(unsigned long *ctr)
  */
 __thread struct rcu_reader_data rcu_reader;
 
-/* Protected by rcu_gp_lock. */
+/* Protected by rcu_registry_lock. */
 typedef QLIST_HEAD(, rcu_reader_data) ThreadList;
 static ThreadList registry = QLIST_HEAD_INITIALIZER(registry);
 
@@ -114,10 +115,26 @@ static void wait_for_readers(void)
             break;
         }
 
-        /* Wait for one thread to report a quiescent state and
-         * try again.
+        /* Wait for one thread to report a quiescent state and try again.
+         * Release rcu_registry_lock, so rcu_(un)register_thread() doesn't
+         * wait too much time.
+         *
+         * rcu_register_thread() may add nodes to &registry; it will not
+         * wake up synchronize_rcu, but that is okay because at least another
+         * thread must exit its RCU read-side critical section before
+         * synchronize_rcu is done. The next iteration of the loop will
+         * move the new thread's rcu_reader from &registry to &qsreaders,
+         * because rcu_gp_ongoing() will return false.
+         *
+         * rcu_unregister_thread() may remove nodes from &qsreaders instead
+         * of &registry if it runs during qemu_event_wait. That's okay;
+         * the node then will not be added back to &registry by QLIST_SWAP
+         * below. The invariant is that the node is part of one list when
+         * rcu_registry_lock is released.
          */
+        qemu_mutex_unlock(&rcu_registry_lock);
         qemu_event_wait(&rcu_gp_event);
+        qemu_mutex_lock(&rcu_registry_lock);
     }
 
     /* put back the reader list in the registry */
@@ -126,7 +143,8 @@ static void wait_for_readers(void)
 
 void synchronize_rcu(void)
 {
-    qemu_mutex_lock(&rcu_gp_lock);
+    qemu_mutex_lock(&rcu_sync_lock);
+    qemu_mutex_lock(&rcu_registry_lock);
 
     if (!QLIST_EMPTY(&registry)) {
         /* In either case, the atomic_mb_set below blocks stores that free
@@ -149,7 +167,8 @@ void synchronize_rcu(void)
         wait_for_readers();
     }
 
-    qemu_mutex_unlock(&rcu_gp_lock);
+    qemu_mutex_unlock(&rcu_registry_lock);
+    qemu_mutex_unlock(&rcu_sync_lock);
 }
 
 
@@ -273,23 +292,24 @@ void call_rcu1(struct rcu_head *node, void (*func)(struct rcu_head *node))
 void rcu_register_thread(void)
 {
     assert(rcu_reader.ctr == 0);
-    qemu_mutex_lock(&rcu_gp_lock);
+    qemu_mutex_lock(&rcu_registry_lock);
     QLIST_INSERT_HEAD(&registry, &rcu_reader, node);
-    qemu_mutex_unlock(&rcu_gp_lock);
+    qemu_mutex_unlock(&rcu_registry_lock);
 }
 
 void rcu_unregister_thread(void)
 {
-    qemu_mutex_lock(&rcu_gp_lock);
+    qemu_mutex_lock(&rcu_registry_lock);
     QLIST_REMOVE(&rcu_reader, node);
-    qemu_mutex_unlock(&rcu_gp_lock);
+    qemu_mutex_unlock(&rcu_registry_lock);
 }
 
 static void rcu_init_complete(void)
 {
     QemuThread thread;
 
-    qemu_mutex_init(&rcu_gp_lock);
+    qemu_mutex_init(&rcu_registry_lock);
+    qemu_mutex_init(&rcu_sync_lock);
     qemu_event_init(&rcu_gp_event, true);
 
     qemu_event_init(&rcu_call_ready_event, false);
@@ -306,12 +326,14 @@ static void rcu_init_complete(void)
 
 #ifdef CONFIG_POSIX
 static void rcu_init_lock(void)
 {
-    qemu_mutex_lock(&rcu_gp_lock);
+    qemu_mutex_lock(&rcu_sync_lock);
+    qemu_mutex_lock(&rcu_registry_lock);
 }
 
 static void rcu_init_unlock(void)
 {
-    qemu_mutex_unlock(&rcu_gp_lock);
+    qemu_mutex_unlock(&rcu_registry_lock);
+    qemu_mutex_unlock(&rcu_sync_lock);
 }
 #endif
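For reference, the writer-side pattern that this grace-period machinery
serves looks roughly like the sketch below (again not part of the patch,
and assumed to be built inside the QEMU tree).  Config, current_config
and the read/update helpers are made-up names; only the RCU, atomic and
glib calls are real APIs, and any thread calling read_config_value()
must have registered itself with rcu_register_thread().

#include "qemu/osdep.h"
#include <glib.h>
#include "qemu/atomic.h"
#include "qemu/rcu.h"

typedef struct Config {
    struct rcu_head rcu;        /* for call_rcu1() */
    int value;
} Config;

static Config *current_config;

/* Reader: the object read here cannot be freed until this thread leaves
 * its read-side critical section and reports a quiescent state.
 */
static int read_config_value(void)
{
    Config *cfg;
    int val;

    rcu_read_lock();
    cfg = atomic_rcu_read(&current_config);
    val = cfg ? cfg->value : 0;
    rcu_read_unlock();
    return val;
}

/* Synchronous writer: publish the new copy, then block in
 * synchronize_rcu() until every pre-existing reader is done -- the wait
 * that wait_for_readers() performs while temporarily dropping
 * rcu_registry_lock.
 */
static void update_config_sync(int value)
{
    Config *new_cfg = g_new0(Config, 1);
    Config *old_cfg;

    new_cfg->value = value;
    old_cfg = atomic_rcu_read(&current_config);
    atomic_rcu_set(&current_config, new_cfg);
    synchronize_rcu();
    g_free(old_cfg);
}

/* Asynchronous alternative: hand the old copy to the call_rcu thread
 * instead of blocking for a full grace period.
 */
static void free_config(struct rcu_head *rcu)
{
    g_free(container_of(rcu, Config, rcu));
}

static void update_config_async(int value)
{
    Config *new_cfg = g_new0(Config, 1);
    Config *old_cfg;

    new_cfg->value = value;
    old_cfg = atomic_rcu_read(&current_config);
    atomic_rcu_set(&current_config, new_cfg);
    if (old_cfg) {
        call_rcu1(&old_cfg->rcu, free_config);
    }
}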