From patchwork Tue Jun 27 15:22:40 2017
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, qemu-block@nongnu.org, stefanha@redhat.com
Date: Tue, 27 Jun 2017 17:22:40 +0200
Message-Id: <20170627152248.10446-3-pbonzini@redhat.com>
In-Reply-To: <20170627152248.10446-1-pbonzini@redhat.com>
References: <20170627152248.10446-1-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 02/10] coroutine-lock: add
 qemu_co_rwlock_downgrade and qemu_co_rwlock_upgrade

These functions are more efficient than an unlock/lock pair in the
presence of contention.  qemu_co_rwlock_downgrade also guarantees not
to block, which may be useful in some algorithms.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/qemu/coroutine.h   | 18 ++++++++++++++++++
 util/qemu-coroutine-lock.c | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index a4509bd977..9aff9a735e 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -229,6 +229,24 @@ void qemu_co_rwlock_init(CoRwlock *lock);
 void qemu_co_rwlock_rdlock(CoRwlock *lock);
 
 /**
+ * Write Locks the CoRwlock from a reader.  This is a bit more efficient than
+ * @qemu_co_rwlock_unlock followed by a separate @qemu_co_rwlock_wrlock.
+ * However, if the lock cannot be upgraded immediately, control is transferred
+ * to the caller of the current coroutine.  Also, @qemu_co_rwlock_upgrade
+ * only overrides CoRwlock fairness if there are no concurrent readers, so
+ * another writer might run while @qemu_co_rwlock_upgrade blocks.
+ */
+void qemu_co_rwlock_upgrade(CoRwlock *lock);
+
+/**
+ * Downgrades a write-side critical section to a reader.  Downgrading with
+ * @qemu_co_rwlock_downgrade never blocks, unlike @qemu_co_rwlock_unlock
+ * followed by @qemu_co_rwlock_rdlock.  This makes it more efficient, but
+ * may also sometimes be necessary for correctness.
+ */
+void qemu_co_rwlock_downgrade(CoRwlock *lock);
+
+/**
  * Write Locks the mutex. If the lock cannot be taken immediately because
  * of a parallel reader, control is transferred to the caller of the current
  * coroutine.
diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
index b44b5d55eb..846ff9167f 100644
--- a/util/qemu-coroutine-lock.c
+++ b/util/qemu-coroutine-lock.c
@@ -402,6 +402,21 @@ void qemu_co_rwlock_unlock(CoRwlock *lock)
     qemu_co_mutex_unlock(&lock->mutex);
 }
 
+void qemu_co_rwlock_downgrade(CoRwlock *lock)
+{
+    Coroutine *self = qemu_coroutine_self();
+
+    /* lock->mutex critical section started in qemu_co_rwlock_wrlock or
+     * qemu_co_rwlock_upgrade.
+     */
+    assert(lock->reader == 0);
+    lock->reader++;
+    qemu_co_mutex_unlock(&lock->mutex);
+
+    /* The rest of the read-side critical section is run without the mutex.  */
+    self->locks_held++;
+}
+
 void qemu_co_rwlock_wrlock(CoRwlock *lock)
 {
     qemu_co_mutex_lock(&lock->mutex);
@@ -416,3 +431,23 @@ void qemu_co_rwlock_wrlock(CoRwlock *lock)
      * There is no need to update self->locks_held.
      */
 }
+
+void qemu_co_rwlock_upgrade(CoRwlock *lock)
+{
+    Coroutine *self = qemu_coroutine_self();
+
+    qemu_co_mutex_lock(&lock->mutex);
+    assert(lock->reader > 0);
+    lock->reader--;
+    lock->pending_writer++;
+    while (lock->reader) {
+        qemu_co_queue_wait(&lock->queue, &lock->mutex);
+    }
+    lock->pending_writer--;
+
+    /* The rest of the write-side critical section is run with
+     * the mutex taken, similar to qemu_co_rwlock_wrlock.  Do
+     * not account for the lock twice in self->locks_held.
+     */
+    self->locks_held--;
+}
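
For reference, a typical caller would pair the two new functions with the
existing read lock, for example in a check-then-insert pattern.  The sketch
below is illustrative only and not part of the patch; Cache and the cache_*
helpers are hypothetical stand-ins.  Because qemu_co_rwlock_upgrade may yield
while waiting for other readers to finish, and another writer may run in the
meantime, the caller has to re-check its condition after the upgrade:

/* Illustrative sketch only -- Cache, cache_contains(), cache_insert() and
 * cache_read() are hypothetical helpers, not part of QEMU or this patch.
 */
#include "qemu/osdep.h"
#include "qemu/coroutine.h"

typedef struct Cache {
    CoRwlock lock;              /* protects the cached data */
    /* ... cached data ... */
} Cache;

static void coroutine_fn cache_lookup_or_insert(Cache *c, int key)
{
    qemu_co_rwlock_rdlock(&c->lock);
    if (!cache_contains(c, key)) {
        /* Upgrade in place instead of unlock + wrlock.  The upgrade can
         * still yield until the remaining readers leave, and another
         * writer may run first, so re-check the condition afterwards.
         */
        qemu_co_rwlock_upgrade(&c->lock);
        if (!cache_contains(c, key)) {
            cache_insert(c, key);
        }
        /* Drop back to a read lock; unlike unlock + rdlock, downgrading
         * is guaranteed not to block.
         */
        qemu_co_rwlock_downgrade(&c->lock);
    }
    cache_read(c, key);
    qemu_co_rwlock_unlock(&c->lock);
}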