From patchwork Tue Jun 18 16:19:52 2013
Date: Tue, 18 Jun 2013 11:19:52 -0500
From: mdroth
To: Liu Ping Fan
Message-ID: <20130618161952.GB12685@vm>
In-Reply-To: <20130618151438.GA12685@vm>
Cc: Paolo Bonzini, qemu-devel@nongnu.org, Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH v2 2/2] QEMUBH: make AioContext's bh re-entrant

On Tue, Jun 18, 2013 at 10:14:38AM -0500, mdroth wrote:
> On Sun, Jun 16, 2013 at 07:21:21PM +0800, Liu Ping Fan wrote:
> > BH will be used outside the big lock, so introduce a lock to protect
> > the writers from one another, i.e. bh's adders and deleter.
> > Note that the lock only affects the writers; bh's callback does
> > not take this extra lock.
> >
> > Signed-off-by: Liu Ping Fan
> > ---
> >  async.c             | 21 +++++++++++++++++++++
> >  include/block/aio.h |  3 +++
> >  2 files changed, 24 insertions(+)
> >
> > diff --git a/async.c b/async.c
> > index 90fe906..6a3269f 100644
> > --- a/async.c
> > +++ b/async.c
> > @@ -47,8 +47,12 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
> >      bh->ctx = ctx;
> >      bh->cb = cb;
> >      bh->opaque = opaque;
> > +    qemu_mutex_lock(&ctx->bh_lock);
> >      bh->next = ctx->first_bh;
> > +    /* Make sure the members are ready before putting bh into the list */
> > +    smp_wmb();
> >      ctx->first_bh = bh;
> > +    qemu_mutex_unlock(&ctx->bh_lock);
> >      return bh;
> >  }
> >
> > @@ -61,12 +65,18 @@ int aio_bh_poll(AioContext *ctx)
> >
> >      ret = 0;
> >      for (bh = ctx->first_bh; bh; bh = next) {
> > +        /* Make sure bh is fetched before its members are accessed */
> > +        smp_read_barrier_depends();
> >          next = bh->next;
> >          if (!bh->deleted && bh->scheduled) {
> >              bh->scheduled = 0;
> >              if (!bh->idle)
> >                  ret = 1;
> >              bh->idle = 0;
> > +            /* Paired with the write barrier in bh schedule, so the
> > +             * callback sees all writes done before bh was scheduled.
> > +             */
> > +            smp_rmb();
> >              bh->cb(bh->opaque);
>
> Could we possibly simplify this by introducing a recursive mutex that we
> could use to protect the whole list loop and hold even during the cb?
>
> I assume we can't hold the lock during the cb currently, since we might
> try to reschedule, but if it's a recursive mutex, would that simplify
> things?
>
> I've been doing something similar with IOHandlers for the QContext
> stuff, and that's the approach I took. This patch introduces the
> recursive mutex:
>
> https://github.com/mdroth/qemu/commit/c7ee0844da62283c9466fcb10ddbfadd0b8bfc53
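Roughly, that approach trades the barriers above for a single recursive
lock held across the whole list walk, callbacks included. A minimal
sketch of the idea (the BH/BHList/bh_* names are made up for
illustration; this is not the actual QContext code):

#include <pthread.h>
#include <stddef.h>

typedef struct BH {
    struct BH *next;
    void (*cb)(void *opaque);
    void *opaque;
    int scheduled;
} BH;

typedef struct BHList {
    pthread_mutex_t lock;       /* initialized as PTHREAD_MUTEX_RECURSIVE */
    BH *first;
} BHList;

void bh_list_init(BHList *list)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&list->lock, &attr);
    pthread_mutexattr_destroy(&attr);
    list->first = NULL;
}

void bh_schedule(BHList *list, BH *bh)
{
    /* Safe even when called from inside a callback run by bh_list_poll():
     * the recursive mutex counts the re-entry instead of deadlocking. */
    pthread_mutex_lock(&list->lock);
    bh->scheduled = 1;
    pthread_mutex_unlock(&list->lock);
}

int bh_list_poll(BHList *list)
{
    int ret = 0;

    pthread_mutex_lock(&list->lock);
    for (BH *bh = list->first; bh; bh = bh->next) {
        if (bh->scheduled) {
            bh->scheduled = 0;
            ret = 1;
            /* The lock is still held here, so no memory barriers are
             * needed, and a callback that reschedules simply recurses
             * into bh_schedule() above. */
            bh->cb(bh->opaque);
        }
    }
    pthread_mutex_unlock(&list->lock);
    return ret;
}

The tradeoff is that adders and deleters on other threads now block for
the full duration of every callback, whereas the barrier scheme in the
patch keeps the poll side lock-free.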
Doh, missing this fix:

> >
> >          }
> >      }
> >
> > @@ -75,6 +85,7 @@ int aio_bh_poll(AioContext *ctx)
> >
> >      /* remove deleted bhs */
> >      if (!ctx->walking_bh) {
> > +        qemu_mutex_lock(&ctx->bh_lock);
> >          bhp = &ctx->first_bh;
> >          while (*bhp) {
> >              bh = *bhp;
> > @@ -85,6 +96,7 @@ int aio_bh_poll(AioContext *ctx)
> >                  bhp = &bh->next;
> >              }
> >          }
> > +        qemu_mutex_unlock(&ctx->bh_lock);
> >      }
> >
> >      return ret;
> > @@ -94,6 +106,10 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
> >  {
> >      if (bh->scheduled)
> >          return;
> > +    /* Make sure any writes that are needed by the callback are done
> > +     * before the locations are read in aio_bh_poll.
> > +     */
> > +    smp_wmb();
> >      bh->scheduled = 1;
> >      bh->idle = 1;
> >  }
> >
> > @@ -102,6 +118,10 @@ void qemu_bh_schedule(QEMUBH *bh)
> >  {
> >      if (bh->scheduled)
> >          return;
> > +    /* Make sure any writes that are needed by the callback are done
> > +     * before the locations are read in aio_bh_poll.
> > +     */
> > +    smp_wmb();
> >      bh->scheduled = 1;
> >      bh->idle = 0;
> >      aio_notify(bh->ctx);
> > @@ -211,6 +231,7 @@ AioContext *aio_context_new(void)
> >      ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
> >      ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
> >      ctx->thread_pool = NULL;
> > +    qemu_mutex_init(&ctx->bh_lock);
> >      event_notifier_init(&ctx->notifier, false);
> >      aio_set_event_notifier(ctx, &ctx->notifier,
> >                             (EventNotifierHandler *)
> > diff --git a/include/block/aio.h b/include/block/aio.h
> > index 1836793..971fbef 100644
> > --- a/include/block/aio.h
> > +++ b/include/block/aio.h
> > @@ -17,6 +17,7 @@
> >  #include "qemu-common.h"
> >  #include "qemu/queue.h"
> >  #include "qemu/event_notifier.h"
> > +#include "qemu/thread.h"
> >
> >  typedef struct BlockDriverAIOCB BlockDriverAIOCB;
> >  typedef void BlockDriverCompletionFunc(void *opaque, int ret);
> > @@ -53,6 +54,8 @@ typedef struct AioContext {
> >       */
> >      int walking_handlers;
> >
> > +    /* lock to protect the bh list between adders and the deleter */
> > +    QemuMutex bh_lock;
> >      /* Anchor of the list of Bottom Halves belonging to the context */
> >      struct QEMUBH *first_bh;
> >
> > --
> > 1.8.1.4
> >
> >

diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
index 0623eca..a4d8170 100644
--- a/util/qemu-thread-win32.c
+++ b/util/qemu-thread-win32.c
@@ -81,7 +81,7 @@ int qemu_mutex_trylock(QemuMutex *mutex)
 void qemu_mutex_unlock(QemuMutex *mutex)
 {
     assert(mutex->owner == GetCurrentThreadId());
-    if (!mutex->recursive || mutex->recursion_depth == 0) {
+    if (!mutex->recursive || --mutex->recursion_depth == 0) {
         mutex->owner = 0;
     }
     LeaveCriticalSection(&mutex->lock);
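The one-character change is the important part: without the
pre-decrement, recursion_depth never goes down, so the owner tracking
never unwinds and the final unlock of a nested sequence fails to clear
the owner. A standalone illustration of the same bookkeeping (a toy
lock with made-up names, not QEMU's actual qemu-thread code):

#include <assert.h>
#include <pthread.h>

typedef struct RecMutex {
    pthread_mutex_t lock;   /* non-recursive inner lock */
    pthread_t owner;        /* valid only while depth > 0 */
    int depth;              /* nesting count for the owning thread */
} RecMutex;

void rec_mutex_lock(RecMutex *m)
{
    /* The owner check is only meaningful when the caller is the owner;
     * a production version would need an atomic owner field. */
    if (m->depth > 0 && pthread_equal(m->owner, pthread_self())) {
        m->depth++;         /* re-entry by the owner: just count it */
        return;
    }
    pthread_mutex_lock(&m->lock);
    m->owner = pthread_self();
    m->depth = 1;
}

void rec_mutex_unlock(RecMutex *m)
{
    assert(m->depth > 0 && pthread_equal(m->owner, pthread_self()));
    /* Decrement *before* testing: comparing the old value against zero,
     * as the buggy version above did, leaves depth stuck and the lock
     * permanently held after any nested acquisition. */
    if (--m->depth == 0) {
        pthread_mutex_unlock(&m->lock);
    }
}

int main(void)
{
    static RecMutex m = { .lock = PTHREAD_MUTEX_INITIALIZER };

    rec_mutex_lock(&m);
    rec_mutex_lock(&m);     /* nested: depth == 2 */
    rec_mutex_unlock(&m);   /* depth 1: still owned */
    rec_mutex_unlock(&m);   /* depth 0: actually released */
    return 0;
}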