From patchwork Fri Jul 19 08:38:45 2013
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 260203
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Anthony Liguori, Liu Ping Fan, Liu Ping Fan, Stefan Hajnoczi
Date: Fri, 19 Jul 2013 16:38:45 +0800
Message-Id: <1374223132-29107-5-git-send-email-stefanha@redhat.com>
In-Reply-To: <1374223132-29107-1-git-send-email-stefanha@redhat.com>
References: <1374223132-29107-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH 04/11] QEMUBH: make AioContext's bh re-entrant

From: Liu Ping Fan

BHs will be used outside the big lock, so introduce a lock to protect the
writers, i.e. the BH adders and the deleter. The lock only affects the
writers; a BH's callback does not take this extra lock. Note that for the
same AioContext, aio_bh_poll() cannot run in parallel yet.

Signed-off-by: Liu Ping Fan
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Stefan Hajnoczi
---
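Not part of the patch, only an illustration of the kind of caller this change
is meant to allow.  The sketch assumes the existing include/block/aio.h API
(aio_bh_new(), qemu_bh_schedule(), qemu_bh_delete()) and glib's
g_new0()/g_free(); the OneShot struct and the one_shot_cb()/post_one_shot()
helpers are made-up names.  A worker thread acts as an adder and the
aio_bh_poll() thread acts as the deleter, which is exactly the writer/writer
race that bh_lock serializes; qemu_bh_schedule() itself stays lock-free.

    /* Sketch only, assuming the current aio API; names below are made up. */
    #include "qemu-common.h"
    #include "block/aio.h"

    typedef struct OneShot {
        QEMUBH *bh;
        void *payload;
    } OneShot;

    static void one_shot_cb(void *opaque)
    {
        OneShot *os = opaque;

        /* Runs in whichever thread calls aio_bh_poll() for this context. */
        /* ... consume os->payload here ... */

        /* Deleter: marks the BH; aio_bh_poll() unlinks and frees it later
         * while holding ctx->bh_lock. */
        qemu_bh_delete(os->bh);
        g_free(os);
    }

    /* Adder: may run in a worker thread outside the big lock.  aio_bh_new()
     * takes ctx->bh_lock to insert into ctx->first_bh; qemu_bh_schedule()
     * only needs the write barrier. */
    static void post_one_shot(AioContext *ctx, void *payload)
    {
        OneShot *os = g_new0(OneShot, 1);

        os->payload = payload;
        os->bh = aio_bh_new(ctx, one_shot_cb, os);
        qemu_bh_schedule(os->bh);
    }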
 async.c             | 33 +++++++++++++++++++++++++++++++--
 include/block/aio.h |  7 +++++++
 2 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/async.c b/async.c
index 90fe906..5ce3633 100644
--- a/async.c
+++ b/async.c
@@ -47,11 +47,16 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
     bh->ctx = ctx;
     bh->cb = cb;
     bh->opaque = opaque;
+    qemu_mutex_lock(&ctx->bh_lock);
     bh->next = ctx->first_bh;
+    /* Make sure that the members are ready before putting bh into list */
+    smp_wmb();
     ctx->first_bh = bh;
+    qemu_mutex_unlock(&ctx->bh_lock);
     return bh;
 }
 
+/* Multiple occurrences of aio_bh_poll cannot be called concurrently */
 int aio_bh_poll(AioContext *ctx)
 {
     QEMUBH *bh, **bhp, *next;
@@ -61,9 +66,15 @@ int aio_bh_poll(AioContext *ctx)
 
     ret = 0;
     for (bh = ctx->first_bh; bh; bh = next) {
+        /* Make sure that fetching bh happens before accessing its members */
+        smp_read_barrier_depends();
         next = bh->next;
         if (!bh->deleted && bh->scheduled) {
             bh->scheduled = 0;
+            /* Paired with write barrier in bh schedule to ensure reading for
+             * idle & callbacks coming after bh's scheduling.
+             */
+            smp_rmb();
             if (!bh->idle)
                 ret = 1;
             bh->idle = 0;
@@ -75,6 +86,7 @@ int aio_bh_poll(AioContext *ctx)
 
     /* remove deleted bhs */
     if (!ctx->walking_bh) {
+        qemu_mutex_lock(&ctx->bh_lock);
         bhp = &ctx->first_bh;
         while (*bhp) {
             bh = *bhp;
@@ -85,6 +97,7 @@ int aio_bh_poll(AioContext *ctx)
                 bhp = &bh->next;
             }
         }
+        qemu_mutex_unlock(&ctx->bh_lock);
     }
 
     return ret;
@@ -94,24 +107,38 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
-    bh->scheduled = 1;
     bh->idle = 1;
+    /* Make sure that idle & any writes needed by the callback are done
+     * before the locations are read in the aio_bh_poll.
+     */
+    smp_wmb();
+    bh->scheduled = 1;
 }
 
 void qemu_bh_schedule(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
-    bh->scheduled = 1;
     bh->idle = 0;
+    /* Make sure that idle & any writes needed by the callback are done
+     * before the locations are read in the aio_bh_poll.
+     */
+    smp_wmb();
+    bh->scheduled = 1;
     aio_notify(bh->ctx);
 }
 
+
+/* This func is async.
+ */
 void qemu_bh_cancel(QEMUBH *bh)
 {
     bh->scheduled = 0;
 }
 
+/* This func is async. The bottom half will do the delete action at the final
+ * end.
+ */
 void qemu_bh_delete(QEMUBH *bh)
 {
     bh->scheduled = 0;
@@ -176,6 +203,7 @@ aio_ctx_finalize(GSource *source)
     thread_pool_free(ctx->thread_pool);
     aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
+    qemu_mutex_destroy(&ctx->bh_lock);
     g_array_free(ctx->pollfds, TRUE);
 }
 
@@ -211,6 +239,7 @@ AioContext *aio_context_new(void)
     ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
     ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
     ctx->thread_pool = NULL;
+    qemu_mutex_init(&ctx->bh_lock);
     event_notifier_init(&ctx->notifier, false);
     aio_set_event_notifier(ctx, &ctx->notifier,
                            (EventNotifierHandler *)
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..cc77771 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -17,6 +17,7 @@
 #include "qemu-common.h"
 #include "qemu/queue.h"
 #include "qemu/event_notifier.h"
+#include "qemu/thread.h"
 
 typedef struct BlockDriverAIOCB BlockDriverAIOCB;
 typedef void BlockDriverCompletionFunc(void *opaque, int ret);
@@ -53,6 +54,8 @@ typedef struct AioContext {
      */
     int walking_handlers;
 
+    /* lock to protect between bh's adders and deleter */
+    QemuMutex bh_lock;
     /* Anchor of the list of Bottom Halves belonging to the context */
     struct QEMUBH *first_bh;
 
@@ -127,6 +130,8 @@ void aio_notify(AioContext *ctx);
  * aio_bh_poll: Poll bottom halves for an AioContext.
  *
  * These are internal functions used by the QEMU main loop.
+ * And notice that multiple occurrences of aio_bh_poll cannot
+ * be called concurrently
  */
 int aio_bh_poll(AioContext *ctx);
 
@@ -163,6 +168,8 @@ void qemu_bh_cancel(QEMUBH *bh);
  * Deleting a bottom half frees the memory that was allocated for it by
  * qemu_bh_new. It also implies canceling the bottom half if it was
  * scheduled.
+ * This func is async. The bottom half will do the delete action at the final
+ * end.
  *
  * @bh: The bottom half to be deleted.
  */
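
Also not part of the patch: the ordering argument used by the hunks above,
reduced to a standalone program so it can be tried outside QEMU.  It is an
analog written with C11 fences rather than the QEMU barrier macros; the
release fence stands in for the smp_wmb() in qemu_bh_schedule() and the
acquire fence for the smp_rmb() in aio_bh_poll().  The smp_wmb() in
aio_bh_new() paired with smp_read_barrier_depends() is the same idea applied
to the ctx->first_bh list pointer.  The fake_bh type and the thread function
are made up for this sketch.

    /* Build with: gcc -std=gnu11 -pthread sketch.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    struct fake_bh {
        int idle;               /* written before 'scheduled', read after it */
        _Atomic int scheduled;  /* publication flag                           */
    };

    static struct fake_bh bh;

    static void *scheduler_thread(void *arg)
    {
        bh.idle = 0;                                   /* payload             */
        atomic_thread_fence(memory_order_release);     /* ~ smp_wmb()         */
        atomic_store_explicit(&bh.scheduled, 1, memory_order_relaxed);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, scheduler_thread, NULL);

        /* Poll side: observe 'scheduled', then it is safe to read 'idle'. */
        while (!atomic_load_explicit(&bh.scheduled, memory_order_relaxed)) {
            /* spin, standing in for the aio_bh_poll() loop */
        }
        atomic_thread_fence(memory_order_acquire);     /* ~ smp_rmb()         */
        printf("idle=%d\n", bh.idle);

        pthread_join(t, NULL);
        return 0;
    }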