From patchwork Wed Jun 19 20:59:29 2013
X-Patchwork-Submitter: pingfan liu
X-Patchwork-Id: 252575
From: Liu Ping Fan
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Paolo Bonzini, Liu Ping Fan, Stefan Hajnoczi, Anthony Liguori
Date: Thu, 20 Jun 2013 04:59:29 +0800
Message-Id: <1371675569-6516-3-git-send-email-pingfank@linux.vnet.ibm.com>
In-Reply-To: <1371675569-6516-1-git-send-email-pingfank@linux.vnet.ibm.com>
References: <1371675569-6516-1-git-send-email-pingfank@linux.vnet.ibm.com>
Subject: [Qemu-devel] [PATCH v3 2/2] QEMUBH: make AioContext's bh re-entrant

BH will be used outside the big lock, so introduce a lock to protect the
writers, i.e. the bh's adders and deleters. The lock affects only the
writers; a bh's callback does not take this extra lock. Note that for the
same AioContext, aio_bh_poll() cannot yet run in parallel.
Signed-off-by: Liu Ping Fan
---
 async.c             | 22 ++++++++++++++++++++++
 include/block/aio.h |  5 +++++
 2 files changed, 27 insertions(+)

diff --git a/async.c b/async.c
index 90fe906..4b17eb7 100644
--- a/async.c
+++ b/async.c
@@ -47,11 +47,16 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
     bh->ctx = ctx;
     bh->cb = cb;
     bh->opaque = opaque;
+    qemu_mutex_lock(&ctx->bh_lock);
     bh->next = ctx->first_bh;
+    /* Make sure that the members are ready before putting bh into the list */
+    smp_wmb();
     ctx->first_bh = bh;
+    qemu_mutex_unlock(&ctx->bh_lock);
     return bh;
 }
 
+/* Multiple occurrences of aio_bh_poll cannot run concurrently */
 int aio_bh_poll(AioContext *ctx)
 {
     QEMUBH *bh, **bhp, *next;
@@ -61,12 +66,18 @@ int aio_bh_poll(AioContext *ctx)
 
     ret = 0;
     for (bh = ctx->first_bh; bh; bh = next) {
+        /* Make sure that fetching bh happens before accessing its members */
+        smp_read_barrier_depends();
         next = bh->next;
         if (!bh->deleted && bh->scheduled) {
             bh->scheduled = 0;
             if (!bh->idle)
                 ret = 1;
             bh->idle = 0;
+            /* Paired with the write barrier in bh schedule, so that the
+             * callback sees writes done before bh was scheduled.
+             */
+            smp_rmb();
             bh->cb(bh->opaque);
         }
     }
@@ -75,6 +86,7 @@ int aio_bh_poll(AioContext *ctx)
 
     /* remove deleted bhs */
     if (!ctx->walking_bh) {
+        qemu_mutex_lock(&ctx->bh_lock);
         bhp = &ctx->first_bh;
         while (*bhp) {
             bh = *bhp;
@@ -85,6 +97,7 @@ int aio_bh_poll(AioContext *ctx)
                 bhp = &bh->next;
             }
         }
+        qemu_mutex_unlock(&ctx->bh_lock);
     }
 
     return ret;
@@ -94,6 +107,10 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
+    /* Make sure that any writes needed by the callback are done
+     * before the locations are read in aio_bh_poll.
+     */
+    smp_wmb();
     bh->scheduled = 1;
     bh->idle = 1;
 }
@@ -102,6 +119,10 @@ void qemu_bh_schedule(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
+    /* Make sure that any writes needed by the callback are done
+     * before the locations are read in aio_bh_poll.
+     */
+    smp_wmb();
     bh->scheduled = 1;
     bh->idle = 0;
     aio_notify(bh->ctx);
@@ -211,6 +232,7 @@ AioContext *aio_context_new(void)
     ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
     ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
     ctx->thread_pool = NULL;
+    qemu_mutex_init(&ctx->bh_lock);
     event_notifier_init(&ctx->notifier, false);
     aio_set_event_notifier(ctx, &ctx->notifier,
                            (EventNotifierHandler *)
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..686b10f 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -17,6 +17,7 @@
 #include "qemu-common.h"
 #include "qemu/queue.h"
 #include "qemu/event_notifier.h"
+#include "qemu/thread.h"
 
 typedef struct BlockDriverAIOCB BlockDriverAIOCB;
 typedef void BlockDriverCompletionFunc(void *opaque, int ret);
@@ -53,6 +54,8 @@ typedef struct AioContext {
      */
     int walking_handlers;
 
+    /* lock to protect between bh's adders and deleters */
+    QemuMutex bh_lock;
     /* Anchor of the list of Bottom Halves belonging to the context */
     struct QEMUBH *first_bh;
 
@@ -127,6 +130,8 @@ void aio_notify(AioContext *ctx);
 * aio_bh_poll: Poll bottom halves for an AioContext.
 *
 * These are internal functions used by the QEMU main loop.
+ * Note that aio_bh_poll cannot be called concurrently for the
+ * same AioContext.
 */
int aio_bh_poll(AioContext *ctx);