From patchwork Sun Jul 7 10:00:17 2013
X-Patchwork-Submitter: pingfan liu
X-Patchwork-Id: 257318
From: Liu Ping Fan
To: qemu-devel@nongnu.org
Date: Sun, 7 Jul 2013 18:00:17 +0800
Message-Id: <1373191217-3204-1-git-send-email-pingfank@linux.vnet.ibm.com>
Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi
Subject: [Qemu-devel] [PATCH v5] QEMUBH: make AioContext's bh re-entrant

BHs will be used outside the big lock, so introduce a lock to protect
the writers, i.e. a bh's adders and its deleter. The lock affects only
the writers; a bh's callback does not take this extra lock. Note that
for the same AioContext, aio_bh_poll() cannot yet run in parallel.

Signed-off-by: Liu Ping Fan
Reviewed-by: Stefan Hajnoczi
---
Reposting; thanks to Paolo for having sent the pull request for the
atomics header that this patch depends on.
---
 async.c             | 32 ++++++++++++++++++++++++++++++--
 include/block/aio.h |  7 +++++++
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/async.c b/async.c
index 90fe906..e73b93c 100644
--- a/async.c
+++ b/async.c
@@ -47,11 +47,16 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
     bh->ctx = ctx;
     bh->cb = cb;
     bh->opaque = opaque;
+    qemu_mutex_lock(&ctx->bh_lock);
     bh->next = ctx->first_bh;
+    /* Make sure that the members are ready before putting bh into list */
+    smp_wmb();
     ctx->first_bh = bh;
+    qemu_mutex_unlock(&ctx->bh_lock);
     return bh;
 }
 
+/* Multiple occurrences of aio_bh_poll cannot be called concurrently */
 int aio_bh_poll(AioContext *ctx)
 {
     QEMUBH *bh, **bhp, *next;
@@ -61,9 +66,15 @@ int aio_bh_poll(AioContext *ctx)
 
     ret = 0;
     for (bh = ctx->first_bh; bh; bh = next) {
+        /* Make sure that fetching bh happens before accessing its members */
+        smp_read_barrier_depends();
         next = bh->next;
         if (!bh->deleted && bh->scheduled) {
             bh->scheduled = 0;
+            /* Paired with write barrier in bh schedule to ensure reading for
+             * idle & callbacks coming after bh's scheduling.
+             */
+            smp_rmb();
             if (!bh->idle)
                 ret = 1;
             bh->idle = 0;
@@ -75,6 +86,7 @@ int aio_bh_poll(AioContext *ctx)
 
     /* remove deleted bhs */
     if (!ctx->walking_bh) {
+        qemu_mutex_lock(&ctx->bh_lock);
         bhp = &ctx->first_bh;
         while (*bhp) {
             bh = *bhp;
@@ -85,6 +97,7 @@ int aio_bh_poll(AioContext *ctx)
                 bhp = &bh->next;
             }
         }
+        qemu_mutex_unlock(&ctx->bh_lock);
     }
 
     return ret;
@@ -94,24 +107,38 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
-    bh->scheduled = 1;
     bh->idle = 1;
+    /* Make sure that idle & any writes needed by the callback are done
+     * before the locations are read in the aio_bh_poll.
+     */
+    smp_wmb();
+    bh->scheduled = 1;
 }
 
 void qemu_bh_schedule(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
-    bh->scheduled = 1;
     bh->idle = 0;
+    /* Make sure that idle & any writes needed by the callback are done
+     * before the locations are read in the aio_bh_poll.
+     */
+    smp_wmb();
+    bh->scheduled = 1;
     aio_notify(bh->ctx);
 }
 
+
+/* This func is async.
+ */
 void qemu_bh_cancel(QEMUBH *bh)
 {
     bh->scheduled = 0;
 }
 
+/* This func is async. The bottom half will do the delete action at the very
+ * end.
+ */
 void qemu_bh_delete(QEMUBH *bh)
 {
     bh->scheduled = 0;
@@ -211,6 +238,7 @@ AioContext *aio_context_new(void)
     ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
     ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
     ctx->thread_pool = NULL;
+    qemu_mutex_init(&ctx->bh_lock);
     event_notifier_init(&ctx->notifier, false);
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..cc77771 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -17,6 +17,7 @@
 #include "qemu-common.h"
 #include "qemu/queue.h"
 #include "qemu/event_notifier.h"
+#include "qemu/thread.h"
 
 typedef struct BlockDriverAIOCB BlockDriverAIOCB;
 typedef void BlockDriverCompletionFunc(void *opaque, int ret);
@@ -53,6 +54,8 @@ typedef struct AioContext {
      */
     int walking_handlers;
 
+    /* lock to protect a bh's adders and deleter (the writers) */
+    QemuMutex bh_lock;
     /* Anchor of the list of Bottom Halves belonging to the context */
     struct QEMUBH *first_bh;
 
@@ -127,6 +130,8 @@ void aio_notify(AioContext *ctx);
  * aio_bh_poll: Poll bottom halves for an AioContext.
  *
  * These are internal functions used by the QEMU main loop.
+ * Note that multiple occurrences of aio_bh_poll cannot be
+ * called concurrently.
  */
 int aio_bh_poll(AioContext *ctx);
 
@@ -163,6 +168,8 @@ void qemu_bh_cancel(QEMUBH *bh);
  * Deleting a bottom half frees the memory that was allocated for it by
  * qemu_bh_new.  It also implies canceling the bottom half if it was
  * scheduled.
+ * This func is async. The bottom half will do the delete action at the very
+ * end.
  *
  * @bh: The bottom half to be deleted.
  */