From patchwork Sun Jun 16 11:21:21 2013
X-Patchwork-Submitter: pingfan liu
X-Patchwork-Id: 251672
From: Liu Ping Fan
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Anthony Liguori
Date: Sun, 16 Jun 2013 19:21:21 +0800
Message-Id: <1371381681-14252-3-git-send-email-pingfanl@linux.vnet.ibm.com>
In-Reply-To: <1371381681-14252-1-git-send-email-pingfanl@linux.vnet.ibm.com>
References: <1371381681-14252-1-git-send-email-pingfanl@linux.vnet.ibm.com>
Subject: [Qemu-devel] [PATCH v2 2/2] QEMUBH: make AioContext's bh re-entrant

BH will be used outside the big lock, so introduce a lock to protect the
writers, i.e. the bh's adders and deleter. Note that the lock only affects
the writers; a bh's callback does not take this extra lock.
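
For illustration, this is the kind of usage the locking is meant to make
safe: a worker thread adds and schedules a bh on an AioContext that another
thread is polling. (A sketch only, not code in this patch; WorkerData,
worker_done(), worker_run(), use_result() and compute() are made-up names.)

    typedef struct WorkerData {
        QEMUBH *bh;
        int result;
    } WorkerData;

    static void worker_done(void *opaque)
    {
        WorkerData *d = opaque;
        /* Runs in the thread that calls aio_bh_poll(); the smp_rmb()
         * there makes d->result, written before qemu_bh_schedule(),
         * visible here. */
        use_result(d->result);
        qemu_bh_delete(d->bh);
    }

    static void worker_run(AioContext *ctx, WorkerData *d)
    {
        /* aio_bh_new() now takes ctx->bh_lock, so adding a bh here is
         * safe even while aio_bh_poll() concurrently prunes deleted
         * bhs in another thread. */
        d->bh = aio_bh_new(ctx, worker_done, d);
        d->result = compute();
        /* The smp_wmb() in qemu_bh_schedule() publishes d->result
         * before bh->scheduled is set. */
        qemu_bh_schedule(d->bh);
    }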
Signed-off-by: Liu Ping Fan
---
 async.c             | 21 +++++++++++++++++++++
 include/block/aio.h |  3 +++
 2 files changed, 24 insertions(+)

diff --git a/async.c b/async.c
index 90fe906..6a3269f 100644
--- a/async.c
+++ b/async.c
@@ -47,8 +47,12 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
     bh->ctx = ctx;
     bh->cb = cb;
     bh->opaque = opaque;
+    qemu_mutex_lock(&ctx->bh_lock);
     bh->next = ctx->first_bh;
+    /* Make sure that the members are ready before putting bh into list */
+    smp_wmb();
     ctx->first_bh = bh;
+    qemu_mutex_unlock(&ctx->bh_lock);
     return bh;
 }
 
@@ -61,12 +65,18 @@ int aio_bh_poll(AioContext *ctx)
 
     ret = 0;
     for (bh = ctx->first_bh; bh; bh = next) {
+        /* Make sure that fetching bh happens before accessing its members */
+        smp_read_barrier_depends();
         next = bh->next;
         if (!bh->deleted && bh->scheduled) {
             bh->scheduled = 0;
             if (!bh->idle)
                 ret = 1;
             bh->idle = 0;
+            /* Paired with the write barrier in bh schedule so that writes
+             * done before scheduling are visible to the callback.
+             */
+            smp_rmb();
             bh->cb(bh->opaque);
         }
     }
@@ -75,6 +85,7 @@ int aio_bh_poll(AioContext *ctx)
 
     /* remove deleted bhs */
     if (!ctx->walking_bh) {
+        qemu_mutex_lock(&ctx->bh_lock);
         bhp = &ctx->first_bh;
         while (*bhp) {
             bh = *bhp;
@@ -85,6 +96,7 @@ int aio_bh_poll(AioContext *ctx)
                 bhp = &bh->next;
             }
         }
+        qemu_mutex_unlock(&ctx->bh_lock);
     }
 
     return ret;
@@ -94,6 +106,10 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
+    /* Make sure that any writes needed by the callback are done
+     * before the locations are read in aio_bh_poll.
+     */
+    smp_wmb();
     bh->scheduled = 1;
     bh->idle = 1;
 }
@@ -102,6 +118,10 @@ void qemu_bh_schedule(QEMUBH *bh)
 {
     if (bh->scheduled)
         return;
+    /* Make sure that any writes needed by the callback are done
+     * before the locations are read in aio_bh_poll.
+     */
+    smp_wmb();
     bh->scheduled = 1;
     bh->idle = 0;
     aio_notify(bh->ctx);
@@ -211,6 +231,7 @@ AioContext *aio_context_new(void)
     ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
     ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
     ctx->thread_pool = NULL;
+    qemu_mutex_init(&ctx->bh_lock);
     event_notifier_init(&ctx->notifier, false);
     aio_set_event_notifier(ctx, &ctx->notifier,
                            (EventNotifierHandler *)
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..971fbef 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -17,6 +17,7 @@
 #include "qemu-common.h"
 #include "qemu/queue.h"
 #include "qemu/event_notifier.h"
+#include "qemu/thread.h"
 
 typedef struct BlockDriverAIOCB BlockDriverAIOCB;
 typedef void BlockDriverCompletionFunc(void *opaque, int ret);
@@ -53,6 +54,8 @@ typedef struct AioContext {
      */
     int walking_handlers;
 
+    /* lock to protect the list of bhs from concurrent adders and deleter */
+    QemuMutex bh_lock;
     /* Anchor of the list of Bottom Halves belonging to the context */
     struct QEMUBH *first_bh;
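
The barriers above follow the usual lock-free publish pattern: the writer
makes the data ready, issues a write barrier, then sets the pointer or flag
that the reader tests. The same idea as a self-contained sketch in C11
atomics rather than QEMU's smp_*() macros (illustrative only; node,
publish() and sum() are made-up names):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct node {
        struct node *next;
        int payload;
    };

    static _Atomic(struct node *) head;

    /* Writer: fill in the node, then publish it.  The release store
     * plays the role of smp_wmb() plus the plain store to
     * ctx->first_bh in aio_bh_new(); as in the patch, concurrent
     * writers would still need a mutex around this. */
    static void publish(int payload)
    {
        struct node *n = malloc(sizeof(*n));
        n->payload = payload;
        n->next = atomic_load_explicit(&head, memory_order_relaxed);
        atomic_store_explicit(&head, n, memory_order_release);
    }

    /* Reader: the acquire load stands in for
     * smp_read_barrier_depends() in aio_bh_poll(), ordering the load
     * of the list head before the loads of the node's members. */
    static int sum(void)
    {
        int s = 0;
        for (struct node *n = atomic_load_explicit(&head, memory_order_acquire);
             n != NULL; n = n->next) {
            s += n->payload;
        }
        return s;
    }

    int main(void)
    {
        publish(1);
        publish(2);
        return sum() == 3 ? 0 : 1;
    }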