From patchwork Wed Dec 6 10:53:04 2017
X-Patchwork-Submitter: Kevin Wolf <kwolf@redhat.com>
X-Patchwork-Id: 845112
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, pbonzini@redhat.com, famz@redhat.com,
    qemu-devel@nongnu.org, qemu-stable@nongnu.org
Date: Wed, 6 Dec 2017 11:53:04 +0100
Subject: [Qemu-devel] [PATCH v2 1/6] block: Make bdrv_drain_invoke() recursive
Message-Id: <20171206105309.3468-2-kwolf@redhat.com>
In-Reply-To: <20171206105309.3468-1-kwolf@redhat.com>
References: <20171206105309.3468-1-kwolf@redhat.com>

This change separates bdrv_drain_invoke(), which calls the BlockDriver
drain callbacks, from bdrv_drain_recurse(); bdrv_drain_invoke() now
performs its own recursion.

One reason for this is that bdrv_drain_recurse() can be called multiple
times by bdrv_drain_all_begin(), but the callbacks may only be called
once. The separation is necessary to fix this bug.

The other reason is that we intend to move to a model where all driver
callbacks are called first, and polling starts only afterwards. This
patch does not fully achieve that yet, because bdrv_drain_invoke()
still contains a BDRV_POLL_WHILE() loop for the block driver callbacks,
which can invoke callbacks for any unrelated event. It is nevertheless
a step in that direction.
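As a self-contained illustration of the new structure (toy types only:
Node, drain_begin_cb and drain_invoke stand in for BlockDriverState,
the driver callback and bdrv_drain_invoke(); this is not QEMU code),
a single top-level call now reaches every node of the subtree exactly
once:

#include <assert.h>
#include <stdio.h>

#define MAX_CHILDREN 4

typedef struct Node {
    const char *name;
    int drain_calls;                 /* times the "driver callback" ran */
    struct Node *children[MAX_CHILDREN];
    int num_children;
} Node;

/* Stand-in for a BlockDriver .bdrv_co_drain_begin callback */
static void drain_begin_cb(Node *n)
{
    n->drain_calls++;
}

/* Mirrors the reworked bdrv_drain_invoke(): invoke the callback for
 * this node, then recurse into all children */
static void drain_invoke(Node *n)
{
    drain_begin_cb(n);
    for (int i = 0; i < n->num_children; i++) {
        drain_invoke(n->children[i]);
    }
}

int main(void)
{
    Node file  = { .name = "file" };
    Node qcow2 = { .name = "qcow2", .children = { &file },
                   .num_children = 1 };

    drain_invoke(&qcow2);
    assert(qcow2.drain_calls == 1 && file.drain_calls == 1);
    printf("each callback ran exactly once\n");
    return 0;
}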
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/io.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/block/io.c b/block/io.c
index 6773926fc1..096468b761 100644
--- a/block/io.c
+++ b/block/io.c
@@ -175,8 +175,10 @@ static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
     bdrv_wakeup(bs);
 }
 
+/* Recursively call BlockDriver.bdrv_co_drain_begin/end callbacks */
 static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
 {
+    BdrvChild *child, *tmp;
     BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
 
     if (!bs->drv || (begin && !bs->drv->bdrv_co_drain_begin) ||
@@ -187,6 +189,10 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
     data.co = qemu_coroutine_create(bdrv_drain_invoke_entry, &data);
     bdrv_coroutine_enter(bs, data.co);
     BDRV_POLL_WHILE(bs, !data.done);
+
+    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
+        bdrv_drain_invoke(child->bs, begin);
+    }
 }
 
 static bool bdrv_drain_recurse(BlockDriverState *bs, bool begin)
@@ -194,9 +200,6 @@ static bool bdrv_drain_recurse(BlockDriverState *bs, bool begin)
     BdrvChild *child, *tmp;
     bool waited;
 
-    /* Ensure any pending metadata writes are submitted to bs->file. */
-    bdrv_drain_invoke(bs, begin);
-
     /* Wait for drained requests to finish */
     waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
 
@@ -279,6 +282,7 @@ void bdrv_drained_begin(BlockDriverState *bs)
         bdrv_parent_drained_begin(bs);
     }
 
+    bdrv_drain_invoke(bs, true);
     bdrv_drain_recurse(bs, true);
 }
 
@@ -294,6 +298,7 @@ void bdrv_drained_end(BlockDriverState *bs)
     }
 
     bdrv_parent_drained_end(bs);
+    bdrv_drain_invoke(bs, false);
     bdrv_drain_recurse(bs, false);
     aio_enable_external(bdrv_get_aio_context(bs));
 }
@@ -372,6 +377,8 @@ void bdrv_drain_all_begin(void)
         aio_context_acquire(aio_context);
         for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
             if (aio_context == bdrv_get_aio_context(bs)) {
+                /* FIXME Calling this multiple times is wrong */
+                bdrv_drain_invoke(bs, true);
                 waited |= bdrv_drain_recurse(bs, true);
             }
         }
@@ -393,6 +400,7 @@ void bdrv_drain_all_end(void)
         aio_context_acquire(aio_context);
         aio_enable_external(aio_context);
         bdrv_parent_drained_end(bs);
+        bdrv_drain_invoke(bs, false);
         bdrv_drain_recurse(bs, false);
         aio_context_release(aio_context);
     }
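For reference, stitching together the context and added lines of the
first two hunks, bdrv_drain_invoke() reads roughly as follows after
this patch. The (!begin && ...) half of the if condition lies outside
the visible hunk context and is reconstructed here as an assumption, by
symmetry with the begin case:

/* Recursively call BlockDriver.bdrv_co_drain_begin/end callbacks */
static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
{
    BdrvChild *child, *tmp;
    BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};

    /* Bail out if the driver implements no callback for this direction.
     * The second half of this condition is not visible in the hunk
     * context and is an assumption. */
    if (!bs->drv || (begin && !bs->drv->bdrv_co_drain_begin) ||
            (!begin && !bs->drv->bdrv_co_drain_end)) {
        return;
    }

    /* Run the driver callback in a coroutine and poll until it is done */
    data.co = qemu_coroutine_create(bdrv_drain_invoke_entry, &data);
    bdrv_coroutine_enter(bs, data.co);
    BDRV_POLL_WHILE(bs, !data.done);

    /* New: recurse into all children here instead of relying on
     * bdrv_drain_recurse() to trigger the callbacks */
    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
        bdrv_drain_invoke(child->bs, begin);
    }
}

As the FIXME notes, bdrv_drain_all_begin() still invokes the callbacks
once per AioContext iteration; fixing that is left for a follow-up.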