From patchwork Tue Dec 5 14:54:21 2017
X-Patchwork-Submitter: Kevin Wolf
X-Patchwork-Id: 844799
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, pbonzini@redhat.com, famz@redhat.com,
    qemu-devel@nongnu.org, qemu-stable@nongnu.org
Date: Tue, 5 Dec 2017 15:54:21 +0100
Message-Id: <20171205145424.24598-2-kwolf@redhat.com>
In-Reply-To: <20171205145424.24598-1-kwolf@redhat.com>
References: <20171205145424.24598-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 1/4] block: Make bdrv_drain_invoke() recursive

This change separates bdrv_drain_invoke(), which calls the BlockDriver
drain callbacks, from bdrv_drain_recurse(). Instead, the function now
performs its own recursion.

One reason for this is that bdrv_drain_recurse() can be called multiple
times, but the callbacks may only be called once. The separation is
necessary to fix this bug.

The other reason is that we intend to move to a model where we call all
driver callbacks first, and only then start polling.
This is not fully achieved yet with this patch, as bdrv_drain_invoke()
contains a BDRV_POLL_WHILE() loop for the block driver callbacks which
can still call callbacks for any unrelated event. It's a step in this
direction anyway.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf
---
 block/io.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/block/io.c b/block/io.c
index 6773926fc1..096468b761 100644
--- a/block/io.c
+++ b/block/io.c
@@ -175,8 +175,10 @@ static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
     bdrv_wakeup(bs);
 }
 
+/* Recursively call BlockDriver.bdrv_co_drain_begin/end callbacks */
 static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
 {
+    BdrvChild *child, *tmp;
     BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
 
     if (!bs->drv || (begin && !bs->drv->bdrv_co_drain_begin) ||
@@ -187,6 +189,10 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
     data.co = qemu_coroutine_create(bdrv_drain_invoke_entry, &data);
     bdrv_coroutine_enter(bs, data.co);
     BDRV_POLL_WHILE(bs, !data.done);
+
+    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
+        bdrv_drain_invoke(child->bs, begin);
+    }
 }
 
 static bool bdrv_drain_recurse(BlockDriverState *bs, bool begin)
@@ -194,9 +200,6 @@ static bool bdrv_drain_recurse(BlockDriverState *bs, bool begin)
     BdrvChild *child, *tmp;
     bool waited;
 
-    /* Ensure any pending metadata writes are submitted to bs->file. */
-    bdrv_drain_invoke(bs, begin);
-
     /* Wait for drained requests to finish */
     waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
 
@@ -279,6 +282,7 @@ void bdrv_drained_begin(BlockDriverState *bs)
         bdrv_parent_drained_begin(bs);
     }
 
+    bdrv_drain_invoke(bs, true);
     bdrv_drain_recurse(bs, true);
 }
 
@@ -294,6 +298,7 @@ void bdrv_drained_end(BlockDriverState *bs)
     }
 
     bdrv_parent_drained_end(bs);
+    bdrv_drain_invoke(bs, false);
     bdrv_drain_recurse(bs, false);
     aio_enable_external(bdrv_get_aio_context(bs));
 }
@@ -372,6 +377,8 @@ void bdrv_drain_all_begin(void)
         aio_context_acquire(aio_context);
         for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
             if (aio_context == bdrv_get_aio_context(bs)) {
+                /* FIXME Calling this multiple times is wrong */
+                bdrv_drain_invoke(bs, true);
                 waited |= bdrv_drain_recurse(bs, true);
             }
         }
@@ -393,6 +400,7 @@ void bdrv_drain_all_end(void)
        aio_context_acquire(aio_context);
        aio_enable_external(aio_context);
        bdrv_parent_drained_end(bs);
+       bdrv_drain_invoke(bs, false);
        bdrv_drain_recurse(bs, false);
        aio_context_release(aio_context);
    }
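
As a rough illustration of the recursion pattern the patch introduces: the new
bdrv_drain_invoke() is a depth-first walk of the BDS graph that runs each
node's own driver callback before descending into bs->children. The following
self-contained C sketch shows that pattern only; Node, has_cb and
drain_invoke() are illustrative stand-ins, not QEMU types, and the
coroutine/BDRV_POLL_WHILE machinery of the real function is omitted.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative stand-in for BlockDriverState; not a QEMU type */
typedef struct Node {
    const char *name;
    bool has_cb;              /* stands in for bs->drv->bdrv_co_drain_begin/end */
    struct Node *children[4]; /* simplified child list instead of bs->children */
    int nchildren;
} Node;

/* Mirrors the shape of bdrv_drain_invoke() after this patch: run the node's
 * own driver callback first, then recurse into its children. */
static void drain_invoke(Node *bs, bool begin)
{
    if (bs->has_cb) {
        printf("%s: %s\n", bs->name, begin ? "drain_begin" : "drain_end");
    }
    for (int i = 0; i < bs->nchildren; i++) {
        drain_invoke(bs->children[i], begin);
    }
}

int main(void)
{
    Node file  = { .name = "file",  .has_cb = true };
    Node qcow2 = { .name = "qcow2", .has_cb = true,
                   .children = { &file }, .nchildren = 1 };

    drain_invoke(&qcow2, true);   /* format node first, then its protocol child */
    drain_invoke(&qcow2, false);
    return 0;
}

Applied to this example tree, both calls visit the qcow2 node before its file
child, matching the top-down order in which the patched bdrv_drain_invoke()
handles bs itself and then each entry of bs->children.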