From patchwork Wed May 20 06:16:15 2015
X-Patchwork-Submitter: Fam Zheng
X-Patchwork-Id: 474189
From: Fam Zheng
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, qemu-block@nongnu.org, jcody@redhat.com, armbru@redhat.com,
    mreitz@redhat.com, Stefan Hajnoczi, amit.shah@redhat.com, pbonzini@redhat.com
Date: Wed, 20 May 2015 14:16:15 +0800
Message-Id: <1432102576-6637-13-git-send-email-famz@redhat.com>
In-Reply-To: <1432102576-6637-1-git-send-email-famz@redhat.com>
References: <1432102576-6637-1-git-send-email-famz@redhat.com>
Subject: [Qemu-devel] [PATCH v5 12/13] block: Block "device IO" during bdrv_drain and bdrv_drain_all

We don't want new requests from the guest, so block the "device IO"
operation around the nested poll. This also avoids looping forever when
an iothread keeps submitting requests.

Signed-off-by: Fam Zheng
---
 block/io.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/block/io.c b/block/io.c
index 1ce62c4..b23a83f 100644
--- a/block/io.c
+++ b/block/io.c
@@ -286,12 +286,21 @@ static bool bdrv_drain_one(BlockDriverState *bs)
  *
  * Note that unlike bdrv_drain_all(), the caller must hold the BlockDriverState
  * AioContext.
+ *
+ * Devices are paused to avoid looping forever because otherwise they could
+ * keep submitting more requests.
  */
 void bdrv_drain(BlockDriverState *bs)
 {
+    Error *blocker = NULL;
+
+    error_setg(&blocker, "bdrv_drain in progress");
+    bdrv_op_block(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
     while (bdrv_drain_one(bs)) {
         /* Keep iterating */
     }
+    bdrv_op_unblock(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
+    error_free(blocker);
 }
 
 /*
@@ -303,14 +312,20 @@ void bdrv_drain(BlockDriverState *bs)
  * Note that completion of an asynchronous I/O operation can trigger any
  * number of other I/O operations on other devices---for example a coroutine
  * can be arbitrarily complex and a constant flow of I/O can come until the
- * coroutine is complete. Because of this, it is not possible to have a
- * function to drain a single device's I/O queue.
+ * coroutine is complete. Because of this, we must call bdrv_drain_one in a
+ * loop.
+ *
+ * We explicitly pause block jobs and devices to prevent them from submitting
+ * more requests.
  */
 void bdrv_drain_all(void)
 {
     /* Always run first iteration so any pending completion BHs run */
     bool busy = true;
     BlockDriverState *bs = NULL;
+    Error *blocker = NULL;
+
+    error_setg(&blocker, "bdrv_drain_all in progress");
 
     while ((bs = bdrv_next(bs))) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
@@ -319,6 +334,7 @@ void bdrv_drain_all(void)
         if (bs->job) {
             block_job_pause(bs->job);
         }
+        bdrv_op_block(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
         aio_context_release(aio_context);
     }
 
@@ -343,8 +359,10 @@ void bdrv_drain_all(void)
         if (bs->job) {
             block_job_resume(bs->job);
         }
+        bdrv_op_unblock(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
         aio_context_release(aio_context);
     }
+    error_free(blocker);
 }
 
 /**
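
For reference, a minimal sketch of how a device emulation path might honour
this blocker before queuing a new request. This is not part of the patch: the
helper name example_device_can_submit_io is made up for illustration, and it
assumes the BLOCK_OP_TYPE_DEVICE_IO op type introduced earlier in this series
together with the existing bdrv_op_is_blocked() check.

    #include "block/block.h"
    #include "qapi/error.h"

    /* Illustrative only -- not part of this patch.  A device emulation path
     * could consult the "device IO" op blocker before submitting a request,
     * so that no new I/O is issued while bdrv_drain()/bdrv_drain_all()
     * holds the blocker. */
    static bool example_device_can_submit_io(BlockDriverState *bs)
    {
        Error *local_err = NULL;

        if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_DEVICE_IO, &local_err)) {
            /* Drain (or another blocker owner) is in progress; defer the
             * request until the blocker is released. */
            error_free(local_err);
            return false;
        }
        return true;
    }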