From patchwork Wed Nov 30 12:23:43 2011
X-Patchwork-Submitter: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
X-Patchwork-Id: 128494
From: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
 Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Date: Wed, 30 Nov 2011 12:23:43 +0000
Message-Id: <1322655824-31778-4-git-send-email-stefanha@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.7.3
In-Reply-To: <1322655824-31778-1-git-send-email-stefanha@linux.vnet.ibm.com>
References: <1322655824-31778-1-git-send-email-stefanha@linux.vnet.ibm.com>
Subject: [Qemu-devel] [PATCH 3/3] block: convert qemu_aio_flush() calls to
 bdrv_drain_all()

Many places in QEMU call qemu_aio_flush() to complete all pending
asynchronous I/O.  Most of these places actually want to drain all block
requests but there is no block layer API to do so.

This patch introduces the bdrv_drain_all() API to wait for requests
across all BlockDriverStates to complete.
As a bonus we perform checks
after qemu_aio_wait() to ensure that requests really have finished.

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
---
 block-migration.c |    2 +-
 block.c           |   19 +++++++++++++++++++
 block.h           |    1 +
 blockdev.c        |    4 ++--
 cpus.c            |    2 +-
 hw/ide/macio.c    |    5 +++--
 hw/ide/pci.c      |    2 +-
 hw/virtio-blk.c   |    2 +-
 hw/xen_platform.c |    2 +-
 qemu-io.c         |    4 ++--
 savevm.c          |    2 +-
 xen-mapcache.c    |    2 +-
 12 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/block-migration.c b/block-migration.c
index 5f104864..423c5a0 100644
--- a/block-migration.c
+++ b/block-migration.c
@@ -387,7 +387,7 @@ static int mig_save_device_dirty(Monitor *mon, QEMUFile *f,
 
     for (sector = bmds->cur_dirty; sector < bmds->total_sectors;) {
         if (bmds_aio_inflight(bmds, sector)) {
-            qemu_aio_flush();
+            bdrv_drain_all();
         }
         if (bdrv_get_dirty(bmds->bs, sector)) {
 
diff --git a/block.c b/block.c
index 37a96f3..86807d6 100644
--- a/block.c
+++ b/block.c
@@ -846,6 +846,25 @@ void bdrv_close_all(void)
     }
 }
 
+/*
+ * Wait for pending requests to complete across all BlockDriverStates
+ *
+ * This function does not flush data to disk, use bdrv_flush_all() for that
+ * after calling this function.
+ */
+void bdrv_drain_all(void)
+{
+    BlockDriverState *bs;
+
+    qemu_aio_flush();
+
+    /* If requests are still pending there is a bug somewhere */
+    QTAILQ_FOREACH(bs, &bdrv_states, list) {
+        assert(QLIST_EMPTY(&bs->tracked_requests));
+        assert(qemu_co_queue_empty(&bs->throttled_reqs));
+    }
+}
+
 /* make a BlockDriverState anonymous by removing from bdrv_state list.
    Also, NULL terminate the device_name to prevent double remove */
 void bdrv_make_anon(BlockDriverState *bs)
diff --git a/block.h b/block.h
index 1d28e85..0dc4905 100644
--- a/block.h
+++ b/block.h
@@ -214,6 +214,7 @@ int bdrv_flush(BlockDriverState *bs);
 int coroutine_fn bdrv_co_flush(BlockDriverState *bs);
 void bdrv_flush_all(void);
 void bdrv_close_all(void);
+void bdrv_drain_all(void);
 int bdrv_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors);
 int bdrv_co_discard(BlockDriverState *bs, int64_t sector_num,
                     int nb_sectors);
diff --git a/blockdev.c b/blockdev.c
index af4e239..dbf0251 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -653,7 +653,7 @@ int do_snapshot_blkdev(Monitor *mon, const QDict *qdict, QObject **ret_data)
         goto out;
     }
 
-    qemu_aio_flush();
+    bdrv_drain_all();
     bdrv_flush(bs);
 
     bdrv_close(bs);
@@ -840,7 +840,7 @@ int do_drive_del(Monitor *mon, const QDict *qdict, QObject **ret_data)
     }
 
     /* quiesce block driver; prevent further io */
-    qemu_aio_flush();
+    bdrv_drain_all();
     bdrv_flush(bs);
     bdrv_close(bs);
 
diff --git a/cpus.c b/cpus.c
index 82530c4..58e173c 100644
--- a/cpus.c
+++ b/cpus.c
@@ -396,7 +396,7 @@ static void do_vm_stop(RunState state)
         pause_all_vcpus();
         runstate_set(state);
         vm_state_notify(0, state);
-        qemu_aio_flush();
+        bdrv_drain_all();
         bdrv_flush_all();
         monitor_protocol_event(QEVENT_STOP, NULL);
     }
diff --git a/hw/ide/macio.c b/hw/ide/macio.c
index 70b3342..c09d2e0 100644
--- a/hw/ide/macio.c
+++ b/hw/ide/macio.c
@@ -200,8 +200,9 @@ static void pmac_ide_flush(DBDMA_io *io)
 {
     MACIOIDEState *m = io->opaque;
 
-    if (m->aiocb)
-        qemu_aio_flush();
+    if (m->aiocb) {
+        bdrv_drain_all();
+    }
 }
 
 /* PowerMac IDE memory IO */
diff --git a/hw/ide/pci.c b/hw/ide/pci.c
index 49b823d..5078c0b 100644
--- a/hw/ide/pci.c
+++ b/hw/ide/pci.c
@@ -309,7 +309,7 @@ void bmdma_cmd_writeb(BMDMAState *bm, uint32_t val)
                      * aio operation with preadv/pwritev.
                      */
                     if (bm->bus->dma->aiocb) {
-                        qemu_aio_flush();
+                        bdrv_drain_all();
                         assert(bm->bus->dma->aiocb == NULL);
                         assert((bm->status & BM_STATUS_DMAING) == 0);
                     }
diff --git a/hw/virtio-blk.c b/hw/virtio-blk.c
index d6d1f87..4b0d113 100644
--- a/hw/virtio-blk.c
+++ b/hw/virtio-blk.c
@@ -474,7 +474,7 @@ static void virtio_blk_reset(VirtIODevice *vdev)
      * This should cancel pending requests, but can't do nicely until there
      * are per-device request lists.
      */
-    qemu_aio_flush();
+    bdrv_drain_all();
 }
 
 /* coalesce internal state, copy to pci i/o region 0
diff --git a/hw/xen_platform.c b/hw/xen_platform.c
index 5e792f5..e62eaef 100644
--- a/hw/xen_platform.c
+++ b/hw/xen_platform.c
@@ -120,7 +120,7 @@ static void platform_fixed_ioport_writew(void *opaque, uint32_t addr, uint32_t v
        devices, and bit 2 the non-primary-master IDE devices. */
     if (val & UNPLUG_ALL_IDE_DISKS) {
         DPRINTF("unplug disks\n");
-        qemu_aio_flush();
+        bdrv_drain_all();
         bdrv_flush_all();
         pci_unplug_disks(s->pci_dev.bus);
     }
diff --git a/qemu-io.c b/qemu-io.c
index de26422..b35adbb 100644
--- a/qemu-io.c
+++ b/qemu-io.c
@@ -1853,9 +1853,9 @@ int main(int argc, char **argv)
     command_loop();
 
     /*
-     * Make sure all outstanding requests get flushed the program exits.
+     * Make sure all outstanding requests complete before the program exits.
      */
-    qemu_aio_flush();
+    bdrv_drain_all();
 
     if (bs) {
         bdrv_delete(bs);
diff --git a/savevm.c b/savevm.c
index f53cd4c..0443554 100644
--- a/savevm.c
+++ b/savevm.c
@@ -2104,7 +2104,7 @@ int load_vmstate(const char *name)
     }
 
     /* Flush all IO requests so they don't interfere with the new state. */
-    qemu_aio_flush();
+    bdrv_drain_all();
 
     bs = NULL;
     while ((bs = bdrv_next(bs))) {
diff --git a/xen-mapcache.c b/xen-mapcache.c
index 7bcb86e..9fecc64 100644
--- a/xen-mapcache.c
+++ b/xen-mapcache.c
@@ -351,7 +351,7 @@ void xen_invalidate_map_cache(void)
     MapCacheRev *reventry;
 
     /* Flush pending AIO before destroying the mapcache */
-    qemu_aio_flush();
+    bdrv_drain_all();
 
     QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
         DPRINTF("There should be no locked mappings at this time, "
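
The converted call sites above all follow the same quiesce pattern: drain
in-flight requests first, then flush, and only then close or reconfigure the
device. As a rough sketch of that ordering against the API this patch adds
(the helper name quiesce_and_close and its surrounding context are invented
for illustration, not part of the patch):

#include "block.h"

/* Hypothetical helper, not part of this patch: quiesce one drive
 * before detaching it, mirroring the do_drive_del() call site above.
 */
static void quiesce_and_close(BlockDriverState *bs)
{
    /* Complete all in-flight requests on every BlockDriverState.
     * bdrv_drain_all() asserts afterwards that tracked_requests and
     * throttled_reqs are empty, so a stuck request fails loudly.
     */
    bdrv_drain_all();

    /* Draining only waits for requests to finish; it does not push
     * data to stable storage, so flush explicitly before closing.
     */
    bdrv_flush(bs);
    bdrv_close(bs);
}

The drain/flush split is deliberate, per the comment added in block.c:
bdrv_drain_all() leaves flushing to bdrv_flush_all() or a per-device
bdrv_flush(), so callers that only need quiescence do not pay for a flush.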