From patchwork Fri Jun 23 16:21:05 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kevin Wolf
X-Patchwork-Id: 780099
From: Kevin Wolf
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org
Date: Fri, 23 Jun 2017 18:21:05 +0200
Message-Id: <1498234919-27316-8-git-send-email-kwolf@redhat.com>
In-Reply-To: <1498234919-27316-1-git-send-email-kwolf@redhat.com>
References: <1498234919-27316-1-git-send-email-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 07/61] migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all()

From: Stefan Hajnoczi

blk/bdrv_drain_all() only takes effect for a single instant and then
resumes block jobs, guest devices, and other external clients like the
NBD server. This can be handy when performing a synchronous drain
before terminating the program, for example.

Monitor commands usually need to quiesce I/O across an entire code
region, so blk/bdrv_drain_all() is not suitable.
They must use bdrv_drain_all_begin/end() to mark the region. This
prevents new I/O requests from slipping in or, worse, block jobs
completing and modifying the graph.

I audited other blk/bdrv_drain_all() callers but did not find anything
that needs a similar fix. This patch fixes the savevm/loadvm commands.
Although I haven't encountered a real-world issue, this makes the code
safer.

Suggested-by: Kevin Wolf
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
Signed-off-by: Kevin Wolf
---
 migration/savevm.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/migration/savevm.c b/migration/savevm.c
index 5846d9c..b08df04 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2107,6 +2107,8 @@ int save_snapshot(const char *name, Error **errp)
     }
     vm_stop(RUN_STATE_SAVE_VM);
 
+    bdrv_drain_all_begin();
+
     aio_context_acquire(aio_context);
 
     memset(sn, 0, sizeof(*sn));
@@ -2165,6 +2167,9 @@ int save_snapshot(const char *name, Error **errp)
     if (aio_context) {
         aio_context_release(aio_context);
     }
+
+    bdrv_drain_all_end();
+
     if (saved_vm_running) {
         vm_start();
     }
@@ -2273,20 +2278,21 @@ int load_snapshot(const char *name, Error **errp)
     }
 
     /* Flush all IO requests so they don't interfere with the new state. */
-    bdrv_drain_all();
+    bdrv_drain_all_begin();
 
     ret = bdrv_all_goto_snapshot(name, &bs);
     if (ret < 0) {
         error_setg(errp, "Error %d while activating snapshot '%s' on '%s'",
                    ret, name, bdrv_get_device_name(bs));
-        return ret;
+        goto err_drain;
     }
 
     /* restore the VM state */
     f = qemu_fopen_bdrv(bs_vm_state, 0);
     if (!f) {
         error_setg(errp, "Could not open VM state file");
-        return -EINVAL;
+        ret = -EINVAL;
+        goto err_drain;
     }
 
     qemu_system_reset(SHUTDOWN_CAUSE_NONE);
@@ -2296,6 +2302,8 @@ int load_snapshot(const char *name, Error **errp)
     ret = qemu_loadvm_state(f);
     aio_context_release(aio_context);
 
+    bdrv_drain_all_end();
+
     migration_incoming_state_destroy();
     if (ret < 0) {
         error_setg(errp, "Error %d while loading VM state", ret);
@@ -2303,6 +2311,10 @@
     }
 
     return 0;
+
+err_drain:
+    bdrv_drain_all_end();
+    return ret;
 }
 
 void vmstate_register_ram(MemoryRegion *mr, DeviceState *dev)
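
For reference, both save_snapshot() and load_snapshot() end up with the
usual begin/end drained-section shape. A minimal sketch of that pattern
follows; bdrv_drain_all_begin() and bdrv_drain_all_end() are the real
QEMU block-layer calls used by this patch, while do_drained_command()
and monitor_command_body() are hypothetical placeholders, not QEMU code:

    /* Sketch of the drained-section pattern this patch applies.
     * monitor_command_body() is a hypothetical placeholder for the
     * actual savevm/loadvm work; it is not a QEMU function. */
    static int do_drained_command(Error **errp)
    {
        int ret;

        /* Quiesce guest devices, block jobs and other external clients
         * for the whole region, not just for a single instant. */
        bdrv_drain_all_begin();

        ret = monitor_command_body(errp);   /* graph/state changes happen here */

        /* Resume external I/O sources only after the command has finished. */
        bdrv_drain_all_end();
        return ret;
    }

A one-shot bdrv_drain_all() at the top of such a function only guarantees
quiescence at that instant; requests and block job completions can resume
immediately afterwards and race with the rest of the command, which is
exactly what the begin/end pair prevents.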