From patchwork Mon Oct 3 14:41:09 2011
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 117451
Date: Mon, 3 Oct 2011 16:41:09 +0200
From: "Michael S. Tsirkin"
To: Anthony Liguori
Cc: aliguori@linux.vnet.ibm.com, Anthony Liguori, Stefan Berger, qemu-devel@nongnu.org, Michael Roth
Message-ID: <20111003144109.GE19689@redhat.com>
References: <1316443309-23843-1-git-send-email-mdroth@linux.vnet.ibm.com> <4E88C7DB.9090105@linux.vnet.ibm.com> <20111002210802.GC8072@redhat.com> <4E89B0D4.3090203@us.ibm.com> <20111003133802.GD18920@redhat.com> <4E89BDCE.2010502@codemonkey.ws>
In-Reply-To: <4E89BDCE.2010502@codemonkey.ws>
Subject: Re: [Qemu-devel] [RFC] New Migration Protocol using Visitor Interface

On Mon, Oct 03, 2011 at 08:51:10AM -0500, Anthony Liguori wrote:
> On 10/03/2011 08:38 AM, Michael S. Tsirkin wrote:
> >On Mon, Oct 03, 2011 at 07:55:48AM -0500, Anthony Liguori wrote:
> >>On 10/02/2011 04:08 PM, Michael S. Tsirkin wrote:
> >>>On Sun, Oct 02, 2011 at 04:21:47PM -0400, Stefan Berger wrote:
> >>>>
> >>>>>4) Implement the BERVisitor and make this the default migration protocol.
> >>>>>
> >>>>>Most of the work will be in 1), though with the implementation in this
> >>>>>series we should be able to do it incrementally. I'm not sure if the
> >>>>>best approach is doing the mechanical phase 1 conversion, then doing
> >>>>>phase 2 sometime after 4), doing phase 1 + 2 as part of 1), or just
> >>>>>doing VMState conversions, which gives basically the same capabilities
> >>>>>as phase 1 + 2.
> >>>>>
> >>>>>Thoughts?
> >>>>Is anyone working on this? If not I may give it a shot (tomorrow++)
> >>>>for at least some of the primitives...
> >>>>for enabling vNVRAM metadata
> >>>>of course. Indefinite length encoding of constructed data types I
> >>>>suppose won't be used; otherwise the visitor interface seems wrong
> >>>>for parsing and skipping of extra data towards the end of a
> >>>>structure: if version n wrote the stream and appended some of its
> >>>>version n data, and now version m < n is trying to read the struct
> >>>>and needs to skip the version [m+1, n] data fields ... in that case
> >>>>the de-serialization of the stream should probably be stream-driven
> >>>>rather than structure-driven.
> >>>>
> >>>>   Stefan
> >>>
> >>>Yes I've been struggling with that exactly.
> >>>Anthony, any thoughts?
> >>
> >>It just depends on how you write your visitor. If you used
> >>sequences, you'd probably do something like this:
> >>
> >>start_struct ->
> >>  check for sequence tag, push starting offset and size onto stack
> >>  increment offset to next tag
> >>
> >>type_int (et al) ->
> >>  check for explicit type, parse data
> >>  increment offset to next tag
> >>
> >>end_struct ->
> >>  pop starting offset and size to temp variables
> >>  set offset to starting offset + size
> >>
> >>This is roughly how the QMP input marshaller works FWIW.
> >>
> >>Regards,
> >>
> >>Anthony Liguori
> >
> >One thing I worry about is enabling zero copy for
> >large string types (e.g. memory migration).
>
> Memory shouldn't be done through Visitors. It should be handled as a
> special case.

OK, that's fine then.

> >So we need to be able to see a tag for memory page + address,
> >read that from socket directly at the correct virtual address.
> >
> >Probably, we can avoid using visitors for memory, and hope
> >everything else can stand an extra copy since it's small.
> >
> >But then, why do we worry about the size of
> >encoded device state as Anthony seems to do?
>
> There's a significant difference between the cost of something on
> the wire and the cost of doing a memcpy.
> The cost of the data on
> the wire is directly proportional to downtime. So if we increase
> the size of the device state by a factor of 10, we increase the
> minimum downtime by a factor of 10.
>
> Of course, *if* the size of device state is already negligible with
> respect to the minimum downtime, then it doesn't matter. This is
> easy to quantify though. For a normal migration session today,
> what's the total size of the device state in relation to the
> calculated bandwidth of the minimum downtime?
>
> If it's very small, then we can add names and not worry about it.
>
> Regards,
>
> Anthony Liguori

Yes, it's easy to quantify. I think the following gives us the offset
before and after, so the difference is the size we seek, right?

diff --git a/savevm.c b/savevm.c
index 1feaa70..dbbbcc6 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1543,6 +1543,7 @@ int qemu_savevm_state_iterate(Monitor *mon, QEMUFile *f)
 int qemu_savevm_state_complete(Monitor *mon, QEMUFile *f)
 {
     SaveStateEntry *se;
+    unsigned long long vm_state_size;
 
     cpu_synchronize_all_states();
 
@@ -1557,6 +1558,8 @@ int qemu_savevm_state_complete(Monitor *mon, QEMUFile *f)
         se->save_live_state(mon, f, QEMU_VM_SECTION_END, se->opaque);
     }
 
+    vm_state_size = qemu_ftell(f);
+    fprintf(stderr, "start size: %lld\n", vm_state_size);
     QTAILQ_FOREACH(se, &savevm_handlers, entry) {
         int len;
@@ -1577,6 +1580,8 @@ int qemu_savevm_state_complete(Monitor *mon, QEMUFile *f)
         vmstate_save(f, se);
     }
+    vm_state_size = qemu_ftell(f);
+    fprintf(stderr, "end size: %lld\n", vm_state_size);
     qemu_put_byte(f, QEMU_VM_EOF);