From patchwork Fri Mar 1 13:31:44 2013
X-Patchwork-Submitter: Peter Lieven
X-Patchwork-Id: 224331
Message-ID: <5130ADC0.9070402@dlhnet.de>
Date: Fri, 01 Mar 2013 14:31:44 +0100
From: Peter Lieven
To: "qemu-devel@nongnu.org"
Cc: Orit Wasserman, Paolo Bonzini
Subject: [Qemu-devel] [PATCH] migration: use XBZRLE only after bulk stage

At the beginning of migration all pages are marked dirty, and in the first round a bulk migration of all pages is performed. Currently all of these pages are copied into the page cache regardless of whether they are frequently updated or not. This makes little sense, since most of these pages are never transferred again.

This patch changes the XBZRLE transfer so that it is only used after the bulk stage has completed. That means a page is added to the page cache the second time it is transferred, and XBZRLE can benefit from the third transfer onwards. Since the page cache is likely smaller than the number of pages, it is also likely that in the second round a page is missing from the cache due to collisions during the bulk phase. On the other hand, a lot of unnecessary mallocs, memdups and frees are saved.
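The per-page decision described above can be sketched as follows. This is a minimal model for illustration only, not the actual QEMU code path: the struct, the `send_page` helper and its explicit `transfer_count`/`in_cache` fields are hypothetical, whereas the real code tracks cached pages inside the XBZRLE page cache and gates on the `ram_bulk_stage` flag added by this patch.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-page state, modelling the effect of the patch.
 * The real implementation keeps this state in the XBZRLE page cache. */
typedef struct {
    int transfer_count;  /* how often this page has been sent so far */
    bool in_cache;       /* page has already been copied into the cache */
} page_state;

/* Returns true if this transfer can be XBZRLE-encoded.
 * During the bulk stage the cache is bypassed entirely, so the first
 * post-bulk transfer seeds the cache and later ones send deltas. */
static bool send_page(page_state *pg, bool ram_bulk_stage, bool xbzrle_enabled)
{
    bool use_xbzrle = false;

    pg->transfer_count++;
    if (!ram_bulk_stage && xbzrle_enabled) {
        if (pg->in_cache) {
            use_xbzrle = true;    /* cached copy exists: send the delta */
        } else {
            pg->in_cache = true;  /* first post-bulk transfer: seed cache */
        }
    }
    return use_xbzrle;
}
```

So for a given page: the bulk-stage transfer goes out raw and never touches the cache, the second transfer seeds the cache, and only from the third transfer on can XBZRLE send a delta against the cached copy.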
Signed-off-by: Peter Lieven
Reviewed-by: Eric Blake
---
 arch_init.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch_init.c b/arch_init.c
index 8da868b..24241e0 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -347,6 +347,7 @@ static ram_addr_t last_offset;
 static unsigned long *migration_bitmap;
 static uint64_t migration_dirty_pages;
 static uint32_t last_version;
+static bool ram_bulk_stage;

 static inline
 ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
@@ -451,6 +452,7 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
             if (!block) {
                 block = QTAILQ_FIRST(&ram_list.blocks);
                 complete_round = true;
+                ram_bulk_stage = false;
             }
         } else {
             uint8_t *p;
@@ -467,7 +469,7 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
                                             RAM_SAVE_FLAG_COMPRESS);
                 qemu_put_byte(f, *p);
                 bytes_sent += 1;
-            } else if (migrate_use_xbzrle()) {
+            } else if (!ram_bulk_stage && migrate_use_xbzrle()) {
                 current_addr = block->offset + offset;
                 bytes_sent = save_xbzrle_page(f, p, current_addr, block,
                                               offset, cont, last_stage);
@@ -554,6 +556,7 @@ static void reset_ram_globals(void)
     last_sent_block = NULL;
     last_offset = 0;
     last_version = ram_list.version;
+    ram_bulk_stage = true;
 }

 #define MAX_WAIT 50 /* ms, half buffered_file limit */