
[PATCHv3,9/9] migration: use XBZRLE only after bulk stage

Message ID 1363881457-14814-10-git-send-email-pl@kamp.de
State New

Commit Message

Peter Lieven March 21, 2013, 3:57 p.m. UTC
at the beginning of migration all pages are marked dirty and
in the first round a bulk migration of all pages is performed.

currently all these pages are copied to the page cache regardless
of whether they are frequently updated or not. this doesn't make
sense since most of these pages are never transferred again.

this patch changes the XBZRLE transfer to only be used after
the bulk stage has been completed. this means a page is added
to the page cache the second time it is transferred, so XBZRLE
can benefit from the third transfer onwards.

since the page cache is likely smaller than the number of pages,
it's also likely that in the second round a page is missing from
the cache due to collisions in the bulk phase.

on the other hand, a lot of unnecessary mallocs, memdups and frees
are avoided.
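
as an illustration, the per-page send logic after this change looks
roughly like the sketch below (helper names other than
migrate_use_xbzrle() and save_xbzrle_page() are made up for
readability; the real one-line change is in the hunk at the end of
this mail):

    if (is_dup_page(p)) {
        /* page is all zeroes: send a one-byte marker only */
        bytes_sent = send_dup_page(f);
    } else if (!ram_bulk_stage && migrate_use_xbzrle()) {
        /* past the first complete round: the first transfer of a
         * page inserts it into the page cache and sends it whole,
         * subsequent transfers send an XBZRLE delta against the
         * cached copy */
        bytes_sent = save_xbzrle_page(f, p, current_addr, block,
                                      offset, cont, last_stage);
    } else {
        /* bulk stage (or XBZRLE disabled): send the raw page and
         * bypass the page cache, avoiding malloc/memdup/free */
        bytes_sent = send_full_page(f, block, offset, p);
    }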

the following results were taken earlier while executing
the test program from docs/xbzrle.txt, (+) with the patch and (-)
without. (thanks to Eric Blake for reformatting and comments)

+ total time: 22185 milliseconds
- total time: 22410 milliseconds

Shaved 0.2 seconds, about 1%!

+ downtime: 29 milliseconds
- downtime: 21 milliseconds

Not sure why downtime seemed worse, but probably not the end of the world.

+ transferred ram: 706034 kbytes
- transferred ram: 721318 kbytes

Fewer bytes sent - good.

+ remaining ram: 0 kbytes
- remaining ram: 0 kbytes
+ total ram: 1057216 kbytes
- total ram: 1057216 kbytes
+ duplicate: 108556 pages
- duplicate: 105553 pages
+ normal: 175146 pages
- normal: 179589 pages
+ normal bytes: 700584 kbytes
- normal bytes: 718356 kbytes

Fewer normal bytes...

+ cache size: 67108864 bytes
- cache size: 67108864 bytes
+ xbzrle transferred: 3127 kbytes
- xbzrle transferred: 630 kbytes

...and more compressed pages sent - good.

+ xbzrle pages: 117811 pages
- xbzrle pages: 21527 pages
+ xbzrle cache miss: 18750
- xbzrle cache miss: 179589

And a very good improvement in the cache miss rate.

+ xbzrle overflow : 0
- xbzrle overflow : 0

Signed-off-by: Peter Lieven <pl@kamp.de>
---
 arch_init.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Eric Blake March 21, 2013, 7:31 p.m. UTC | #1
On 03/21/2013 09:57 AM, Peter Lieven wrote:
> at the beginning of migration all pages are marked dirty and
> in the first round a bulk migration of all pages is performed.
> 
> currently all these pages are copied to the page cache regardless
> if there are frequently updated or not. this doesn't make sense

s/if there/of whether they/

> since most of these pages are never transferred again.
> 
> this patch changes the XBZRLE transfer to only be used after
> the bulk stage has been completed. that means a page is added
> to the page cache the second time it is transferred and XBZRLE
> can benefit from the third time of transfer.
> 
> since the page cache is likely smaller than the number of pages
> its also likely that in the second round the page is missing in the

s/its/it's/

> cache due to collisions in the bulk phase.
> 
> on the other hand a lot of unnecessary mallocs, memdups and frees
> are saved.
> 
...
> Signed-off-by: Peter Lieven <pl@kamp.de>
> ---
>  arch_init.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Prior review still stands (touching up typos in a commit message
generally does not invalidate carrying forward a reviewed-by comment).

Patch

diff --git a/arch_init.c b/arch_init.c
index 4718d39..4e89783 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -450,7 +450,7 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
                     qemu_put_byte(f, 0);
                 }
                 bytes_sent++;
-            } else if (migrate_use_xbzrle()) {
+            } else if (!ram_bulk_stage && migrate_use_xbzrle()) {
                 current_addr = block->offset + offset;
                 bytes_sent = save_xbzrle_page(f, p, current_addr, block,
                                               offset, cont, last_stage);
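
For context, the ram_bulk_stage flag tested above is introduced
earlier in this series. A rough sketch of its assumed lifecycle,
reconstructed from the series context (not part of this hunk):

    static bool ram_bulk_stage = true; /* everything starts dirty */

    /* inside the ram_save_block() walk over RAM blocks: when the
     * walk wraps past the last block, the first complete round is
     * finished and the bulk stage is over */
    block = QTAILQ_NEXT(block, next);
    if (!block) {
        block = QTAILQ_FIRST(&ram_list.blocks);
        complete_round = true;
        ram_bulk_stage = false;
    }

With this in place, the one-line change above keeps XBZRLE and its
page-cache bookkeeping out of the picture until every page has been
sent at least once.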