Patchwork [PATCHv4,9/9] migration: use XBZRLE only after bulk stage

Submitter Peter Lieven
Date March 22, 2013, 12:46 p.m.
Message ID <1363956370-23681-10-git-send-email-pl@kamp.de>
Permalink /patch/230001/
State New

Comments

Peter Lieven - March 22, 2013, 12:46 p.m.
At the beginning of migration all pages are marked dirty, and
in the first round a bulk migration of all pages is performed.

Currently all these pages are copied into the page cache regardless
of whether they are frequently updated or not. This makes little sense,
since most of these pages are never transferred again.

This patch changes the XBZRLE transfer so that it is only used after
the bulk stage has completed. That means a page is added
to the page cache the second time it is transferred, and XBZRLE
can benefit from the third transfer onwards.

Since the page cache is likely smaller than the number of pages,
it is also likely that a page cached during the bulk phase would be
missing in the second round anyway, evicted by collisions in the
bulk phase.

On the other hand, a lot of unnecessary mallocs, memdups and frees
are saved.

The following results were taken earlier while executing
the test program from docs/xbzrle.txt: (+) with the patch and (-)
without. (Thanks to Eric Blake for reformatting and comments.)

+ total time: 22185 milliseconds
- total time: 22410 milliseconds

Shaved about 0.2 seconds, roughly a 1% improvement.

+ downtime: 29 milliseconds
- downtime: 21 milliseconds

Not sure why downtime seemed worse, but probably not the end of the world.

+ transferred ram: 706034 kbytes
- transferred ram: 721318 kbytes

Fewer bytes sent - good.

+ remaining ram: 0 kbytes
- remaining ram: 0 kbytes
+ total ram: 1057216 kbytes
- total ram: 1057216 kbytes
+ duplicate: 108556 pages
- duplicate: 105553 pages
+ normal: 175146 pages
- normal: 179589 pages
+ normal bytes: 700584 kbytes
- normal bytes: 718356 kbytes

Fewer normal bytes...

+ cache size: 67108864 bytes
- cache size: 67108864 bytes
+ xbzrle transferred: 3127 kbytes
- xbzrle transferred: 630 kbytes

...and more compressed pages sent - good.

+ xbzrle pages: 117811 pages
- xbzrle pages: 21527 pages
+ xbzrle cache miss: 18750
- xbzrle cache miss: 179589

And very good improvement on the cache miss rate.

+ xbzrle overflow : 0
- xbzrle overflow : 0

Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 arch_init.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Orit Wasserman - March 25, 2013, 10:16 a.m.
On 03/22/2013 02:46 PM, Peter Lieven wrote:
> [full patch quoted; snipped — see above and the diff below]
Reviewed-by: Orit Wasserman <owasserm@redhat.com>

Patch

diff --git a/arch_init.c b/arch_init.c
index b2b932a..86f7e28 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -464,7 +464,7 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
                     acct_info.skipped_pages++;
                     bytes_sent = 0;
                 }
-            } else if (migrate_use_xbzrle()) {
+            } else if (!ram_bulk_stage && migrate_use_xbzrle()) {
                 current_addr = block->offset + offset;
                 bytes_sent = save_xbzrle_page(f, p, current_addr, block,
                                               offset, cont, last_stage);