Patchwork [PATCHv3,8/9] migration: do not search dirty pages in bulk stage

Submitter Peter Lieven
Date March 21, 2013, 3:57 p.m.
Message ID <1363881457-14814-9-git-send-email-pl@kamp.de>
Permalink /patch/229741/
State New

Comments

Peter Lieven - March 21, 2013, 3:57 p.m.
Avoid searching for dirty pages; just increment the
page offset. All pages are dirty anyway in the bulk stage.

Signed-off-by: Peter Lieven <pl@kamp.de>
---
 arch_init.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
Eric Blake - March 21, 2013, 7:27 p.m.
On 03/21/2013 09:57 AM, Peter Lieven wrote:
> avoid searching for dirty pages just increment the
> page offset. all pages are dirty anyway.
> 
> Signed-off-by: Peter Lieven <pl@kamp.de>
> ---
>  arch_init.c |   10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)

Prior review still stands.
Peter Lieven - March 21, 2013, 7:57 p.m.
On 21.03.2013 at 20:27, Eric Blake <eblake@redhat.com> wrote:

> On 03/21/2013 09:57 AM, Peter Lieven wrote:
>> avoid searching for dirty pages just increment the
>> page offset. all pages are dirty anyway.
>> 
>> Signed-off-by: Peter Lieven <pl@kamp.de>
>> ---
>> arch_init.c |   10 ++++++++--
>> 1 file changed, 8 insertions(+), 2 deletions(-)
> 
> Prior review still stands.

I changed the logic a little bit.

a) last_offset is initialized to 0 again.
b) For the case last_offset == 0 during bulk RAM migration, the search
is not skipped. This yields offset == 0 if the page is dirty (first call
to this function with last_offset == 0) and offset == 1 when it is called
the second time (page 0 is no longer dirty).
Afterwards, offset is calculated as last_offset + 1.

Peter


> 
> -- 
> Eric Blake   eblake redhat com    +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
>

Patch

diff --git a/arch_init.c b/arch_init.c
index c2cb40a..4718d39 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -327,8 +327,14 @@  ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
     unsigned long nr = base + (start >> TARGET_PAGE_BITS);
     unsigned long size = base + (int128_get64(mr->size) >> TARGET_PAGE_BITS);
 
-    unsigned long next = find_next_bit(migration_bitmap, size, nr);
-
+    unsigned long next;
+    
+    if (ram_bulk_stage && nr > base) {
+        next = nr + 1;
+    } else {
+        next = find_next_bit(migration_bitmap, size, nr);
+    }
+    
     if (next < size) {
         clear_bit(next, migration_bitmap);
         migration_dirty_pages--;