Patchwork [PULL,02/14] arch_init: align MR size to target page size

Submitter Michael S. Tsirkin
Date Aug. 26, 2013, 4:43 p.m.
Message ID <1377535318-30491-3-git-send-email-mst@redhat.com>
Permalink /patch/269936/
State New

Comments

Migration code assumes that each MR size is a multiple of TARGET_PAGE_SIZE:
the size is divided by TARGET_PAGE_SIZE, so if it isn't a multiple,
migration never completes.
But this isn't actually required of regions set up with
memory_region_init_ram, since that calls qemu_ram_alloc, which rounds
the allocation up using TARGET_PAGE_ALIGN; the MR can therefore record
a size that is not page-aligned.

Align the MR size up to full target pages in the migration code; this
way migration completes even if we create a RAM MR whose size is not a
multiple of the target page size.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
---
 arch_init.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Patch

diff --git a/arch_init.c b/arch_init.c
index 68a7ab7..ac8eb59 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -342,7 +342,8 @@  ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
 {
     unsigned long base = mr->ram_addr >> TARGET_PAGE_BITS;
     unsigned long nr = base + (start >> TARGET_PAGE_BITS);
-    unsigned long size = base + (int128_get64(mr->size) >> TARGET_PAGE_BITS);
+    uint64_t mr_size = TARGET_PAGE_ALIGN(memory_region_size(mr));
+    unsigned long size = base + (mr_size >> TARGET_PAGE_BITS);
 
     unsigned long next;