Patchwork exec.c: fix dirty bitmap reallocation

Submitter Mitsyanko Igor
Date Aug. 10, 2012, 2:45 p.m.
Message ID <1344609911-29588-1-git-send-email-i.mitsyanko@samsung.com>
Permalink /patch/176502/
State New

Comments

Mitsyanko Igor - Aug. 10, 2012, 2:45 p.m.
For each newly created RAM block, the dirty bitmap is reallocated with g_realloc, which makes
no promises about the initial content of the extra data in the returned buffer. In theory,
we initialize this new data with the cpu_physical_memory_set_dirty_range() call. The
problem is that cpu_physical_memory_set_dirty_range() has a side effect of incrementing the
ram_list.dirty_pages variable, but only for pages which are not already dirty, and
page "cleanliness" is determined using the same not-yet-initialized dirty bitmap
we have just reallocated. This results in an inconsistency between the real number of
dirty pages and the value of the ram_list.dirty_pages variable, which in turn can (and will)
result in errors during VM migration.
Zero-initialize the new dirty bitmap bytes to fix this problem.
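
To make the side effect concrete, the standalone program below (entirely hypothetical; it
only mimics the bitmap/counter interplay and is not QEMU code) reproduces the drift with
realloc() and shows that zeroing the new bytes, as this patch does, keeps the counter in
sync with the bitmap:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define DIRTY_FLAG 0xff          /* mimics the 0xff flags used in the patch */

    static long dirty_pages;         /* stands in for ram_list.dirty_pages */

    /* Mimics the counting side effect: a page is only counted if its
     * bitmap byte does not already carry the flag. */
    static void set_dirty_range(unsigned char *bitmap, size_t first, size_t count)
    {
        for (size_t i = first; i < first + count; i++) {
            if (!(bitmap[i] & DIRTY_FLAG)) {
                dirty_pages++;
            }
            bitmap[i] |= DIRTY_FLAG;
        }
    }

    /* Allocation error checks omitted for brevity. */
    static long grow_and_dirty(bool zero_new_bytes)
    {
        size_t old_pages = 4, new_pages = 8;
        unsigned char *bitmap = calloc(old_pages, 1);

        dirty_pages = 0;
        set_dirty_range(bitmap, 0, old_pages);                    /* counter == 4 */

        /* Grow the bitmap; realloc() makes no promise about the new bytes. */
        bitmap = realloc(bitmap, new_pages);
        memset(bitmap + old_pages, 0xff, new_pages - old_pages);  /* simulate garbage */

        if (zero_new_bytes) {
            memset(bitmap + old_pages, 0, new_pages - old_pages); /* the fix */
        }

        set_dirty_range(bitmap, old_pages, new_pages - old_pages);
        free(bitmap);
        return dirty_pages;                                       /* should be 8 */
    }

    int main(void)
    {
        printf("without zeroing: dirty_pages = %ld (8 pages really dirty)\n",
               grow_and_dirty(false));
        printf("with zeroing:    dirty_pages = %ld (8 pages really dirty)\n",
               grow_and_dirty(true));
        return 0;
    }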

Signed-off-by: Igor Mitsyanko <i.mitsyanko@samsung.com>
---
 exec.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)
Blue Swirl - Aug. 11, 2012, 2:48 p.m.
Thanks, applied.

On Fri, Aug 10, 2012 at 2:45 PM, Igor Mitsyanko <i.mitsyanko@samsung.com> wrote:
> For each newly created RAM block, the dirty bitmap is reallocated with g_realloc, which makes
> no promises about the initial content of the extra data in the returned buffer. In theory,
> we initialize this new data with the cpu_physical_memory_set_dirty_range() call. The
> problem is that cpu_physical_memory_set_dirty_range() has a side effect of incrementing the
> ram_list.dirty_pages variable, but only for pages which are not already dirty, and
> page "cleanliness" is determined using the same not-yet-initialized dirty bitmap
> we have just reallocated. This results in an inconsistency between the real number of
> dirty pages and the value of the ram_list.dirty_pages variable, which in turn can (and will)
> result in errors during VM migration.
> Zero-initialize the new dirty bitmap bytes to fix this problem.
>
> Signed-off-by: Igor Mitsyanko <i.mitsyanko@samsung.com>
> ---
>  exec.c |    2 ++
>  1 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index a42a0b5..929db5c 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2550,6 +2550,8 @@ ram_addr_t qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
>
>      ram_list.phys_dirty = g_realloc(ram_list.phys_dirty,
>                                         last_ram_offset() >> TARGET_PAGE_BITS);
> +    memset(ram_list.phys_dirty + (new_block->offset >> TARGET_PAGE_BITS),
> +           0, size >> TARGET_PAGE_BITS);
>      cpu_physical_memory_set_dirty_range(new_block->offset, size, 0xff);
>
>      if (kvm_enabled())
> --
> 1.7.5.4
>

Patch

diff --git a/exec.c b/exec.c
index a42a0b5..929db5c 100644
--- a/exec.c
+++ b/exec.c
@@ -2550,6 +2550,8 @@  ram_addr_t qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
 
     ram_list.phys_dirty = g_realloc(ram_list.phys_dirty,
                                        last_ram_offset() >> TARGET_PAGE_BITS);
+    memset(ram_list.phys_dirty + (new_block->offset >> TARGET_PAGE_BITS),
+           0, size >> TARGET_PAGE_BITS);
     cpu_physical_memory_set_dirty_range(new_block->offset, size, 0xff);
 
     if (kvm_enabled())
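
For context, the range passed to the new memset() follows from the existing bitmap layout:
ram_list.phys_dirty keeps one byte per target page, so a RAM block of size bytes starting at
new_block->offset owns size >> TARGET_PAGE_BITS bitmap bytes beginning at index
new_block->offset >> TARGET_PAGE_BITS. A minimal sketch of that indexing, assuming the
one-byte-per-page layout (the helper name below is hypothetical, not an existing QEMU
function):

    /* Hypothetical helper: with one byte of ram_list.phys_dirty per target
     * page, a RAM block at 'offset' spanning 'size' bytes covers
     * size >> TARGET_PAGE_BITS bitmap bytes starting at
     * offset >> TARGET_PAGE_BITS, which is exactly the range the patch zeroes. */
    static uint8_t *block_dirty_bytes(uint8_t *phys_dirty, ram_addr_t offset,
                                      ram_addr_t size, ram_addr_t *len)
    {
        *len = size >> TARGET_PAGE_BITS;
        return phys_dirty + (offset >> TARGET_PAGE_BITS);
    }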