
[1/2] softmmu/physmem: move last_ram_page() call under qemu_mutex_lock_ramlist()

Message ID 20220325154013.16809-1-arbn@yandex-team.com
State New
Series [1/2] softmmu/physmem: move last_ram_page() call under qemu_mutex_lock_ramlist()

Commit Message

Andrey Ryabinin March 25, 2022, 3:40 p.m. UTC
Modifications of 'ram_list.blocks' are protected by 'ram_list.mutex'.
last_ram_page() uses the state of 'ram_list.blocks' to determine the RAM size.
ram_block_add() calls last_ram_page() before taking the mutex,
making the following race possible:

     CPU#0                                       CPU#1
                                      ram_block_add()
                                         old_ram_size = last_ram_page()
                                         qemu_mutex_lock_ramlist()
                                         ...
                                         dirty_memory_extend(old_ram_size, new_ram_size);
ram_block_add()
   old_ram_size = last_ram_page()

                                         // insert block to ram_list
                                         QLIST_INSERT_*_RCU()
                                         qemu_mutex_unlock_ramlist()
   qemu_mutex_lock_ramlist()
   ...
   dirty_memory_extend(old_ram_size, new_ram_size);

Such a race may result in leaking some dirty memory bitmaps.

Because of the stale 'old_ram_size' value, dirty_memory_extend() on CPU#0
will allocate and reinitialize some of the dirty memory bitmap blocks
already allocated on CPU#1.
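
For illustration only, here is a minimal stand-alone C sketch of the same
pattern (this is not QEMU code; every name in it is made up). Both threads
sample the size before taking the lock, so the second dirty_extend() can
re-allocate blocks the first one already populated, leaking them:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NEW_BLOCKS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static void *bitmap[2 * NEW_BLOCKS]; /* stands in for ram_list.dirty_memory */
static size_t cur_size;              /* stands in for last_ram_page()'s state */

/* Grow bitmap coverage from 'old' to 'new'; the caller must hold 'lock'. */
static void dirty_extend(size_t old, size_t new)
{
    for (size_t i = old; i < new; i++) {
        if (bitmap[i]) {
            /* A stale 'old' makes us overwrite a live allocation: a leak. */
            printf("leaking block %zu\n", i);
        }
        bitmap[i] = calloc(1, 64);
    }
    if (new > cur_size) {
        cur_size = new;
    }
}

static void *add_block(void *arg)
{
    size_t old = cur_size;               /* BUG: sampled before taking the lock */
    pthread_mutex_lock(&lock);
    dirty_extend(old, old + NEW_BLOCKS); /* both threads may see old == 0 */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add_block, NULL);
    pthread_create(&t2, NULL, add_block, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

When both threads happen to read cur_size before either takes the lock, the
"leaking block" message fires, mirroring the bitmap leak described above.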

Fix this by moving the last_ram_page() call under qemu_mutex_lock_ramlist().

Cc: qemu-stable@nongnu.org
Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
---
 softmmu/physmem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Peter Xu March 30, 2022, 6:47 p.m. UTC | #1
On Fri, Mar 25, 2022 at 06:40:12PM +0300, Andrey Ryabinin wrote:
> [...]
> Cc: qemu-stable@nongnu.org
> Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

Patch

diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 4e1b27a20e..32f76362bf 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -1969,9 +1969,9 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
     ram_addr_t old_ram_size, new_ram_size;
     Error *err = NULL;
 
+    qemu_mutex_lock_ramlist();
     old_ram_size = last_ram_page();
 
-    qemu_mutex_lock_ramlist();
     new_block->offset = find_ram_offset(new_block->max_length);
 
     if (!new_block->host) {
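
Sticking with the hypothetical sketch from the commit message, the patch's
reordering corresponds to sampling the size only after the lock is held, so
concurrent callers always observe each other's extension:

static void *add_block_fixed(void *arg)
{
    pthread_mutex_lock(&lock);
    size_t old = cur_size;               /* sampled under the lock, as in the patch */
    dirty_extend(old, old + NEW_BLOCKS); /* the second caller now sees old == NEW_BLOCKS */
    pthread_mutex_unlock(&lock);
    return NULL;
}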