[2/2] migration/ram.c: Remove the qemu_mutex_lock in colo_flush_ram_cache.

Message ID 1636533456-5374-2-git-send-email-lei.rao@intel.com
State New
Series [1/2] Fixed a QEMU hang when guest poweroff in COLO mode

Commit Message

Lei Rao Nov. 10, 2021, 8:37 a.m. UTC
From: "Rao, Lei" <lei.rao@intel.com>

The code to acquire bitmap_mutex was added in commit
63268c4970a5f126cc9af75f3ccb8057abef5ec0. There is no need to
acquire bitmap_mutex in colo_flush_ram_cache(), because
colo_flush_ram_cache() is only called on the COLO secondary VM,
which is the destination side. On the COLO secondary VM, only the
COLO thread touches the bitmap of the RAM cache.

Signed-off-by: Lei Rao <lei.rao@intel.com>
---
 migration/ram.c | 2 --
 1 file changed, 2 deletions(-)
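
The locking rationale can be shown with a minimal standalone sketch:
a dirty bitmap needs its mutex only while several threads may update
it concurrently, as on the migration source; when a single thread is
the only accessor, as with the COLO thread on the secondary, the
lock/unlock pair is pure overhead. The DirtyBitmap type and helper
names below are hypothetical illustration code, not QEMU API.

/* Minimal sketch (not QEMU code): contrast a locked, multi-writer
 * bitmap update with the same update done by a single thread. */
#include <pthread.h>
#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

typedef struct {
    pthread_mutex_t lock;   /* plays the role of bitmap_mutex */
    unsigned long bits[4];
} DirtyBitmap;

/* Multi-writer path: concurrent updaters exist, so take the lock. */
static void set_dirty_shared(DirtyBitmap *bm, unsigned long page)
{
    pthread_mutex_lock(&bm->lock);
    bm->bits[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
    pthread_mutex_unlock(&bm->lock);
}

/* Single-writer path: only one thread ever touches the bitmap, so
 * the same read-modify-write is safe without the lock. */
static void set_dirty_exclusive(DirtyBitmap *bm, unsigned long page)
{
    bm->bits[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
}

int main(void)
{
    DirtyBitmap bm = { .lock = PTHREAD_MUTEX_INITIALIZER };

    set_dirty_shared(&bm, 3);      /* e.g. migration source side */
    set_dirty_exclusive(&bm, 70);  /* e.g. COLO secondary side */
    printf("bits[0]=0x%lx bits[1]=0x%lx\n", bm.bits[0], bm.bits[1]);
    return 0;
}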

Comments

Juan Quintela Nov. 10, 2021, 9:56 a.m. UTC | #1
"Rao, Lei" <lei.rao@intel.com> wrote:
> From: "Rao, Lei" <lei.rao@intel.com>
>
> The code to acquire bitmap_mutex was added in commit
> 63268c4970a5f126cc9af75f3ccb8057abef5ec0. There is no need to
> acquire bitmap_mutex in colo_flush_ram_cache(), because
> colo_flush_ram_cache() is only called on the COLO secondary VM,
> which is the destination side. On the COLO secondary VM, only the
> COLO thread touches the bitmap of the RAM cache.
>
> Signed-off-by: Lei Rao <lei.rao@intel.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>

As we are in the softfreeze, I am queuing this on my next-7.0 branch.

Later, Juan.
Zhang, Chen Nov. 11, 2021, 2:24 a.m. UTC | #2
> -----Original Message-----
> From: Rao, Lei <lei.rao@intel.com>
> Sent: Wednesday, November 10, 2021 4:38 PM
> To: Zhang, Chen <chen.zhang@intel.com>;
> zhang.zhanghailiang@huawei.com; quintela@redhat.com;
> dgilbert@redhat.com
> Cc: qemu-devel@nongnu.org; Rao, Lei <lei.rao@intel.com>
> Subject: [PATCH 2/2] migration/ram.c: Remove the qemu_mutex_lock in
> colo_flush_ram_cache.
> 
> From: "Rao, Lei" <lei.rao@intel.com>
> 
> The code to acquire bitmap_mutex was added in commit
> 63268c4970a5f126cc9af75f3ccb8057abef5ec0. There is no need to
> acquire bitmap_mutex in colo_flush_ram_cache(), because
> colo_flush_ram_cache() is only called on the COLO secondary VM,
> which is the destination side. On the COLO secondary VM, only the
> COLO thread touches the bitmap of the RAM cache.
> 
> Signed-off-by: Lei Rao <lei.rao@intel.com>

Reviewed-by: Zhang Chen <chen.zhang@intel.com>

Thanks
Chen

> ---
>  migration/ram.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 863035d..2c688f5 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3918,7 +3918,6 @@ void colo_flush_ram_cache(void)
>      unsigned long offset = 0;
> 
>      memory_global_dirty_log_sync();
> -    qemu_mutex_lock(&ram_state->bitmap_mutex);
>      WITH_RCU_READ_LOCK_GUARD() {
>          RAMBLOCK_FOREACH_NOT_IGNORED(block) {
>              ramblock_sync_dirty_bitmap(ram_state, block);
> @@ -3954,7 +3953,6 @@ void colo_flush_ram_cache(void)
>          }
>      }
>      trace_colo_flush_ram_cache_end();
> -    qemu_mutex_unlock(&ram_state->bitmap_mutex);
>  }
> 
>  /**
> --
> 1.8.3.1

Patch

diff --git a/migration/ram.c b/migration/ram.c
index 863035d..2c688f5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3918,7 +3918,6 @@ void colo_flush_ram_cache(void)
     unsigned long offset = 0;
 
     memory_global_dirty_log_sync();
-    qemu_mutex_lock(&ram_state->bitmap_mutex);
     WITH_RCU_READ_LOCK_GUARD() {
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             ramblock_sync_dirty_bitmap(ram_state, block);
@@ -3954,7 +3953,6 @@ void colo_flush_ram_cache(void)
         }
     }
     trace_colo_flush_ram_cache_end();
-    qemu_mutex_unlock(&ram_state->bitmap_mutex);
 }
 
 /**
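
After the change, the flush loop above is still bracketed by
WITH_RCU_READ_LOCK_GUARD(), which holds the RCU read lock exactly for
the enclosed scope. The sketch below shows the general shape of such
a scope-bound guard with a plain pthread mutex, using the GCC/Clang
cleanup attribute; the names here (ScopeGuard, WITH_GUARD) are
hypothetical, and QEMU's real macros differ in detail.

/* Sketch of a scope-bound lock guard; illustration only. */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t *mutex;
} ScopeGuard;

static inline ScopeGuard guard_enter(pthread_mutex_t *mutex)
{
    pthread_mutex_lock(mutex);
    return (ScopeGuard){ .mutex = mutex };
}

static inline void guard_exit(ScopeGuard *guard)
{
    pthread_mutex_unlock(guard->mutex);
}

/* Run the attached block exactly once with the lock held; the
 * cleanup attribute releases it on any exit from the block. */
#define WITH_GUARD(m)                                              \
    for (ScopeGuard _g __attribute__((cleanup(guard_exit))) =      \
             guard_enter(m), *_once = &_g;                         \
         _once; _once = NULL)

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

int main(void)
{
    WITH_GUARD(&lock) {
        puts("lock held inside this scope");
    }
    puts("lock released here");
    return 0;
}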