[v2,03/15] migration: Cleanup xbzrle zero page cache update logic

Message ID 20221011215559.602584-4-peterx@redhat.com
State New
Series migration: Postcopy Preempt-Full

Commit Message

Peter Xu Oct. 11, 2022, 9:55 p.m. UTC
The major change is to replace "!save_page_use_compression()" with
"xbzrle_enabled" to make the intent clear.

Reasoning:

(1) When compression is enabled, "!save_page_use_compression()" is exactly
    the same as checking "xbzrle_enabled".

(2) When compression is disabled, "!save_page_use_compression()" always
    returns true.  We used to call into the xbzrle code in that case, but
    after this change we won't, and we shouldn't need to.

While at it, drop the xbzrle_enabled check in xbzrle_cache_zero_page(),
because with this change it is no longer needed.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)
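
The reasoning above can be sanity-checked in isolation.  Below is a small
standalone C program (an editorial sketch, not QEMU code: RAMState,
migrate_use_compression() and save_page_use_compression() here are
simplified stand-ins paraphrasing migration/ram.c) that enumerates all four
combinations and shows that "!save_page_use_compression()" equals
"xbzrle_enabled" whenever compression is on, and is always true when it is
off:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the QEMU state involved (illustration only). */
typedef struct RAMState {
    bool xbzrle_enabled;
} RAMState;

static bool compression_enabled;  /* stands in for migrate_use_compression() */

static bool migrate_use_compression(void)
{
    return compression_enabled;
}

/*
 * Paraphrase of ram.c's save_page_use_compression(): compression is used
 * only when it is enabled AND xbzrle has not taken over.
 */
static bool save_page_use_compression(RAMState *rs)
{
    if (!migrate_use_compression()) {
        return false;
    }
    if (rs->xbzrle_enabled) {
        return false;
    }
    return true;
}

int main(void)
{
    RAMState rs;

    /* Enumerate all four combinations of the two flags. */
    for (int comp = 0; comp <= 1; comp++) {
        for (int xbz = 0; xbz <= 1; xbz++) {
            compression_enabled = comp;
            rs.xbzrle_enabled = xbz;
            printf("compression=%d xbzrle=%d -> !save_page_use_compression()=%d\n",
                   comp, xbz, !save_page_use_compression(&rs));
        }
    }
    return 0;
}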

Comments

Juan Quintela Nov. 14, 2022, 3:08 p.m. UTC
Peter Xu <peterx@redhat.com> wrote:

Reviewed-by: Juan Quintela <quintela@redhat.com>

Patch

diff --git a/migration/ram.c b/migration/ram.c
index 7aaa843a21..562646609e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -741,10 +741,6 @@ void mig_throttle_counter_reset(void)
  */
 static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
 {
-    if (!rs->xbzrle_enabled) {
-        return;
-    }
-
     /* We don't care if this fails to allocate a new cache page
      * as long as it updated an old one */
     cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
@@ -2301,7 +2297,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
         /* Must let xbzrle know, otherwise a previous (now 0'd) cached
          * page would be stale
          */
-        if (!save_page_use_compression(rs)) {
+        if (rs->xbzrle_enabled) {
             XBZRLE_cache_lock();
             xbzrle_cache_zero_page(rs, block->offset + offset);
             XBZRLE_cache_unlock();
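
For context on why this cache update exists at all: the in-code comment
warns that a previously cached page would go stale once the guest zeroes
it.  Below is a toy illustration of that hazard (an editorial sketch, not
QEMU code; the cache here is a plain buffer rather than QEMU's real xbzrle
page cache):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define PAGE_SIZE 16  /* tiny "page" for illustration */

/* One-slot stand-in for the xbzrle cache: the last copy of the page the
 * destination is known to have, against which later sends are
 * delta-encoded. */
static uint8_t cached[PAGE_SIZE];

int main(void)
{
    uint8_t guest_page[PAGE_SIZE];

    /* Round 1: the page holds data; it is sent in full and cached. */
    memset(guest_page, 0xAA, PAGE_SIZE);
    memcpy(cached, guest_page, PAGE_SIZE);

    /* Round 2: the guest zeroes the page.  The zero page goes out via the
     * cheap zero-page path, which bypasses xbzrle.  If the cache is NOT
     * also updated (the job of xbzrle_cache_zero_page()), it still holds
     * the old 0xAA contents. */
    memset(guest_page, 0, PAGE_SIZE);
    /* intentionally missing: memcpy(cached, guest_page, PAGE_SIZE); */

    /* Round 3: the page is dirtied again and xbzrle encodes it as a delta
     * against 'cached'.  The destination applies that delta to the zero
     * page it actually has, so every byte where the stale cache and the
     * destination's copy disagree is reconstructed wrongly. */
    guest_page[0] = 0xBB;
    printf("stale cache byte[1]=0x%02X, destination byte[1]=0x00 -> bad delta\n",
           cached[1]);
    return 0;
}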