[02/14] migration: Cleanup xbzrle zero page cache update logic

Message ID 20220920225106.48451-3-peterx@redhat.com
State New
Series migration: Postcopy Preempt-Full

Commit Message

Peter Xu Sept. 20, 2022, 10:50 p.m. UTC
The major change is to replace "!save_page_use_compression()" with
"xbzrle_enabled" to make the intent clear.

Reasoning:

(1) When compression is enabled, "!save_page_use_compression()" is exactly
    the same as checking "xbzrle_enabled".

(2) When compression is disabled, "!save_page_use_compression()" always
    returns true.  We used to try calling the xbzrle code, but after this
    change we won't, and we shouldn't need to.

While at it, drop the xbzrle_enabled check in xbzrle_cache_zero_page()
because with this change it's not needed anymore.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)
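
For reference, the helper that the condition replaces looks roughly like
the following in migration/ram.c at the time of this series (a paraphrased
sketch, not the verbatim source):

static bool save_page_use_compression(RAMState *rs)
{
    if (!migrate_use_compression()) {
        return false;
    }
    /*
     * Once xbzrle kicks in (e.g. after the first round of migration),
     * compression is skipped in favour of xbzrle.
     */
    if (rs->xbzrle_enabled) {
        return false;
    }
    return true;
}

Hence "!save_page_use_compression(rs)" reduces to "rs->xbzrle_enabled"
when compression is enabled, and is always true when it is disabled,
which is exactly reasoning points (1) and (2) above.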

Comments

Dr. David Alan Gilbert Oct. 4, 2022, 10:33 a.m. UTC | #1
* Peter Xu (peterx@redhat.com) wrote:
> The major change is to replace "!save_page_use_compression()" with
> "xbzrle_enabled" to make the intent clear.
> 
> Reasoning:
> 
> (1) When compression is enabled, "!save_page_use_compression()" is exactly
>     the same as checking "xbzrle_enabled".
> 
> (2) When compression is disabled, "!save_page_use_compression()" always
>     returns true.  We used to try calling the xbzrle code, but after this
>     change we won't, and we shouldn't need to.
> 
> While at it, drop the xbzrle_enabled check in xbzrle_cache_zero_page()
> because with this change it's not needed anymore.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>

Yes, I think that's right - it took me a bit of thinking to check it.
The important thing is that once xbzrle is enabled we must always
take the zero pages into the cache to squash old data.
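
To spell that out as a timeline (an illustration, not verbatim code; only
the calls at the end are the ones touched by the patch):

/*
 * Round N:   page P is non-zero; xbzrle caches P's contents, which
 *            mirror what the destination now holds.
 * Guest zeroes P.
 * Round N+1: P is detected as a zero page and sent as such; the
 *            destination's copy of P becomes all zeros.
 *
 * Without the update below, the xbzrle cache would still hold P's old
 * non-zero contents, so a later dirty round would encode a delta
 * against stale data while the destination applies it to a zeroed
 * page.  Hence:
 */
XBZRLE_cache_lock();
xbzrle_cache_zero_page(rs, block->offset + offset);
XBZRLE_cache_unlock();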


Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  migration/ram.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index d8cf7cc901..fc59c052cf 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -741,10 +741,6 @@ void mig_throttle_counter_reset(void)
>   */
>  static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
>  {
> -    if (!rs->xbzrle_enabled) {
> -        return;
> -    }
> -
>      /* We don't care if this fails to allocate a new cache page
>       * as long as it updated an old one */
>      cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
> @@ -2301,7 +2297,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
>          /* Must let xbzrle know, otherwise a previous (now 0'd) cached
>           * page would be stale
>           */
> -        if (!save_page_use_compression(rs)) {
> +        if (rs->xbzrle_enabled) {
>              XBZRLE_cache_lock();
>              xbzrle_cache_zero_page(rs, block->offset + offset);
>              XBZRLE_cache_unlock();
> -- 
> 2.32.0
>

Patch

diff --git a/migration/ram.c b/migration/ram.c
index d8cf7cc901..fc59c052cf 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -741,10 +741,6 @@ void mig_throttle_counter_reset(void)
  */
 static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
 {
-    if (!rs->xbzrle_enabled) {
-        return;
-    }
-
     /* We don't care if this fails to allocate a new cache page
      * as long as it updated an old one */
     cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
@@ -2301,7 +2297,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
         /* Must let xbzrle know, otherwise a previous (now 0'd) cached
          * page would be stale
          */
-        if (!save_page_use_compression(rs)) {
+        if (rs->xbzrle_enabled) {
             XBZRLE_cache_lock();
             xbzrle_cache_zero_page(rs, block->offset + offset);
             XBZRLE_cache_unlock();