Message ID: 20190530092919.26059-5-peterx@redhat.com
State: New
Series: kvm/migration: support KVM_CLEAR_DIRTY_LOG
Peter Xu <peterx@redhat.com> wrote:
> Similar to 9460dee4b2 ("memory: do not touch code dirty bitmap unless
> TCG is enabled", 2015-06-05) but for the migration bitmap - we can
> skip the MIGRATION bitmap update if migration not enabled.
>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>

But if we ever decide to _not_ dirty all the bitmap at start (only used
pages) we need to fix this.
On Fri, May 31, 2019 at 03:01:29PM +0200, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > Similar to 9460dee4b2 ("memory: do not touch code dirty bitmap unless
> > TCG is enabled", 2015-06-05) but for the migration bitmap - we can
> > skip the MIGRATION bitmap update if migration not enabled.
> >
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
>
> Reviewed-by: Juan Quintela <quintela@redhat.com>
>
> But if we ever decide to _not_ dirty all the bitmap at start (only used
> pages) we need to fix this.

Right, but IMHO we can never avoid doing it, because KVM (and also the
per-ramblock dirty bitmaps) will only capture pages dirtied after log
sync has started.  For example, what if a page P is never touched after
log_sync?  Then it will never be set in the KVM dirty log, and the only
way to make sure we still migrate that page P (it could have been
touched before log_sync, so it might still contain valid data rather
than a zero page) is to dirty all the pages at the start of migration
(for now, in ram_list_init_bitmaps).

Thanks for the review!
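Peter's argument can be illustrated with a tiny standalone model (the
struct and helper names below are hypothetical, not QEMU code): if the
migration bitmap starts out all zeroes, a page written *before* the
first log sync is never reported by the dirty log and would silently be
skipped; starting the bitmap all ones guarantees every page is sent at
least once.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NPAGES 8

/* Hypothetical model of a migration dirty bitmap, not QEMU code. */
typedef struct {
    bool dirty[NPAGES];   /* pages still pending transmission */
    bool logging;         /* has dirty logging (log sync) started? */
} MigBitmap;

/* A guest write is only captured once dirty logging has started,
 * mirroring how the KVM dirty log behaves. */
static void guest_write(MigBitmap *bm, int page)
{
    if (bm->logging) {
        bm->dirty[page] = true;
    }
}

/* Start migration: optionally dirty the whole bitmap up front
 * (cf. ram_list_init_bitmaps), then begin dirty logging. */
static void migration_start(MigBitmap *bm, bool dirty_all)
{
    memset(bm->dirty, dirty_all, sizeof(bm->dirty));
    bm->logging = true;
}
```

With `dirty_all == false`, a page touched only before `migration_start`
stays clean forever and is never migrated; with `dirty_all == true` it
is sent in the first pass, which is exactly why the bitmap is fully
dirtied at migration start.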
diff --git a/include/exec/memory.h b/include/exec/memory.h
index e6140e8a04..f29300c54d 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -46,6 +46,8 @@
         OBJECT_GET_CLASS(IOMMUMemoryRegionClass, (obj), \
                          TYPE_IOMMU_MEMORY_REGION)

+extern bool global_dirty_log;
+
 typedef struct MemoryRegionOps MemoryRegionOps;
 typedef struct MemoryRegionMmio MemoryRegionMmio;

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 6fc49e5db5..79e70a96ee 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -348,8 +348,13 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
             if (bitmap[k]) {
                 unsigned long temp = leul_to_cpu(bitmap[k]);

-                atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][offset], temp);
                 atomic_or(&blocks[DIRTY_MEMORY_VGA][idx][offset], temp);
+
+                if (global_dirty_log) {
+                    atomic_or(&blocks[DIRTY_MEMORY_MIGRATION][idx][offset],
+                              temp);
+                }
+
                 if (tcg_enabled()) {
                     atomic_or(&blocks[DIRTY_MEMORY_CODE][idx][offset], temp);
                 }
@@ -366,6 +371,11 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
         xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
     } else {
         uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;
+
+        if (!global_dirty_log) {
+            clients &= ~(1 << DIRTY_MEMORY_MIGRATION);
+        }
+
         /*
          * bitmap-traveling is faster than memory-traveling (for addr...)
          * especially when most of the memory is not dirty.
diff --git a/memory.c b/memory.c
index 0920c105aa..cff0ea8f40 100644
--- a/memory.c
+++ b/memory.c
@@ -38,7 +38,7 @@
 static unsigned memory_region_transaction_depth;
 static bool memory_region_update_pending;
 static bool ioeventfd_update_pending;
-static bool global_dirty_log = false;
+bool global_dirty_log;

 static QTAILQ_HEAD(, MemoryListener) memory_listeners =
     QTAILQ_HEAD_INITIALIZER(memory_listeners);
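The client-mask gating in the second ram_addr.h hunk can be sketched in
isolation (a standalone model: the enum values mirror QEMU's dirty-memory
client indices, but the helper function is hypothetical): the write path
starts from the full client set, drops CODE unless TCG is active, and
with this patch additionally drops MIGRATION unless global dirty logging
is enabled.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Standalone mirror of QEMU's dirty-memory client indices. */
enum {
    DIRTY_MEMORY_VGA       = 0,
    DIRTY_MEMORY_CODE      = 1,
    DIRTY_MEMORY_MIGRATION = 2,
    DIRTY_MEMORY_NUM       = 3,
};

#define DIRTY_CLIENTS_ALL    ((1 << DIRTY_MEMORY_NUM) - 1)
#define DIRTY_CLIENTS_NOCODE (DIRTY_CLIENTS_ALL & ~(1 << DIRTY_MEMORY_CODE))

/* Hypothetical helper computing which dirty bitmaps a write must
 * update, following the logic of the patch: skip CODE unless TCG is
 * active, skip MIGRATION unless dirty logging is globally enabled. */
static uint8_t dirty_clients(bool tcg_enabled, bool global_dirty_log)
{
    uint8_t clients = tcg_enabled ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;

    if (!global_dirty_log) {
        clients &= ~(1 << DIRTY_MEMORY_MIGRATION);
    }
    return clients;
}
```

So on a KVM guest with no migration in flight, only the VGA bitmap is
updated on the fast path, which is the point of the optimization.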