Message ID: 1496828798-27548-8-git-send-email-a.perevalov@samsung.com
State: New
Alexey Perevalov <a.perevalov@samsung.com> wrote: > +static unsigned long get_copiedmap_size(RAMBlock *rb) > +{ > + unsigned long pages; > + pages = rb->max_length >> find_first_bit((unsigned long *)&rb->page_size, > + sizeof(rb->page_size)); > + return pages; Are you sure that you want this and not: pages = rb->max_length >> TARGET_PAGE_BITS? Otherwise, on some architectures/configurations you can end up with a bitmap size that differs from the migration bitmap size.
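Juan's objection can be made concrete: find_first_bit() here is handed a byte count (sizeof(rb->page_size)) where it expects a bit count, and reading the 8-byte page_size field as an unsigned long bitmap only happens to yield the page shift on little-endian hosts. A portable way to derive the shift from a power-of-two page size is to count trailing zero bits; the helper below is a self-contained sketch with a hypothetical name, not QEMU code:

```c
#include <stdint.h>

/* Hypothetical helper, not QEMU API: derive the page shift from a
 * power-of-two page size by counting trailing zero bits. Unlike
 * reinterpreting the field as a little-endian bitmap, this is
 * endian-independent. */
static int page_shift_of(uint64_t page_size)
{
    /* 4 KiB -> 12, 2 MiB -> 21 */
    return __builtin_ctzll(page_size);
}
```

Using a fixed TARGET_PAGE_BITS shift instead, as Juan suggests, would make the bitmap the same size as the other migration bitmaps, at the cost of a denser bitmap for hugepage-backed blocks.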
On 06/07/2017 12:46 PM, Alexey Perevalov wrote: > This patch adds ability to track down already copied > pages, it's necessary for calculation vCPU block time in > postcopy migration feature, maybe for restore after > postcopy migration failure. > Also it's necessary to solve shared memory issue in > postcopy livemigration. Information about copied pages > will be transferred to the software virtual bridge > (e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for > already copied pages. fallocate syscall is required for > remmaped shared memory, due to remmaping itself blocks > ioctl(UFFDIO_COPY, ioctl in this case will end with EEXIT > error (struct page is exists after remmap). > > Bitmap is placed into RAMBlock as another postcopy/precopy > related bitmaps. Helpers are in migration/ram.c, due to > in this file is allowing to work with RAMBlock. > > Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> > --- > include/exec/ram_addr.h | 2 ++ > migration/ram.c | 36 ++++++++++++++++++++++++++++++++++++ > migration/ram.h | 4 ++++ > 3 files changed, 42 insertions(+) > > diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h > index 140efa8..6a3780b 100644 > --- a/include/exec/ram_addr.h > +++ b/include/exec/ram_addr.h > @@ -47,6 +47,8 @@ struct RAMBlock { > * of the postcopy phase > */ > unsigned long *unsentmap; > + /* bitmap of already copied pages in postcopy */ > + unsigned long *copiedmap; > }; > > static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset) > diff --git a/migration/ram.c b/migration/ram.c > index f387e9c..a7c0db4 100644 > --- a/migration/ram.c > +++ b/migration/ram.c > @@ -149,6 +149,25 @@ out: > return ret; > } > > +static unsigned long int get_copied_bit_offset(uint64_t addr, RAMBlock *rb) > +{ > + uint64_t addr_offset = addr - (uint64_t)(uintptr_t)rb->host; > + int page_shift = find_first_bit((unsigned long *)&rb->page_size, > + sizeof(rb->page_size)); > + > + return addr_offset >> page_shift; > +} > + > +int 
test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) > +{ > + return test_bit(get_copied_bit_offset(addr, rb), rb->copiedmap); > +} > + > +void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) > +{ > + set_bit_atomic(get_copied_bit_offset(addr, rb), rb->copiedmap); > +} > + > /* > * An outstanding page request, on the source, having been received > * and queued > @@ -1449,6 +1468,8 @@ static void ram_migration_cleanup(void *opaque) > block->bmap = NULL; > g_free(block->unsentmap); > block->unsentmap = NULL; looks like it's wrong place, because copiedmap is living on destination side, so maybe in qemu_ram_free > + g_free(block->copiedmap); > + block->copiedmap = NULL; > } > > XBZRLE_cache_lock(); > @@ -2517,6 +2538,14 @@ static int ram_load_postcopy(QEMUFile *f) > return ret; > } > > +static unsigned long get_copiedmap_size(RAMBlock *rb) > +{ > + unsigned long pages; > + pages = rb->max_length >> find_first_bit((unsigned long *)&rb->page_size, > + sizeof(rb->page_size)); > + return pages; > +} > + > static int ram_load(QEMUFile *f, void *opaque, int version_id) > { > int flags = 0, ret = 0; > @@ -2544,6 +2573,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id) > rcu_read_lock(); > > if (postcopy_running) { > + RAMBlock *rb; > + RAMBLOCK_FOREACH(rb) { > + /* need for destination, bitmap_new calls > + * g_try_malloc0 and this function > + * Attempts to allocate @n_bytes, initialized to 0'sh */ > + rb->copiedmap = bitmap_new(get_copiedmap_size(rb)); > + } > ret = ram_load_postcopy(f); > } > > diff --git a/migration/ram.h b/migration/ram.h > index c9563d1..1f32824 100644 > --- a/migration/ram.h > +++ b/migration/ram.h > @@ -67,4 +67,8 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length); > int ram_postcopy_incoming_init(MigrationIncomingState *mis); > > void ram_handle_compressed(void *host, uint8_t ch, uint64_t size); > + > +int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); > +void set_copiedmap_by_addr(uint64_t addr, 
RAMBlock *rb); > + > #endif
On 06/07/2017 03:56 PM, Juan Quintela wrote: > Alexey Perevalov <a.perevalov@samsung.com> wrote: > >> +static unsigned long get_copiedmap_size(RAMBlock *rb) >> +{ >> + unsigned long pages; >> + pages = rb->max_length >> find_first_bit((unsigned long *)&rb->page_size, >> + sizeof(rb->page_size)); >> + return pages; > Are you sure that you want this and not: > > pages = rb->max_length >> TARGET_PAGE_BITS? I just wanted to optimize the size of the bitmap. > > Otherwise, on some architectures/configurations you can end up with a bitmap size that differs from the migration bitmap size. > Looks like yes: that solution is for LE only, so I feel the lack of a conversion to LE here. > >
On Wed, Jun 07, 2017 at 05:13:00PM +0300, Alexey Perevalov wrote: > On 06/07/2017 12:46 PM, Alexey Perevalov wrote: > >This patch adds ability to track down already copied > >pages, it's necessary for calculation vCPU block time in > >postcopy migration feature, maybe for restore after > >postcopy migration failure. > >Also it's necessary to solve shared memory issue in > >postcopy livemigration. Information about copied pages > >will be transferred to the software virtual bridge > >(e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for > >already copied pages. fallocate syscall is required for > >remmaped shared memory, due to remmaping itself blocks > >ioctl(UFFDIO_COPY, ioctl in this case will end with EEXIT > >error (struct page is exists after remmap). > > > >Bitmap is placed into RAMBlock as another postcopy/precopy > >related bitmaps. Helpers are in migration/ram.c, due to > >in this file is allowing to work with RAMBlock. > > > >Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> > >--- > > include/exec/ram_addr.h | 2 ++ > > migration/ram.c | 36 ++++++++++++++++++++++++++++++++++++ > > migration/ram.h | 4 ++++ > > 3 files changed, 42 insertions(+) > > > >diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h > >index 140efa8..6a3780b 100644 > >--- a/include/exec/ram_addr.h > >+++ b/include/exec/ram_addr.h > >@@ -47,6 +47,8 @@ struct RAMBlock { > > * of the postcopy phase > > */ > > unsigned long *unsentmap; > >+ /* bitmap of already copied pages in postcopy */ > >+ unsigned long *copiedmap; > > }; > > static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset) > >diff --git a/migration/ram.c b/migration/ram.c > >index f387e9c..a7c0db4 100644 > >--- a/migration/ram.c > >+++ b/migration/ram.c > >@@ -149,6 +149,25 @@ out: > > return ret; > > } > >+static unsigned long int get_copied_bit_offset(uint64_t addr, RAMBlock *rb) > >+{ > >+ uint64_t addr_offset = addr - (uint64_t)(uintptr_t)rb->host; > >+ int page_shift = find_first_bit((unsigned 
long *)&rb->page_size, > >+ sizeof(rb->page_size)); > >+ > >+ return addr_offset >> page_shift; > >+} > >+ > >+int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) > >+{ > >+ return test_bit(get_copied_bit_offset(addr, rb), rb->copiedmap); > >+} > >+ > >+void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) > >+{ > >+ set_bit_atomic(get_copied_bit_offset(addr, rb), rb->copiedmap); > >+} > >+ > > /* > > * An outstanding page request, on the source, having been received > > * and queued > >@@ -1449,6 +1468,8 @@ static void ram_migration_cleanup(void *opaque) > > block->bmap = NULL; > > g_free(block->unsentmap); > > block->unsentmap = NULL; > looks like it's wrong place, because copiedmap is living > on destination side, so maybe in qemu_ram_free Yes, and... > >+ g_free(block->copiedmap); > >+ block->copiedmap = NULL; > > } > > XBZRLE_cache_lock(); > >@@ -2517,6 +2538,14 @@ static int ram_load_postcopy(QEMUFile *f) > > return ret; > > } > >+static unsigned long get_copiedmap_size(RAMBlock *rb) > >+{ > >+ unsigned long pages; > >+ pages = rb->max_length >> find_first_bit((unsigned long *)&rb->page_size, > >+ sizeof(rb->page_size)); > >+ return pages; > >+} > >+ > > static int ram_load(QEMUFile *f, void *opaque, int version_id) > > { > > int flags = 0, ret = 0; > >@@ -2544,6 +2573,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id) > > rcu_read_lock(); > > if (postcopy_running) { > >+ RAMBlock *rb; > >+ RAMBLOCK_FOREACH(rb) { > >+ /* need for destination, bitmap_new calls > >+ * g_try_malloc0 and this function > >+ * Attempts to allocate @n_bytes, initialized to 0'sh */ > >+ rb->copiedmap = bitmap_new(get_copiedmap_size(rb)); ... I'm not sure whether this is the right place to init the bitmap, since iiuc ram_load() can be entered multiple times? Also, I think we need the bitmap even before the first page we send during precopy, right? 
I would think loadvm_postcopy_handle_advise() somewhere proper: that is before the first page is sent, and also when we are there it means source wants to do postcopy finally. Thanks, > >+ } > > ret = ram_load_postcopy(f); > > } > >diff --git a/migration/ram.h b/migration/ram.h > >index c9563d1..1f32824 100644 > >--- a/migration/ram.h > >+++ b/migration/ram.h > >@@ -67,4 +67,8 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length); > > int ram_postcopy_incoming_init(MigrationIncomingState *mis); > > void ram_handle_compressed(void *host, uint8_t ch, uint64_t size); > >+ > >+int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); > >+void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); > >+ > > #endif > > > -- > Best regards, > Alexey Perevalov
On 06/09/2017 09:06 AM, Peter Xu wrote: > On Wed, Jun 07, 2017 at 05:13:00PM +0300, Alexey Perevalov wrote: >> On 06/07/2017 12:46 PM, Alexey Perevalov wrote: >>> This patch adds ability to track down already copied >>> pages, it's necessary for calculation vCPU block time in >>> postcopy migration feature, maybe for restore after >>> postcopy migration failure. >>> Also it's necessary to solve shared memory issue in >>> postcopy livemigration. Information about copied pages >>> will be transferred to the software virtual bridge >>> (e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for >>> already copied pages. fallocate syscall is required for >>> remmaped shared memory, due to remmaping itself blocks >>> ioctl(UFFDIO_COPY, ioctl in this case will end with EEXIT >>> error (struct page is exists after remmap). >>> >>> Bitmap is placed into RAMBlock as another postcopy/precopy >>> related bitmaps. Helpers are in migration/ram.c, due to >>> in this file is allowing to work with RAMBlock. >>> >>> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> >>> --- >>> include/exec/ram_addr.h | 2 ++ >>> migration/ram.c | 36 ++++++++++++++++++++++++++++++++++++ >>> migration/ram.h | 4 ++++ >>> 3 files changed, 42 insertions(+) >>> >>> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h >>> index 140efa8..6a3780b 100644 >>> --- a/include/exec/ram_addr.h >>> +++ b/include/exec/ram_addr.h >>> @@ -47,6 +47,8 @@ struct RAMBlock { >>> * of the postcopy phase >>> */ >>> unsigned long *unsentmap; >>> + /* bitmap of already copied pages in postcopy */ >>> + unsigned long *copiedmap; >>> }; >>> static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset) >>> diff --git a/migration/ram.c b/migration/ram.c >>> index f387e9c..a7c0db4 100644 >>> --- a/migration/ram.c >>> +++ b/migration/ram.c >>> @@ -149,6 +149,25 @@ out: >>> return ret; >>> } >>> +static unsigned long int get_copied_bit_offset(uint64_t addr, RAMBlock *rb) >>> +{ >>> + uint64_t addr_offset = addr - 
(uint64_t)(uintptr_t)rb->host; >>> + int page_shift = find_first_bit((unsigned long *)&rb->page_size, >>> + sizeof(rb->page_size)); >>> + >>> + return addr_offset >> page_shift; >>> +} >>> + >>> +int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) >>> +{ >>> + return test_bit(get_copied_bit_offset(addr, rb), rb->copiedmap); >>> +} >>> + >>> +void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) >>> +{ >>> + set_bit_atomic(get_copied_bit_offset(addr, rb), rb->copiedmap); >>> +} >>> + >>> /* >>> * An outstanding page request, on the source, having been received >>> * and queued >>> @@ -1449,6 +1468,8 @@ static void ram_migration_cleanup(void *opaque) >>> block->bmap = NULL; >>> g_free(block->unsentmap); >>> block->unsentmap = NULL; >> looks like it's wrong place, because copiedmap is living >> on destination side, so maybe in qemu_ram_free > Yes, and... > >>> + g_free(block->copiedmap); >>> + block->copiedmap = NULL; >>> } >>> XBZRLE_cache_lock(); >>> @@ -2517,6 +2538,14 @@ static int ram_load_postcopy(QEMUFile *f) >>> return ret; >>> } >>> +static unsigned long get_copiedmap_size(RAMBlock *rb) >>> +{ >>> + unsigned long pages; size in bits, but I passed bytes, but as I remember it was already mentioned. >>> + pages = rb->max_length >> find_first_bit((unsigned long *)&rb->page_size, >>> + sizeof(rb->page_size)); >>> + return pages; >>> +} >>> + >>> static int ram_load(QEMUFile *f, void *opaque, int version_id) >>> { >>> int flags = 0, ret = 0; >>> @@ -2544,6 +2573,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id) >>> rcu_read_lock(); >>> if (postcopy_running) { >>> + RAMBlock *rb; >>> + RAMBLOCK_FOREACH(rb) { >>> + /* need for destination, bitmap_new calls >>> + * g_try_malloc0 and this function >>> + * Attempts to allocate @n_bytes, initialized to 0'sh */ >>> + rb->copiedmap = bitmap_new(get_copiedmap_size(rb)); > ... I'm not sure whether this is the right place to init the bitmap, > since iiuc ram_load() can be entered multiple times? 
Yes, you're right: it's entered every time qemu_loadvm_section_part_end is called, and qemu_loadvm_section_part_start too, so I didn't take that into account. > > Also, I think we need the bitmap even before the first page we send > during precopy, right? > > I would think loadvm_postcopy_handle_advise() somewhere proper: that > is before the first page is sent, and also when we are there it means > source wants to do postcopy finally. I think you're right again :). loadvm_postcopy_handle_advise is called before ram_discard_range (so page faults will come after that) and before postcopy_place_page. > > Thanks, > >>> + } >>> ret = ram_load_postcopy(f); >>> } >>> diff --git a/migration/ram.h b/migration/ram.h >>> index c9563d1..1f32824 100644 >>> --- a/migration/ram.h >>> +++ b/migration/ram.h >>> @@ -67,4 +67,8 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length); >>> int ram_postcopy_incoming_init(MigrationIncomingState *mis); >>> void ram_handle_compressed(void *host, uint8_t ch, uint64_t size); >>> + >>> +int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); >>> +void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); >>> + >>> #endif >> >> -- >> Best regards, >> Alexey Perevalov
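Since ram_load() can be entered once per section, whichever init point is chosen, making the allocation idempotent avoids leaking or resetting a partially filled bitmap on re-entry. A minimal stand-alone sketch, with a plain calloc() stub standing in for QEMU's bitmap_new() and an illustrative struct in place of RAMBlock (the stub names are not QEMU API):

```c
#include <stdlib.h>

#define BITS_PER_ULONG (8 * sizeof(unsigned long))

/* Stand-in for QEMU's bitmap_new(): zero-filled, one bit per page. */
static unsigned long *bitmap_new_stub(unsigned long nbits)
{
    return calloc((nbits + BITS_PER_ULONG - 1) / BITS_PER_ULONG,
                  sizeof(unsigned long));
}

struct ramblock_stub {
    unsigned long npages;
    unsigned long *copiedmap;
};

/* Idempotent: a second entry into the load path leaves the existing
 * (possibly partially set) bitmap untouched instead of leaking it. */
static void ensure_copiedmap(struct ramblock_stub *rb)
{
    if (!rb->copiedmap) {
        rb->copiedmap = bitmap_new_stub(rb->npages);
    }
}
```

Moving the allocation into loadvm_postcopy_handle_advise(), as Peter suggests, makes the guard largely unnecessary, but it stays cheap insurance either way.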
* Alexey Perevalov (a.perevalov@samsung.com) wrote: > This patch adds ability to track down already copied > pages, it's necessary for calculation vCPU block time in > postcopy migration feature, maybe for restore after > postcopy migration failure. > Also it's necessary to solve shared memory issue in > postcopy livemigration. Information about copied pages > will be transferred to the software virtual bridge > (e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for > already copied pages. fallocate syscall is required for > remmaped shared memory, due to remmaping itself blocks > ioctl(UFFDIO_COPY, ioctl in this case will end with EEXIT > error (struct page is exists after remmap). > > Bitmap is placed into RAMBlock as another postcopy/precopy > related bitmaps. Helpers are in migration/ram.c, due to > in this file is allowing to work with RAMBlock. > > Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> > --- > include/exec/ram_addr.h | 2 ++ > migration/ram.c | 36 ++++++++++++++++++++++++++++++++++++ > migration/ram.h | 4 ++++ > 3 files changed, 42 insertions(+) > > diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h > index 140efa8..6a3780b 100644 > --- a/include/exec/ram_addr.h > +++ b/include/exec/ram_addr.h > @@ -47,6 +47,8 @@ struct RAMBlock { > * of the postcopy phase > */ > unsigned long *unsentmap; > + /* bitmap of already copied pages in postcopy */ > + unsigned long *copiedmap; > }; > > static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset) > diff --git a/migration/ram.c b/migration/ram.c > index f387e9c..a7c0db4 100644 > --- a/migration/ram.c > +++ b/migration/ram.c > @@ -149,6 +149,25 @@ out: > return ret; > } > > +static unsigned long int get_copied_bit_offset(uint64_t addr, RAMBlock *rb) > +{ > + uint64_t addr_offset = addr - (uint64_t)(uintptr_t)rb->host; > + int page_shift = find_first_bit((unsigned long *)&rb->page_size, > + sizeof(rb->page_size)); > + > + return addr_offset >> page_shift; > +} > + > +int 
test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) > +{ > + return test_bit(get_copied_bit_offset(addr, rb), rb->copiedmap); > +} > + > +void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) > +{ > + set_bit_atomic(get_copied_bit_offset(addr, rb), rb->copiedmap); > +} Hi, Can you please make the 'uint64_t addr' you pass in here be void *host_addr ; it's just since we have so many types of addresses it gets a bit confusing. > /* > * An outstanding page request, on the source, having been received > * and queued > @@ -1449,6 +1468,8 @@ static void ram_migration_cleanup(void *opaque) > block->bmap = NULL; > g_free(block->unsentmap); > block->unsentmap = NULL; > + g_free(block->copiedmap); > + block->copiedmap = NULL; > } > > XBZRLE_cache_lock(); > @@ -2517,6 +2538,14 @@ static int ram_load_postcopy(QEMUFile *f) > return ret; > } > > +static unsigned long get_copiedmap_size(RAMBlock *rb) > +{ > + unsigned long pages; > + pages = rb->max_length >> find_first_bit((unsigned long *)&rb->page_size, > + sizeof(rb->page_size)); > + return pages; > +} I think the bitmap size should be the same size for all bitmaps; so you shouldn't need a copiedmap specific function? > static int ram_load(QEMUFile *f, void *opaque, int version_id) > { > int flags = 0, ret = 0; > @@ -2544,6 +2573,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id) > rcu_read_lock(); > > if (postcopy_running) { > + RAMBlock *rb; > + RAMBLOCK_FOREACH(rb) { > + /* need for destination, bitmap_new calls > + * g_try_malloc0 and this function > + * Attempts to allocate @n_bytes, initialized to 0'sh */ > + rb->copiedmap = bitmap_new(get_copiedmap_size(rb)); > + } Do you need to record the pages that have been received prior to postcopy starting (and discard entries when 'discard' messages are received?). 
Dave > ret = ram_load_postcopy(f); > } > > diff --git a/migration/ram.h b/migration/ram.h > index c9563d1..1f32824 100644 > --- a/migration/ram.h > +++ b/migration/ram.h > @@ -67,4 +67,8 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length); > int ram_postcopy_incoming_init(MigrationIncomingState *mis); > > void ram_handle_compressed(void *host, uint8_t ch, uint64_t size); > + > +int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); > +void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); > + > #endif > -- > 1.9.1 > -- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
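Dave's last question implies a follow-up obligation: if received pages are recorded from the precopy phase onward, the destination must clear the corresponding bits again when a discard message arrives for a range. A stand-alone sketch of that clearing step, using a plain loop in place of QEMU's bitmap_clear(); the name is illustrative:

```c
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Sketch only: on a discard for [start_page, start_page + npages),
 * mark those pages as not-copied again so a later postcopy fault on
 * them is handled correctly. */
static void clear_copied_range(unsigned long *map,
                               unsigned long start_page,
                               unsigned long npages)
{
    for (unsigned long i = start_page; i < start_page + npages; i++) {
        map[i / BITS_PER_LONG] &= ~(1UL << (i % BITS_PER_LONG));
    }
}
```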
On Wed, Jun 07, 2017 at 12:46:34PM +0300, Alexey Perevalov wrote: > This patch adds ability to track down already copied > pages, it's necessary for calculation vCPU block time in > postcopy migration feature, maybe for restore after > postcopy migration failure. > Also it's necessary to solve shared memory issue in > postcopy livemigration. Information about copied pages > will be transferred to the software virtual bridge > (e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for > already copied pages. fallocate syscall is required for > remmaped shared memory, due to remmaping itself blocks > ioctl(UFFDIO_COPY, ioctl in this case will end with EEXIT > error (struct page is exists after remmap). > > Bitmap is placed into RAMBlock as another postcopy/precopy > related bitmaps. Helpers are in migration/ram.c, due to > in this file is allowing to work with RAMBlock. > > Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> Hi, Alexey, Besides all the existing comments, I would suggest you do all the copied_map things in this single patch, so that it'll be easier for others to work upon your work. E.g., move the bit_set() operations here as well (currently it was in followup patches, and looks like that's not enough since we need to capture copied_map even for precopy phase), then this single patch can ideally be separated from the whole series (and then I can work upon it :-). Or, please just let me know if you want me to do this for you. I can post this as a standalone patch, with your s-o-b if you allow. 
Thanks, > --- > include/exec/ram_addr.h | 2 ++ > migration/ram.c | 36 ++++++++++++++++++++++++++++++++++++ > migration/ram.h | 4 ++++ > 3 files changed, 42 insertions(+) > > diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h > index 140efa8..6a3780b 100644 > --- a/include/exec/ram_addr.h > +++ b/include/exec/ram_addr.h > @@ -47,6 +47,8 @@ struct RAMBlock { > * of the postcopy phase > */ > unsigned long *unsentmap; > + /* bitmap of already copied pages in postcopy */ > + unsigned long *copiedmap; > }; > > static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset) > diff --git a/migration/ram.c b/migration/ram.c > index f387e9c..a7c0db4 100644 > --- a/migration/ram.c > +++ b/migration/ram.c > @@ -149,6 +149,25 @@ out: > return ret; > } > > +static unsigned long int get_copied_bit_offset(uint64_t addr, RAMBlock *rb) > +{ > + uint64_t addr_offset = addr - (uint64_t)(uintptr_t)rb->host; > + int page_shift = find_first_bit((unsigned long *)&rb->page_size, > + sizeof(rb->page_size)); > + > + return addr_offset >> page_shift; > +} > + > +int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) > +{ > + return test_bit(get_copied_bit_offset(addr, rb), rb->copiedmap); > +} > + > +void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) > +{ > + set_bit_atomic(get_copied_bit_offset(addr, rb), rb->copiedmap); > +} > + > /* > * An outstanding page request, on the source, having been received > * and queued > @@ -1449,6 +1468,8 @@ static void ram_migration_cleanup(void *opaque) > block->bmap = NULL; > g_free(block->unsentmap); > block->unsentmap = NULL; > + g_free(block->copiedmap); > + block->copiedmap = NULL; > } > > XBZRLE_cache_lock(); > @@ -2517,6 +2538,14 @@ static int ram_load_postcopy(QEMUFile *f) > return ret; > } > > +static unsigned long get_copiedmap_size(RAMBlock *rb) > +{ > + unsigned long pages; > + pages = rb->max_length >> find_first_bit((unsigned long *)&rb->page_size, > + sizeof(rb->page_size)); > + return pages; > +} > + > static int 
ram_load(QEMUFile *f, void *opaque, int version_id) > { > int flags = 0, ret = 0; > @@ -2544,6 +2573,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id) > rcu_read_lock(); > > if (postcopy_running) { > + RAMBlock *rb; > + RAMBLOCK_FOREACH(rb) { > + /* need for destination, bitmap_new calls > + * g_try_malloc0 and this function > + * Attempts to allocate @n_bytes, initialized to 0'sh */ > + rb->copiedmap = bitmap_new(get_copiedmap_size(rb)); > + } > ret = ram_load_postcopy(f); > } > > diff --git a/migration/ram.h b/migration/ram.h > index c9563d1..1f32824 100644 > --- a/migration/ram.h > +++ b/migration/ram.h > @@ -67,4 +67,8 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length); > int ram_postcopy_incoming_init(MigrationIncomingState *mis); > > void ram_handle_compressed(void *host, uint8_t ch, uint64_t size); > + > +int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); > +void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); > + > #endif > -- > 1.9.1 >
On 06/13/2017 08:59 AM, Peter Xu wrote: > On Wed, Jun 07, 2017 at 12:46:34PM +0300, Alexey Perevalov wrote: >> This patch adds ability to track down already copied >> pages, it's necessary for calculation vCPU block time in >> postcopy migration feature, maybe for restore after >> postcopy migration failure. >> Also it's necessary to solve shared memory issue in >> postcopy livemigration. Information about copied pages >> will be transferred to the software virtual bridge >> (e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for >> already copied pages. fallocate syscall is required for >> remmaped shared memory, due to remmaping itself blocks >> ioctl(UFFDIO_COPY, ioctl in this case will end with EEXIT >> error (struct page is exists after remmap). >> >> Bitmap is placed into RAMBlock as another postcopy/precopy >> related bitmaps. Helpers are in migration/ram.c, due to >> in this file is allowing to work with RAMBlock. >> >> Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> > Hi, Alexey, > > Besides all the existing comments, I would suggest you do all the > copied_map things in this single patch, so that it'll be easier for > others to work upon your work. E.g., move the bit_set() operations > here as well (currently it was in followup patches, and looks like > that's not enough since we need to capture copied_map even for precopy > phase), then this single patch can ideally be separated from the whole > series (and then I can work upon it :-). > > Or, please just let me know if you want me to do this for you. I can > post this as a standalone patch, with your s-o-b if you allow. Hello Peter, I'm working with this patch in another patch series too (it's about QEMU's shared memory and OVS-VSWITCHD, the vhost-user use case). So if you need it, I can resend this patch as a separate patch, and it will be convenient to base both my patch set and your patches on top of it.
> > Thanks, > >> --- >> include/exec/ram_addr.h | 2 ++ >> migration/ram.c | 36 ++++++++++++++++++++++++++++++++++++ >> migration/ram.h | 4 ++++ >> 3 files changed, 42 insertions(+) >> >> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h >> index 140efa8..6a3780b 100644 >> --- a/include/exec/ram_addr.h >> +++ b/include/exec/ram_addr.h >> @@ -47,6 +47,8 @@ struct RAMBlock { >> * of the postcopy phase >> */ >> unsigned long *unsentmap; >> + /* bitmap of already copied pages in postcopy */ >> + unsigned long *copiedmap; >> }; >> >> static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset) >> diff --git a/migration/ram.c b/migration/ram.c >> index f387e9c..a7c0db4 100644 >> --- a/migration/ram.c >> +++ b/migration/ram.c >> @@ -149,6 +149,25 @@ out: >> return ret; >> } >> >> +static unsigned long int get_copied_bit_offset(uint64_t addr, RAMBlock *rb) >> +{ >> + uint64_t addr_offset = addr - (uint64_t)(uintptr_t)rb->host; >> + int page_shift = find_first_bit((unsigned long *)&rb->page_size, >> + sizeof(rb->page_size)); >> + >> + return addr_offset >> page_shift; >> +} >> + >> +int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) >> +{ >> + return test_bit(get_copied_bit_offset(addr, rb), rb->copiedmap); >> +} >> + >> +void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb) >> +{ >> + set_bit_atomic(get_copied_bit_offset(addr, rb), rb->copiedmap); >> +} >> + >> /* >> * An outstanding page request, on the source, having been received >> * and queued >> @@ -1449,6 +1468,8 @@ static void ram_migration_cleanup(void *opaque) >> block->bmap = NULL; >> g_free(block->unsentmap); >> block->unsentmap = NULL; >> + g_free(block->copiedmap); >> + block->copiedmap = NULL; >> } >> >> XBZRLE_cache_lock(); >> @@ -2517,6 +2538,14 @@ static int ram_load_postcopy(QEMUFile *f) >> return ret; >> } >> >> +static unsigned long get_copiedmap_size(RAMBlock *rb) >> +{ >> + unsigned long pages; >> + pages = rb->max_length >> find_first_bit((unsigned long *)&rb->page_size, 
>> + sizeof(rb->page_size)); >> + return pages; >> +} >> + >> static int ram_load(QEMUFile *f, void *opaque, int version_id) >> { >> int flags = 0, ret = 0; >> @@ -2544,6 +2573,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id) >> rcu_read_lock(); >> >> if (postcopy_running) { >> + RAMBlock *rb; >> + RAMBLOCK_FOREACH(rb) { >> + /* need for destination, bitmap_new calls >> + * g_try_malloc0 and this function >> + * Attempts to allocate @n_bytes, initialized to 0'sh */ >> + rb->copiedmap = bitmap_new(get_copiedmap_size(rb)); >> + } >> ret = ram_load_postcopy(f); >> } >> >> diff --git a/migration/ram.h b/migration/ram.h >> index c9563d1..1f32824 100644 >> --- a/migration/ram.h >> +++ b/migration/ram.h >> @@ -67,4 +67,8 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length); >> int ram_postcopy_incoming_init(MigrationIncomingState *mis); >> >> void ram_handle_compressed(void *host, uint8_t ch, uint64_t size); >> + >> +int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); >> +void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb); >> + >> #endif >> -- >> 1.9.1 >>
On Tue, Jun 13, 2017 at 09:10:46AM +0300, Alexey Perevalov wrote: > On 06/13/2017 08:59 AM, Peter Xu wrote: > >On Wed, Jun 07, 2017 at 12:46:34PM +0300, Alexey Perevalov wrote: > >>This patch adds ability to track down already copied > >>pages, it's necessary for calculation vCPU block time in > >>postcopy migration feature, maybe for restore after > >>postcopy migration failure. > >>Also it's necessary to solve shared memory issue in > >>postcopy livemigration. Information about copied pages > >>will be transferred to the software virtual bridge > >>(e.g. OVS-VSWITCHD), to avoid fallocate (unmap) for > >>already copied pages. fallocate syscall is required for > >>remmaped shared memory, due to remmaping itself blocks > >>ioctl(UFFDIO_COPY, ioctl in this case will end with EEXIT > >>error (struct page is exists after remmap). > >> > >>Bitmap is placed into RAMBlock as another postcopy/precopy > >>related bitmaps. Helpers are in migration/ram.c, due to > >>in this file is allowing to work with RAMBlock. > >> > >>Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com> > >Hi, Alexey, > > > >Besides all the existing comments, I would suggest you do all the > >copied_map things in this single patch, so that it'll be easier for > >others to work upon your work. E.g., move the bit_set() operations > >here as well (currently it was in followup patches, and looks like > >that's not enough since we need to capture copied_map even for precopy > >phase), then this single patch can ideally be separated from the whole > >series (and then I can work upon it :-). > > > >Or, please just let me know if you want me to do this for you. I can > >post this as a standalone patch, with your s-o-b if you allow. > > Hello Peter, > I'm working with this patch in another patch series too. > (it's about QEMU's shared memory and OVS-VSWITCHD, > vhost-user use case). > So if you need that I could resend this patch as separate patch. 
> And it will be convenient to base both my patch set and you patches > on top of it. That'll be great! Then please post this as standalone patch. Thanks,
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 140efa8..6a3780b 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -47,6 +47,8 @@ struct RAMBlock {
      * of the postcopy phase
      */
     unsigned long *unsentmap;
+    /* bitmap of already copied pages in postcopy */
+    unsigned long *copiedmap;
 };
 
 static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset)
diff --git a/migration/ram.c b/migration/ram.c
index f387e9c..a7c0db4 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -149,6 +149,25 @@ out:
     return ret;
 }
 
+static unsigned long int get_copied_bit_offset(uint64_t addr, RAMBlock *rb)
+{
+    uint64_t addr_offset = addr - (uint64_t)(uintptr_t)rb->host;
+    int page_shift = find_first_bit((unsigned long *)&rb->page_size,
+                                    sizeof(rb->page_size));
+
+    return addr_offset >> page_shift;
+}
+
+int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb)
+{
+    return test_bit(get_copied_bit_offset(addr, rb), rb->copiedmap);
+}
+
+void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb)
+{
+    set_bit_atomic(get_copied_bit_offset(addr, rb), rb->copiedmap);
+}
+
 /*
  * An outstanding page request, on the source, having been received
  * and queued
@@ -1449,6 +1468,8 @@ static void ram_migration_cleanup(void *opaque)
         block->bmap = NULL;
         g_free(block->unsentmap);
         block->unsentmap = NULL;
+        g_free(block->copiedmap);
+        block->copiedmap = NULL;
     }
 
     XBZRLE_cache_lock();
@@ -2517,6 +2538,14 @@ static int ram_load_postcopy(QEMUFile *f)
     return ret;
 }
 
+static unsigned long get_copiedmap_size(RAMBlock *rb)
+{
+    unsigned long pages;
+    pages = rb->max_length >> find_first_bit((unsigned long *)&rb->page_size,
+                                             sizeof(rb->page_size));
+    return pages;
+}
+
 static int ram_load(QEMUFile *f, void *opaque, int version_id)
 {
     int flags = 0, ret = 0;
@@ -2544,6 +2573,13 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
     rcu_read_lock();
 
     if (postcopy_running) {
+        RAMBlock *rb;
+        RAMBLOCK_FOREACH(rb) {
+            /* need for destination, bitmap_new calls
+             * g_try_malloc0 and this function
+             * Attempts to allocate @n_bytes, initialized to 0'sh */
+            rb->copiedmap = bitmap_new(get_copiedmap_size(rb));
+        }
         ret = ram_load_postcopy(f);
     }
 
diff --git a/migration/ram.h b/migration/ram.h
index c9563d1..1f32824 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -67,4 +67,8 @@ int ram_discard_range(const char *block_name, uint64_t start, size_t length);
 int ram_postcopy_incoming_init(MigrationIncomingState *mis);
 
 void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
+
+int test_copiedmap_by_addr(uint64_t addr, RAMBlock *rb);
+void set_copiedmap_by_addr(uint64_t addr, RAMBlock *rb);
+
 #endif
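For reference, the behaviour of the helpers in the patch can be modeled stand-alone as below. This is a sketch with hypothetical stub types, not QEMU's RAMBlock: it hard-codes a 4 KiB page shift and uses a plain (non-atomic) OR where the patch uses set_bit_atomic():

```c
#include <stdint.h>
#include <limits.h>

#define PAGE_SHIFT 12 /* sketch assumes 4 KiB pages */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Illustrative stand-in for RAMBlock: a base host address plus a
 * small fixed-size copiedmap covering 4 * BITS_PER_LONG pages. */
struct block_stub {
    uintptr_t host;             /* base host address of the block */
    unsigned long copiedmap[4];
};

/* Translate a host address into a page index relative to the base. */
static unsigned long copied_bit_offset(uintptr_t addr,
                                       const struct block_stub *b)
{
    return (addr - b->host) >> PAGE_SHIFT;
}

static void set_copied(uintptr_t addr, struct block_stub *b)
{
    unsigned long n = copied_bit_offset(addr, b);
    b->copiedmap[n / BITS_PER_LONG] |= 1UL << (n % BITS_PER_LONG);
}

static int test_copied(uintptr_t addr, const struct block_stub *b)
{
    unsigned long n = copied_bit_offset(addr, b);
    return (b->copiedmap[n / BITS_PER_LONG] >> (n % BITS_PER_LONG)) & 1;
}
```

This also illustrates why Dave asks for a void *host_addr parameter: the address is only meaningful relative to the block's host base, so passing an untyped integer invites mixing it up with guest or ram_addr_t addresses.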
This patch adds the ability to track already-copied pages. It's necessary for calculating vCPU blocking time in the postcopy migration feature, and possibly for restore after a postcopy migration failure. It's also needed to solve a shared-memory issue in postcopy live migration: information about copied pages will be transferred to the software virtual bridge (e.g. OVS-VSWITCHD) to avoid fallocate (unmap) for already-copied pages. The fallocate syscall is required for remapped shared memory, because the remapping itself blocks ioctl(UFFDIO_COPY); the ioctl would otherwise fail with EEXIST (the struct page still exists after the remap).

The bitmap is placed into RAMBlock like the other postcopy/precopy-related bitmaps. The helpers live in migration/ram.c, since that file is allowed to work with RAMBlock internals.

Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
---
 include/exec/ram_addr.h |  2 ++
 migration/ram.c         | 36 ++++++++++++++++++++++++++++++++++++
 migration/ram.h         |  4 ++++
 3 files changed, 42 insertions(+)