Message ID: 20190507031703.856-1-richardw.yang@linux.intel.com
State:      New
Series:     migratioin/ram: leave RAMBlock->bmap blank on allocating
* Wei Yang (richardw.yang@linux.intel.com) wrote:
> During migration, we would sync bitmap from ram_list.dirty_memory to
> RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
>
> Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
> means at the first round this sync is meaningless and is a duplicated
> work.
>
> Leaving RAMBlock->bmap blank on allocating would have a side effect on
> migration_dirty_pages, since it is calculated from the result of
> cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> set migration_dirty_pages to 0 in ram_state_init().
>
> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>

I've looked at this for a while, and I think it's OK, so

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Peter, Juan: Can you just see if there's any reason this would be bad,
but I think it's actually more sensible than what we have.

Dave

> ---
>  migration/ram.c | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 95c51109d2..417874707d 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3151,12 +3151,7 @@ static int ram_state_init(RAMState **rsp)
>      qemu_mutex_init(&(*rsp)->src_page_req_mutex);
>      QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
>
> -    /*
> -     * Count the total number of pages used by ram blocks not including any
> -     * gaps due to alignment or unplugs.
> -     */
> -    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
> -
> +    (*rsp)->migration_dirty_pages = 0;
>      ram_state_reset(*rsp);
>
>      return 0;
> @@ -3172,7 +3167,6 @@ static void ram_list_init_bitmaps(void)
>      RAMBLOCK_FOREACH_NOT_IGNORED(block) {
>          pages = block->max_length >> TARGET_PAGE_BITS;
>          block->bmap = bitmap_new(pages);
> -        bitmap_set(block->bmap, 0, pages);
>          if (migrate_postcopy_ram()) {
>              block->unsentmap = bitmap_new(pages);
>              bitmap_set(block->unsentmap, 0, pages);
> --
> 2.19.1
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
> * Wei Yang (richardw.yang@linux.intel.com) wrote:
> > During migration, we would sync bitmap from ram_list.dirty_memory to
> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
> >
> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
> > means at the first round this sync is meaningless and is a duplicated
> > work.
> >
> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
> > migration_dirty_pages, since it is calculated from the result of
> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> > set migration_dirty_pages to 0 in ram_state_init().
> >
> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>
> I've looked at this for a while, and I think it's OK, so
>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
> Peter, Juan: Can you just see if there's any reason this would be bad,
> but I think it's actually more sensible than what we have.

I really suspect it will work in all cases... Wei, have you done any
test (or better, thorough tests) with this change? My reasoning of
why we should need the bitmap all set is here:

https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html

Regards,

--
Peter Xu
On Sat, Jun 01, 2019 at 11:34:41AM +0800, Peter Xu wrote:
>On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
>> * Wei Yang (richardw.yang@linux.intel.com) wrote:
>> > During migration, we would sync bitmap from ram_list.dirty_memory to
>> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
>> >
>> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
>> > means at the first round this sync is meaningless and is a duplicated
>> > work.
>> >
>> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
>> > migration_dirty_pages, since it is calculated from the result of
>> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
>> > set migration_dirty_pages to 0 in ram_state_init().
>> >
>> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>>
>> I've looked at this for a while, and I think it's OK, so
>>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>>
>> Peter, Juan: Can you just see if there's any reason this would be bad,
>> but I think it's actually more sensible than what we have.
>
>I really suspect it will work in all cases... Wei, have you done any
>test (or better, thorough tests) with this change? My reasoning of
>why we should need the bitmap all set is here:
>

I have done some migration cases, like migrate a linux guest through tcp.

Other cases suggested to do?

>https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html
>
>Regards,
>
>--
>Peter Xu
On Mon, Jun 03, 2019 at 09:33:05AM +0800, Wei Yang wrote:
> On Sat, Jun 01, 2019 at 11:34:41AM +0800, Peter Xu wrote:
> >On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
> >> * Wei Yang (richardw.yang@linux.intel.com) wrote:
> >> > During migration, we would sync bitmap from ram_list.dirty_memory to
> >> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
> >> >
> >> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
> >> > means at the first round this sync is meaningless and is a duplicated
> >> > work.
> >> >
> >> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
> >> > migration_dirty_pages, since it is calculated from the result of
> >> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> >> > set migration_dirty_pages to 0 in ram_state_init().
> >> >
> >> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> >>
> >> I've looked at this for a while, and I think it's OK, so
> >>
> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >>
> >> Peter, Juan: Can you just see if there's any reason this would be bad,
> >> but I think it's actually more sensible than what we have.
> >
> >I really suspect it will work in all cases... Wei, have you done any
> >test (or better, thorough tests) with this change? My reasoning of
> >why we should need the bitmap all set is here:
> >
>
> I have done some migration cases, like migrate a linux guest through tcp.

When did you start the migration? Have you tried to migrate during
some workload?

>
> Other cases suggested to do?

Could you also help answer the question I raised below in the link?

Thanks,

> >https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html

--
Peter Xu
On Mon, Jun 03, 2019 at 10:35:27AM +0800, Peter Xu wrote:
>On Mon, Jun 03, 2019 at 09:33:05AM +0800, Wei Yang wrote:
>> On Sat, Jun 01, 2019 at 11:34:41AM +0800, Peter Xu wrote:
>> >On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
>> >> * Wei Yang (richardw.yang@linux.intel.com) wrote:
>> >> > During migration, we would sync bitmap from ram_list.dirty_memory to
>> >> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
>> >> >
>> >> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
>> >> > means at the first round this sync is meaningless and is a duplicated
>> >> > work.
>> >> >
>> >> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
>> >> > migration_dirty_pages, since it is calculated from the result of
>> >> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
>> >> > set migration_dirty_pages to 0 in ram_state_init().
>> >> >
>> >> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>> >>
>> >> I've looked at this for a while, and I think it's OK, so
>> >>
>> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> >>
>> >> Peter, Juan: Can you just see if there's any reason this would be bad,
>> >> but I think it's actually more sensible than what we have.
>> >
>> >I really suspect it will work in all cases... Wei, have you done any
>> >test (or better, thorough tests) with this change? My reasoning of
>> >why we should need the bitmap all set is here:
>> >
>>
>> I have done some migration cases, like migrate a linux guest through tcp.
>
>When did you start the migration? Have you tried to migrate during
>some workload?
>

I tried kernel build in guest.

>>
>> Other cases suggested to do?
>
>Could you also help answer the question I raised below in the link?
>
>Thanks,
>
>> >https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html
>

I took a look into this link, hope my understanding is correct.

Your concern is this thread/patch is based on one prerequisite -- dirty all
the bitmap at start.

My answer is we already did it in ram_block_add() for each RAMBlock. And then
the bitmap is synced by migration_bitmap_sync_precopy() from
ram_list.dirty_memory to RAMBlock.bmap.

>--
>Peter Xu
On Mon, Jun 03, 2019 at 11:36:00AM +0800, Wei Yang wrote:
> On Mon, Jun 03, 2019 at 10:35:27AM +0800, Peter Xu wrote:
> >On Mon, Jun 03, 2019 at 09:33:05AM +0800, Wei Yang wrote:
> >> On Sat, Jun 01, 2019 at 11:34:41AM +0800, Peter Xu wrote:
> >> >On Fri, May 31, 2019 at 05:43:37PM +0100, Dr. David Alan Gilbert wrote:
> >> >> * Wei Yang (richardw.yang@linux.intel.com) wrote:
> >> >> > During migration, we would sync bitmap from ram_list.dirty_memory to
> >> >> > RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().
> >> >> >
> >> >> > Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
> >> >> > means at the first round this sync is meaningless and is a duplicated
> >> >> > work.
> >> >> >
> >> >> > Leaving RAMBlock->bmap blank on allocating would have a side effect on
> >> >> > migration_dirty_pages, since it is calculated from the result of
> >> >> > cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
> >> >> > set migration_dirty_pages to 0 in ram_state_init().
> >> >> >
> >> >> > Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> >> >>
> >> >> I've looked at this for a while, and I think it's OK, so
> >> >>
> >> >> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> >> >>
> >> >> Peter, Juan: Can you just see if there's any reason this would be bad,
> >> >> but I think it's actually more sensible than what we have.
> >> >
> >> >I really suspect it will work in all cases... Wei, have you done any
> >> >test (or better, thorough tests) with this change? My reasoning of
> >> >why we should need the bitmap all set is here:
> >> >
> >>
> >> I have done some migration cases, like migrate a linux guest through tcp.
> >
> >When did you start the migration? Have you tried to migrate during
> >some workload?
> >
>
> I tried kernel build in guest.
>
> >>
> >> Other cases suggested to do?
> >
> >Could you also help answer the question I raised below in the link?
> >
> >Thanks,
> >
> >> >https://lists.gnu.org/archive/html/qemu-devel/2019-05/msg07361.html
> >
>
> I took a look into this link, hope my understanding is correct.
>
> Your concern is this thread/patch is based on one prerequisite -- dirty all
> the bitmap at start.
>
> My answer is we already did it in ram_block_add() for each RAMBlock. And then
> the bitmap is synced by migration_bitmap_sync_precopy() from
> ram_list.dirty_memory to RAMBlock.bmap.

Ah I see, thanks for the pointer.  Then I would agree it's fine.

I'm not an expert of TCG - I'm curious on why all those three dirty
bitmaps need to be set at the very beginning.  IIUC at least the VGA
bitmap should not require that (so IMHO we should be fine to have all
zeros with VGA bitmap for ramblocks, and we only set them when the
guest touches them).  Migration bitmap should be special somehow but I
don't know much on TCG/TLB part I'd confess so I can't say.  In other
words, if migration is the only one that requires this "all-1"
initialization then IMHO we may consider to remove the other part
rather than here in migration because that's what we'd better to be
sure with.

And even if you want to remove this, I still have two suggestions:

(1) proper comment here above bmap on the above fact that although
    bmap is not set here but it's actually set somewhere else because
    we'll sooner or later copy all 1s from the ramblock bitmap

(2) imho you can move "migration_dirty_pages = 0" into
    ram_list_init_bitmaps() too to let them be together

--
Peter Xu
On Mon, Jun 03, 2019 at 01:40:13PM +0800, Peter Xu wrote:
>
>Ah I see, thanks for the pointer.  Then I would agree it's fine.
>
>I'm not an expert of TCG - I'm curious on why all those three dirty
>bitmaps need to be set at the very beginning.  IIUC at least the VGA
>bitmap should not require that (so IMHO we should be fine to have all
>zeros with VGA bitmap for ramblocks, and we only set them when the
>guest touches them).  Migration bitmap should be special somehow but I
>don't know much on TCG/TLB part I'd confess so I can't say.  In other
>words, if migration is the only one that requires this "all-1"
>initialization then IMHO we may consider to remove the other part
>rather than here in migration because that's what we'd better to be
>sure with.

I am not sure about the background here, so I didn't make a change at this
place.

>
>And even if you want to remove this, I still have two suggestions:
>
>(1) proper comment here above bmap on the above fact that although
>    bmap is not set here but it's actually set somewhere else because
>    we'll sooner or later copy all 1s from the ramblock bitmap
>
>(2) imho you can move "migration_dirty_pages = 0" into
>    ram_list_init_bitmaps() too to let them be together
>

I will address these two comments and send v2.

Thanks.

>--
>Peter Xu

--
Wei Yang
Help you, Help me
On Mon, Jun 03, 2019 at 02:05:47PM +0800, Wei Yang wrote:
>On Mon, Jun 03, 2019 at 01:40:13PM +0800, Peter Xu wrote:
>>
>>Ah I see, thanks for the pointer.  Then I would agree it's fine.
>>
>>I'm not an expert of TCG - I'm curious on why all those three dirty
>>bitmaps need to be set at the very beginning.  IIUC at least the VGA
>>bitmap should not require that (so IMHO we should be fine to have all
>>zeros with VGA bitmap for ramblocks, and we only set them when the
>>guest touches them).  Migration bitmap should be special somehow but I
>>don't know much on TCG/TLB part I'd confess so I can't say.  In other
>>words, if migration is the only one that requires this "all-1"
>>initialization then IMHO we may consider to remove the other part
>>rather than here in migration because that's what we'd better to be
>>sure with.
>
>I am not sure about the background here, so I didn't make a change at this
>place.
>
>>
>>And even if you want to remove this, I still have two suggestions:
>>
>>(1) proper comment here above bmap on the above fact that although
>>    bmap is not set here but it's actually set somewhere else because
>>    we'll sooner or later copy all 1s from the ramblock bitmap
>>
>>(2) imho you can move "migration_dirty_pages = 0" into
>>    ram_list_init_bitmaps() too to let them be together
>>

I took a look into this one.

ram_list_init_bitmaps() sets up the bitmap for each RAMBlock, while
ram_state_init() sets up RAMState. Since migration_dirty_pages belongs to
RAMState, it may be more proper to leave it at the original place.

Do you feel good about this?

>
>I will address these two comments and send v2.
>
>Thanks.
>
>>--
>>Peter Xu
>
>--
>Wei Yang
>Help you, Help me
On Mon, Jun 03, 2019 at 02:10:34PM +0800, Wei Yang wrote:
> On Mon, Jun 03, 2019 at 02:05:47PM +0800, Wei Yang wrote:
> >On Mon, Jun 03, 2019 at 01:40:13PM +0800, Peter Xu wrote:
> >>
> >>Ah I see, thanks for the pointer.  Then I would agree it's fine.
> >>
> >>I'm not an expert of TCG - I'm curious on why all those three dirty
> >>bitmaps need to be set at the very beginning.  IIUC at least the VGA
> >>bitmap should not require that (so IMHO we should be fine to have all
> >>zeros with VGA bitmap for ramblocks, and we only set them when the
> >>guest touches them).  Migration bitmap should be special somehow but I
> >>don't know much on TCG/TLB part I'd confess so I can't say.  In other
> >>words, if migration is the only one that requires this "all-1"
> >>initialization then IMHO we may consider to remove the other part
> >>rather than here in migration because that's what we'd better to be
> >>sure with.
> >
> >I am not sure about the background here, so I didn't make a change at this
> >place.
> >
> >>
> >>And even if you want to remove this, I still have two suggestions:
> >>
> >>(1) proper comment here above bmap on the above fact that although
> >>    bmap is not set here but it's actually set somewhere else because
> >>    we'll sooner or later copy all 1s from the ramblock bitmap
> >>
> >>(2) imho you can move "migration_dirty_pages = 0" into
> >>    ram_list_init_bitmaps() too to let them be together
> >>
>
> I took a look into this one.
>
> ram_list_init_bitmaps() sets up the bitmap for each RAMBlock, while
> ram_state_init() sets up RAMState. Since migration_dirty_pages belongs to
> RAMState, it may be more proper to leave it at the original place.
>
> Do you feel good about this?

Yes it's ok to me.  Thanks,
diff --git a/migration/ram.c b/migration/ram.c
index 95c51109d2..417874707d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3151,12 +3151,7 @@ static int ram_state_init(RAMState **rsp)
     qemu_mutex_init(&(*rsp)->src_page_req_mutex);
     QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
 
-    /*
-     * Count the total number of pages used by ram blocks not including any
-     * gaps due to alignment or unplugs.
-     */
-    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
-
+    (*rsp)->migration_dirty_pages = 0;
     ram_state_reset(*rsp);
 
     return 0;
@@ -3172,7 +3167,6 @@ static void ram_list_init_bitmaps(void)
     RAMBLOCK_FOREACH_NOT_IGNORED(block) {
         pages = block->max_length >> TARGET_PAGE_BITS;
         block->bmap = bitmap_new(pages);
-        bitmap_set(block->bmap, 0, pages);
         if (migrate_postcopy_ram()) {
             block->unsentmap = bitmap_new(pages);
             bitmap_set(block->unsentmap, 0, pages);
During migration, we would sync bitmap from ram_list.dirty_memory to
RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap().

Since we set RAMBlock.bmap and ram_list.dirty_memory both to all 1, this
means at the first round this sync is meaningless and is a duplicated
work.

Leaving RAMBlock->bmap blank on allocating would have a side effect on
migration_dirty_pages, since it is calculated from the result of
cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
set migration_dirty_pages to 0 in ram_state_init().

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 migration/ram.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)