Message ID: 1434450415-11339-30-git-send-email-dgilbert@redhat.com
State: New
"Dr. David Alan Gilbert (git)" <dgilbert@redhat.com> wrote: > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com> > > The end of migration in postcopy is a bit different since some of > the things normally done at the end of migration have already been > done on the transition to postcopy. > > The end of migration code is getting a bit complciated now, so > move out into its own function. > > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> I think that I would splint the function and then add the postcopy code. BTW, it is a local function, we can use shorter names: migration_completion()? trace names specifically get hugggggggggge. > +static void migration_thread_end_of_iteration(MigrationState *s, > + int current_active_state, RunState? And it is not needed as parameter. > + bool *old_vm_running, > + int64_t *start_time) > +{ > + int ret; > + if (s->state == MIGRATION_STATUS_ACTIVE) { current_active_state = s->state; > + qemu_mutex_lock_iothread(); > + *start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME); > + qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER); > + *old_vm_running = runstate_is_running(); > + > + ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE); > + if (ret >= 0) { > + qemu_file_set_rate_limit(s->file, INT64_MAX); > + qemu_savevm_state_complete_precopy(s->file); > + } > + qemu_mutex_unlock_iothread(); > + > + if (ret < 0) { > + goto fail; > + } > + } else if (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE) { current_active_state = s->state; > + trace_migration_thread_end_of_iteration_postcopy_end(); > + > + qemu_savevm_state_complete_postcopy(s->file); > + trace_migration_thread_end_of_iteration_postcopy_end_after_complete(); > + } > + > + /* > + * If rp was opened we must clean up the thread before > + * cleaning everything else up (since if there are no failures > + * it will wait for the destination to send it's status in > + * a SHUT command). 
> + * Postcopy opens rp if enabled (even if it's not avtivated) > + */ > + if (migrate_postcopy_ram()) { > + int rp_error; > + trace_migration_thread_end_of_iteration_postcopy_end_before_rp(); > + rp_error = await_return_path_close_on_source(s); > + trace_migration_thread_end_of_iteration_postcopy_end_after_rp(rp_error); > + if (rp_error) { > + goto fail; > + } > + } > + > + if (qemu_file_get_error(s->file)) { > + trace_migration_thread_end_of_iteration_file_err(); > + goto fail; > + } > + > + migrate_set_state(s, current_active_state, MIGRATION_STATUS_COMPLETED); > + return; > + > +fail: > + migrate_set_state(s, current_active_state, MIGRATION_STATUS_FAILED); > +} > + > +/* > * Master migration thread on the source VM. > * It drives the migration and pumps the data down the outgoing channel. > */ > @@ -1233,31 +1294,11 @@ static void *migration_thread(void *opaque) > /* Just another iteration step */ > qemu_savevm_state_iterate(s->file); > } else { > - int ret; > - > - qemu_mutex_lock_iothread(); > - start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME); > - qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER); > - old_vm_running = runstate_is_running(); > - > - ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE); > - if (ret >= 0) { > - qemu_file_set_rate_limit(s->file, INT64_MAX); > - qemu_savevm_state_complete_precopy(s->file); > - } > - qemu_mutex_unlock_iothread(); > + trace_migration_thread_low_pending(pending_size); > > - if (ret < 0) { > - migrate_set_state(s, MIGRATION_STATUS_ACTIVE, > - MIGRATION_STATUS_FAILED); > - break; > - } > - > - if (!qemu_file_get_error(s->file)) { > - migrate_set_state(s, MIGRATION_STATUS_ACTIVE, > - MIGRATION_STATUS_COMPLETED); > - break; > - } > + migration_thread_end_of_iteration(s, current_active_type, > + &old_vm_running, &start_time); > + break; > } > } > > diff --git a/trace-events b/trace-events > index f096877..528d5a3 100644 > --- a/trace-events > +++ b/trace-events > @@ -1425,6 +1425,12 @@ migrate_send_rp_message(int 
msg_type, uint16_t len) "%d: len %d" > migration_thread_after_loop(void) "" > migration_thread_file_err(void) "" > migration_thread_setup_complete(void) "" > +migration_thread_low_pending(uint64_t pending) "%" PRIu64 > +migration_thread_end_of_iteration_file_err(void) "" > +migration_thread_end_of_iteration_postcopy_end(void) "" > +migration_thread_end_of_iteration_postcopy_end_after_complete(void) "" > +migration_thread_end_of_iteration_postcopy_end_before_rp(void) "" > +migration_thread_end_of_iteration_postcopy_end_after_rp(int rp_error) "%d" > open_return_path_on_source(void) "" > open_return_path_on_source_continue(void) "" > postcopy_start(void) ""
On (Tue) 16 Jun 2015 [11:26:42], Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
>
> The end of migration in postcopy is a bit different since some of
> the things normally done at the end of migration have already been
> done on the transition to postcopy.
>
> The end of migration code is getting a bit complicated now, so
> move it out into its own function.
>
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Amit Shah <amit.shah@redhat.com>

		Amit
On (Mon) 13 Jul 2015 [15:15:07], Juan Quintela wrote:
> "Dr. David Alan Gilbert (git)" <dgilbert@redhat.com> wrote:
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> >
> > The end of migration in postcopy is a bit different since some of
> > the things normally done at the end of migration have already been
> > done on the transition to postcopy.
> >
> > The end of migration code is getting a bit complicated now, so
> > move it out into its own function.
> >
> > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
> I think that I would split the function and then add the postcopy code.

Yeah, especially since this code was added / modified in the previous patch.

		Amit
* Juan Quintela (quintela@redhat.com) wrote:
> "Dr. David Alan Gilbert (git)" <dgilbert@redhat.com> wrote:
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> >
> > The end of migration in postcopy is a bit different since some of
> > the things normally done at the end of migration have already been
> > done on the transition to postcopy.
> >
> > The end of migration code is getting a bit complicated now, so
> > move it out into its own function.
> >
> > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
> I think that I would split the function and then add the postcopy code.

Done; I now have two patches:
    Split out end of migration code from migration_thread
    Postcopy: End of iteration

> BTW, it is a local function, we can use shorter names:
>
>     migration_completion()?
>
> The trace names specifically get huge.

Done.

> > +static void migration_thread_end_of_iteration(MigrationState *s,
> > +                                              int current_active_state,
>
> RunState?
> And it is not needed as a parameter.

No, it's not RunState; it's derived from s->state, which is still an int.
It's also not the current state, but the state we're expecting to be in,
i.e. one of MIGRATION_STATUS_ACTIVE or MIGRATION_STATUS_POSTCOPY_ACTIVE
(which is why it's current_*active*_state), and it's only used as the
parameter to migrate_set_state, in the same way the current code does:

    migrate_set_state(s, MIGRATION_STATUS_ACTIVE,
                      MIGRATION_STATUS_COMPLETED);

to ensure that any failure or cancel occurring at the same time isn't lost.
Dave

> > +                                              bool *old_vm_running,
> > +                                              int64_t *start_time)
> > +{
> > +    int ret;
> > +    if (s->state == MIGRATION_STATUS_ACTIVE) {
>
> current_active_state = s->state;
>
> > +        qemu_mutex_lock_iothread();
> > +        *start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
> > +        qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
> > +        *old_vm_running = runstate_is_running();
> > +
> > +        ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
> > +        if (ret >= 0) {
> > +            qemu_file_set_rate_limit(s->file, INT64_MAX);
> > +            qemu_savevm_state_complete_precopy(s->file);
> > +        }
> > +        qemu_mutex_unlock_iothread();
> > +
> > +        if (ret < 0) {
> > +            goto fail;
> > +        }
> > +    } else if (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE) {
>
> current_active_state = s->state;
>
> > +        trace_migration_thread_end_of_iteration_postcopy_end();
> > +
> > +        qemu_savevm_state_complete_postcopy(s->file);
> > +        trace_migration_thread_end_of_iteration_postcopy_end_after_complete();
> > +    }
> > +
> > +    /*
> > +     * If rp was opened we must clean up the thread before
> > +     * cleaning everything else up (since if there are no failures
> > +     * it will wait for the destination to send its status in
> > +     * a SHUT command).
> > +     * Postcopy opens rp if enabled (even if it's not activated)
> > +     */
> > +    if (migrate_postcopy_ram()) {
> > +        int rp_error;
> > +        trace_migration_thread_end_of_iteration_postcopy_end_before_rp();
> > +        rp_error = await_return_path_close_on_source(s);
> > +        trace_migration_thread_end_of_iteration_postcopy_end_after_rp(rp_error);
> > +        if (rp_error) {
> > +            goto fail;
> > +        }
> > +    }
> > +
> > +    if (qemu_file_get_error(s->file)) {
> > +        trace_migration_thread_end_of_iteration_file_err();
> > +        goto fail;
> > +    }
> > +
> > +    migrate_set_state(s, current_active_state, MIGRATION_STATUS_COMPLETED);
> > +    return;
> > +
> > +fail:
> > +    migrate_set_state(s, current_active_state, MIGRATION_STATUS_FAILED);
> > +}
> > +
> > +/*
> >   * Master migration thread on the source VM.
> >   * It drives the migration and pumps the data down the outgoing channel.
> >   */
> > @@ -1233,31 +1294,11 @@ static void *migration_thread(void *opaque)
> >              /* Just another iteration step */
> >              qemu_savevm_state_iterate(s->file);
> >          } else {
> > -            int ret;
> > -
> > -            qemu_mutex_lock_iothread();
> > -            start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
> > -            qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
> > -            old_vm_running = runstate_is_running();
> > -
> > -            ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
> > -            if (ret >= 0) {
> > -                qemu_file_set_rate_limit(s->file, INT64_MAX);
> > -                qemu_savevm_state_complete_precopy(s->file);
> > -            }
> > -            qemu_mutex_unlock_iothread();
> > +            trace_migration_thread_low_pending(pending_size);
> >
> > -            if (ret < 0) {
> > -                migrate_set_state(s, MIGRATION_STATUS_ACTIVE,
> > -                                  MIGRATION_STATUS_FAILED);
> > -                break;
> > -            }
> > -
> > -            if (!qemu_file_get_error(s->file)) {
> > -                migrate_set_state(s, MIGRATION_STATUS_ACTIVE,
> > -                                  MIGRATION_STATUS_COMPLETED);
> > -                break;
> > -            }
> > +            migration_thread_end_of_iteration(s, current_active_type,
> > +                                              &old_vm_running, &start_time);
> > +            break;
> >          }
> >      }
> >
> > diff --git a/trace-events b/trace-events
> > index f096877..528d5a3 100644
> > --- a/trace-events
> > +++ b/trace-events
> > @@ -1425,6 +1425,12 @@ migrate_send_rp_message(int msg_type, uint16_t len) "%d: len %d"
> >  migration_thread_after_loop(void) ""
> >  migration_thread_file_err(void) ""
> >  migration_thread_setup_complete(void) ""
> > +migration_thread_low_pending(uint64_t pending) "%" PRIu64
> > +migration_thread_end_of_iteration_file_err(void) ""
> > +migration_thread_end_of_iteration_postcopy_end(void) ""
> > +migration_thread_end_of_iteration_postcopy_end_after_complete(void) ""
> > +migration_thread_end_of_iteration_postcopy_end_before_rp(void) ""
> > +migration_thread_end_of_iteration_postcopy_end_after_rp(int rp_error) "%d"
> >  open_return_path_on_source(void) ""
> >  open_return_path_on_source_continue(void) ""
> >  postcopy_start(void) ""

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
diff --git a/migration/migration.c b/migration/migration.c
index 8d15f33..3e5a7c8 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1041,7 +1041,6 @@ static int open_return_path_on_source(MigrationState *ms)
     return 0;
 }
 
-__attribute__ (( unused )) /* Until later in patch series */
 /* Returns 0 if the RP was ok, otherwise there was an error on the RP */
 static int await_return_path_close_on_source(MigrationState *ms)
 {
@@ -1159,6 +1158,68 @@ fail:
 }
 
 /*
+ * Used by migration_thread when there's not much left pending.
+ * The caller 'breaks' the loop when this returns.
+ */
+static void migration_thread_end_of_iteration(MigrationState *s,
+                                              int current_active_state,
+                                              bool *old_vm_running,
+                                              int64_t *start_time)
+{
+    int ret;
+    if (s->state == MIGRATION_STATUS_ACTIVE) {
+        qemu_mutex_lock_iothread();
+        *start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+        qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
+        *old_vm_running = runstate_is_running();
+
+        ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
+        if (ret >= 0) {
+            qemu_file_set_rate_limit(s->file, INT64_MAX);
+            qemu_savevm_state_complete_precopy(s->file);
+        }
+        qemu_mutex_unlock_iothread();
+
+        if (ret < 0) {
+            goto fail;
+        }
+    } else if (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE) {
+        trace_migration_thread_end_of_iteration_postcopy_end();
+
+        qemu_savevm_state_complete_postcopy(s->file);
+        trace_migration_thread_end_of_iteration_postcopy_end_after_complete();
+    }
+
+    /*
+     * If rp was opened we must clean up the thread before
+     * cleaning everything else up (since if there are no failures
+     * it will wait for the destination to send its status in
+     * a SHUT command).
+     * Postcopy opens rp if enabled (even if it's not activated)
+     */
+    if (migrate_postcopy_ram()) {
+        int rp_error;
+        trace_migration_thread_end_of_iteration_postcopy_end_before_rp();
+        rp_error = await_return_path_close_on_source(s);
+        trace_migration_thread_end_of_iteration_postcopy_end_after_rp(rp_error);
+        if (rp_error) {
+            goto fail;
+        }
+    }
+
+    if (qemu_file_get_error(s->file)) {
+        trace_migration_thread_end_of_iteration_file_err();
+        goto fail;
+    }
+
+    migrate_set_state(s, current_active_state, MIGRATION_STATUS_COMPLETED);
+    return;
+
+fail:
+    migrate_set_state(s, current_active_state, MIGRATION_STATUS_FAILED);
+}
+
+/*
  * Master migration thread on the source VM.
  * It drives the migration and pumps the data down the outgoing channel.
  */
@@ -1233,31 +1294,11 @@ static void *migration_thread(void *opaque)
             /* Just another iteration step */
             qemu_savevm_state_iterate(s->file);
         } else {
-            int ret;
-
-            qemu_mutex_lock_iothread();
-            start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
-            qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
-            old_vm_running = runstate_is_running();
-
-            ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
-            if (ret >= 0) {
-                qemu_file_set_rate_limit(s->file, INT64_MAX);
-                qemu_savevm_state_complete_precopy(s->file);
-            }
-            qemu_mutex_unlock_iothread();
+            trace_migration_thread_low_pending(pending_size);
 
-            if (ret < 0) {
-                migrate_set_state(s, MIGRATION_STATUS_ACTIVE,
-                                  MIGRATION_STATUS_FAILED);
-                break;
-            }
-
-            if (!qemu_file_get_error(s->file)) {
-                migrate_set_state(s, MIGRATION_STATUS_ACTIVE,
-                                  MIGRATION_STATUS_COMPLETED);
-                break;
-            }
+            migration_thread_end_of_iteration(s, current_active_type,
+                                              &old_vm_running, &start_time);
+            break;
         }
     }
 
diff --git a/trace-events b/trace-events
index f096877..528d5a3 100644
--- a/trace-events
+++ b/trace-events
@@ -1425,6 +1425,12 @@ migrate_send_rp_message(int msg_type, uint16_t len) "%d: len %d"
 migration_thread_after_loop(void) ""
 migration_thread_file_err(void) ""
 migration_thread_setup_complete(void) ""
+migration_thread_low_pending(uint64_t pending) "%" PRIu64
+migration_thread_end_of_iteration_file_err(void) ""
+migration_thread_end_of_iteration_postcopy_end(void) ""
+migration_thread_end_of_iteration_postcopy_end_after_complete(void) ""
+migration_thread_end_of_iteration_postcopy_end_before_rp(void) ""
+migration_thread_end_of_iteration_postcopy_end_after_rp(int rp_error) "%d"
 open_return_path_on_source(void) ""
 open_return_path_on_source_continue(void) ""
 postcopy_start(void) ""