diff mbox

[PULL,00/34] migration thread and queue

Message ID a618034d-5006-43b2-8a53-ae1a7107cacc@mailpro
State New
Headers show

Commit Message

Alexandre DERUMIER Dec. 27, 2012, 3:30 p.m. UTC
Hi,
I'm currently testing the new migration code with the latest qemu.git;

it's working pretty well (around 30ms of downtime with a standard workload).


But I have hit a problem with high memory workload VMs (playing an HD video, for example).

The target VM is paused after migration:
# info status
VM status: paused (internal-error)

(downtime is around 700ms)


I can reproduce it 100%

Regards,

Alexandre Derumier


----- Original Message ----- 

From: "Juan Quintela" <quintela@redhat.com> 
To: qemu-devel@nongnu.org 
Cc: anthony@codemonkey.ws 
Sent: Friday, December 21, 2012 20:41:03 
Subject: [Qemu-devel] [PULL 00/34] migration thread and queue 

Hi 

Changes since the last version: 
- inlined Paolo's fixes, attached here (they were very small, and only change locking in error cases) 
- I tested with autotest and saw no problems 

Please pull. 

Thanks, Juan. 




[20121220] 
Changes since yesterday: 
- Paolo acked the series 
- Rebased on top of today's git (the only conflicts were due to the header re-shuffle) 

Please pull. 

[20121219] 

This is my queue for the migration thread and associated patches. It 
integrates review comments & code from Paolo. This is the subset of 
both approaches that we agreed on; the rest of the patches need more 
review and are not here. 

Migrating an idle guest with upstream: 

(qemu) info migrate 
capabilities: xbzrle: off 
Migration status: completed 
total time: 34251 milliseconds 
downtime: 492 milliseconds 
transferred ram: 762458 kbytes 
remaining ram: 0 kbytes 
total ram: 14688768 kbytes 
duplicate: 3492606 pages 
normal: 189762 pages 
normal bytes: 759048 kbytes 

With this series of patches: 

(qemu) info migrate 
capabilities: xbzrle: off 
Migration status: completed 
total time: 30712 milliseconds 
downtime: 29 milliseconds 
transferred ram: 738857 kbytes 
remaining ram: 0 kbytes 
total ram: 14688768 kbytes 
duplicate: 3503423 pages 
normal: 176671 pages 
normal bytes: 706684 kbytes 

Notice the big difference in downtime. It is also visible inside the 
guest, with a program that just runs an idle loop and measures how 
"long" it takes to wait for 10ms. 

with upstream: 

[root@d1 ~]# ./timer 
delay of 452 ms 
delay of 114 ms 
delay of 136 ms 
delay of 135 ms 
delay of 136 ms 
delay of 131 ms 
delay of 134 ms 

With this series of patches, no wait ever reaches 100ms, so nothing is printed. 

Please pull. 

Thanks, Juan. 

The following changes since commit 27dd7730582be85c7d4f680f5f71146629809c86: 

Merge remote-tracking branch 'bonzini/header-dirs' into staging (2012-12-19 17:15:39 -0600) 

are available in the git repository at: 


git://repo.or.cz/qemu/quintela.git thread.next 

for you to fetch changes up to 381c08083929f50f4780ea272ea36f7e5899b3b6: 

migration: merge QEMUFileBuffered into MigrationState (2012-12-21 20:01:24 +0100) 

---------------------------------------------------------------- 
Juan Quintela (25): 
migration: include qemu-file.h 
migration-fd: remove duplicate include 
buffered_file: Move from using a timer to use a thread 
migration: make qemu_fopen_ops_buffered() return void 
migration: move migration thread init code to migrate_fd_put_ready 
migration: make writes blocking 
migration: remove unfreeze logic 
migration: just lock migrate_fd_put_ready 
buffered_file: Unfold the trick to restart generating migration data 
buffered_file: don't flush on put buffer 
buffered_file: unfold buffered_append in buffered_put_buffer 
savevm: New save live migration method: pending 
migration: move buffered_file.c code into migration.c 
migration: add XFER_LIMIT_RATIO 
migration: move migration_fd_put_ready() 
migration: Inline qemu_fopen_ops_buffered into migrate_fd_connect 
migration: move migration notifier 
ram: rename last_block to last_seen_block 
ram: Add last_sent_block 
memory: introduce memory_region_test_and_clear_dirty 
ram: Use memory_region_test_and_clear_dirty 
ram: optimize migration bitmap walking 
ram: account the amount of transferred ram better 
ram: refactor ram_save_block() return value 
migration: merge QEMUFileBuffered into MigrationState 

Paolo Bonzini (7): 
migration: fix migration_bitmap leak 
buffered_file: do not send more than s->bytes_xfer bytes per tick 
migration: remove double call to migrate_fd_close 
exec: change ramlist from MRU order to a 1-item cache 
exec: change RAM list to a TAILQ 
exec: sort the memory from biggest to smallest 
migration: fix qemu_get_fd for BufferedFile 

Umesh Deshpande (2): 
add a version number to ram_list 
protect the ramlist with a separate mutex 

Makefile.objs | 3 +- 
arch_init.c | 244 +++++++++++++------------- 
block-migration.c | 49 ++---- 
buffered_file.c | 268 ----------------------------- 
buffered_file.h | 22 --- 
dump.c | 8 +- 
exec.c | 128 +++++++++----- 
include/exec/cpu-all.h | 15 +- 
include/exec/memory.h | 16 ++ 
include/migration/migration.h | 13 +- 
include/migration/qemu-file.h | 5 - 
include/migration/vmstate.h | 1 + 
include/sysemu/sysemu.h | 1 + 
memory.c | 16 ++ 
memory_mapping.c | 4 +- 
migration-exec.c | 3 +- 
migration-fd.c | 4 +- 
migration-tcp.c | 3 +- 
migration-unix.c | 3 +- 
migration.c | 390 +++++++++++++++++++++++++++++++----------- 
savevm.c | 24 ++- 
target-i386/arch_dump.c | 2 +- 
22 files changed, 599 insertions(+), 623 deletions(-) 
delete mode 100644 buffered_file.c 
delete mode 100644 buffered_file.h

Comments

Paolo Bonzini Dec. 27, 2012, 3:44 p.m. UTC | #1
On 27/12/2012 16:30, Alexandre DERUMIER wrote:
> Hi,
> I'm currently testing new migration code with last qemu.git,
> 
> it's working pretty fine (around 30ms downtime with standard workload).
> 
> 
> But I have hit a problem with high memory workload VMs (playing an HD video, for example).
> 
> The target VM is paused after migration:
> # info status
> VM status: paused (internal-error)
> 
> (downtime is around 700ms)

1) What happened before these patches?  If something different, can you
bisect it?

2) Do you get anything on the console (stdout)?  See
kvm_handle_internal_error in kvm-all.c for what to expect.

Paolo

> I can reproduce it 100%
> 
> Regards,
> 
> Alexandre Derumier
> 
Alexandre DERUMIER Dec. 27, 2012, 3:51 p.m. UTC | #2
>>1) What happened before these patches? If something different, can you 
>>bisect it? 
Well, the qemu 1.3 stable release was working slowly (bigger downtime), but there was no crash.
I don't see any other changes between 1.3 and these patches.


>>2) Do you get anything on the console (stdout)? See 
>>kvm_handle_internal_error in kvm-all.c for what to expect. 
I'll have a look at this. 
Currently I start the target VM with --daemonize; do I need to remove this option to see stdout?


Paolo Bonzini Dec. 27, 2012, 4:08 p.m. UTC | #3
On 27/12/2012 16:51, Alexandre DERUMIER wrote:
>>> 1) What happened before these patches? If something different, can you 
>>> bisect it? 
> Well, the qemu 1.3 stable release was working slowly (bigger downtime), but there was no crash.
> I don't see any other changes between 1.3 and these patches.

There are 34 patches in this series. :)  Bisecting among them will already be a huge help.

>>> 2) Do you get anything on the console (stdout)? See 
>>> kvm_handle_internal_error in kvm-all.c for what to expect. 
> I'll have a look at this. 
> Currently I start the target VM with --daemonize; do I need to remove this option to see stdout?

Yes, please do that so that you can also bisect the problem.

BTW, what kernel version?  Does it reproduce with

    migrate exec:cat>foo.save

and then, when migration finishes, loading it with "-incoming 'exec:cat
foo.save'"?
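For reference, the exec: suggestion works because that transport simply pipes the migration stream through an arbitrary shell command (foo.save is Paolo's example path). A sketch of the round-trip shape, simulated with plain data instead of a real guest:

```shell
# Source monitor:       (qemu) migrate "exec:cat > /tmp/foo.save"
# Destination cmdline:  qemu-system-x86_64 ... -incoming "exec:cat /tmp/foo.save"
#
# The transport is just a pipe through "cat"; simulate it with a file:
printf 'fake-migration-stream' > /tmp/mig-state   # stand-in for guest state
cat /tmp/mig-state > /tmp/foo.save                # what the source side does
cat /tmp/foo.save                                 # what -incoming reads back
```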

Paolo
Alexandre DERUMIER Dec. 27, 2012, 4:32 p.m. UTC | #4
OK, I'll try to bisect it tomorrow and will do more tests.

I'll keep you posted!

Alexandre



Patch

diff --git a/arch_init.c b/arch_init.c
index 86f8544..ea75441 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -641,13 +641,13 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
         }
         i++;
     }
+    qemu_mutex_unlock_ramlist();
 
     if (ret < 0) {
         bytes_transferred += total_sent;
         return ret;
     }
 
-    qemu_mutex_unlock_ramlist();
     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
     total_sent += 8;
     bytes_transferred += total_sent;
@@ -657,9 +657,8 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 
 static int ram_save_complete(QEMUFile *f, void *opaque)
 {
-    migration_bitmap_sync();
-
     qemu_mutex_lock_ramlist();
+    migration_bitmap_sync();
 
     /* try transferring iterative blocks of memory */
 
diff --git a/migration.c b/migration.c
index c69e864..d6ec3e8 100644
--- a/migration.c
+++ b/migration.c
@@ -650,7 +650,7 @@ static int64_t buffered_set_rate_limit(void *opaque, int64_t new_rate)
         new_rate = SIZE_MAX;
     }
 
-    s->xfer_limit = new_rate / 10;
+    s->xfer_limit = new_rate / XFER_LIMIT_RATIO;
 
 out:
     return s->xfer_limit;
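For context on the first arch_init.c hunk: before the fix, ram_save_iterate() could take the error return while still holding the ramlist mutex, so the next ram_save_* call would block forever. A simplified sketch of the fixed shape (a hypothetical stand-in, not QEMU's actual code):

```c
#include <pthread.h>

static pthread_mutex_t ramlist_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical stand-in for ram_save_iterate(): unlock on every path
 * before returning, including the error path that used to leak the lock. */
int save_iterate_fixed(int ret)
{
    pthread_mutex_lock(&ramlist_lock);
    /* ... walk the RAM list and send dirty pages ... */
    pthread_mutex_unlock(&ramlist_lock);  /* moved above the error check */

    if (ret < 0) {
        return ret;  /* previously this returned with the lock still held */
    }
    return 0;
}
```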