
[19/23] userfaultfd: activate syscall

Message ID 20150910122431.GL17433@in.ibm.com
State New

Commit Message

Bharata B Rao Sept. 10, 2015, 12:24 p.m. UTC
(cc trimmed since this looks like an issue that is contained within QEMU)

On Tue, Sep 08, 2015 at 03:13:56PM +0100, Dr. David Alan Gilbert wrote:
> * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > On Tue, Sep 08, 2015 at 01:46:52PM +0100, Dr. David Alan Gilbert wrote:
> > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > > > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > > > In fact I had successfully done postcopy migration of sPAPR guest with
> > > > > > this setup.
> > > > > 
> > > > > Interesting - I'd not got that far myself on power; I was hitting a problem
> > > > > loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> > > > > 
> > > > > Did you have to make any changes to the qemu code to get that happy?
> > > > 
> > > > I should have mentioned that I tried only QEMU driven migration within
> > > > the same host using wp3-postcopy branch of your tree. I don't see the
> > > > above issue.
> > > > 
> > > > (qemu) info migrate
> > > > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on 
> > > > Migration status: completed
> > > > total time: 39432 milliseconds
> > > > downtime: 162 milliseconds
> > > > setup: 14 milliseconds
> > > > transferred ram: 1297209 kbytes
> > > > throughput: 270.72 mbps
> > > > remaining ram: 0 kbytes
> > > > total ram: 4194560 kbytes
> > > > duplicate: 734015 pages
> > > > skipped: 0 pages
> > > > normal: 318469 pages
> > > > normal bytes: 1273876 kbytes
> > > > dirty sync count: 4
> > > > 
> > > > I will try migration between different hosts soon and check.
> > > 
> > > I hit that on the same host; are you sure you've switched into postcopy mode;
> > > i.e. issued a migrate_start_postcopy before the end of migration?
> > 
> > Sorry I was following your discussion with Li in this thread
> > 
> > https://www.marc.info/?l=qemu-devel&m=143035620026744&w=4
> > 
> > and it wasn't obvious to me that anything apart from turning on the
> > x-postcopy-ram capability was required :(
> 
> OK.
> 
> > So I do see the problem now.
> > 
> > At the source
> > -------------
> > Error reading data from KVM HTAB fd: Bad file descriptor
> > Segmentation fault
> > 
> > At the target
> > -------------
> > htab_load() bad index 2113929216 (14336+0 entries) in htab stream (htab_shift=25)
> > qemu-system-ppc64: error while loading state section id 56(spapr/htab)
> > qemu-system-ppc64: postcopy_ram_listen_thread: loadvm failed: -22
> > qemu-system-ppc64: VQ 0 size 0x100 Guest index 0x0 inconsistent with Host index 0x1f: delta 0xffe1
> > qemu-system-ppc64: error while loading state for instance 0x0 of device 'pci@800000020000000:00.0/virtio-net'
> > *** Error in `./ppc64-softmmu/qemu-system-ppc64': corrupted double-linked list: 0x00000100241234a0 ***
> > ======= Backtrace: =========
> > /lib64/power8/libc.so.6Segmentation fault
> 
> Good - my current world has got rid of the segfaults/corruption in the cleanup on power - but those
> are only after it stumbled over the htab problem.
> 
> I don't know the innards of power/htab, so if you've got any idea of what
> upset it, I'd be happy for some pointers.
 
When migrate_start_postcopy is issued, the SaveStateEntry's save_live_iterate
call for HTAB arrives after save_live_complete. For HTAB, spapr->htab_fd is
closed when HTAB saving is completed in the save_live_complete handler, so a
save_live_iterate call that arrives after this point ends up accessing an
invalid fd, resulting in the migration failure we are seeing here.
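
The failure mode itself is just a use of the fd after it has been closed. A
tiny standalone reproducer of that pattern (not QEMU code; the names here are
made up, only to illustrate the ordering):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int fake_htab_fd = -1;

/* Plays the role of the save_live_complete handler: finish saving and
 * release the fd. */
static void save_complete(void)
{
    close(fake_htab_fd);
    fake_htab_fd = -1;
}

/* Plays the role of a save_live_iterate call that arrives afterwards. */
static void save_iterate(void)
{
    char buf[16] = "htab chunk";

    if (write(fake_htab_fd, buf, sizeof(buf)) < 0) {
        /* Prints "Bad file descriptor", the same errno as the source-side
         * error above. */
        printf("iterate after complete: %s\n", strerror(errno));
    }
}

int main(void)
{
    fake_htab_fd = open("/dev/null", O_WRONLY);
    save_complete();            /* fd is closed here */
    save_iterate();             /* late iterate call hits the closed fd */
    return 0;
}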

- With postcopy migration, is it expected to get a save_live_iterate
  call after save_live_complete? IIUC, save_live_complete signals the
  completion of saving. Is the save_live_iterate handler expected to
  handle this condition?

I am able to get past this failure and get migration to complete successfully
by the hack below, where I teach the save_live_iterate handler to ignore
requests that arrive after save_live_complete has been called.


(qemu) migrate_set_capability x-postcopy-ram on
(qemu) migrate -d tcp:localhost:4444
(qemu) migrate_start_postcopy
(qemu) info migrate
capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on 
Migration status: completed
total time: 3801 milliseconds
downtime: 147 milliseconds
setup: 17 milliseconds
transferred ram: 1091652 kbytes
throughput: 2365.71 mbps
remaining ram: 0 kbytes
total ram: 4194560 kbytes
duplicate: 781969 pages
skipped: 0 pages
normal: 267087 pages
normal bytes: 1068348 kbytes
dirty sync count: 2

Comments

Dr. David Alan Gilbert Sept. 11, 2015, 7:15 p.m. UTC | #1
* Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> (cc trimmed since this looks like an issue that is contained within QEMU)
> 
> On Tue, Sep 08, 2015 at 03:13:56PM +0100, Dr. David Alan Gilbert wrote:
> > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > On Tue, Sep 08, 2015 at 01:46:52PM +0100, Dr. David Alan Gilbert wrote:
> > > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > > On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > > > > > * Bharata B Rao (bharata@linux.vnet.ibm.com) wrote:
> > > > > > > In fact I had successfully done postcopy migration of sPAPR guest with
> > > > > > > this setup.
> > > > > > 
> > > > > > Interesting - I'd not got that far myself on power; I was hitting a problem
> > > > > > loading htab ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> > > > > > 
> > > > > > Did you have to make any changes to the qemu code to get that happy?
> > > > > 
> > > > > I should have mentioned that I tried only QEMU driven migration within
> > > > > the same host using wp3-postcopy branch of your tree. I don't see the
> > > > > above issue.
> > > > > 
> > > > > (qemu) info migrate
> > > > > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on 
> > > > > Migration status: completed
> > > > > total time: 39432 milliseconds
> > > > > downtime: 162 milliseconds
> > > > > setup: 14 milliseconds
> > > > > transferred ram: 1297209 kbytes
> > > > > throughput: 270.72 mbps
> > > > > remaining ram: 0 kbytes
> > > > > total ram: 4194560 kbytes
> > > > > duplicate: 734015 pages
> > > > > skipped: 0 pages
> > > > > normal: 318469 pages
> > > > > normal bytes: 1273876 kbytes
> > > > > dirty sync count: 4
> > > > > 
> > > > > I will try migration between different hosts soon and check.
> > > > 
> > > > I hit that on the same host; are you sure you've switched into postcopy mode;
> > > > i.e. issued a migrate_start_postcopy before the end of migration?
> > > 
> > > Sorry I was following your discussion with Li in this thread
> > > 
> > > https://www.marc.info/?l=qemu-devel&m=143035620026744&w=4
> > > 
> > > and it wasn't obvious to me that anything apart from turning on the
> > > x-postcopy-ram capability was required :(
> > 
> > OK.
> > 
> > > So I do see the problem now.
> > > 
> > > At the source
> > > -------------
> > > Error reading data from KVM HTAB fd: Bad file descriptor
> > > Segmentation fault
> > > 
> > > At the target
> > > -------------
> > > htab_load() bad index 2113929216 (14336+0 entries) in htab stream (htab_shift=25)
> > > qemu-system-ppc64: error while loading state section id 56(spapr/htab)
> > > qemu-system-ppc64: postcopy_ram_listen_thread: loadvm failed: -22
> > > qemu-system-ppc64: VQ 0 size 0x100 Guest index 0x0 inconsistent with Host index 0x1f: delta 0xffe1
> > > qemu-system-ppc64: error while loading state for instance 0x0 of device 'pci@800000020000000:00.0/virtio-net'
> > > *** Error in `./ppc64-softmmu/qemu-system-ppc64': corrupted double-linked list: 0x00000100241234a0 ***
> > > ======= Backtrace: =========
> > > /lib64/power8/libc.so.6Segmentation fault
> > 
> > Good - my current world has got rid of the segfaults/corruption in the cleanup on power - but those
> > are only after it stumbled over the htab problem.
> > 
> > I don't know the innards of power/htab, so if you've got any idea of what
> > upset it, I'd be happy for some pointers.
>  
> When migrate_start_postcopy is issued, the SaveStateEntry's save_live_iterate
> call for HTAB arrives after save_live_complete. For HTAB, spapr->htab_fd is
> closed when HTAB saving is completed in the save_live_complete handler, so a
> save_live_iterate call that arrives after this point ends up accessing an
> invalid fd, resulting in the migration failure we are seeing here.

Ah interesting.

> - With postcopy migration, is it expected to get a save_live_iterate
>   call after save_live_complete? IIUC, save_live_complete signals the
>   completion of saving. Is the save_live_iterate handler expected to
>   handle this condition?

OK, thanks for spotting that; Power's htab was the only other iterative
device running at the same time.
I'll change it to avoid calling save_live_iterate during postcopy on devices
that haven't declared a save_live_complete_postcopy handler.
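
Roughly, the core's iterate loop would just skip such devices once postcopy is
active. A standalone sketch of that rule (made-up types and names, not the
actual savevm.c code):

#include <stdbool.h>
#include <stdio.h>

typedef struct SaveHandler {
    const char *name;
    void (*save_live_iterate)(void);
    /* NULL means the device has no postcopy-aware completion handler. */
    void (*save_live_complete_postcopy)(void);
} SaveHandler;

static void ram_iterate(void)  { printf("iterating ram\n"); }
static void ram_postcopy(void) { printf("ram postcopy complete\n"); }
static void htab_iterate(void) { printf("iterating spapr/htab\n"); }

static void state_iterate(SaveHandler *h, int n, bool in_postcopy)
{
    for (int i = 0; i < n; i++) {
        if (!h[i].save_live_iterate) {
            continue;
        }
        /* Once in postcopy, a device without a postcopy completion hook has
         * already finished in save_live_complete; don't iterate it again. */
        if (in_postcopy && !h[i].save_live_complete_postcopy) {
            continue;
        }
        h[i].save_live_iterate();
    }
}

int main(void)
{
    SaveHandler handlers[] = {
        { "ram",        ram_iterate,  ram_postcopy },
        { "spapr/htab", htab_iterate, NULL         },
    };

    state_iterate(handlers, 2, true);   /* only "ram" is iterated */
    return 0;
}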

> 
> I am able to get past this failure and get migration to complete successfully
> by the hack below, where I teach the save_live_iterate handler to ignore
> requests that arrive after save_live_complete has been called.

Excellent; but as stated above, I think it's better if I fix this in the
migration core so that the individual devices don't need to change.

Thanks again,

Dave

> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 2f8155d..550e234 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1236,6 +1236,11 @@ static int htab_save_iterate(QEMUFile *f, void *opaque)
>              return rc;
>          }
>  
> +        if (spapr->htab_fd == -1) {
> +            rc = 1;
> +            goto out;
> +        }
> +
>          rc = kvmppc_save_htab(f, spapr->htab_fd,
>                                MAX_KVM_BUF_SIZE, MAX_ITERATION_NS);
>          if (rc < 0) {
> @@ -1247,6 +1252,7 @@ static int htab_save_iterate(QEMUFile *f, void *opaque)
>          rc = htab_save_later_pass(f, spapr, MAX_ITERATION_NS);
>      }
>  
> +out:
>      /* End marker */
>      qemu_put_be32(f, 0);
>      qemu_put_be16(f, 0);
> 
> (qemu) migrate_set_capability x-postcopy-ram on
> (qemu) migrate -d tcp:localhost:4444
> (qemu) migrate_start_postcopy
> (qemu) info migrate
> capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on 
> Migration status: completed
> total time: 3801 milliseconds
> downtime: 147 milliseconds
> setup: 17 milliseconds
> transferred ram: 1091652 kbytes
> throughput: 2365.71 mbps
> remaining ram: 0 kbytes
> total ram: 4194560 kbytes
> duplicate: 781969 pages
> skipped: 0 pages
> normal: 267087 pages
> normal bytes: 1068348 kbytes
> dirty sync count: 2
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Patch

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 2f8155d..550e234 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1236,6 +1236,11 @@ static int htab_save_iterate(QEMUFile *f, void *opaque)
             return rc;
         }
 
+        if (spapr->htab_fd == -1) {
+            rc = 1;
+            goto out;
+        }
+
         rc = kvmppc_save_htab(f, spapr->htab_fd,
                               MAX_KVM_BUF_SIZE, MAX_ITERATION_NS);
         if (rc < 0) {
@@ -1247,6 +1252,7 @@ static int htab_save_iterate(QEMUFile *f, void *opaque)
         rc = htab_save_later_pass(f, spapr, MAX_ITERATION_NS);
     }
 
+out:
     /* End marker */
     qemu_put_be32(f, 0);
     qemu_put_be16(f, 0);