diff mbox series

[v3,3/3] multifd: Send using asynchronous write on nocomp to send RAM pages.

Message ID 20210922222423.644444-4-leobras@redhat.com
State New
Series QIOChannel async_write & async_flush + MSG_ZEROCOPY + multifd

Commit Message

Leonardo Bras Sept. 22, 2021, 10:24 p.m. UTC
Change the multifd nocomp version to use asynchronous writes for RAM pages, and
benefit from MSG_ZEROCOPY when it is available.

The asynchronous flush happens on cleanup only, before destroying the QIOChannel.

This works fine for RAM migration because the RAM pages are not usually freed,
and there is no problem if the page content changes between async_send() and
the actual sending of the buffer: such a change dirties the page and causes
it to be re-sent on a later iteration anyway.

Signed-off-by: Leonardo Bras <leobras@redhat.com>
---
 migration/multifd.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Daniel P. Berrangé Sept. 24, 2021, 5:43 p.m. UTC | #1
On Wed, Sep 22, 2021 at 07:24:23PM -0300, Leonardo Bras wrote:
> Change the multifd nocomp version to use asynchronous writes for RAM pages, and
> benefit from MSG_ZEROCOPY when it is available.
> 
> The asynchronous flush happens on cleanup only, before destroying the QIOChannel.
> 
> This works fine for RAM migration because the RAM pages are not usually freed,
> and there is no problem if the page content changes between async_send() and
> the actual sending of the buffer: such a change dirties the page and causes
> it to be re-sent on a later iteration anyway.
> 
> Signed-off-by: Leonardo Bras <leobras@redhat.com>
> ---
>  migration/multifd.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 377da78f5b..d247207a0a 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -105,7 +105,7 @@ static int nocomp_send_prepare(MultiFDSendParams *p, uint32_t used,
>   */
>  static int nocomp_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
>  {
> -    return qio_channel_writev_all(p->c, p->pages->iov, used, errp);
> +    return qio_channel_async_writev_all(p->c, p->pages->iov, used, errp);
>  }

This needs to be made conditional so zerocopy is only used if requested
by the mgmt app, and it isn't going to work in all cases (e.g. TLS), so
silently enabling it is not good.

>  
>  /**
> @@ -546,6 +546,7 @@ void multifd_save_cleanup(void)
>          MultiFDSendParams *p = &multifd_send_state->params[i];
>          Error *local_err = NULL;
>  
> +        qio_channel_async_flush(p->c, NULL);
>          socket_send_channel_destroy(p->c);
>          p->c = NULL;
>          qemu_mutex_destroy(&p->mutex);

This isn't reliable because qio_channel_async_flush will return early
even if not everything is flushed.

Regards,
Daniel
Peter Xu Sept. 28, 2021, 10:48 p.m. UTC | #2
On Fri, Sep 24, 2021 at 06:43:49PM +0100, Daniel P. Berrangé wrote:
> > @@ -546,6 +546,7 @@ void multifd_save_cleanup(void)
> >          MultiFDSendParams *p = &multifd_send_state->params[i];
> >          Error *local_err = NULL;
> >  
> > +        qio_channel_async_flush(p->c, NULL);
> >          socket_send_channel_destroy(p->c);
> >          p->c = NULL;
> >          qemu_mutex_destroy(&p->mutex);
> 
> This isn't reliable because qio_channel_async_flush will return early
> even if not everything is flushed.

Right, though I think the problem is in patch 2, where we should make sure
flush() will only return once all data has been sent.

In the meantime we may also want to move it to before we send the device
state for both precopy and postcopy.  multifd_save_cleanup() is only called
once migration has completed and we're cleaning up; I'm afraid that's too
late, as the device state could then arrive and the dest VM start running
without the latest src VM memory.
Leonardo Bras Sept. 29, 2021, 7:44 p.m. UTC | #3
On Fri, Sep 24, 2021 at 2:44 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Wed, Sep 22, 2021 at 07:24:23PM -0300, Leonardo Bras wrote:
> > Change the multifd nocomp version to use asynchronous writes for RAM pages, and
> > benefit from MSG_ZEROCOPY when it is available.
> >
> > The asynchronous flush happens on cleanup only, before destroying the QIOChannel.
> >
> > This works fine for RAM migration because the RAM pages are not usually freed,
> > and there is no problem if the page content changes between async_send() and
> > the actual sending of the buffer: such a change dirties the page and causes
> > it to be re-sent on a later iteration anyway.
> >
> > Signed-off-by: Leonardo Bras <leobras@redhat.com>
> > ---
> >  migration/multifd.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index 377da78f5b..d247207a0a 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -105,7 +105,7 @@ static int nocomp_send_prepare(MultiFDSendParams *p, uint32_t used,
> >   */
> >  static int nocomp_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
> >  {
> > -    return qio_channel_writev_all(p->c, p->pages->iov, used, errp);
> > +    return qio_channel_async_writev_all(p->c, p->pages->iov, used, errp);
> >  }
>
> This needs to be made conditional so zerocopy is only used if requested
> by the mgmt app, and it isn't going to work in all cases (e.g. TLS), so
> silently enabling it is not good.

I agree, that seems a better approach.


>
> >
> >  /**
> > @@ -546,6 +546,7 @@ void multifd_save_cleanup(void)
> >          MultiFDSendParams *p = &multifd_send_state->params[i];
> >          Error *local_err = NULL;
> >
> > +        qio_channel_async_flush(p->c, NULL);
> >          socket_send_channel_destroy(p->c);
> >          p->c = NULL;
> >          qemu_mutex_destroy(&p->mutex);
>
> This isn't reliable because qio_channel_async_flush will return early
> even if not everything is flushed.

Yeah, I need to make sure qio_channel_async_flush only returns after
everything has been sent, or at least returns the number of outstanding requests.

>
> Regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
>

Best regards,
Leonardo
Leonardo Bras Sept. 29, 2021, 7:46 p.m. UTC | #4
On Tue, Sep 28, 2021 at 7:49 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Fri, Sep 24, 2021 at 06:43:49PM +0100, Daniel P. Berrangé wrote:
> > > @@ -546,6 +546,7 @@ void multifd_save_cleanup(void)
> > >          MultiFDSendParams *p = &multifd_send_state->params[i];
> > >          Error *local_err = NULL;
> > >
> > > +        qio_channel_async_flush(p->c, NULL);
> > >          socket_send_channel_destroy(p->c);
> > >          p->c = NULL;
> > >          qemu_mutex_destroy(&p->mutex);
> >
> > This isn't reliable because qio_channel_async_flush will return early
> > even if not everything is flushed.
>
> Right, though I think the problem is in patch 2, where we should make sure
> flush() will only return once all data has been sent.
>
> In the meantime we may also want to move it to before we send the device
> state for both precopy and postcopy.  multifd_save_cleanup() is only called
> once migration has completed and we're cleaning up; I'm afraid that's too
> late, as the device state could then arrive and the dest VM start running
> without the latest src VM memory.

Thanks for that Peter!
I had some trouble sorting out when to flush, and I think I still missed this.

I will have that improved for v3.

>
> --
> Peter Xu
>

Best regards,
Leonardo

Patch

diff --git a/migration/multifd.c b/migration/multifd.c
index 377da78f5b..d247207a0a 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -105,7 +105,7 @@  static int nocomp_send_prepare(MultiFDSendParams *p, uint32_t used,
  */
 static int nocomp_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
 {
-    return qio_channel_writev_all(p->c, p->pages->iov, used, errp);
+    return qio_channel_async_writev_all(p->c, p->pages->iov, used, errp);
 }
 
 /**
@@ -546,6 +546,7 @@  void multifd_save_cleanup(void)
         MultiFDSendParams *p = &multifd_send_state->params[i];
         Error *local_err = NULL;
 
+        qio_channel_async_flush(p->c, NULL);
         socket_send_channel_destroy(p->c);
         p->c = NULL;
         qemu_mutex_destroy(&p->mutex);