
[v2,14/18] migration: Add documentation for fdset with multifd + file

Message ID 20240523190548.23977-15-farosas@suse.de
State New
Series migration/mapped-ram: Add direct-io support

Commit Message

Fabiano Rosas May 23, 2024, 7:05 p.m. UTC
With the last few changes to the fdset infrastructure, multifd can now
use an fdset when migrating to a file. This is useful when the
management layer wants control over the migration file.

By receiving the file descriptors directly, QEMU can delegate some
high-level operating system operations (such as mandatory access
control) to the management layer. The management layer might also want
to add its own headers before the migration stream.

Document the "file:/dev/fdset/#" syntax for multifd migration with
mapped-ram. The requirements for the fdset mechanism are as follows
(a usage sketch comes after the list):

- the fdset must contain two fds that are not duplicates of each
  other;

- if direct-io is to be used, exactly one of the fds must have the
  O_DIRECT flag set;

- the file must be opened with WRONLY on the migration source side;

- the file must be opened with RDONLY on the migration destination
  side.
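
For illustration only, a source-side QMP sequence could look like the
following (the fdset id, fd numbers and "opaque" labels are made up;
each add-fd call carries one descriptor as SCM_RIGHTS ancillary data
on the QMP UNIX socket):

  -> { "execute": "add-fd",
       "arguments": { "fdset-id": 1, "opaque": "buffered" } }
  <- { "return": { "fdset-id": 1, "fd": 42 } }
  -> { "execute": "add-fd",
       "arguments": { "fdset-id": 1, "opaque": "o_direct" } }
  <- { "return": { "fdset-id": 1, "fd": 43 } }
  -> { "execute": "migrate-set-capabilities",
       "arguments": { "capabilities": [
         { "capability": "multifd", "state": true },
         { "capability": "mapped-ram", "state": true } ] } }
  <- { "return": {} }
  -> { "execute": "migrate",
       "arguments": { "uri": "file:/dev/fdset/1" } }
  <- { "return": {} }

On the destination side, the management layer would add RDONLY
descriptors to an fdset in the same way and pass the same URI syntax
to migrate-incoming.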

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 docs/devel/migration/main.rst       | 24 +++++++++++++++++++-----
 docs/devel/migration/mapped-ram.rst |  6 +++++-
 2 files changed, 24 insertions(+), 6 deletions(-)

Comments

Peter Xu June 4, 2024, 8:46 p.m. UTC
On Thu, May 23, 2024 at 04:05:44PM -0300, Fabiano Rosas wrote:
[...]
> +For best performance enable the ``direct-io`` capability as well:

Parameter?

> +
> +    ``migrate_set_capability direct-io on``

migrate_set_parameters?

With that fixed:

Reviewed-by: Peter Xu <peterx@redhat.com>
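
For reference, the parameter form suggested above would presumably be,
in HMP:

    migrate_set_parameter direct-io on

or the QMP equivalent (assuming direct-io is a boolean migration
parameter):

    -> { "execute": "migrate-set-parameters",
         "arguments": { "direct-io": true } }
    <- { "return": {} }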

Patch

diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
index 495cdcb112..784c899dca 100644
--- a/docs/devel/migration/main.rst
+++ b/docs/devel/migration/main.rst
@@ -47,11 +47,25 @@  over any transport.
   QEMU interference. Note that QEMU does not flush cached file
   data/metadata at the end of migration.
 
-In addition, support is included for migration using RDMA, which
-transports the page data using ``RDMA``, where the hardware takes care of
-transporting the pages, and the load on the CPU is much lower.  While the
-internals of RDMA migration are a bit different, this isn't really visible
-outside the RAM migration code.
+  The file migration also supports using a file that has already been
+  opened. A set of file descriptors is passed to QEMU via an "fdset"
+  (see add-fd QMP command documentation). This method allows a
+  management application to have control over the migration file
+  opening operation. There are, however, strict requirements to this
+  interface if the multifd capability is enabled:
+
+    - the fdset must contain two file descriptors that are not
+      duplicates between themselves;
+    - if the direct-io capability is to be used, exactly one of the
+      file descriptors must have the O_DIRECT flag set;
+    - the file must be opened with WRONLY on the migration source side
+      and RDONLY on the migration destination side.
+
+- rdma migration: support is included for migration using RDMA, which
+  transports the page data using ``RDMA``, where the hardware takes
+  care of transporting the pages, and the load on the CPU is much
+  lower.  While the internals of RDMA migration are a bit different,
+  this isn't really visible outside the RAM migration code.
 
 All these migration protocols use the same infrastructure to
 save/restore state devices.  This infrastructure is shared with the
diff --git a/docs/devel/migration/mapped-ram.rst b/docs/devel/migration/mapped-ram.rst
index fa4cefd9fc..e6505511f0 100644
--- a/docs/devel/migration/mapped-ram.rst
+++ b/docs/devel/migration/mapped-ram.rst
@@ -16,7 +16,7 @@  location in the file, rather than constantly being added to a
 sequential stream. Having the pages at fixed offsets also allows the
 usage of O_DIRECT for save/restore of the migration stream as the
 pages are ensured to be written respecting O_DIRECT alignment
-restrictions (direct-io support not yet implemented).
+restrictions.
 
 Usage
 -----
@@ -35,6 +35,10 @@  Use a ``file:`` URL for migration:
 Mapped-ram migration is best done non-live, i.e. by stopping the VM on
 the source side before migrating.
 
+For best performance enable the ``direct-io`` capability as well:
+
+    ``migrate_set_capability direct-io on``
+
 Use-cases
 ---------