
[v6,1/3] multifd: Create property multifd-flush-after-each-section

Message ID 20230215180231.7644-2-quintela@redhat.com
State New
Series Eliminate multifd flush

Commit Message

Juan Quintela Feb. 15, 2023, 6:02 p.m. UTC
We used to flush all channels at the end of each RAM section
sent.  That is not needed, so prepare to flush only after a full
iteration through all the RAM.

The default value of the property is false, but we return "true" in
migrate_multifd_flush_after_each_section() until we implement the code
in the following patches.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

---

Rename each-iteration to after-each-section
Rename multifd-sync-after-each-section to
       multifd-flush-after-each-section
---
 qapi/migration.json   | 21 ++++++++++++++++++++-
 migration/migration.h |  1 +
 hw/core/machine.c     |  1 +
 migration/migration.c | 17 +++++++++++++++--
 4 files changed, 37 insertions(+), 3 deletions(-)

Comments

Peter Xu Feb. 15, 2023, 7:59 p.m. UTC | #1
On Wed, Feb 15, 2023 at 07:02:29PM +0100, Juan Quintela wrote:
> We used to flush all channels at the end of each RAM section
> sent.  That is not needed, so preparing to only flush after a full
> iteration through all the RAM.
> 
> Default value of the property is false.  But we return "true" in
> migrate_multifd_flush_after_each_section() until we implement the code
> in following patches.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

This line can be dropped, after (I assume) git commit helped to add the
other one below. :)

Normally that's also why I put R-bs before my SoB: I should have two
SoBs, but I merge them into the last one; git is happy with that too.

> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Acked-by: Peter Xu <peterx@redhat.com>

But some nitpicks below (I'll leave those to you to decide whether to
rework or keep them as is..).

>
> ---
> 
> Rename each-iteration to after-each-section
> Rename multifd-sync-after-each-section to
>        multifd-flush-after-each-section
> ---
>  qapi/migration.json   | 21 ++++++++++++++++++++-
>  migration/migration.h |  1 +
>  hw/core/machine.c     |  1 +
>  migration/migration.c | 17 +++++++++++++++--
>  4 files changed, 37 insertions(+), 3 deletions(-)
> 
> diff --git a/qapi/migration.json b/qapi/migration.json
> index c84fa10e86..3afd81174d 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -478,6 +478,24 @@
>  #                    should not affect the correctness of postcopy migration.
>  #                    (since 7.1)
>  #
> +# @multifd-flush-after-each-section: flush every channel after each
> +#                                    section sent.  This assures that
> +#                                    we can't mix pages from one
> +#                                    iteration through ram pages with
> +#                                    pages for the following
> +#                                    iteration.  We really only need
> +#                                    to do this flush after we have go
> +#                                    through all the dirty pages.
> +#                                    For historical reasons, we do
> +#                                    that after each section.  This is
> +#                                    suboptimal (we flush too many
> +#                                    times).
> +#                                    Default value is false.
> +#                                    Setting this capability has no
> +#                                    effect until the patch that
> +#                                    removes this comment.
> +#                                    (since 8.0)

IMHO the core of this new "cap" is the new RAM_SAVE_FLAG_MULTIFD_FLUSH bit
in the stream protocol, but it's not referenced here.  I would suggest
simplifying the content but highlighting the core change:

 @multifd-lazy-flush:  When enabled, multifd will only do sync flush after
                       each whole round of bitmap scan.  Otherwise it'll be
                       done per RAM save iteration (which happens with a much
                       higher frequency).

                       Please consider enable this as long as possible, and
                       keep this off only if any of the src/dst QEMU binary
                       doesn't support it.

                       This capability is bound to the new RAM save flag
                       RAM_SAVE_FLAG_MULTIFD_FLUSH, the new flag will only
                       be used and recognized when this feature bit set.

I know you dislike multifd-lazy-flush, but that's still the best I can come
up with when writing this (yeah, I still like it :-p); please bear with me
and take whatever you think is best.

> +#
>  # Features:
>  # @unstable: Members @x-colo and @x-ignore-shared are experimental.
>  #
> @@ -492,7 +510,8 @@
>             'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
>             { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
>             'validate-uuid', 'background-snapshot',
> -           'zero-copy-send', 'postcopy-preempt'] }
> +           'zero-copy-send', 'postcopy-preempt',
> +           'multifd-flush-after-each-section'] }
>  
>  ##
>  # @MigrationCapabilityStatus:
> diff --git a/migration/migration.h b/migration/migration.h
> index 2da2f8a164..7f0f4260ba 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -424,6 +424,7 @@ int migrate_multifd_channels(void);
>  MultiFDCompression migrate_multifd_compression(void);
>  int migrate_multifd_zlib_level(void);
>  int migrate_multifd_zstd_level(void);
> +bool migrate_multifd_flush_after_each_section(void);
>  
>  #ifdef CONFIG_LINUX
>  bool migrate_use_zero_copy_send(void);
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index f73fc4c45c..602e775f34 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -54,6 +54,7 @@ const size_t hw_compat_7_1_len = G_N_ELEMENTS(hw_compat_7_1);
>  GlobalProperty hw_compat_7_0[] = {
>      { "arm-gicv3-common", "force-8-bit-prio", "on" },
>      { "nvme-ns", "eui64-default", "on"},
> +    { "migration", "multifd-flush-after-each-section", "on"},

[same note to IRC: need to revert if rename, but otherwise don't bother]

Thanks,

>  };
>  const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
>  
> diff --git a/migration/migration.c b/migration/migration.c
> index 90fca70cb7..cfba0da005 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -167,7 +167,8 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
>      MIGRATION_CAPABILITY_XBZRLE,
>      MIGRATION_CAPABILITY_X_COLO,
>      MIGRATION_CAPABILITY_VALIDATE_UUID,
> -    MIGRATION_CAPABILITY_ZERO_COPY_SEND);
> +    MIGRATION_CAPABILITY_ZERO_COPY_SEND,
> +    MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION);
>  
>  /* When we add fault tolerance, we could have several
>     migrations at once.  For now we don't need to add
> @@ -2701,6 +2702,17 @@ bool migrate_use_multifd(void)
>      return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
>  }
>  
> +bool migrate_multifd_flush_after_each_section(void)
> +{
> +    MigrationState *s = migrate_get_current();
> +
> +    /*
> +     * Until the patch that remove this comment, we always return that
> +     * the capability is enabled.
> +     */
> +    return true || s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION];

(I'd rather not care about what happens if someone applies only this patch
 and not the latter two by dropping "true ||" directly here, but again, not
 a huge deal)

> +}
> +
>  bool migrate_pause_before_switchover(void)
>  {
>      MigrationState *s;
> @@ -4535,7 +4547,8 @@ static Property migration_properties[] = {
>      DEFINE_PROP_MIG_CAP("x-zero-copy-send",
>              MIGRATION_CAPABILITY_ZERO_COPY_SEND),
>  #endif
> -
> +    DEFINE_PROP_MIG_CAP("multifd-flush-after-each-section",
> +                        MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION),
>      DEFINE_PROP_END_OF_LIST(),
>  };
>  
> -- 
> 2.39.1
> 
>
Juan Quintela Feb. 15, 2023, 8:13 p.m. UTC | #2
Peter Xu <peterx@redhat.com> wrote:
> On Wed, Feb 15, 2023 at 07:02:29PM +0100, Juan Quintela wrote:
>> We used to flush all channels at the end of each RAM section
>> sent.  That is not needed, so preparing to only flush after a full
>> iteration through all the RAM.
>> 
>> Default value of the property is false.  But we return "true" in
>> migrate_multifd_flush_after_each_section() until we implement the code
>> in following patches.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> This line can be dropped, after (I assume) git commit helped to add the
> other one below. :)

Grr, git and trailers are always so much fun.  Will try to fix them (again)

>
> Normally that's also why I put R-bs before my SoB because I should have two
> SoB but then I merge them into the last; git is happy with that too.
>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> Acked-by: Peter Xu <peterx@redhat.com>

Thanks.

> But some nitpicks below (I'll leave those to you to decide whether to
> rework or keep them as is..).
>
>>
>> ---
>> 
>> Rename each-iteration to after-each-section
>> Rename multifd-sync-after-each-section to
>>        multifd-flush-after-each-section
>> ---
>>  qapi/migration.json   | 21 ++++++++++++++++++++-
>>  migration/migration.h |  1 +
>>  hw/core/machine.c     |  1 +
>>  migration/migration.c | 17 +++++++++++++++--
>>  4 files changed, 37 insertions(+), 3 deletions(-)
>> 
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index c84fa10e86..3afd81174d 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -478,6 +478,24 @@
>>  #                    should not affect the correctness of postcopy migration.
>>  #                    (since 7.1)
>>  #
>> +# @multifd-flush-after-each-section: flush every channel after each
>> +#                                    section sent.  This assures that
>> +#                                    we can't mix pages from one
>> +#                                    iteration through ram pages with
>> +#                                    pages for the following
>> +#                                    iteration.  We really only need
>> +#                                    to do this flush after we have go
>> +#                                    through all the dirty pages.
>> +#                                    For historical reasons, we do
>> +#                                    that after each section.  This is
>> +#                                    suboptimal (we flush too many
>> +#                                    times).
>> +#                                    Default value is false.
>> +#                                    Setting this capability has no
>> +#                                    effect until the patch that
>> +#                                    removes this comment.
>> +#                                    (since 8.0)
>
> IMHO the core of this new "cap" is the new RAM_SAVE_FLAG_MULTIFD_FLUSH bit
> in the stream protocol, but it's not referenced here.  I would suggest
> simplify the content but highlight the core change:

Actually it is the other way around.  What this capability will do is
_NOT_ use the RAM_SAVE_FLAG_MULTIFD_FLUSH protocol.

>  @multifd-lazy-flush:  When enabled, multifd will only do sync flush after
>                        each whole round of bitmap scan.  Otherwise it'll be
>                        done per RAM save iteration (which happens with a much
>                        higher frequency).
>
>                        Please consider enable this as long as possible, and
>                        keep this off only if any of the src/dst QEMU binary
>                        doesn't support it.
>
>                        This capability is bound to the new RAM save flag
>                        RAM_SAVE_FLAG_MULTIFD_FLUSH, the new flag will only
>                        be used and recognized when this feature bit set.

The name is wrong: it would be multifd-non-lazy-flush, and I don't like
negatives.  The real name is:

multifd-I-messed-and-flush-too-many-times.


> I know you dislike multifd-lazy-flush, but that's still the best I can come
> up with when writting this (yeah I still like it :-p), please bare with me
> and take whatever you think the best.

Libvirt assumes that all capabilities are false unless explicitly enabled.
We want RAM_SAVE_FLAG_MULTIFD_FLUSH by default (in new machine types).

So, instead of

capability_use_new_way = true

we change that to

capability_use_old_way = true

and then the default value of false is what we want.

>> +bool migrate_multifd_flush_after_each_section(void)
>> +{
>> +    MigrationState *s = migrate_get_current();
>> +
>> +    /*
>> +     * Until the patch that remove this comment, we always return that
>> +     * the capability is enabled.
>> +     */
>> +    return true || s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION];
>
> (I'd rather not care about what happens if someone applies only this patch
>  not the latter two by dropping "true ||" directly here, but again, no a
>  huge deal)
>
>> +}

It doesn't matter at all.

Patch1: Introduces the code for the capability, but nothing else.
        It makes the capability always true (remember, the old code).
Patch2: Makes the old code conditional on this capability (note that we
        have forced it to be true).
Patch3: Introduces the code for the new capability, "protected" by the
        capability being false.  We can then remove the "trick" that we
        just had.

See discussion on v5 with Markus for more details.

Later, Juan.

>> +
>>  bool migrate_pause_before_switchover(void)
>>  {
>>      MigrationState *s;
>> @@ -4535,7 +4547,8 @@ static Property migration_properties[] = {
>>      DEFINE_PROP_MIG_CAP("x-zero-copy-send",
>>              MIGRATION_CAPABILITY_ZERO_COPY_SEND),
>>  #endif
>> -
>> +    DEFINE_PROP_MIG_CAP("multifd-flush-after-each-section",
>> +                        MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION),
>>      DEFINE_PROP_END_OF_LIST(),
>>  };
>>  
>> -- 
>> 2.39.1
>> 
>>
Markus Armbruster Feb. 16, 2023, 3:15 p.m. UTC | #3
Juan Quintela <quintela@redhat.com> writes:

> Peter Xu <peterx@redhat.com> wrote:
>> On Wed, Feb 15, 2023 at 07:02:29PM +0100, Juan Quintela wrote:
>>> We used to flush all channels at the end of each RAM section
>>> sent.  That is not needed, so preparing to only flush after a full
>>> iteration through all the RAM.
>>> 
>>> Default value of the property is false.  But we return "true" in
>>> migrate_multifd_flush_after_each_section() until we implement the code
>>> in following patches.
>>> 
>>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>>
>> This line can be dropped, after (I assume) git commit helped to add the
>> other one below. :)
>
> Gree, git and trailers is always so much fun.  Will try to fix them (again)
>
>>
>> Normally that's also why I put R-bs before my SoB because I should have two
>> SoB but then I merge them into the last; git is happy with that too.
>>
>>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>>
>> Acked-by: Peter Xu <peterx@redhat.com>
>
> Thanks.
>
>> But some nitpicks below (I'll leave those to you to decide whether to
>> rework or keep them as is..).
>>
>>>
>>> ---
>>> 
>>> Rename each-iteration to after-each-section
>>> Rename multifd-sync-after-each-section to
>>>        multifd-flush-after-each-section
>>> ---
>>>  qapi/migration.json   | 21 ++++++++++++++++++++-
>>>  migration/migration.h |  1 +
>>>  hw/core/machine.c     |  1 +
>>>  migration/migration.c | 17 +++++++++++++++--
>>>  4 files changed, 37 insertions(+), 3 deletions(-)
>>> 
>>> diff --git a/qapi/migration.json b/qapi/migration.json
>>> index c84fa10e86..3afd81174d 100644
>>> --- a/qapi/migration.json
>>> +++ b/qapi/migration.json
>>> @@ -478,6 +478,24 @@
>>>  #                    should not affect the correctness of postcopy migration.
>>>  #                    (since 7.1)
>>>  #
>>> +# @multifd-flush-after-each-section: flush every channel after each
>>> +#                                    section sent.  This assures that
>>> +#                                    we can't mix pages from one
>>> +#                                    iteration through ram pages with

RAM

>>> +#                                    pages for the following
>>> +#                                    iteration.  We really only need
>>> +#                                    to do this flush after we have go

to flush after we have gone

>>> +#                                    through all the dirty pages.
>>> +#                                    For historical reasons, we do
>>> +#                                    that after each section.  This is

we flush after each section

>>> +#                                    suboptimal (we flush too many
>>> +#                                    times).

inefficient: we flush too often.

>>> +#                                    Default value is false.
>>> +#                                    Setting this capability has no
>>> +#                                    effect until the patch that
>>> +#                                    removes this comment.
>>> +#                                    (since 8.0)
>>
>> IMHO the core of this new "cap" is the new RAM_SAVE_FLAG_MULTIFD_FLUSH bit
>> in the stream protocol, but it's not referenced here.  I would suggest
>> simplify the content but highlight the core change:
>
> Actually it is the other way around.  What this capability will do is
> _NOT_ use RAM_SAVE_FLAG_MULTIFD_FLUSH protocol.
>
>>  @multifd-lazy-flush:  When enabled, multifd will only do sync flush after

Spell out "synchronous".

>>                        each whole round of bitmap scan.  Otherwise it'll be

Suggest to scratch "whole".

>>                        done per RAM save iteration (which happens with a much
>>                        higher frequency).

Less detail than Juan's version.  I'm not sure how much detail is
appropriate for QMP reference documentation.

>>                        Please consider enable this as long as possible, and
>>                        keep this off only if any of the src/dst QEMU binary
>>                        doesn't support it.

Clear guidance on how to use it, good!

Perhaps state it more forcefully: "Enable this when both source and
destination support it."

>>
>>                        This capability is bound to the new RAM save flag
>>                        RAM_SAVE_FLAG_MULTIFD_FLUSH, the new flag will only
>>                        be used and recognized when this feature bit set.

Is RAM_SAVE_FLAG_MULTIFD_FLUSH visible in the QMP interface?  Or in the
migration stream?

I'm asking because doc comments are QMP reference documentation, but
when writing them, it's easy to mistake them for internal documentation,
because, well, they're comments.

> Name is wrong.  It would be multifd-non-lazy-flush.  And I don't like
> negatives.  Real name is:
>
> multifd-I-messed-and-flush-too-many-times.

If you don't like "non-lazy", say "eager".

>> I know you dislike multifd-lazy-flush, but that's still the best I can come
>> up with when writting this (yeah I still like it :-p), please bare with me
>> and take whatever you think the best.
>
> Libvirt assumes that all capabilities are false except if enabled.
> We want RAM_SAVE_FLAG_MULTFD_FLUSH by default (in new machine types).
>
> So, if we can do
>
> capability_use_new_way = true
>
> We change that to
>
> capability_use_old_way = true
>
> And then by default with false value is what we want.

Eventually, all supported migration peers will support lazy flush.  What
then?  Will we flip the default?  Or will we ignore the capability and
always flush lazily?

[...]
Juan Quintela Feb. 16, 2023, 5:13 p.m. UTC | #4
Markus Armbruster <armbru@redhat.com> wrote:
> Juan Quintela <quintela@redhat.com> writes:
>
>>>> @@ -478,6 +478,24 @@
>>>>  #                    should not affect the correctness of postcopy migration.
>>>>  #                    (since 7.1)
>>>>  #
>>>> +# @multifd-flush-after-each-section: flush every channel after each
>>>> +#                                    section sent.  This assures that
>>>> +#                                    we can't mix pages from one
>>>> +#                                    iteration through ram pages with
>
> RAM

OK.

>>>> +#                                    pages for the following
>>>> +#                                    iteration.  We really only need
>>>> +#                                    to do this flush after we have go
>
> to flush after we have gone

OK

>>>> +#                                    through all the dirty pages.
>>>> +#                                    For historical reasons, we do
>>>> +#                                    that after each section.  This is

> we flush after each section

OK

>>>> +#                                    suboptimal (we flush too many
>>>> +#                                    times).

> inefficient: we flush too often.

OK

>>>> +#                                    Default value is false.
>>>> +#                                    Setting this capability has no
>>>> +#                                    effect until the patch that
>>>> +#                                    removes this comment.
>>>> +#                                    (since 8.0)
>>>
>>> IMHO the core of this new "cap" is the new RAM_SAVE_FLAG_MULTIFD_FLUSH bit
>>> in the stream protocol, but it's not referenced here.  I would suggest
>>> simplify the content but highlight the core change:
>>
>> Actually it is the other way around.  What this capability will do is
>> _NOT_ use RAM_SAVE_FLAG_MULTIFD_FLUSH protocol.
>>
>>>  @multifd-lazy-flush:  When enabled, multifd will only do sync flush after
>
> Spell out "synchronrous".
ok.

>>>                        each whole round of bitmap scan.  Otherwise it'll be
>
> Suggest to scratch "whole".

ok.

>>>                        done per RAM save iteration (which happens with a much
>>>                        higher frequency).
>
> Less detail than Juan's version.  I'm not sure how much detail is
> appropriate for QMP reference documentation.
>
>>>                        Please consider enable this as long as possible, and
>>>                        keep this off only if any of the src/dst QEMU binary
>>>                        doesn't support it.
>
> Clear guidance on how to use it, good!
>
> Perhaps state it more forcefully: "Enable this when both source and
> destination support it."
>
>>>
>>>                        This capability is bound to the new RAM save flag
>>>                        RAM_SAVE_FLAG_MULTIFD_FLUSH, the new flag will only
>>>                        be used and recognized when this feature bit set.
>
> Is RAM_SAVE_FLAG_MULTIFD_FLUSH visible in the QMP interface?  Or in the
> migration stream?

No.  Only migration stream.

> I'm asking because doc comments are QMP reference documentation, but
> when writing them, it's easy to mistake them for internal documentation,
> because, well, they're comments.

>> Name is wrong.  It would be multifd-non-lazy-flush.  And I don't like
>> negatives.  Real name is:
>>
>> multifd-I-messed-and-flush-too-many-times.
>
> If you don't like "non-lazy", say "eager".

More than eager, it is unnecessary.

>>> I know you dislike multifd-lazy-flush, but that's still the best I can come
>>> up with when writting this (yeah I still like it :-p), please bare with me
>>> and take whatever you think the best.
>>
>> Libvirt assumes that all capabilities are false except if enabled.
>> We want RAM_SAVE_FLAG_MULTFD_FLUSH by default (in new machine types).
>>
>> So, if we can do
>>
>> capability_use_new_way = true
>>
>> We change that to
>>
>> capability_use_old_way = true
>>
>> And then by default with false value is what we want.
>
> Eventually, all supported migration peers will support lazy flush.  What
> then?  Will we flip the default?  Or will we ignore the capability and
> always flush lazily?

I have to take a step back.  Bear with me.

How do we fix problems in migration that make the stream incompatible?
We create a property.

static Property migration_properties[] = {
    ...
    DEFINE_PROP_BOOL("decompress-error-check", MigrationState,
                      decompress_error_check, true),
    ....
}

In this case it is true by default.

GlobalProperty hw_compat_2_12[] = {
    { "migration", "decompress-error-check", "off" },
    ...
};

We introduce it on whatever machine type is newer than 2_12.
Then we make it "off" for older machine types; that way we make sure
that migration from old QEMU to new QEMU works.

And we can even leave it to libvirt: if it knows that both QEMUs are new,
it can set the property to true even for old machine types.

So, what we have:

Machine 2_13 and newer use the new code.
Machine 2_12 and older use the old code (by default).
We _can_ migrate machine 2_12 with the new code, but we need to set it up
correctly on both sides.
We can run the old code with machine type 2_13, but I admit that
is only useful for testing, debugging, measuring performance, etc.

So, the idea here is that we flush a lot of times for old machine types,
and we only flush when needed for new machine types.  Libvirt (or
whoever) can use the new method if it sees fit just using the
capability.

Now that I spell this out, I could switch back to a property instead of a
capability:
- I can have any default value that I want
- So I can name it multifd_lazy_flush or whatever

Later, Juan.
Markus Armbruster Feb. 17, 2023, 5:53 a.m. UTC | #5
Juan Quintela <quintela@redhat.com> writes:

> Markus Armbruster <armbru@redhat.com> wrote:
>> Juan Quintela <quintela@redhat.com> writes:
>>
>>>>> @@ -478,6 +478,24 @@
>>>>>  #                    should not affect the correctness of postcopy migration.
>>>>>  #                    (since 7.1)
>>>>>  #
>>>>> +# @multifd-flush-after-each-section: flush every channel after each
>>>>> +#                                    section sent.  This assures that
>>>>> +#                                    we can't mix pages from one
>>>>> +#                                    iteration through ram pages with
>>
>> RAM
>
> OK.
>
>>>>> +#                                    pages for the following
>>>>> +#                                    iteration.  We really only need
>>>>> +#                                    to do this flush after we have go
>>
>> to flush after we have gone
>
> OK
>
>>>>> +#                                    through all the dirty pages.
>>>>> +#                                    For historical reasons, we do
>>>>> +#                                    that after each section.  This is
>
>> we flush after each section
>
> OK
>
>>>>> +#                                    suboptimal (we flush too many
>>>>> +#                                    times).
>
>> inefficient: we flush too often.
>
> OK
>
>>>>> +#                                    Default value is false.
>>>>> +#                                    Setting this capability has no
>>>>> +#                                    effect until the patch that
>>>>> +#                                    removes this comment.
>>>>> +#                                    (since 8.0)
>>>>
>>>> IMHO the core of this new "cap" is the new RAM_SAVE_FLAG_MULTIFD_FLUSH bit
>>>> in the stream protocol, but it's not referenced here.  I would suggest
>>>> simplify the content but highlight the core change:
>>>
>>> Actually it is the other way around.  What this capability will do is
>>> _NOT_ use RAM_SAVE_FLAG_MULTIFD_FLUSH protocol.
>>>
>>>>  @multifd-lazy-flush:  When enabled, multifd will only do sync flush after
>>
>> Spell out "synchronrous".
>
> ok.
>
>>>>                        each whole round of bitmap scan.  Otherwise it'll be
>>
>> Suggest to scratch "whole".
>
> ok.
>
>>>>                        done per RAM save iteration (which happens with a much
>>>>                        higher frequency).
>>
>> Less detail than Juan's version.  I'm not sure how much detail is
>> appropriate for QMP reference documentation.
>>
>>>>                        Please consider enable this as long as possible, and
>>>>                        keep this off only if any of the src/dst QEMU binary
>>>>                        doesn't support it.
>>
>> Clear guidance on how to use it, good!
>>
>> Perhaps state it more forcefully: "Enable this when both source and
>> destination support it."
>>
>>>>
>>>>                        This capability is bound to the new RAM save flag
>>>>                        RAM_SAVE_FLAG_MULTIFD_FLUSH, the new flag will only
>>>>                        be used and recognized when this feature bit set.
>>
>> Is RAM_SAVE_FLAG_MULTIFD_FLUSH visible in the QMP interface?  Or in the
>> migration stream?
>
> No.  Only migration stream.

Doc comments should be written for readers of the QEMU QMP Reference
Manual.  Is RAM_SAVE_FLAG_MULTIFD_FLUSH relevant for them?

Perhaps the relevant part is "the peer needs to enable this capability
too, or else", for a value of "or else".

What happens when the source enables, and the destination doesn't?

What happens when the destination enables, and the source doesn't?

Any particular reason for having the destination recognize the flag only
when the capability is enabled?

>> I'm asking because doc comments are QMP reference documentation, but
>> when writing them, it's easy to mistake them for internal documentation,
>> because, well, they're comments.
>
>>> Name is wrong.  It would be multifd-non-lazy-flush.  And I don't like
>>> negatives.  Real name is:
>>>
>>> multifd-I-messed-and-flush-too-many-times.
>>
>> If you don't like "non-lazy", say "eager".
>
> more than eager it is unnecesary.

"overeager"?

>>>> I know you dislike multifd-lazy-flush, but that's still the best I can come
>>>> up with when writting this (yeah I still like it :-p), please bare with me
>>>> and take whatever you think the best.
>>>
>>> Libvirt assumes that all capabilities are false except if enabled.
>>> We want RAM_SAVE_FLAG_MULTFD_FLUSH by default (in new machine types).
>>>
>>> So, if we can do
>>>
>>> capability_use_new_way = true
>>>
>>> We change that to
>>>
>>> capability_use_old_way = true
>>>
>>> And then by default with false value is what we want.
>>
>> Eventually, all supported migration peers will support lazy flush.  What
>> then?  Will we flip the default?  Or will we ignore the capability and
>> always flush lazily?
>
> I have to take a step back.  Cope with me.
>
> How we fix problems in migration that make the stream incompatible.
> We create a property.
>
> static Property migration_properties[] = {
>     ...
>     DEFINE_PROP_BOOL("decompress-error-check", MigrationState,
>                       decompress_error_check, true),
>     ....
> }
>
> In this case it is true by default.
>
> GlobalProperty hw_compat_2_12[] = {
>     { "migration", "decompress-error-check", "off" },
>     ...
> };
>
> We introduced it on whatever machine that is newer than 2_12.
> Then we make it "off" for older machine types, that way we make sure
> that migration from old qemu to new qemu works.
>
> And we can even left libvirt, if they know that both qemus are new, they
> can setup the property to true even for old machine types.
>
> So, what we have:
>
> Machine 2_13 and newer use the new code.
> Machine 2_12 and older use the old code (by default).
> We _can_ migrate machine 2_12 with new code, but we need to setup it
> correctly on both sides.
> We can run the old code with machine type 2_13.  But I admit that
> that is only useful for testing, debugging, measuring performance, etc.
>
> So, the idea here is that we flush a lot of times for old machine types,
> and we only flush when needed for new machine types.  Libvirt (or
> whoever) can use the new method if it sees fit, just by using the
> capability.

Got it.

What should be done, if anything, when all machines defaulting to the
old code are gone?  Getting rid of the old code along with the
capability is desirable, isn't it?  But if the capability is a stable
interface, the deprecation process applies.  Should it be stable?  Or
should it be just something we use to do the right thing depending on
the machine types (primary purpose), and also enable experimentation
(secondary purpose)?

> Now that I have explained this, I can switch back to a property instead of a
> capability:
> - I can have any default value that I want
> - So I can name it multifd_lazy_flush or whatever.
>
> Later, Juan.
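The machine-type compat mechanism described above — a property that defaults to the new behaviour, with an hw_compat-style override flipping it back for old machine types — can be sketched as a stand-alone program. This is an illustrative model, not QEMU code: the struct, function, and version numbers are invented for the example; only the selection logic mirrors what the discussion describes.

```c
/* Minimal sketch (not QEMU code; names are illustrative) of the
 * compat-property mechanism discussed above: the property defaults to
 * the new (lazy-flush) behaviour, and an hw_compat-style entry flips
 * it back to the old (eager-flush) behaviour for machine types at or
 * below the version where the change was introduced. */
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool flush_after_each_section; /* true = old, eager flushing */
} MigProps;

/* hw_compat-style entry: machine versions at or below major.minor
 * keep the old default. */
struct compat_entry { int major, minor; bool old_value; };
static const struct compat_entry hw_compat_7_0 = { 7, 0, true };

static void apply_compat(MigProps *p, int major, int minor)
{
    /* New machine types default to the new behaviour. */
    p->flush_after_each_section = false;
    /* Older machine types get the compat override. */
    if (major < hw_compat_7_0.major ||
        (major == hw_compat_7_0.major && minor <= hw_compat_7_0.minor)) {
        p->flush_after_each_section = hw_compat_7_0.old_value;
    }
}
```

A management layer that knows both ends are new could still override the property explicitly for an old machine type, which is the point Juan makes about libvirt above.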

Patch

diff --git a/qapi/migration.json b/qapi/migration.json
index c84fa10e86..3afd81174d 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -478,6 +478,24 @@ 
 #                    should not affect the correctness of postcopy migration.
 #                    (since 7.1)
 #
+# @multifd-flush-after-each-section: flush every channel after each
+#                                    section sent.  This assures that
+#                                    we can't mix pages from one
+#                                    iteration through ram pages with
+#                                    pages for the following
+#                                    iteration.  We really only need
+#                                    to do this flush after we have gone
+#                                    through all the dirty pages.
+#                                    For historical reasons, we do
+#                                    that after each section.  This is
+#                                    suboptimal (we flush too many
+#                                    times).
+#                                    Default value is false.
+#                                    Setting this capability has no
+#                                    effect until the patch that
+#                                    removes this comment.
+#                                    (since 8.0)
+#
 # Features:
 # @unstable: Members @x-colo and @x-ignore-shared are experimental.
 #
@@ -492,7 +510,8 @@ 
            'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
            { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
            'validate-uuid', 'background-snapshot',
-           'zero-copy-send', 'postcopy-preempt'] }
+           'zero-copy-send', 'postcopy-preempt',
+           'multifd-flush-after-each-section'] }
 
 ##
 # @MigrationCapabilityStatus:
diff --git a/migration/migration.h b/migration/migration.h
index 2da2f8a164..7f0f4260ba 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -424,6 +424,7 @@  int migrate_multifd_channels(void);
 MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
+bool migrate_multifd_flush_after_each_section(void);
 
 #ifdef CONFIG_LINUX
 bool migrate_use_zero_copy_send(void);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index f73fc4c45c..602e775f34 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -54,6 +54,7 @@  const size_t hw_compat_7_1_len = G_N_ELEMENTS(hw_compat_7_1);
 GlobalProperty hw_compat_7_0[] = {
     { "arm-gicv3-common", "force-8-bit-prio", "on" },
     { "nvme-ns", "eui64-default", "on"},
+    { "migration", "multifd-flush-after-each-section", "on"},
 };
 const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
 
diff --git a/migration/migration.c b/migration/migration.c
index 90fca70cb7..cfba0da005 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -167,7 +167,8 @@  INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
     MIGRATION_CAPABILITY_XBZRLE,
     MIGRATION_CAPABILITY_X_COLO,
     MIGRATION_CAPABILITY_VALIDATE_UUID,
-    MIGRATION_CAPABILITY_ZERO_COPY_SEND);
+    MIGRATION_CAPABILITY_ZERO_COPY_SEND,
+    MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION);
 
 /* When we add fault tolerance, we could have several
    migrations at once.  For now we don't need to add
@@ -2701,6 +2702,17 @@  bool migrate_use_multifd(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
+bool migrate_multifd_flush_after_each_section(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    /*
+     * Until the patch that removes this comment, we always return that
+     * the capability is enabled.
+     */
+    return true || s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION];
+}
+
 bool migrate_pause_before_switchover(void)
 {
     MigrationState *s;
@@ -4535,7 +4547,8 @@  static Property migration_properties[] = {
     DEFINE_PROP_MIG_CAP("x-zero-copy-send",
             MIGRATION_CAPABILITY_ZERO_COPY_SEND),
 #endif
-
+    DEFINE_PROP_MIG_CAP("multifd-flush-after-each-section",
+                        MIGRATION_CAPABILITY_MULTIFD_FLUSH_AFTER_EACH_SECTION),
     DEFINE_PROP_END_OF_LIST(),
 };
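The hard-wired getter in the migration.c hunk above uses a common staging pattern: short-circuit the capability with `true ||` so behaviour is unchanged until the follow-up patches land, then drop the short-circuit to activate it. A stand-alone sketch of the pattern (illustrative names, not QEMU code):

```c
/* Sketch (not QEMU code) of the staging pattern used in
 * migrate_multifd_flush_after_each_section() above: the getter is
 * hard-wired with "true ||" so the old eager-flush behaviour stays in
 * effect; removing the short-circuit in a later patch hands control
 * to the capability bit. */
#include <assert.h>
#include <stdbool.h>

static bool capability_bit = false; /* would come from the capability */

static bool flush_after_each_section(void)
{
    /* Until the implementation patches land, always flush eagerly.
     * Deleting "true ||" later flips control to capability_bit. */
    return true || capability_bit;
}
```

The advantage over leaving the capability unwired is that all call sites are already written against the final getter, so the activation patch is a one-line change.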