
mirror: add sync mode incremental to drive-mirror and blockdev-mirror

Message ID 20170504105444.8940-1-daniel.kucera@gmail.com
State New

Commit Message

Daniel Kučera May 4, 2017, 10:54 a.m. UTC
The 'bitmap' parameter selects an existing dirty bitmap in
mirror_start_job instead of creating a new one.

Signed-off-by: Daniel Kucera <daniel.kucera@gmail.com>
---
 block/mirror.c            | 47 ++++++++++++++++++++++++++++++-----------------
 blockdev.c                | 16 +++++++++++++++-
 include/block/block_int.h |  4 +++-
 qapi/block-core.json      | 12 ++++++++++--
 4 files changed, 58 insertions(+), 21 deletions(-)

Comments

Stefan Hajnoczi May 8, 2017, 8:35 p.m. UTC | #1
On Thu, May 04, 2017 at 12:54:40PM +0200, Daniel Kucera wrote:

Seems like a logical extension along the same lines as the backup block
job's dirty bitmap sync mode.

> The 'bitmap' parameter selects an existing dirty bitmap in
> mirror_start_job instead of creating a new one.
> 
> Signed-off-by: Daniel Kucera <daniel.kucera@gmail.com>
> ---
>  block/mirror.c            | 47 ++++++++++++++++++++++++++++++-----------------
>  blockdev.c                | 16 +++++++++++++++-
>  include/block/block_int.h |  4 +++-
>  qapi/block-core.json      | 12 ++++++++++--
>  4 files changed, 58 insertions(+), 21 deletions(-)

This patch modifies the user's bitmap in-place and does not
differentiate successful completion from failure.  I suggest following
the same bitmap lifecycle as the backup block job so that
cancellation/failure offers useful behavior.  (If the job fails due to
intermittent errors on the target it should be possible to start the job
again - the bitmap must not be lost!).
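For reference, this is roughly the bitmap lifecycle the backup job uses
(a condensed sketch of block/backup.c from this era; the dirty-bitmap
calls are the real API, but the job plumbing around them is elided):

/* On job creation: freeze the user's bitmap behind a successor that
 * accumulates writes made while the job runs. */
static int freeze_sync_bitmap(BlockDriverState *bs,
                              BdrvDirtyBitmap *sync_bitmap, Error **errp)
{
    return bdrv_dirty_bitmap_create_successor(bs, sync_bitmap, errp);
}

/* On job completion: */
static void cleanup_sync_bitmap(BlockDriverState *bs,
                                BdrvDirtyBitmap *sync_bitmap, int ret)
{
    if (ret < 0) {
        /* Failure or cancellation: merge the successor back into the
         * parent so the bitmap again covers everything that was not
         * copied - the job can simply be restarted. */
        bdrv_reclaim_dirty_bitmap(bs, sync_bitmap, NULL);
    } else {
        /* Success: the parent is dropped and the successor (blocks
         * dirtied while the job ran) becomes the user-visible bitmap. */
        bdrv_dirty_bitmap_abdicate(bs, sync_bitmap, NULL);
    }
}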

> diff --git a/block/mirror.c b/block/mirror.c
> index 9f5eb69..1bfbe2e 100644
> --- a/block/mirror.c
> +++ b/block/mirror.c
> @@ -49,7 +49,7 @@ typedef struct MirrorBlockJob {
>      BlockDriverState *to_replace;
>      /* Used to block operations on the drive-mirror-replace target */
>      Error *replace_blocker;
> -    bool is_none_mode;
> +    MirrorSyncMode sync_mode;
>      BlockMirrorBackingMode backing_mode;
>      BlockdevOnError on_source_error, on_target_error;
>      bool synced;
> @@ -523,7 +523,9 @@ static void mirror_exit(BlockJob *job, void *opaque)
>      bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
>                              &error_abort);
>      if (s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
> -        BlockDriverState *backing = s->is_none_mode ? src : s->base;
> +        BlockDriverState *backing =
> +        (s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) ||
> +        (s->sync_mode == MIRROR_SYNC_MODE_NONE) ? src : s->base;

Please update the comment in include/block/block_int.h:typedef enum
BlockMirrorBackingMode.

> @@ -1213,9 +1217,21 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,
>          s->should_complete = true;
>      }
>  
> -    s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
> -    if (!s->dirty_bitmap) {
> -        goto fail;
> +    if (s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
> +        if (bitmap == NULL) {
> +            error_setg(errp, "Mode incremental requires parameter 'bitmap'");
> +            goto fail;
> +        }
> +        s->dirty_bitmap = bdrv_find_dirty_bitmap(bs, bitmap);
> +        if (!s->dirty_bitmap) {
> +            error_setg(errp, "Bitmap '%s' not found", bitmap);
> +            goto fail;
> +        }
> +    } else {
> +        s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
> +        if (!s->dirty_bitmap) {
> +            goto fail;
> +        }
>      }

Do we need to check that granularity is compatible with the pre-existing
dirty bitmap's granularity when sync_mode is
MIRROR_SYNC_MODE_INCREMENTAL?
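Something along these lines, perhaps (hypothetical and untested, placed
right after the bdrv_find_dirty_bitmap() call; granularity and bitmap
are the existing mirror_start_job() parameters):

    /* An explicitly requested granularity must match the bitmap's own,
     * since the job reuses the bitmap as its dirty map. */
    if (granularity &&
        granularity != bdrv_dirty_bitmap_granularity(s->dirty_bitmap)) {
        error_setg(errp, "Granularity %" PRIu32 " differs from granularity "
                   "%" PRIu32 " of bitmap '%s'", granularity,
                   bdrv_dirty_bitmap_granularity(s->dirty_bitmap), bitmap);
        goto fail;
    }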
Denis V. Lunev May 8, 2017, 9:02 p.m. UTC | #2
On 05/08/2017 10:35 PM, Stefan Hajnoczi wrote:
> On Thu, May 04, 2017 at 12:54:40PM +0200, Daniel Kucera wrote:
>
> Seems like a logical extension along the same lines as the backup block
> job's dirty bitmap sync mode.
>
>> parameter bitmap chooses existing dirtymap instead of newly created
>> in mirror_start_job
>>
>> Signed-off-by: Daniel Kucera <daniel.kucera@gmail.com>
Can you please describe the use case in a bit more detail?

For now this could be a bit strange:
- a dirty bitmap, which can be looked up via bdrv_find_dirty_bitmap,
  could be read-write, i.e. modified by guest writes, or read-only,
  which should not be modified. Thus attaching an r/o bitmap to the
  mirror could result in interesting things.

At a minimum we should prohibit using r/o bitmaps this way.

So why use mirror rather than backup for this case?

Den
John Snow May 8, 2017, 9:07 p.m. UTC | #3
On 05/08/2017 05:02 PM, Denis V. Lunev wrote:
> [...]
> Can you please describe the use case in a bit more detail?
> 
> For now this could be a bit strange:
> - a dirty bitmap, which can be looked up via bdrv_find_dirty_bitmap,
>   could be read-write, i.e. modified by guest writes, or read-only,
>   which should not be modified. Thus attaching an r/o bitmap to the
>   mirror could result in interesting things.
> 

This patch as it was submitted does not put the bitmap into a read-only
mode; it leaves it RW and modifies it as it processes the mirror command.

Though you do raise a good point; this bitmap is now in use by a job and
should not be allowed to be deleted by the user, but our existing
mechanism treats a locked bitmap as one that is also in R/O mode. This
would be a different use case.
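For reference, the existing removal guard looks roughly like this (a
sketch from memory of blockdev.c in this era - the helpers are real,
exact signatures may differ):

void qmp_block_dirty_bitmap_remove(const char *node, const char *name,
                                   Error **errp)
{
    BlockDriverState *bs = bdrv_lookup_bs(node, node, errp);
    BdrvDirtyBitmap *bitmap;

    if (!bs) {
        return;
    }
    bitmap = bdrv_find_dirty_bitmap(bs, name);
    if (!bitmap) {
        error_setg(errp, "Dirty bitmap '%s' not found", name);
        return;
    }
    /* A bitmap with a successor is "frozen": protected from deletion,
     * but also treated as read-only - the conflation described above. */
    if (bdrv_dirty_bitmap_frozen(bitmap)) {
        error_setg(errp, "Bitmap '%s' is currently frozen and cannot be "
                   "removed", name);
        return;
    }
    bdrv_release_dirty_bitmap(bs, bitmap);
}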

> At a minimum we should prohibit using r/o bitmaps this way.
> 
> So why use mirror rather than backup for this case?
> 

My guess is for pivot semantics.

> Den
>
Stefan Hajnoczi May 9, 2017, 4:52 p.m. UTC | #4
On Mon, May 08, 2017 at 05:07:18PM -0400, John Snow wrote:
> [...]
> > So why use mirror rather than backup for this case?
> > 
> 
> My guess is for pivot semantics.

Daniel posted his workflow in a previous revision of this series:

He is doing a variation on non-shared storage migration with the mirror
block job, but using the ZFS send operation to transfer the initial copy
of the disk.

Once ZFS send completes it's necessary to transfer all the blocks that
were dirtied while the transfer was taking place.

1. Create dirty bitmap and start tracking dirty blocks in QEMU.
2. Snapshot and send ZFS volume.
3. mirror sync=bitmap after ZFS send completes.
4. Live migrate.

Stefan
Denis V. Lunev May 10, 2017, 1:25 p.m. UTC | #5
On 05/09/2017 06:52 PM, Stefan Hajnoczi wrote:
> [...]
> Daniel posted his workflow in a previous revision of this series:
>
> He is doing a variation on non-shared storage migration with the mirror
> block job, but using the ZFS send operation to transfer the initial copy
> of the disk.
>
> Once ZFS send completes it's necessary to transfer all the blocks that
> were dirtied while the transfer was taking place.
>
> 1. Create dirty bitmap and start tracking dirty blocks in QEMU.
> 2. Snapshot and send ZFS volume.
> 3. mirror sync=bitmap after ZFS send completes.
> 4. Live migrate.
>
> Stefan
Thank you very much. This is clear now.

If I am not mistaken, this can be done very easily with the current
implementation, without further QEMU modifications. Daniel just needs
to start the mirror and put it on pause for the duration of stage (2).

Will this work?

Den
Stefan Hajnoczi May 10, 2017, 3 p.m. UTC | #6
On Wed, May 10, 2017 at 03:25:31PM +0200, Denis V. Lunev wrote:
> [...]
> Thank you very much. This is clear now.
>
> If I am not mistaken, this can be done very easily with the current
> implementation, without further QEMU modifications. Daniel just needs
> to start the mirror and put it on pause for the duration of stage (2).
> 
> Will this work?

I think it's an interesting idea but I'm not sure if sync=none + pause
can be done atomically.  Without atomicity a block might be sent to the
destination while the ZFS send is still in progress.

Stefan
Denis V. Lunev May 10, 2017, 3:05 p.m. UTC | #7
On 05/10/2017 05:00 PM, Stefan Hajnoczi wrote:
> [...]
> I think it's an interesting idea but I'm not sure if sync=none + pause
> can be done atomically.  Without atomicity a block might be sent to the
> destination while the ZFS send is still in progress.
>
> Stefan
Atomicity here is completely impossible.

The case is like this.

1) start the mirror
2) pause the mirror
3) snapshot + ZFS send
4) resume mirror
5) live migrate

The worst-case problem is that some additional blocks would be sent
twice. This should not be a big deal; it is actually what backup
always does. The number of such blocks will not be very large.

Den
Daniel Kučera May 11, 2017, 2:16 p.m. UTC | #8
2017-05-10 17:05 GMT+02:00 Denis V. Lunev <den@openvz.org>:

> [...]
> The case is like this.
>
> 1) start the mirror
> 2) pause the mirror
> 3) snapshot + ZFS send
> 4) resume mirror
> 5) live migrate
>
> The worst-case problem is that some additional blocks would be sent
> twice. This should not be a big deal; it is actually what backup
> always does. The number of such blocks will not be very large.
>
> Den
>

I guess it won't be possible to start the mirror in (1), or it will
instantly fail, because the block device on the destination doesn't
exist at that moment, so it's not even possible to start the NBD server.

Or am I wrong?

S pozdravom / Best regards
Daniel Kucera.
Denis V. Lunev May 11, 2017, 2:28 p.m. UTC | #9
On 05/11/2017 04:16 PM, Daniel Kučera wrote:
> [...]
> I guess it won't be possible to start the mirror in (1), or it will
> instantly fail, because the block device on the destination doesn't
> exist at that moment, so it's not even possible to start the NBD server.
>
> Or am I wrong?
>
Good point, but I guess you can create an empty volume of the proper
size at step 0, set up the QEMU mirror, and start copying the data to
that volume. I may be completely wrong here, as I do not know the ZFS
management procedures and tools.

Can you share the commands you are using to perform the operation?
Maybe we will be able to find a suitable solution.

Den
Daniel Kučera May 11, 2017, 2:52 p.m. UTC | #10
2017-05-11 16:28 GMT+02:00 Denis V. Lunev <den@openvz.org>:

> [...]
> Can you share the commands you are using to perform the operation?
> Maybe we will be able to find a suitable solution.
>
> Den
>
>
The idea is the following:

1) virsh qemu-monitor-command test-domain '{ "execute":
"block-dirty-bitmap-add", "arguments": {"node": "drive-scsi0-0-0", "name":
"migration", "granularity": 65536}}'

2) zfs snapshot zstore/test-volume@migration

3) zfs send -R zstore/test-volume@migration | ssh dest-host zfs recv
zstore/test-volume

4) virsh qemu-monitor-command test-domain '{ "execute": "drive-mirror",
"arguments": {"device": "drive-scsi0-0-0-0", "target": "nbd:IP:port",
"sync": "incremental" , "bitmap": "migration", "mode":"existing", "format":
"raw"}}'

5) virsh migrate test-domain --live ....

But your point about the paused mirror is interesting. The solution
could also be to start the mirror (sync: none) as paused (with some new
parameter perhaps), so it would create its own dirty bitmap but not
start mirroring until resumed with "block-job-resume". In the meantime
the ZFS volume would be transferred and the mirror resumed just after
that.

This looks like a much cleaner solution though.
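Concretely, something like this inside mirror_start_job() is what I
have in mind (purely a sketch: the start_paused parameter is invented,
and the pause-before-start interaction is an untested assumption):

    if (start_paused) {
        /* Take a user pause reference before the job coroutine runs, so
         * nothing is copied until an explicit block-job-resume; the
         * dirty bitmap created above records guest writes meanwhile. */
        block_job_user_pause(&s->common);
    }
    block_job_start(&s->common);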

S pozdravom / Best regards
Daniel Kucera.
Stefan Hajnoczi May 11, 2017, 6:35 p.m. UTC | #11
On Thu, May 11, 2017 at 04:28:14PM +0200, Denis V. Lunev wrote:
> [...]
> Can you share the commands you are using to perform the operation?
> Maybe we will be able to find a suitable solution.

Daniel's proposed patch isn't large or invasive.  The QMP interface is
consistent with the backup block job's sync=bitmap mode; it's a logical
extension.

If the concerns about the bitmap lifecycle are addressed and a test case
is included, then I don't see anything preventing this patch from being
merged.

Stefan
John Snow May 11, 2017, 6:43 p.m. UTC | #12
On 05/11/2017 02:35 PM, Stefan Hajnoczi wrote:
> Daniel's proposed patch isn't large or invasive.  The QMP interface is
> consistent with the backup block job's sync=bitmap mode; it's a logical
> extension.
> 
> If the concerns about the bitmap lifecycle are addressed and a test case
> is included, then I don't see anything preventing this patch from being
> merged.
> 
I agree.

Daniel, I'm preoccupied with some other tasks at the moment, but I will
look over your patch again with an eye to how the bitmap data is
handled in failure cases. Once we agree on a design, we'll need some
basic tests (a success case, and one for each type of unique failure
path) and then we're good to go.

--js
Denis V. Lunev May 11, 2017, 6:46 p.m. UTC | #13
On 05/11/2017 08:43 PM, John Snow wrote:
> [...]
> I agree.
>
> Daniel, I'm preoccupied with some other tasks at the moment, but I will
> look over your patch again with an eye to how the bitmap data is
> handled in failure cases. Once we agree on a design, we'll need some
> basic tests (a success case, and one for each type of unique failure
> path) and then we're good to go.
>
> --js
Yes, no problem from my side either, except for the point raised
by John about deletion of the bitmap during the mirror.

Den

Patch

diff --git a/block/mirror.c b/block/mirror.c
index 9f5eb69..1bfbe2e 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -49,7 +49,7 @@  typedef struct MirrorBlockJob {
     BlockDriverState *to_replace;
     /* Used to block operations on the drive-mirror-replace target */
     Error *replace_blocker;
-    bool is_none_mode;
+    MirrorSyncMode sync_mode;
     BlockMirrorBackingMode backing_mode;
     BlockdevOnError on_source_error, on_target_error;
     bool synced;
@@ -523,7 +523,9 @@  static void mirror_exit(BlockJob *job, void *opaque)
     bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
                             &error_abort);
     if (s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
-        BlockDriverState *backing = s->is_none_mode ? src : s->base;
+        BlockDriverState *backing =
+        (s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) ||
+        (s->sync_mode == MIRROR_SYNC_MODE_NONE) ? src : s->base;
         if (backing_bs(target_bs) != backing) {
             bdrv_set_backing_hd(target_bs, backing, &local_err);
             if (local_err) {
@@ -771,7 +773,8 @@  static void coroutine_fn mirror_run(void *opaque)
     mirror_free_init(s);
 
     s->last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
-    if (!s->is_none_mode) {
+    if ((s->sync_mode != MIRROR_SYNC_MODE_INCREMENTAL) &&
+      (s->sync_mode != MIRROR_SYNC_MODE_NONE)) {
         ret = mirror_dirty_init(s);
         if (ret < 0 || block_job_is_cancelled(&s->common)) {
             goto immediate_exit;
@@ -1114,7 +1117,8 @@  static void mirror_start_job(const char *job_id, BlockDriverState *bs,
                              BlockCompletionFunc *cb,
                              void *opaque,
                              const BlockJobDriver *driver,
-                             bool is_none_mode, BlockDriverState *base,
+                             MirrorSyncMode sync_mode, const char *bitmap,
+                             BlockDriverState *base,
                              bool auto_complete, const char *filter_node_name,
                              Error **errp)
 {
@@ -1203,7 +1207,7 @@  static void mirror_start_job(const char *job_id, BlockDriverState *bs,
     s->replaces = g_strdup(replaces);
     s->on_source_error = on_source_error;
     s->on_target_error = on_target_error;
-    s->is_none_mode = is_none_mode;
+    s->sync_mode = sync_mode;
     s->backing_mode = backing_mode;
     s->base = base;
     s->granularity = granularity;
@@ -1213,9 +1217,21 @@  static void mirror_start_job(const char *job_id, BlockDriverState *bs,
         s->should_complete = true;
     }
 
-    s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
-    if (!s->dirty_bitmap) {
-        goto fail;
+    if (s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
+        if (bitmap == NULL) {
+            error_setg(errp, "Mode incremental requires parameter 'bitmap'");
+            goto fail;
+        }
+        s->dirty_bitmap = bdrv_find_dirty_bitmap(bs, bitmap);
+        if (!s->dirty_bitmap) {
+            error_setg(errp, "Bitmap '%s' not found", bitmap);
+            goto fail;
+        }
+    } else {
+        s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
+        if (!s->dirty_bitmap) {
+            goto fail;
+        }
     }
 
     /* Required permissions are already taken with blk_new() */
@@ -1265,24 +1281,20 @@  fail:
 void mirror_start(const char *job_id, BlockDriverState *bs,
                   BlockDriverState *target, const char *replaces,
                   int64_t speed, uint32_t granularity, int64_t buf_size,
-                  MirrorSyncMode mode, BlockMirrorBackingMode backing_mode,
+                  MirrorSyncMode mode, const char *bitmap,
+                  BlockMirrorBackingMode backing_mode,
                   BlockdevOnError on_source_error,
                   BlockdevOnError on_target_error,
                   bool unmap, const char *filter_node_name, Error **errp)
 {
-    bool is_none_mode;
     BlockDriverState *base;
 
-    if (mode == MIRROR_SYNC_MODE_INCREMENTAL) {
-        error_setg(errp, "Sync mode 'incremental' not supported");
-        return;
-    }
-    is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
     base = mode == MIRROR_SYNC_MODE_TOP ? backing_bs(bs) : NULL;
+
     mirror_start_job(job_id, bs, BLOCK_JOB_DEFAULT, target, replaces,
                      speed, granularity, buf_size, backing_mode,
                      on_source_error, on_target_error, unmap, NULL, NULL,
-                     &mirror_job_driver, is_none_mode, base, false,
+                     &mirror_job_driver, mode, bitmap, base, false,
                      filter_node_name, errp);
 }
 
@@ -1305,7 +1317,8 @@  void commit_active_start(const char *job_id, BlockDriverState *bs,
     mirror_start_job(job_id, bs, creation_flags, base, NULL, speed, 0, 0,
                      MIRROR_LEAVE_BACKING_CHAIN,
                      on_error, on_error, true, cb, opaque,
-                     &commit_active_job_driver, false, base, auto_complete,
+                     &commit_active_job_driver, MIRROR_SYNC_MODE_FULL,
+                     NULL, base, auto_complete,
                      filter_node_name, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
diff --git a/blockdev.c b/blockdev.c
index 6428206..a25e7a8 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3379,6 +3379,7 @@  static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
                                    enum MirrorSyncMode sync,
                                    BlockMirrorBackingMode backing_mode,
                                    bool has_speed, int64_t speed,
+                                   bool has_bitmap, const char *bitmap,
                                    bool has_granularity, uint32_t granularity,
                                    bool has_buf_size, int64_t buf_size,
                                    bool has_on_source_error,
@@ -3435,12 +3436,22 @@  static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
         sync = MIRROR_SYNC_MODE_FULL;
     }
 
+    if (sync == MIRROR_SYNC_MODE_INCREMENTAL && !has_bitmap) {
+        error_setg(errp, "Mode incremental requires parameter 'bitmap'");
+        return;
+    }
+
+    if (has_bitmap && sync != MIRROR_SYNC_MODE_INCREMENTAL) {
+        error_setg(errp, "Bitmap set but mode is not incremental");
+        return;
+    }
+
     /* pass the node name to replace to mirror start since it's loose coupling
      * and will allow to check whether the node still exist at mirror completion
      */
     mirror_start(job_id, bs, target,
                  has_replaces ? replaces : NULL,
-                 speed, granularity, buf_size, sync, backing_mode,
+                 speed, granularity, buf_size, sync, bitmap, backing_mode,
                  on_source_error, on_target_error, unmap, filter_node_name,
                  errp);
 }
@@ -3575,6 +3586,7 @@  void qmp_drive_mirror(DriveMirror *arg, Error **errp)
     blockdev_mirror_common(arg->has_job_id ? arg->job_id : NULL, bs, target_bs,
                            arg->has_replaces, arg->replaces, arg->sync,
                            backing_mode, arg->has_speed, arg->speed,
+                           arg->has_bitmap, arg->bitmap,
                            arg->has_granularity, arg->granularity,
                            arg->has_buf_size, arg->buf_size,
                            arg->has_on_source_error, arg->on_source_error,
@@ -3593,6 +3605,7 @@  void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
                          bool has_replaces, const char *replaces,
                          MirrorSyncMode sync,
                          bool has_speed, int64_t speed,
+                         bool has_bitmap, const char *bitmap,
                          bool has_granularity, uint32_t granularity,
                          bool has_buf_size, int64_t buf_size,
                          bool has_on_source_error,
@@ -3627,6 +3640,7 @@  void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
     blockdev_mirror_common(has_job_id ? job_id : NULL, bs, target_bs,
                            has_replaces, replaces, sync, backing_mode,
                            has_speed, speed,
+                           has_bitmap, bitmap,
                            has_granularity, granularity,
                            has_buf_size, buf_size,
                            has_on_source_error, on_source_error,
diff --git a/include/block/block_int.h b/include/block/block_int.h
index 4f8cd29..3c59d69 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -827,6 +827,7 @@  void commit_active_start(const char *job_id, BlockDriverState *bs,
  * @granularity: The chosen granularity for the dirty bitmap.
  * @buf_size: The amount of data that can be in flight at one time.
  * @mode: Whether to collapse all images in the chain to the target.
+ * @bitmap: The dirty bitmap if sync_mode is MIRROR_SYNC_MODE_INCREMENTAL.
  * @backing_mode: How to establish the target's backing chain after completion.
  * @on_source_error: The action to take upon error reading from the source.
  * @on_target_error: The action to take upon error writing to the target.
@@ -844,7 +845,8 @@  void commit_active_start(const char *job_id, BlockDriverState *bs,
 void mirror_start(const char *job_id, BlockDriverState *bs,
                   BlockDriverState *target, const char *replaces,
                   int64_t speed, uint32_t granularity, int64_t buf_size,
-                  MirrorSyncMode mode, BlockMirrorBackingMode backing_mode,
+                  MirrorSyncMode mode, const char *bitmap,
+                  BlockMirrorBackingMode backing_mode,
                   BlockdevOnError on_source_error,
                   BlockdevOnError on_target_error,
                   bool unmap, const char *filter_node_name, Error **errp);
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 87fb747..8e3d9b2 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1502,6 +1502,10 @@ 
 #
 # @speed:  the maximum speed, in bytes per second
 #
+# @bitmap: the name of dirty bitmap if sync is "incremental".
+#          Must be present if sync is "incremental", must NOT be present
+#          otherwise. (Since 2.10)
+#
 # @sync: what parts of the disk image should be copied to the destination
 #        (all the disk, only the sectors allocated in the topmost image, or
 #        only new I/O).
@@ -1533,7 +1537,7 @@ 
   'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
             '*format': 'str', '*node-name': 'str', '*replaces': 'str',
             'sync': 'MirrorSyncMode', '*mode': 'NewImageMode',
-            '*speed': 'int', '*granularity': 'uint32',
+            '*speed': 'int', '*bitmap': 'str', '*granularity': 'uint32',
             '*buf-size': 'int', '*on-source-error': 'BlockdevOnError',
             '*on-target-error': 'BlockdevOnError',
             '*unmap': 'bool' } }
@@ -1652,6 +1656,10 @@ 
 #
 # @speed:  the maximum speed, in bytes per second
 #
+# @bitmap: the name of dirty bitmap if sync is "incremental".
+#          Must be present if sync is "incremental", must NOT be present
+#          otherwise. (Since 2.10)
+#
 # @sync: what parts of the disk image should be copied to the destination
 #        (all the disk, only the sectors allocated in the topmost image, or
 #        only new I/O).
@@ -1694,7 +1702,7 @@ 
   'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
             '*replaces': 'str',
             'sync': 'MirrorSyncMode',
-            '*speed': 'int', '*granularity': 'uint32',
+            '*speed': 'int', '*bitmap': 'str', '*granularity': 'uint32',
             '*buf-size': 'int', '*on-source-error': 'BlockdevOnError',
             '*on-target-error': 'BlockdevOnError',
             '*filter-node-name': 'str' } }