[10/16] docs: block replication's description

Message ID 1441183880-26993-11-git-send-email-wency@cn.fujitsu.com
State New

Commit Message

Wen Congyang Sept. 2, 2015, 8:51 a.m. UTC
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
---
 docs/block-replication.txt | 183 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 183 insertions(+)
 create mode 100644 docs/block-replication.txt

Comments

Eric Blake Sept. 2, 2015, 8:41 p.m. UTC | #1
On 09/02/2015 02:51 AM, Wen Congyang wrote:
> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
> Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Gonglei <arei.gonglei@huawei.com>
> ---
>  docs/block-replication.txt | 183 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 183 insertions(+)
>  create mode 100644 docs/block-replication.txt
> 


> +
> +    1) Primary write requests will be copied and forwarded to Secondary
> +       QEMU.
> +    2) Before Primary write requests are written to Secondary disk, the
> +       original sector content will be read from Secondary disk and
> +       buffered in the Disk buffer, but it will not overwrite the existing
> +       sector content(it could be from either "Secondary Write Requests" or

space before '(' in English sentences.

> +       previous COW of "Primary Write Requests") in the Disk buffer.
> +    3) Primary write requests will be written to Secondary disk.
> +    4) Secondary write requests will be buffered in the Disk buffer and they
> +       will overwrite the existing sector content in the buffer.
> +
> +== Architecture ==

> +                3 NBD  ------->  3 NBD                                               |
> +                client    ||     server                                          2 filter
> +                          ||        ^                                                ^
> +--------.                 ||        |                                                |
> +Primary |                 ||  Secondary disk <--------- hidden-disk 5 <--------- active-disk 4
> +--------'                 ||        |          backing        ^       backing
> +                          ||        |                         |
> +                          ||        |                         |
> +                          ||        '-------------------------'
> +                          ||           drive-backup sync=none
> +

> +
> +4) The disk on the secondary is represented by a custom block device
> +(called active-disk). It should be an empty disk, and the format should
> +support bdrv_make_empty() and backing file.

s/be an empty disk/start as an empty disk/

> +
> +5) The hidden-disk is created automatically. It buffers the original content
> +that is modified by the primary VM. It should also be an empty disk, and

s/be/start as/

> +the driver supports bdrv_make_empty() and backing file.

Missing mention that a drive-backup job is run to allow hidden-disk to
buffer any state that would otherwise be lost by the speculative
write-through of the NBD server into the secondary disk.
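
For reference, the job meant here is started internally by the
replication driver rather than by the user; in QMP terms it is roughly
the following (device and file names are taken from the Usage section
of the patch and are purely illustrative):

  -> { "execute": "drive-backup",
       "arguments": { "device": "colo1",
                      "target": "hidden_disk.qcow2",
                      "sync": "none",
                      "mode": "existing" } }

With sync=none, any sector of the secondary disk that is about to be
overwritten by a forwarded primary write is first copied into
hidden-disk, so the content from the last checkpoint can be restored on
failover.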

> +
> +== Failure Handling ==
> +There are 6 internal errors when block replication is running:
> +1. I/O error on primary disk
> +2. Forwarding primary write requests failed
> +3. Backup failed
> +4. I/O error on secondary disk
> +5. I/O error on active disk
> +6. Making active disk or hidden disk empty failed
> +In case 1 and 5, we just report the error to the disk layer. In case 2, 3,
> +4 and 6, we just report block replication's error to FT/HA manager(which

space before '('

> +decides when to do a new checkpoint, when to do failover).
> +There is one internal error when doing failover:
> +1. Commiting the data in active disk/hidden disk to secondary disk failed

s/Commiting/Committing/

> +We just report this error to FT/HA manager.
> +
> +== New block driver interface ==

> +
> +== Usage ==
> +Primary:
> +  -drive if=xxx,driver=quorum,read-pattern=fifo,id=colo1,vote-threshold=1\
> +         children.0.file.filename=1.raw,\
> +         children.0.driver=raw,\
> +
> +  Run qmp command in primary qemu:
> +    child_add disk1 child.driver=replication,child.mode=primary,\
> +              child.file.host=xxx,child.file.port=xxx,\
> +              child.file.driver=nbd,child.ignore-errors=on

My comments earlier in this series mean this step should be two QMP
commands: the first is blockdev-add to create an unassociated BDS, the
second to then add that BDS into the quorum.
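
Something along these lines, where the blockdev-add argument layout is
only sketched and the second command's name and arguments are
placeholders for whatever the final child-add interface looks like
(the "disk1" parent name is taken from the child_add example above):

  -> { "execute": "blockdev-add",
       "arguments": { "node-name": "nbd-client0",
                      "driver": "replication",
                      "mode": "primary",
                      "file": { "driver": "nbd",
                                "export": "colo1",
                                "host": "xxx",
                                "port": "xxx" } } }
  # hypothetical command name, still under discussion in this series
  -> { "execute": "child-add",
       "arguments": { "parent": "disk1",
                      "node": "nbd-client0" } }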

> +  Note:
> +  1. There should be only one NBD Client for each primary disk.
> +  2. host is the secondary physical machine's hostname or IP
> +  3. Each disk must have its own export name.
> +  4. It is all a single argument to -drive and child_add, and you should
> +     ignore the leading whitespace.
> +  5. The qmp command line must be run after running qmp command line in
> +     secondary qemu.
> +
> +Secondary:
> +  -drive if=none,driver=raw,file=1.raw,id=colo1 \
> +  -drive if=xxx,driver=replication,mode=secondary,\
> +         file.file.filename=active_disk.qcow2,\
> +         file.driver=qcow2,\
> +         file.backing.file.filename=hidden_disk.qcow2,\
> +         file.backing.driver=qcow2,\
> +         file.backing.allow-write-backing-file=on,\
> +         file.backing.backing.backing_reference=colo1\
> +
> +  Then run qmp command in secondary qemu:
> +    nbd-server-start host:port
> +    nbd-server-add -w colo1
> +
> +  Note:
> +  1. The export name in secondary QEMU command line is the secondary
> +     disk's id.
> +  2. The export name for the same disk must be the same
> +  3. The qmp command nbd-server-start and nbd-server-add must be run
> +     before running the qmp command migrate on primary QEMU
> +  4. Don't use nbd-server-start's other options
> +  5. Active disk, hidden disk and nbd target's length should be the
> +     same.
> +  6. It is better to put active disk and hidden disk in ramdisk.
> +  7. It is all a single argument to -drive, and you should ignore
> +     the leading whitespace.

Missing: document the steps taken during failover (that is, how do I
promote a Secondary into a new Primary, and then attach a new Secondary
to that point).  In particular, I suspect there may be differences
between whether you want to roll back to the state of the last
checkpoint (in hidden_disk) or just go with the current state of the
Secondary (in Active); either way, it probably involves doing an active
commit of the state you want into Secondary, then the formation of a new
quorum to start handing replication data off through a new NBD client
connection.
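
For example (the command names below exist in QMP, but the device name
"replication0" and the overall flow are only a sketch; some or all of
this may end up hidden behind bdrv_stop_replication()):

  On the old Secondary, once the Primary is gone:
    -> { "execute": "nbd-server-stop" }
    # which image is passed as "top" decides whether the current
    # Secondary state or the last checkpoint wins
    -> { "execute": "block-commit",
         "arguments": { "device": "replication0",
                        "top": "active_disk.qcow2" } }
    # when committing the active layer, pivot after BLOCK_JOB_READY
    -> { "execute": "block-job-complete",
         "arguments": { "device": "replication0" } }

Attaching a new Secondary afterwards would mirror the Primary setup
quoted above: run nbd-server-start/nbd-server-add on the new Secondary,
then blockdev-add an NBD client on the new Primary and insert it into a
new quorum.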
Wen Congyang Sept. 9, 2015, 8:22 a.m. UTC | #2
On 09/03/2015 04:41 AM, Eric Blake wrote:
[...]
> Missing: document the steps taken during failover (that is, how do I
> promote a Secondary into a new Primary, and then attach a new Secondary
> to that point).  In particular, I suspect there may be differences

Continuous block replication is on the TODO list, but I think it will be
easy to implement once a quorum child can be hot-added/removed.

> between whether you want to roll back to the state of the last
> checkpoint (in hidden_disk) or just go with the current state of the

For periodic checkpoints, the secondary VM is not running, so we just commit
hidden_disk to the secondary disk.
For COLO, the secondary VM is running and we need its state, so we just
commit the active disk to the secondary disk (hidden_disk is also committed).

In which case do we need to drop the secondary disk's current state and
commit the hidden disk?
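
In block-commit terms (assuming that is the mechanism used at failover),
the two cases would presumably differ only in which image is passed as
"top": the COLO case is the active commit sketched earlier in the thread,
while the periodic-checkpoint case commits only hidden_disk (the device
name "replication0" is again a placeholder):

  # roll back to the last checkpoint: the buffered original content in
  # hidden_disk overwrites the primary's speculative writes
  -> { "execute": "block-commit",
       "arguments": { "device": "replication0",
                      "top": "hidden_disk.qcow2" } }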

Thanks
Wen Congyang

> Secondary (in Active); either way, it probably involves doing an active
> commit of the state you want into Secondary, then the formation of a new
> quorum to start handing replication data off through a new NBD client
> connection.
>

Patch

diff --git a/docs/block-replication.txt b/docs/block-replication.txt
new file mode 100644
index 0000000..b3cee8f
--- /dev/null
+++ b/docs/block-replication.txt
@@ -0,0 +1,183 @@ 
+Block replication
+----------------------------------------
+Copyright Fujitsu, Corp. 2015
+Copyright (c) 2015 Intel Corporation
+Copyright (c) 2015 HUAWEI TECHNOLOGIES CO., LTD.
+
+This work is licensed under the terms of the GNU GPL, version 2 or later.
+See the COPYING file in the top-level directory.
+
+Block replication is used for continuous checkpoints. It is designed
+for COLO (COarse-grain LOck-stepping) where the Secondary VM is running.
+It can also be applied to the FT/HA (Fault-tolerance/High Assurance) scenario,
+where the Secondary VM is not running.
+
+This document gives an overview of block replication's design.
+
+== Background ==
+High availability solutions such as micro checkpoint and COLO will do
+consecutive checkpoints. The VM state of Primary VM and Secondary VM is
+identical right after a VM checkpoint, but becomes different as the VM
+executes until the next checkpoint. To support checkpointing of disk contents,
+the modified disk contents in the Secondary VM must be buffered, and are
+only dropped at the next checkpoint. To reduce the amount of data sent over
+the network at checkpoint time, the disk modification operations of the
+Primary disk are asynchronously forwarded to the Secondary node.
+
+== Workflow ==
+The following is the image of block replication workflow:
+
+        +----------------------+            +------------------------+
+        |Primary Write Requests|            |Secondary Write Requests|
+        +----------------------+            +------------------------+
+                  |                                       |
+                  |                                      (4)
+                  |                                       V
+                  |                              /-------------\
+                  |      Copy and Forward        |             |
+                  |---------(1)----------+       | Disk Buffer |
+                  |                      |       |             |
+                  |                     (3)      \-------------/
+                  |                 speculative      ^
+                  |                write through    (2)
+                  |                      |           |
+                  V                      V           |
+           +--------------+           +----------------+
+           | Primary Disk |           | Secondary Disk |
+           +--------------+           +----------------+
+
+    1) Primary write requests will be copied and forwarded to Secondary
+       QEMU.
+    2) Before Primary write requests are written to Secondary disk, the
+       original sector content will be read from Secondary disk and
+       buffered in the Disk buffer, but it will not overwrite the existing
+       sector content(it could be from either "Secondary Write Requests" or
+       previous COW of "Primary Write Requests") in the Disk buffer.
+    3) Primary write requests will be written to Secondary disk.
+    4) Secondary write requests will be buffered in the Disk buffer and they
+       will overwrite the existing sector content in the buffer.
+
+== Architecture ==
+We are going to implement block replication from many basic
+blocks that are already in QEMU.
+
+         virtio-blk       ||
+             ^            ||                            .----------
+             |            ||                            | Secondary
+        1 Quorum          ||                            '----------
+         /      \         ||
+        /        \        ||
+   Primary    2 filter
+     disk         ^                                                             virtio-blk
+                  |                                                                  ^
+                3 NBD  ------->  3 NBD                                               |
+                client    ||     server                                          2 filter
+                          ||        ^                                                ^
+--------.                 ||        |                                                |
+Primary |                 ||  Secondary disk <--------- hidden-disk 5 <--------- active-disk 4
+--------'                 ||        |          backing        ^       backing
+                          ||        |                         |
+                          ||        |                         |
+                          ||        '-------------------------'
+                          ||           drive-backup sync=none
+
+1) The disk on the primary is represented by a block device with two
+children, providing replication between a primary disk and the host that
+runs the secondary VM. The read pattern for quorum can be extended to
+make the primary always read from the local disk instead of going through
+NBD.
+
+2) The new block filter (the name is replication) will control the block
+replication.
+
+3) The secondary disk receives writes from the primary VM through QEMU's
+embedded NBD server (speculative write-through).
+
+4) The disk on the secondary is represented by a custom block device
+(called active-disk). It should be an empty disk, and the format should
+support bdrv_make_empty() and backing file.
+
+5) The hidden-disk is created automatically. It buffers the original content
+that is modified by the primary VM. It should also be an empty disk, and
+the driver supports bdrv_make_empty() and backing file.
+
+== Failure Handling ==
+There are 6 internal errors when block replication is running:
+1. I/O error on primary disk
+2. Forwarding primary write requests failed
+3. Backup failed
+4. I/O error on secondary disk
+5. I/O error on active disk
+6. Making active disk or hidden disk empty failed
+In case 1 and 5, we just report the error to the disk layer. In case 2, 3,
+4 and 6, we just report block replication's error to FT/HA manager(which
+decides when to do a new checkpoint, when to do failover).
+There is one internal error when doing failover:
+1. Commiting the data in active disk/hidden disk to secondary disk failed
+We just report this error to FT/HA manager.
+
+== New block driver interface ==
+We add three block driver interfaces to control block replication:
+a. bdrv_start_replication()
+   Start block replication, called in migration/checkpoint thread.
+   We must call bdrv_start_replication() in secondary QEMU before
+   calling bdrv_start_replication() in primary QEMU. The caller
+   must hold the I/O mutex lock if it is in migration/checkpoint
+   thread.
+b. bdrv_do_checkpoint()
+   This interface is called after all VM state is transferred to
+   Secondary QEMU. The Disk buffer will be dropped in this interface.
+   The caller must hold the I/O mutex lock if it is in migration/checkpoint
+   thread.
+c. bdrv_stop_replication()
+   It is called on failover. We will flush the Disk buffer into
+   Secondary Disk and stop block replication. The VM should be stopped
+   before calling it when this API is used for anything other than
+   failover (for example, to shut down the guest). The caller must hold
+   the I/O mutex lock if it is in migration/checkpoint thread.
+
+== Usage ==
+Primary:
+  -drive if=xxx,driver=quorum,read-pattern=fifo,id=colo1,vote-threshold=1\
+         children.0.file.filename=1.raw,\
+         children.0.driver=raw,\
+
+  Run qmp command in primary qemu:
+    child_add disk1 child.driver=replication,child.mode=primary,\
+              child.file.host=xxx,child.file.port=xxx,\
+              child.file.driver=nbd,child.ignore-errors=on
+  Note:
+  1. There should be only one NBD Client for each primary disk.
+  2. host is the secondary physical machine's hostname or IP
+  3. Each disk must have its own export name.
+  4. It is all a single argument to -drive and child_add, and you should
+     ignore the leading whitespace.
+  5. The qmp command line must be run after running qmp command line in
+     secondary qemu.
+
+Secondary:
+  -drive if=none,driver=raw,file=1.raw,id=colo1 \
+  -drive if=xxx,driver=replication,mode=secondary,\
+         file.file.filename=active_disk.qcow2,\
+         file.driver=qcow2,\
+         file.backing.file.filename=hidden_disk.qcow2,\
+         file.backing.driver=qcow2,\
+         file.backing.allow-write-backing-file=on,\
+         file.backing.backing.backing_reference=colo1\
+
+  Then run qmp command in secondary qemu:
+    nbd-server-start host:port
+    nbd-server-add -w colo1
+
+  Note:
+  1. The export name in secondary QEMU command line is the secondary
+     disk's id.
+  2. The export name for the same disk must be the same
+  3. The qmp command nbd-server-start and nbd-server-add must be run
+     before running the qmp command migrate on primary QEMU
+  4. Don't use nbd-server-start's other options
+  5. Active disk, hidden disk and nbd target's length should be the
+     same.
+  6. It is better to put active disk and hidden disk in ramdisk.
+  7. It is all a single argument to -drive, and you should ignore
+     the leading whitespace.