From patchwork Wed Sep 13 18:19:07 2017
X-Patchwork-Submitter: Max Reitz <mreitz@redhat.com>
X-Patchwork-Id: 813567
From: Max Reitz <mreitz@redhat.com>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Fam Zheng, qemu-devel@nongnu.org, Max Reitz,
 Stefan Hajnoczi, John Snow
Date: Wed, 13 Sep 2017 20:19:07 +0200
Message-Id: <20170913181910.29688-16-mreitz@redhat.com>
In-Reply-To: <20170913181910.29688-1-mreitz@redhat.com>
References: <20170913181910.29688-1-mreitz@redhat.com>
Subject: [Qemu-devel] [PATCH 15/18] block/mirror: Add active mirroring

This patch implements active synchronous mirroring.  In active mode, the
passive mechanism will still be in place and is used to copy all
initially dirty clusters off the source disk; but every write request
will write data both to the source and the target disk, so the source
cannot be dirtied faster than data is mirrored to the target.  Also,
once the block job has converged (BLOCK_JOB_READY sent), source and
target are guaranteed to stay in sync (unless an error occurs).

Optionally, dirty data can be copied to the target disk on read
operations, too.

Active mode is completely optional and currently disabled at runtime.
A later patch will add a way for users to enable it.

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
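(Note for reviewers, not part of the commit message: below is a minimal,
self-contained sketch of the write-blocking idea described above.  It uses
plain POSIX pwrite() rather than QEMU's block layer, and the helper names
full_pwrite()/mirrored_pwrite() are made up for illustration only.  The
point it shows is that in active mode a guest write is acknowledged only
after the same data has reached both the source and the target, so the
target can never fall behind new writes.)

#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

/* Write all of buf[0..bytes) to fd at offset, retrying short writes. */
int full_pwrite(int fd, const char *buf, size_t bytes, off_t offset)
{
    while (bytes > 0) {
        ssize_t n = pwrite(fd, buf, bytes, offset);
        if (n < 0) {
            if (errno == EINTR) {
                continue;
            }
            return -errno;
        }
        buf += n;
        offset += n;
        bytes -= (size_t)n;
    }
    return 0;
}

/* Active ("write-blocking") handling of one guest write: complete the
 * request only after the data has been written to both images. */
int mirrored_pwrite(int source_fd, int target_fd,
                    const char *buf, size_t bytes, off_t offset)
{
    int ret = full_pwrite(source_fd, buf, bytes, offset);
    if (ret < 0) {
        return ret;     /* the guest write itself failed */
    }
    /* Synchronous copy to the target; in the patch proper this role is
     * played by do_sync_target_write() called from
     * bdrv_mirror_top_pwritev(). */
    return full_pwrite(target_fd, buf, bytes, offset);
}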
 qapi/block-core.json |  23 +++
 block/mirror.c       | 187 +++++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 205 insertions(+), 5 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index bb11815608..e072cfa67c 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -938,6 +938,29 @@
     'data': ['top', 'full', 'none', 'incremental'] }
 
 ##
+# @MirrorCopyMode:
+#
+# An enumeration whose values tell the mirror block job when to
+# trigger writes to the target.
+#
+# @passive: copy data in background only.
+#
+# @active-write: when data is written to the source, write it
+#                (synchronously) to the target as well.  In addition,
+#                data is copied in background just like in @passive
+#                mode.
+#
+# @active-read-write: write data to the target (synchronously) both
+#                     when it is read from and written to the source.
+#                     In addition, data is copied in background just
+#                     like in @passive mode.
+#
+# Since: 2.11
+##
+{ 'enum': 'MirrorCopyMode',
+  'data': ['passive', 'active-write', 'active-read-write'] }
+
+##
 # @BlockJobType:
 #
 # Type of a block job.
diff --git a/block/mirror.c b/block/mirror.c
index 8fea619a68..c429aa77bb 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -54,8 +54,12 @@ typedef struct MirrorBlockJob {
     Error *replace_blocker;
     bool is_none_mode;
     BlockMirrorBackingMode backing_mode;
+    MirrorCopyMode copy_mode;
    BlockdevOnError on_source_error, on_target_error;
     bool synced;
+    /* Set when the target is synced (dirty bitmap is clean, nothing
+     * in flight) and the job is running in active mode */
+    bool actively_synced;
     bool should_complete;
     int64_t granularity;
     size_t buf_size;
@@ -77,6 +81,7 @@ typedef struct MirrorBlockJob {
     int target_cluster_size;
     int max_iov;
     bool initial_zeroing_ongoing;
+    int in_active_write_counter;
 
     /* Signals that we are no longer accessing source and target and the mirror
      * BDS should thus relinquish all permissions */
@@ -112,6 +117,7 @@ static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
                                             int error)
 {
     s->synced = false;
+    s->actively_synced = false;
     if (read) {
         return block_job_error_action(&s->common, s->on_source_error,
                                       true, error);
@@ -283,13 +289,12 @@ static int mirror_cow_align(MirrorBlockJob *s, int64_t *offset,
     return ret;
 }
 
-static inline void mirror_wait_for_free_in_flight_slot(MirrorBlockJob *s)
+static inline void mirror_wait_for_any_operation(MirrorBlockJob *s, bool active)
 {
     MirrorOp *op;
 
     QTAILQ_FOREACH(op, &s->ops_in_flight, next) {
-        if (!op->is_active_write) {
-            /* Only non-active operations use up in-flight slots */
+        if (op->is_active_write == active) {
             qemu_co_queue_wait(&op->waiting_requests, NULL);
             return;
         }
@@ -297,6 +302,12 @@ static inline void mirror_wait_for_free_in_flight_slot(MirrorBlockJob *s)
     abort();
 }
 
+static inline void mirror_wait_for_free_in_flight_slot(MirrorBlockJob *s)
+{
+    /* Only non-active operations use up in-flight slots */
+    mirror_wait_for_any_operation(s, false);
+}
+
 /* Submit async read while handling COW.
  * Returns: The number of bytes copied after and including offset,
  *          excluding any bytes copied prior to offset due to alignment.
@@ -861,6 +872,7 @@ static void coroutine_fn mirror_run(void *opaque)
         /* Report BLOCK_JOB_READY and wait for complete. */
         block_job_event_ready(&s->common);
         s->synced = true;
+        s->actively_synced = true;
         while (!block_job_is_cancelled(&s->common) && !s->should_complete) {
             block_job_yield(&s->common);
         }
@@ -912,6 +924,12 @@ static void coroutine_fn mirror_run(void *opaque)
         int64_t cnt, delta;
         bool should_complete;
 
+        /* Do not start passive operations while there are active
+         * writes in progress */
+        while (s->in_active_write_counter) {
+            mirror_wait_for_any_operation(s, true);
+        }
+
         if (s->ret < 0) {
             ret = s->ret;
             goto immediate_exit;
         }
@@ -961,6 +979,9 @@ static void coroutine_fn mirror_run(void *opaque)
              */
             block_job_event_ready(&s->common);
             s->synced = true;
+            if (s->copy_mode != MIRROR_COPY_MODE_PASSIVE) {
+                s->actively_synced = true;
+            }
         }
 
         should_complete = s->should_complete ||
@@ -1195,16 +1216,171 @@ static BdrvChildRole source_child_role = {
     .drained_end        = source_child_cb_drained_end,
 };
 
+static void do_sync_target_write(MirrorBlockJob *job, uint64_t offset,
+                                 uint64_t bytes, QEMUIOVector *qiov, int flags)
+{
+    BdrvDirtyBitmapIter *iter;
+    QEMUIOVector target_qiov;
+    uint64_t dirty_offset;
+    int dirty_bytes;
+
+    qemu_iovec_init(&target_qiov, qiov->niov);
+
+    iter = bdrv_dirty_iter_new(job->dirty_bitmap, offset >> BDRV_SECTOR_BITS);
+
+    while (true) {
+        bool valid_area;
+        int ret;
+
+        bdrv_dirty_bitmap_lock(job->dirty_bitmap);
+        valid_area = bdrv_dirty_iter_next_area(iter, offset + bytes,
+                                               &dirty_offset, &dirty_bytes);
+        bdrv_dirty_bitmap_unlock(job->dirty_bitmap);
+        if (!valid_area) {
+            break;
+        }
+
+        job->common.len += dirty_bytes;
+
+        assert(dirty_offset - offset <= SIZE_MAX);
+        if (qiov) {
+            qemu_iovec_reset(&target_qiov);
+            qemu_iovec_concat(&target_qiov, qiov,
+                              dirty_offset - offset, dirty_bytes);
+        }
+
+        ret = blk_co_pwritev(job->target, dirty_offset, dirty_bytes,
+                             qiov ? &target_qiov : NULL, flags);
+        if (ret >= 0) {
+            assert(dirty_offset % BDRV_SECTOR_SIZE == 0);
+            assert(dirty_bytes % BDRV_SECTOR_SIZE == 0);
+            bdrv_reset_dirty_bitmap(job->dirty_bitmap,
+                                    dirty_offset >> BDRV_SECTOR_BITS,
+                                    dirty_bytes >> BDRV_SECTOR_BITS);
+
+            job->common.offset += dirty_bytes;
+        } else {
+            BlockErrorAction action;
+
+            action = mirror_error_action(job, false, -ret);
+            if (action == BLOCK_ERROR_ACTION_REPORT) {
+                if (!job->ret) {
+                    job->ret = ret;
+                }
+                break;
+            }
+        }
+    }
+
+    bdrv_dirty_iter_free(iter);
+    qemu_iovec_destroy(&target_qiov);
+}
+
+static MirrorOp *coroutine_fn active_write_prepare(MirrorBlockJob *s,
+                                                   uint64_t offset,
+                                                   uint64_t bytes)
+{
+    MirrorOp *op;
+    uint64_t start_chunk = offset / s->granularity;
+    uint64_t end_chunk = DIV_ROUND_UP(offset + bytes, s->granularity);
+
+    op = g_new(MirrorOp, 1);
+    *op = (MirrorOp){
+        .s               = s,
+        .offset          = offset,
+        .bytes           = bytes,
+        .is_active_write = true,
+    };
+    qemu_co_queue_init(&op->waiting_requests);
+    QTAILQ_INSERT_TAIL(&s->ops_in_flight, op, next);
+
+    s->in_active_write_counter++;
+
+    mirror_wait_on_conflicts(op, s, offset, bytes);
+
+    bitmap_set(s->in_flight_bitmap, start_chunk, end_chunk - start_chunk);
+
+    return op;
+}
+
+static void coroutine_fn active_write_settle(MirrorOp *op)
+{
+    uint64_t start_chunk = op->offset / op->s->granularity;
+    uint64_t end_chunk = DIV_ROUND_UP(op->offset + op->bytes,
+                                      op->s->granularity);
+
+    if (!--op->s->in_active_write_counter && op->s->actively_synced) {
+        /* Assert that we are back in sync once all active write
+         * operations are settled */
+        assert(!bdrv_get_dirty_count(op->s->dirty_bitmap));
+    }
+    bitmap_clear(op->s->in_flight_bitmap, start_chunk, end_chunk - start_chunk);
+    QTAILQ_REMOVE(&op->s->ops_in_flight, op, next);
+    qemu_co_queue_restart_all(&op->waiting_requests);
+    g_free(op);
+}
+
 static int coroutine_fn bdrv_mirror_top_preadv(BlockDriverState *bs,
     uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags)
 {
-    return bdrv_co_preadv(bs->file, offset, bytes, qiov, flags);
+    MirrorOp *op = NULL;
+    MirrorBDSOpaque *s = bs->opaque;
+    int ret = 0;
+    bool copy_to_target;
+
+    copy_to_target = s->job->ret >= 0 &&
+                     s->job->copy_mode == MIRROR_COPY_MODE_ACTIVE_READ_WRITE;
+
+    if (copy_to_target) {
+        op = active_write_prepare(s->job, offset, bytes);
+    }
+
+    ret = bdrv_co_preadv(bs->file, offset, bytes, qiov, flags);
+    if (ret < 0) {
+        goto out;
+    }
+
+    if (copy_to_target) {
+        do_sync_target_write(s->job, offset, bytes, qiov, 0);
+    }
+
+out:
+    if (copy_to_target) {
+        active_write_settle(op);
+    }
+    return ret;
 }
 
 static int coroutine_fn bdrv_mirror_top_pwritev(BlockDriverState *bs,
     uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags)
 {
-    return bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
+    MirrorOp *op = NULL;
+    MirrorBDSOpaque *s = bs->opaque;
+    int ret = 0;
+    bool copy_to_target;
+
+    copy_to_target = s->job->ret >= 0 &&
+                     (s->job->copy_mode == MIRROR_COPY_MODE_ACTIVE_WRITE ||
+                      s->job->copy_mode == MIRROR_COPY_MODE_ACTIVE_READ_WRITE);
+
+    if (copy_to_target) {
+        op = active_write_prepare(s->job, offset, bytes);
+    }
+
+    ret = bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
+    if (ret < 0) {
+        goto out;
+    }
+
+    if (copy_to_target) {
+        do_sync_target_write(s->job, offset, bytes, qiov, flags);
+    }
+
+out:
+    if (copy_to_target) {
+        active_write_settle(op);
+    }
+    return ret;
 }
 
 static int coroutine_fn bdrv_mirror_top_flush(BlockDriverState *bs)
@@ -1398,6 +1574,7 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,
     s->on_target_error = on_target_error;
     s->is_none_mode = is_none_mode;
     s->backing_mode = backing_mode;
+    s->copy_mode = MIRROR_COPY_MODE_PASSIVE;
     s->base = base;
     s->granularity = granularity;
     s->buf_size = ROUND_UP(buf_size, granularity);
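
(Not part of the patch: for context, once the copy mode is made
configurable by the later patch mentioned in the commit message, starting
an actively mirrored job could look roughly like the QMP exchange below.
The "copy-mode" argument name is an assumption for illustration only;
this patch keeps the job hard-coded to passive mode and does not expose
any new QMP parameter yet.)

-> { "execute": "drive-mirror",
     "arguments": { "device": "drive0",
                    "target": "/tmp/target.qcow2",
                    "sync": "full",
                    "copy-mode": "active-write" } }
<- { "return": {} }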