From: Eric Blake <eblake@redhat.com>
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, famz@redhat.com,
	qemu-block@nongnu.org, Max Reitz, jsnow@redhat.com
Date: Mon, 25 Sep 2017 09:55:26 -0500
Message-Id: <20170925145526.32690-21-eblake@redhat.com>
In-Reply-To: <20170925145526.32690-1-eblake@redhat.com>
References: <20170925145526.32690-1-eblake@redhat.com>
Subject: [Qemu-devel] [PATCH v10 20/20] dirty-bitmap: Convert internal hbitmap size/granularity

Now that all callers are using byte-based interfaces, there's no reason
for our internal hbitmap to remain with sector-based granularity.  It
also simplifies our internal scaling, since we already know that hbitmap
widens requests out to granularity boundaries.
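As an illustration (a hypothetical sketch, not part of this patch): after
the conversion, a raw hbitmap user works purely in bytes and relies on
hbitmap itself to widen requests to granularity boundaries.  The snippet
assumes the in-tree "qemu/hbitmap.h" and "qemu/host-utils.h" headers, so
it only compiles inside the QEMU source tree.

/* Hypothetical sketch, not part of this patch: exercises the byte-based
 * hbitmap usage that this conversion enables.  Requires the QEMU tree. */
#include "qemu/osdep.h"
#include "qemu/hbitmap.h"
#include "qemu/host-utils.h"

void byte_based_bitmap_demo(void)
{
    uint64_t size = 64 * 1024 * 1024;   /* track a 64 MiB image, in bytes */
    uint32_t granularity = 65536;       /* 64 KiB chunks (power of two) */
    HBitmap *hb = hbitmap_alloc(size, ctz32(granularity));

    /* Dirty 512 bytes at offset 4096; hbitmap widens the request to the
     * containing 64 KiB chunk, so the caller does no sector rounding. */
    hbitmap_set(hb, 4096, 512);

    g_assert(hbitmap_get(hb, 0));               /* same chunk as offset 4096 */
    g_assert(!hbitmap_get(hb, granularity));    /* next chunk is still clean */
    g_assert(hbitmap_count(hb) == granularity); /* count is reported in bytes */

    hbitmap_free(hb);
}

Note how ctz32(granularity) is now passed straight through to
hbitmap_alloc(), where the old code had to subtract BDRV_SECTOR_BITS and
round the bitmap size up to sectors.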
Signed-off-by: Eric Blake
Reviewed-by: John Snow
Reviewed-by: Kevin Wolf
Reviewed-by: Fam Zheng

---
v9: no change
v8: rebase to earlier truncate changes (R-b kept)
v7: rebase to dirty_iter_next cleanup (no semantic change, R-b kept)
v6: no change
v5: fix bdrv_dirty_bitmap_truncate [John]
v4: rebase to earlier changes, include serialization, R-b dropped
v3: no change
v2: no change
---
 block/dirty-bitmap.c | 62 +++++++++++++++-------------------------------------
 1 file changed, 18 insertions(+), 44 deletions(-)

diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index 58a3f330a9..bd04e991b1 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -38,7 +38,7 @@
  */
 struct BdrvDirtyBitmap {
     QemuMutex *mutex;
-    HBitmap *bitmap;            /* Dirty sector bitmap implementation */
+    HBitmap *bitmap;            /* Dirty bitmap implementation */
     HBitmap *meta;              /* Meta dirty bitmap */
     BdrvDirtyBitmap *successor; /* Anonymous child; implies frozen status */
     char *name;                 /* Optional non-empty unique ID */
@@ -130,12 +130,7 @@ BdrvDirtyBitmap *bdrv_create_dirty_bitmap(BlockDriverState *bs,
     }
     bitmap = g_new0(BdrvDirtyBitmap, 1);
     bitmap->mutex = &bs->dirty_bitmap_mutex;
-    /*
-     * TODO - let hbitmap track full granularity. For now, it is tracking
-     * only sector granularity, as a shortcut for our iterators.
-     */
-    bitmap->bitmap = hbitmap_alloc(DIV_ROUND_UP(bitmap_size, BDRV_SECTOR_SIZE),
-                                   ctz32(granularity) - BDRV_SECTOR_BITS);
+    bitmap->bitmap = hbitmap_alloc(bitmap_size, ctz32(granularity));
     bitmap->size = bitmap_size;
     bitmap->name = g_strdup(name);
     bitmap->disabled = false;
@@ -312,7 +307,7 @@ void bdrv_dirty_bitmap_truncate(BlockDriverState *bs, int64_t bytes)
     QLIST_FOREACH(bitmap, &bs->dirty_bitmaps, list) {
         assert(!bdrv_dirty_bitmap_frozen(bitmap));
         assert(!bitmap->active_iterators);
-        hbitmap_truncate(bitmap->bitmap, DIV_ROUND_UP(bytes, BDRV_SECTOR_SIZE));
+        hbitmap_truncate(bitmap->bitmap, bytes);
         bitmap->size = bytes;
     }
     bdrv_dirty_bitmaps_unlock(bs);
@@ -442,7 +437,7 @@ bool bdrv_get_dirty_locked(BlockDriverState *bs, BdrvDirtyBitmap *bitmap,
                            int64_t offset)
 {
     if (bitmap) {
-        return hbitmap_get(bitmap->bitmap, offset >> BDRV_SECTOR_BITS);
+        return hbitmap_get(bitmap->bitmap, offset);
     } else {
         return false;
     }
 }
@@ -470,7 +465,7 @@ uint32_t bdrv_get_default_bitmap_granularity(BlockDriverState *bs)
 
 uint32_t bdrv_dirty_bitmap_granularity(const BdrvDirtyBitmap *bitmap)
 {
-    return BDRV_SECTOR_SIZE << hbitmap_granularity(bitmap->bitmap);
+    return 1U << hbitmap_granularity(bitmap->bitmap);
 }
 
 BdrvDirtyBitmapIter *bdrv_dirty_iter_new(BdrvDirtyBitmap *bitmap)
@@ -503,20 +498,16 @@ void bdrv_dirty_iter_free(BdrvDirtyBitmapIter *iter)
 
 int64_t bdrv_dirty_iter_next(BdrvDirtyBitmapIter *iter)
 {
-    int64_t ret = hbitmap_iter_next(&iter->hbi);
-    return ret < 0 ? -1 : ret * BDRV_SECTOR_SIZE;
+    return hbitmap_iter_next(&iter->hbi);
 }
 
 /* Called within bdrv_dirty_bitmap_lock..unlock */
 void bdrv_set_dirty_bitmap_locked(BdrvDirtyBitmap *bitmap,
                                   int64_t offset, int64_t bytes)
 {
-    int64_t end_sector = DIV_ROUND_UP(offset + bytes, BDRV_SECTOR_SIZE);
-
     assert(bdrv_dirty_bitmap_enabled(bitmap));
     assert(!bdrv_dirty_bitmap_readonly(bitmap));
-    hbitmap_set(bitmap->bitmap, offset >> BDRV_SECTOR_BITS,
-                end_sector - (offset >> BDRV_SECTOR_BITS));
+    hbitmap_set(bitmap->bitmap, offset, bytes);
 }
 
 void bdrv_set_dirty_bitmap(BdrvDirtyBitmap *bitmap,
@@ -531,12 +522,9 @@ void bdrv_set_dirty_bitmap(BdrvDirtyBitmap *bitmap,
 void bdrv_reset_dirty_bitmap_locked(BdrvDirtyBitmap *bitmap,
                                     int64_t offset, int64_t bytes)
 {
-    int64_t end_sector = DIV_ROUND_UP(offset + bytes, BDRV_SECTOR_SIZE);
-
     assert(bdrv_dirty_bitmap_enabled(bitmap));
     assert(!bdrv_dirty_bitmap_readonly(bitmap));
-    hbitmap_reset(bitmap->bitmap, offset >> BDRV_SECTOR_BITS,
-                  end_sector - (offset >> BDRV_SECTOR_BITS));
+    hbitmap_reset(bitmap->bitmap, offset, bytes);
 }
 
 void bdrv_reset_dirty_bitmap(BdrvDirtyBitmap *bitmap,
@@ -556,8 +544,7 @@ void bdrv_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap **out)
         hbitmap_reset_all(bitmap->bitmap);
     } else {
         HBitmap *backup = bitmap->bitmap;
-        bitmap->bitmap = hbitmap_alloc(DIV_ROUND_UP(bitmap->size,
-                                                    BDRV_SECTOR_SIZE),
+        bitmap->bitmap = hbitmap_alloc(bitmap->size,
                                        hbitmap_granularity(backup));
         *out = backup;
     }
@@ -576,51 +563,40 @@ void bdrv_undo_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap *in)
 uint64_t bdrv_dirty_bitmap_serialization_size(const BdrvDirtyBitmap *bitmap,
                                               uint64_t offset, uint64_t bytes)
 {
-    assert(QEMU_IS_ALIGNED(offset | bytes, BDRV_SECTOR_SIZE));
-    return hbitmap_serialization_size(bitmap->bitmap,
-                                      offset >> BDRV_SECTOR_BITS,
-                                      bytes >> BDRV_SECTOR_BITS);
+    return hbitmap_serialization_size(bitmap->bitmap, offset, bytes);
 }
 
 uint64_t bdrv_dirty_bitmap_serialization_align(const BdrvDirtyBitmap *bitmap)
 {
-    return hbitmap_serialization_align(bitmap->bitmap) * BDRV_SECTOR_SIZE;
+    return hbitmap_serialization_align(bitmap->bitmap);
 }
 
 void bdrv_dirty_bitmap_serialize_part(const BdrvDirtyBitmap *bitmap,
                                       uint8_t *buf, uint64_t offset,
                                       uint64_t bytes)
 {
-    assert(QEMU_IS_ALIGNED(offset | bytes, BDRV_SECTOR_SIZE));
-    hbitmap_serialize_part(bitmap->bitmap, buf, offset >> BDRV_SECTOR_BITS,
-                           bytes >> BDRV_SECTOR_BITS);
+    hbitmap_serialize_part(bitmap->bitmap, buf, offset, bytes);
 }
 
 void bdrv_dirty_bitmap_deserialize_part(BdrvDirtyBitmap *bitmap, uint8_t *buf,
                                         uint64_t offset, uint64_t bytes,
                                         bool finish)
 {
-    assert(QEMU_IS_ALIGNED(offset | bytes, BDRV_SECTOR_SIZE));
-    hbitmap_deserialize_part(bitmap->bitmap, buf, offset >> BDRV_SECTOR_BITS,
-                             bytes >> BDRV_SECTOR_BITS, finish);
+    hbitmap_deserialize_part(bitmap->bitmap, buf, offset, bytes, finish);
 }
 
 void bdrv_dirty_bitmap_deserialize_zeroes(BdrvDirtyBitmap *bitmap,
                                           uint64_t offset, uint64_t bytes,
                                           bool finish)
 {
-    assert(QEMU_IS_ALIGNED(offset | bytes, BDRV_SECTOR_SIZE));
-    hbitmap_deserialize_zeroes(bitmap->bitmap, offset >> BDRV_SECTOR_BITS,
-                               bytes >> BDRV_SECTOR_BITS, finish);
+    hbitmap_deserialize_zeroes(bitmap->bitmap, offset, bytes, finish);
 }
 
 void bdrv_dirty_bitmap_deserialize_ones(BdrvDirtyBitmap *bitmap,
                                         uint64_t offset, uint64_t bytes,
                                         bool finish)
 {
-    assert(QEMU_IS_ALIGNED(offset | bytes, BDRV_SECTOR_SIZE));
-    hbitmap_deserialize_ones(bitmap->bitmap, offset >> BDRV_SECTOR_BITS,
-                             bytes >> BDRV_SECTOR_BITS, finish);
+    hbitmap_deserialize_ones(bitmap->bitmap, offset, bytes, finish);
 }
 
 void bdrv_dirty_bitmap_deserialize_finish(BdrvDirtyBitmap *bitmap)
@@ -631,7 +607,6 @@ void bdrv_dirty_bitmap_deserialize_finish(BdrvDirtyBitmap *bitmap)
 void bdrv_set_dirty(BlockDriverState *bs, int64_t offset, int64_t bytes)
 {
     BdrvDirtyBitmap *bitmap;
-    int64_t end_sector = DIV_ROUND_UP(offset + bytes, BDRV_SECTOR_SIZE);
 
     if (QLIST_EMPTY(&bs->dirty_bitmaps)) {
         return;
@@ -643,8 +618,7 @@ void bdrv_set_dirty(BlockDriverState *bs, int64_t offset, int64_t bytes)
             continue;
         }
         assert(!bdrv_dirty_bitmap_readonly(bitmap));
-        hbitmap_set(bitmap->bitmap, offset >> BDRV_SECTOR_BITS,
-                    end_sector - (offset >> BDRV_SECTOR_BITS));
+        hbitmap_set(bitmap->bitmap, offset, bytes);
     }
     bdrv_dirty_bitmaps_unlock(bs);
 }
@@ -654,12 +628,12 @@ void bdrv_set_dirty(BlockDriverState *bs, int64_t offset, int64_t bytes)
  */
 void bdrv_set_dirty_iter(BdrvDirtyBitmapIter *iter, int64_t offset)
 {
-    hbitmap_iter_init(&iter->hbi, iter->hbi.hb, offset >> BDRV_SECTOR_BITS);
+    hbitmap_iter_init(&iter->hbi, iter->hbi.hb, offset);
 }
 
 int64_t bdrv_get_dirty_count(BdrvDirtyBitmap *bitmap)
 {
-    return hbitmap_count(bitmap->bitmap) << BDRV_SECTOR_BITS;
+    return hbitmap_count(bitmap->bitmap);
 }
 
 int64_t bdrv_get_meta_dirty_count(BdrvDirtyBitmap *bitmap)