From patchwork Fri Jun 21 15:15:37 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vladimir Sementsov-Ogievskiy
X-Patchwork-Id: 1120362
From: Vladimir Sementsov-Ogievskiy
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Date: Fri, 21 Jun 2019 18:15:37 +0300
Message-Id: <20190621151538.30384-1-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.18.0
Subject: [Qemu-devel] [PATCH] blockjob: drain all job nodes in block_job_drain
Cc: kwolf@redhat.com, vsementsov@virtuozzo.com, jsnow@redhat.com,
    mreitz@redhat.com

Instead of draining additional nodes in each job's code, let's do it in
the common block_job_drain by draining all of the job's children. This
is also a first step towards finally getting rid of blockjob->blk.

Signed-off-by: Vladimir Sementsov-Ogievskiy
---

Hi all!

As a follow-up to the recently merged "block: drop bs->job", I'm now
trying to drop the BlockJob.blk pointer: jobs really work with several
nodes, so there is no reason to keep a special blk for one of the
children, and no reason to handle the nodes differently in, for
example, the backup code. As a first step I need to sort out
block_job_drain, and here is my suggestion for it.

 block/backup.c | 18 +-----------------
 block/mirror.c | 26 +++-----------------------
 blockjob.c     |  7 ++++++-
 3 files changed, 10 insertions(+), 41 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 715e1d3be8..7930004bbd 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -320,21 +320,6 @@ void backup_do_checkpoint(BlockJob *job, Error **errp)
     hbitmap_set(backup_job->copy_bitmap, 0, backup_job->len);
 }
 
-static void backup_drain(BlockJob *job)
-{
-    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
-
-    /* Need to keep a reference in case blk_drain triggers execution
-     * of backup_complete...
-     */
-    if (s->target) {
-        BlockBackend *target = s->target;
-        blk_ref(target);
-        blk_drain(target);
-        blk_unref(target);
-    }
-}
-
 static BlockErrorAction backup_error_action(BackupBlockJob *job,
                                             bool read, int error)
 {
@@ -493,8 +478,7 @@ static const BlockJobDriver backup_job_driver = {
         .commit = backup_commit,
         .abort = backup_abort,
         .clean = backup_clean,
-    },
-    .drain = backup_drain,
+    }
 };
 
 static int64_t backup_calculate_cluster_size(BlockDriverState *target,
diff --git a/block/mirror.c b/block/mirror.c
index d17be4cdbc..6bea99558f 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -644,14 +644,11 @@ static int mirror_exit_common(Job *job)
     bdrv_ref(mirror_top_bs);
     bdrv_ref(target_bs);
 
-    /* Remove target parent that still uses BLK_PERM_WRITE/RESIZE before
+    /*
+     * Remove target parent that still uses BLK_PERM_WRITE/RESIZE before
      * inserting target_bs at s->to_replace, where we might not be able to get
      * these permissions.
-     *
-     * Note that blk_unref() alone doesn't necessarily drop permissions because
-     * we might be running nested inside mirror_drain(), which takes an extra
-     * reference, so use an explicit blk_set_perm() first. */
-    blk_set_perm(s->target, 0, BLK_PERM_ALL, &error_abort);
+     */
     blk_unref(s->target);
     s->target = NULL;
 
@@ -1143,21 +1140,6 @@ static bool mirror_drained_poll(BlockJob *job)
     return !!s->in_flight;
 }
 
-static void mirror_drain(BlockJob *job)
-{
-    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
-
-    /* Need to keep a reference in case blk_drain triggers execution
-     * of mirror_complete...
-     */
-    if (s->target) {
-        BlockBackend *target = s->target;
-        blk_ref(target);
-        blk_drain(target);
-        blk_unref(target);
-    }
-}
-
 static const BlockJobDriver mirror_job_driver = {
     .job_driver = {
         .instance_size = sizeof(MirrorBlockJob),
@@ -1172,7 +1154,6 @@ static const BlockJobDriver mirror_job_driver = {
         .complete = mirror_complete,
     },
     .drained_poll = mirror_drained_poll,
-    .drain = mirror_drain,
 };
 
 static const BlockJobDriver commit_active_job_driver = {
@@ -1189,7 +1170,6 @@ static const BlockJobDriver commit_active_job_driver = {
         .complete = mirror_complete,
     },
     .drained_poll = mirror_drained_poll,
-    .drain = mirror_drain,
 };
 
 static void coroutine_fn
diff --git a/blockjob.c b/blockjob.c
index 458ae76f51..0cabdc867d 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -94,8 +94,13 @@ void block_job_drain(Job *job)
     BlockJob *bjob = container_of(job, BlockJob, job);
     const JobDriver *drv = job->driver;
     BlockJobDriver *bjdrv = container_of(drv, BlockJobDriver, job_driver);
+    GSList *l;
+
+    for (l = bjob->nodes; l; l = l->next) {
+        BdrvChild *c = l->data;
+        bdrv_drain(c->bs);
+    }
 
-    blk_drain(bjob->blk);
     if (bjdrv->drain) {
         bjdrv->drain(bjob);
     }