From patchwork Mon May  6 17:18:04 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kevin Wolf <kwolf@redhat.com>
X-Patchwork-Id: 1095949
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com
Date: Mon,  6 May 2019 19:18:04 +0200
Message-Id: <20190506171805.14236-10-kwolf@redhat.com>
In-Reply-To: <20190506171805.14236-1-kwolf@redhat.com>
References: <20190506171805.14236-1-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 09/10] blockjob: Remove AioContext notifiers

The notifiers made sure that the job is quiesced and that the
job->aio_context field is updated. The first part is unnecessary today
since bdrv_set_aio_context_ignore() drains the block node, and this
means draining the block job, too. The second part can be done in the
.set_aio_ctx callback of the block job BdrvChildRole.

The notifiers were problematic because they polled the AioContext while
the graph was in an inconsistent state, with some nodes already in the
new context but others still in the old context.
So removing the notifiers not only simplifies the code, but actually
makes the code safer.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 blockjob.c | 43 ++-----------------------------------------
 1 file changed, 2 insertions(+), 41 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 24e6093a9c..9ca942ba01 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -81,10 +81,6 @@ BlockJob *block_job_get(const char *id)
     }
 }
 
-static void block_job_attached_aio_context(AioContext *new_context,
-                                           void *opaque);
-static void block_job_detach_aio_context(void *opaque);
-
 void block_job_free(Job *job)
 {
     BlockJob *bjob = container_of(job, BlockJob, job);
@@ -92,28 +88,10 @@ void block_job_free(Job *job)
 
     bs->job = NULL;
     block_job_remove_all_bdrv(bjob);
-    blk_remove_aio_context_notifier(bjob->blk,
-                                    block_job_attached_aio_context,
-                                    block_job_detach_aio_context, bjob);
     blk_unref(bjob->blk);
     error_free(bjob->blocker);
 }
 
-static void block_job_attached_aio_context(AioContext *new_context,
-                                           void *opaque)
-{
-    BlockJob *job = opaque;
-    const JobDriver *drv = job->job.driver;
-    BlockJobDriver *bjdrv = container_of(drv, BlockJobDriver, job_driver);
-
-    job->job.aio_context = new_context;
-    if (bjdrv->attached_aio_context) {
-        bjdrv->attached_aio_context(job, new_context);
-    }
-
-    job_resume(&job->job);
-}
-
 void block_job_drain(Job *job)
 {
     BlockJob *bjob = container_of(job, BlockJob, job);
@@ -126,23 +104,6 @@ void block_job_drain(Job *job)
     }
 }
 
-static void block_job_detach_aio_context(void *opaque)
-{
-    BlockJob *job = opaque;
-
-    /* In case the job terminates during aio_poll()... */
-    job_ref(&job->job);
-
-    job_pause(&job->job);
-
-    while (!job->job.paused && !job_is_completed(&job->job)) {
-        job_drain(&job->job);
-    }
-
-    job->job.aio_context = NULL;
-    job_unref(&job->job);
-}
-
 static char *child_job_get_parent_desc(BdrvChild *c)
 {
     BlockJob *job = c->opaque;
@@ -212,6 +173,8 @@ static void child_job_set_aio_ctx(BdrvChild *c, AioContext *ctx,
         *ignore = g_slist_prepend(*ignore, sibling);
         bdrv_set_aio_context_ignore(sibling->bs, ctx, ignore);
     }
+
+    job->job.aio_context = ctx;
 }
 
 static const BdrvChildRole child_job = {
@@ -471,8 +434,6 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
 
     bdrv_op_unblock(bs, BLOCK_OP_TYPE_DATAPLANE, job->blocker);
 
-    blk_add_aio_context_notifier(blk, block_job_attached_aio_context,
-                                 block_job_detach_aio_context, job);
     blk_set_allow_aio_context_change(blk, true);
 
     /* Only set speed when necessary to avoid NotSupported error */
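
[Editor's note, not part of the patch] To illustrate the pattern the commit
message describes, here is a minimal, self-contained sketch of how context
propagation through a parent-role callback replaces an attach/detach notifier
pair. All types and names (Job, ChildRole, set_aio_context_model, etc.) are
hypothetical simplifications, not the QEMU API; the real patch wires this up
via child_job_set_aio_ctx in the child_job BdrvChildRole shown above.

```c
/* Simplified model: the caller drains first, then the parent's
 * .set_aio_ctx callback updates the job's AioContext pointer, so
 * nothing needs to poll the context while the graph is switching. */
#include <stdio.h>

typedef struct AioContext { const char *name; } AioContext;

typedef struct Job {
    AioContext *aio_context;
    int pause_count;              /* >0 while the node/job is drained */
} Job;

typedef struct ChildRole {
    void (*set_aio_ctx)(Job *job, AioContext *new_ctx);
} ChildRole;

static void child_job_set_aio_ctx_model(Job *job, AioContext *new_ctx)
{
    /* The subtree is already drained here, so this is a plain update */
    job->aio_context = new_ctx;
}

static const ChildRole child_job_model = {
    .set_aio_ctx = child_job_set_aio_ctx_model,
};

/* Models the drain -> switch -> undrain sequence of the context change */
static void set_aio_context_model(Job *job, const ChildRole *role,
                                  AioContext *new_ctx)
{
    job->pause_count++;                  /* drain quiesces the job */
    role->set_aio_ctx(job, new_ctx);     /* parent callback updates it */
    job->pause_count--;                  /* job resumes after the switch */
}

int main(void)
{
    AioContext old_ctx = { "iothread0" }, new_ctx = { "iothread1" };
    Job job = { &old_ctx, 0 };

    set_aio_context_model(&job, &child_job_model, &new_ctx);
    printf("job now runs in %s\n", job.aio_context->name);
    return 0;
}
```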