From patchwork Thu May 8 14:34:34 2014
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi, Christian Borntraeger
Date: Thu, 8 May 2014 16:34:34 +0200
Subject: [Qemu-devel] [PATCH v3 01/25] block: use BlockDriverState AioContext
Message-Id: <1399559698-31900-2-git-send-email-stefanha@redhat.com>
In-Reply-To: <1399559698-31900-1-git-send-email-stefanha@redhat.com>
References: <1399559698-31900-1-git-send-email-stefanha@redhat.com>

Drop the assumption that we're using the main AioContext.  Convert
qemu_aio_wait() to aio_poll() and qemu_bh_new() to aio_bh_new() so the
BlockDriverState AioContext is used.

Note there is still one qemu_aio_wait() left in bdrv_create(), but we do
not have a BlockDriverState there and only main loop code invokes this
function.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block.c | 27 ++++++++++++++++++---------
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/block.c b/block.c
index b749d31..75e8fcc 100644
--- a/block.c
+++ b/block.c
@@ -2703,10 +2703,12 @@ static int bdrv_prwv_co(BlockDriverState *bs, int64_t offset,
         /* Fast-path if already in coroutine context */
         bdrv_rw_co_entry(&rwco);
     } else {
+        AioContext *aio_context = bdrv_get_aio_context(bs);
+
         co = qemu_coroutine_create(bdrv_rw_co_entry);
         qemu_coroutine_enter(co, &rwco);
         while (rwco.ret == NOT_DONE) {
-            qemu_aio_wait();
+            aio_poll(aio_context, true);
         }
     }
     return rwco.ret;
@@ -3939,10 +3941,12 @@ int64_t bdrv_get_block_status(BlockDriverState *bs, int64_t sector_num,
         /* Fast-path if already in coroutine context */
         bdrv_get_block_status_co_entry(&data);
     } else {
+        AioContext *aio_context = bdrv_get_aio_context(bs);
+
         co = qemu_coroutine_create(bdrv_get_block_status_co_entry);
         qemu_coroutine_enter(co, &data);
         while (!data.done) {
-            qemu_aio_wait();
+            aio_poll(aio_context, true);
         }
     }
     return data.ret;
@@ -4537,7 +4541,7 @@ static BlockDriverAIOCB *bdrv_aio_rw_vector(BlockDriverState *bs,
     acb->is_write = is_write;
     acb->qiov = qiov;
     acb->bounce = qemu_blockalign(bs, qiov->size);
-    acb->bh = qemu_bh_new(bdrv_aio_bh_cb, acb);
+    acb->bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_aio_bh_cb, acb);
 
     if (is_write) {
         qemu_iovec_to_buf(acb->qiov, 0, acb->bounce, qiov->size);
@@ -4576,13 +4580,14 @@ typedef struct BlockDriverAIOCBCoroutine {
 
 static void bdrv_aio_co_cancel_em(BlockDriverAIOCB *blockacb)
 {
+    AioContext *aio_context = bdrv_get_aio_context(blockacb->bs);
     BlockDriverAIOCBCoroutine *acb =
         container_of(blockacb, BlockDriverAIOCBCoroutine, common);
     bool done = false;
 
     acb->done = &done;
     while (!done) {
-        qemu_aio_wait();
+        aio_poll(aio_context, true);
     }
 }
 
@@ -4619,7 +4624,7 @@ static void coroutine_fn bdrv_co_do_rw(void *opaque)
             acb->req.nb_sectors, acb->req.qiov, acb->req.flags);
     }
 
-    acb->bh = qemu_bh_new(bdrv_co_em_bh, acb);
+    acb->bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_co_em_bh, acb);
     qemu_bh_schedule(acb->bh);
 }
 
@@ -4655,7 +4660,7 @@ static void coroutine_fn bdrv_aio_flush_co_entry(void *opaque)
     BlockDriverState *bs = acb->common.bs;
 
     acb->req.error = bdrv_co_flush(bs);
-    acb->bh = qemu_bh_new(bdrv_co_em_bh, acb);
+    acb->bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_co_em_bh, acb);
     qemu_bh_schedule(acb->bh);
 }
 
@@ -4682,7 +4687,7 @@ static void coroutine_fn bdrv_aio_discard_co_entry(void *opaque)
     BlockDriverState *bs = acb->common.bs;
 
     acb->req.error = bdrv_co_discard(bs, acb->req.sector, acb->req.nb_sectors);
-    acb->bh = qemu_bh_new(bdrv_co_em_bh, acb);
+    acb->bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_co_em_bh, acb);
     qemu_bh_schedule(acb->bh);
 }
 
@@ -4922,10 +4927,12 @@ int bdrv_flush(BlockDriverState *bs)
         /* Fast-path if already in coroutine context */
         bdrv_flush_co_entry(&rwco);
     } else {
+        AioContext *aio_context = bdrv_get_aio_context(bs);
+
         co = qemu_coroutine_create(bdrv_flush_co_entry);
         qemu_coroutine_enter(co, &rwco);
         while (rwco.ret == NOT_DONE) {
-            qemu_aio_wait();
+            aio_poll(aio_context, true);
         }
     }
 
@@ -5035,10 +5042,12 @@ int bdrv_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors)
         /* Fast-path if already in coroutine context */
         bdrv_discard_co_entry(&rwco);
     } else {
+        AioContext *aio_context = bdrv_get_aio_context(bs);
+
        co = qemu_coroutine_create(bdrv_discard_co_entry);
         qemu_coroutine_enter(co, &rwco);
         while (rwco.ret == NOT_DONE) {
-            qemu_aio_wait();
+            aio_poll(aio_context, true);
         }
     }
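
For readers following the series, the diff repeats two conversion patterns.
The sketch below is illustrative only and is not part of the patch: the
wrapper function names are invented, and the header paths are an assumption
about the in-tree layout of this era. The QEMU calls themselves
(bdrv_get_aio_context(), aio_poll(), aio_bh_new(), qemu_bh_schedule(),
qemu_coroutine_enter()) are exactly the ones used in the hunks above.

/*
 * Illustrative sketch, not part of the patch.
 * Header paths are an assumption about the QEMU 2.0-era source tree.
 */
#include "block/block_int.h"   /* BlockDriverState, bdrv_get_aio_context() */
#include "block/aio.h"         /* AioContext, aio_poll(), aio_bh_new() */
#include "block/coroutine.h"   /* Coroutine, qemu_coroutine_enter() */

/*
 * Pattern 1: synchronously wait for a coroutine-based request.
 * Before: qemu_aio_wait() only services the main loop's AioContext.
 * After:  poll the AioContext the BlockDriverState is attached to, so the
 *         wait also works when the BDS has been moved to an IOThread.
 * (wait_for_request_example and the done flag are hypothetical.)
 */
static void wait_for_request_example(BlockDriverState *bs, Coroutine *co,
                                     void *opaque, bool *done)
{
    AioContext *aio_context = bdrv_get_aio_context(bs);

    qemu_coroutine_enter(co, opaque);   /* pre-2.7 two-argument enter */
    while (!*done) {
        aio_poll(aio_context, true);    /* blocking poll on the BDS context */
    }
}

/*
 * Pattern 2: complete an AIO request from a bottom half.
 * Before: qemu_bh_new() attaches the BH to the main loop.
 * After:  aio_bh_new() attaches it to the BDS AioContext, so the completion
 *         callback runs in the context that owns the device.
 * (new_completion_bh_example is hypothetical.)
 */
static QEMUBH *new_completion_bh_example(BlockDriverState *bs,
                                         QEMUBHFunc *cb, void *opaque)
{
    QEMUBH *bh = aio_bh_new(bdrv_get_aio_context(bs), cb, opaque);

    qemu_bh_schedule(bh);
    return bh;
}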