From patchwork Mon Feb 4 12:12:51 2013
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 217919
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Anthony Liguori, Jan Kiszka, Fabien Chouteau, Blue Swirl,
    Stefan Hajnoczi, Paolo Bonzini, Amos Kong, Laszlo Ersek
Date: Mon, 4 Feb 2013 13:12:51 +0100
Message-Id: <1359979973-31338-9-git-send-email-stefanha@redhat.com>
In-Reply-To: <1359979973-31338-1-git-send-email-stefanha@redhat.com>
References: <1359979973-31338-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH v3 08/10] aio: extract aio_dispatch() from aio_poll()

We will need to loop over AioHandlers calling ->io_read()/->io_write()
when aio_poll() is converted from select(2) to g_poll(2).

Luckily the code for this already exists; extract it into the new
aio_dispatch() function.

Two small changes:

 * aio_poll() checks !node->deleted to avoid calling handlers that have
   been deleted.

 * Fix typo 'then' -> 'them' in aio_poll() comment.
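For illustration only, not part of this patch: a minimal sketch of how
the extracted aio_dispatch() could slot into a g_poll(2)-based
aio_poll().  The fill_pollfds() and copy_revents_back() helpers are
hypothetical stand-ins for the pollfd setup/teardown; the actual
conversion lands later in this series.

    /* Hypothetical sketch, assuming aio-posix.c context.  fill_pollfds()
     * would translate the AioHandler list into a GPollFD array, and
     * copy_revents_back() would store each .revents into its handler.
     */
    static bool aio_poll_sketch(AioContext *ctx, bool blocking)
    {
        GPollFD pollfds[64];
        guint npfd;
        bool progress = false;

        npfd = fill_pollfds(ctx, pollfds, G_N_ELEMENTS(pollfds));

        /* Block in g_poll(2) instead of select(2); -1 waits forever. */
        g_poll(pollfds, npfd, blocking ? -1 : 0);

        copy_revents_back(ctx, pollfds, npfd);

        /* Loop over AioHandlers calling ->io_read()/->io_write() --
         * exactly the code this patch extracts into aio_dispatch().
         */
        if (aio_dispatch(ctx)) {
            progress = true;
        }
        return progress;
    }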
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Laszlo Ersek
---
 aio-posix.c | 57 +++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 35 insertions(+), 22 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index fe4dbb4..35131a3 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -129,30 +129,12 @@ bool aio_pending(AioContext *ctx)
     return false;
 }
 
-bool aio_poll(AioContext *ctx, bool blocking)
+static bool aio_dispatch(AioContext *ctx)
 {
-    static struct timeval tv0;
     AioHandler *node;
-    fd_set rdfds, wrfds;
-    int max_fd = -1;
-    int ret;
-    bool busy, progress;
-
-    progress = false;
-
-    /*
-     * If there are callbacks left that have been queued, we need to call then.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for qemu_aio_wait loops).
-     */
-    if (aio_bh_poll(ctx)) {
-        blocking = false;
-        progress = true;
-    }
+    bool progress = false;
 
     /*
-     * Then dispatch any pending callbacks from the GSource.
-     *
      * We have to walk very carefully in case qemu_aio_set_fd_handler is
      * called while we're walking.
      */
@@ -167,11 +149,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
         node->pfd.revents = 0;
 
         /* See comment in aio_pending. */
-        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
+        if (!node->deleted &&
+            (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
+            node->io_read) {
             node->io_read(node->opaque);
             progress = true;
         }
-        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
+        if (!node->deleted &&
+            (revents & (G_IO_OUT | G_IO_ERR)) &&
+            node->io_write) {
             node->io_write(node->opaque);
             progress = true;
         }
@@ -186,6 +172,33 @@ bool aio_poll(AioContext *ctx, bool blocking)
             g_free(tmp);
         }
     }
+    return progress;
+}
+
+bool aio_poll(AioContext *ctx, bool blocking)
+{
+    static struct timeval tv0;
+    AioHandler *node;
+    fd_set rdfds, wrfds;
+    int max_fd = -1;
+    int ret;
+    bool busy, progress;
+
+    progress = false;
+
+    /*
+     * If there are callbacks left that have been queued, we need to call them.
+     * Do not call select in this case, because it is possible that the caller
+     * does not need a complete flush (as is the case for qemu_aio_wait loops).
+     */
+    if (aio_bh_poll(ctx)) {
+        blocking = false;
+        progress = true;
+    }
+
+    if (aio_dispatch(ctx)) {
+        progress = true;
+    }
 
     if (progress && !blocking) {
         return true;
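A note on the new !node->deleted checks: a handler callback may call
qemu_aio_set_fd_handler() and remove entries from the very list being
walked, so removal during a walk only marks the node, and the
g_free(tmp) path above frees it once no walk is in progress.  Below is
a standalone sketch of that deferred-deletion idiom, with hypothetical
names (Node, walkers, and so on) rather than QEMU's actual
implementation:

    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct Node Node;
    struct Node {
        bool deleted;             /* marked instead of freed during a walk */
        void (*cb)(void *opaque);
        void *opaque;
        Node *next;
    };

    static Node *head;
    static int walkers;           /* walks currently in progress */

    static Node *node_add(void (*cb)(void *), void *opaque)
    {
        Node *n = calloc(1, sizeof(*n));
        n->cb = cb;
        n->opaque = opaque;
        n->next = head;
        head = n;
        return n;
    }

    static void node_remove(Node *node)
    {
        Node **p;

        if (walkers) {
            node->deleted = true; /* defer: a walker still holds pointers */
            return;
        }
        for (p = &head; *p; p = &(*p)->next) {
            if (*p == node) {
                *p = node->next;
                free(node);
                return;
            }
        }
    }

    static void dispatch_all(void)
    {
        Node *n, **p;

        walkers++;
        for (n = head; n; n = n->next) {
            if (!n->deleted && n->cb) {  /* the check this patch adds */
                n->cb(n->opaque);        /* may call node_remove() */
            }
        }
        walkers--;

        if (walkers == 0) {              /* sweep deferred deletions */
            p = &head;
            while (*p) {
                n = *p;
                if (n->deleted) {
                    *p = n->next;
                    free(n);
                } else {
                    p = &n->next;
                }
            }
        }
    }

aio-posix.c keeps an analogous walk counter on the AioContext; the
invariant in both cases is that node pointers held by an in-progress
walk stay valid until the walk completes.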