From patchwork Fri Oct 26 14:05:40 2012
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 194528
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: aliguori@us.ibm.com, stefanha@redhat.com
Date: Fri, 26 Oct 2012 16:05:40 +0200
Message-Id: <1351260355-19802-11-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1351260355-19802-1-git-send-email-pbonzini@redhat.com>
References: <1351260355-19802-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 10/25] aio: add Win32 implementation

The Win32 implementation will only accept EventNotifiers, thus a few
drivers are disabled under Windows.  EventNotifiers are a good match for
the GSource implementation, too, because the Win32 port of glib allows
placing their HANDLEs in a GPollFD.
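The wait loop this patch introduces hinges on the fact that the Win32 wait primitive reports only a single signaled object per call, so dispatching everything that is ready requires re-polling. A minimal, platform-neutral sketch of that dispatch shape (all names here are illustrative stand-ins, not QEMU or Win32 APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for WaitForMultipleObjects: report the index of
 * the first signaled object, or -1 on timeout (nothing ready). */
static int wait_first_ready(const bool *signaled, int count)
{
    for (int i = 0; i < count; i++) {
        if (signaled[i]) {
            return i;
        }
    }
    return -1;
}

/* Shape of the aio_poll wait loop in aio-win32.c: because the wait
 * primitive reports only ONE ready object per call, loop until a
 * timeout, consuming one event and running its handler per iteration
 * (after the first pass a blocking caller drops to a zero timeout). */
static int dispatch_all(bool *signaled, int count)
{
    int dispatched = 0;
    for (;;) {
        int ret = wait_first_ready(signaled, count);
        if (ret < 0) {
            break;                  /* timeout: no object signaled */
        }
        signaled[ret] = false;      /* the handler consumes the event */
        dispatched++;               /* stands in for node->io_notify(e) */
    }
    return dispatched;
}
```

With three of four objects signaled, the loop runs three handler dispatches before the timeout ends it; a second pass finds nothing ready and returns immediately, which is why the real loop can safely clear `blocking` after its first iteration.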
Signed-off-by: Paolo Bonzini
---
 Makefile.objs        |   6 +--
 aio.c => aio-posix.c |   0
 aio.c => aio-win32.c | 137 ++++++++++++++++-----------------------------
 block/Makefile.objs  |   6 ++-
 main-loop.c          |   2 +-
 5 files changed, 51 insertions(+), 100 deletions(-)
 copy aio.c => aio-posix.c (100%)
 rename aio.c => aio-win32.c (56%)

diff --git a/Makefile.objs b/Makefile.objs
index 6f7b40b..da744e1 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -42,12 +42,12 @@ coroutine-obj-$(CONFIG_WIN32) += coroutine-win32.o
 
 # block-obj-y is code used by both qemu system emulation and qemu-img
 block-obj-y = cutils.o iov.o cache-utils.o qemu-option.o module.o async.o
-block-obj-y += nbd.o block.o blockjob.o aio.o aes.o qemu-config.o
+block-obj-y += nbd.o block.o blockjob.o aes.o qemu-config.o
 block-obj-y += qemu-progress.o qemu-sockets.o uri.o
 block-obj-y += $(coroutine-obj-y) $(qobject-obj-y) $(version-obj-y)
 block-obj-$(CONFIG_POSIX) += posix-aio-compat.o
-block-obj-$(CONFIG_POSIX) += event_notifier-posix.o
-block-obj-$(CONFIG_WIN32) += event_notifier-win32.o
+block-obj-$(CONFIG_POSIX) += event_notifier-posix.o aio-posix.o
+block-obj-$(CONFIG_WIN32) += event_notifier-win32.o aio-win32.o
 block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
 
 block-obj-y += block/
diff --git a/aio.c b/aio-posix.c
similarity index 100%
copy from aio.c
copy to aio-posix.c
diff --git a/aio.c b/aio-win32.c
similarity index 56%
rename from aio.c
rename to aio-win32.c
index 4424722..9881fdb 100644
--- a/aio.c
+++ b/aio-win32.c
@@ -1,10 +1,12 @@
 /*
  * QEMU aio implementation
  *
- * Copyright IBM, Corp. 2008
+ * Copyright IBM Corp., 2008
+ * Copyright Red Hat Inc., 2012
  *
  * Authors:
  *  Anthony Liguori
+ *  Paolo Bonzini
  *
  * This work is licensed under the terms of the GNU GPL, version 2.  See
  * the COPYING file in the top-level directory.
@@ -18,43 +20,30 @@
 #include "qemu-queue.h"
 #include "qemu_socket.h"
 
-struct AioHandler
-{
+struct AioHandler {
+    EventNotifier *e;
+    EventNotifierHandler *io_notify;
+    AioFlushEventNotifierHandler *io_flush;
     GPollFD pfd;
-    IOHandler *io_read;
-    IOHandler *io_write;
-    AioFlushHandler *io_flush;
     int deleted;
-    void *opaque;
     QLIST_ENTRY(AioHandler) node;
 };
 
-static AioHandler *find_aio_handler(AioContext *ctx, int fd)
+void aio_set_event_notifier(AioContext *ctx,
+                            EventNotifier *e,
+                            EventNotifierHandler *io_notify,
+                            AioFlushEventNotifierHandler *io_flush)
 {
     AioHandler *node;
 
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
-        if (node->pfd.fd == fd)
-            if (!node->deleted)
-                return node;
+        if (node->e == e && !node->deleted) {
+            break;
+        }
     }
 
-    return NULL;
-}
-
-void aio_set_fd_handler(AioContext *ctx,
-                        int fd,
-                        IOHandler *io_read,
-                        IOHandler *io_write,
-                        AioFlushHandler *io_flush,
-                        void *opaque)
-{
-    AioHandler *node;
-
-    node = find_aio_handler(ctx, fd);
-
     /* Are we deleting the fd handler? */
-    if (!io_read && !io_write) {
+    if (!io_notify) {
         if (node) {
             /* If the lock is held, just mark the node as deleted */
             if (ctx->walking_handlers) {
@@ -73,49 +62,23 @@ void aio_set_fd_handler(AioContext *ctx,
         if (node == NULL) {
             /* Alloc and insert if it's not already there */
             node = g_malloc0(sizeof(AioHandler));
-            node->pfd.fd = fd;
+            node->e = e;
+            node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
+            node->pfd.events = G_IO_IN;
             QLIST_INSERT_HEAD(&ctx->aio_handlers, node, node);
         }
         /* Update handler with latest information */
-        node->io_read = io_read;
-        node->io_write = io_write;
+        node->io_notify = io_notify;
         node->io_flush = io_flush;
-        node->opaque = opaque;
-
-        node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP : 0);
-        node->pfd.events |= (io_write ? G_IO_OUT : 0);
     }
 }
 
-void aio_set_event_notifier(AioContext *ctx,
-                            EventNotifier *notifier,
-                            EventNotifierHandler *io_read,
-                            AioFlushEventNotifierHandler *io_flush)
-{
-    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
-                       (IOHandler *)io_read, NULL,
-                       (AioFlushHandler *)io_flush, notifier);
-}
-
 bool aio_pending(AioContext *ctx)
 {
     AioHandler *node;
 
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
-        int revents;
-
-        /*
-         * FIXME: right now we cannot get G_IO_HUP and G_IO_ERR because
-         * main-loop.c is still select based (due to the slirp legacy).
-         * If main-loop.c ever switches to poll, G_IO_ERR should be
-         * tested too.  Dispatching G_IO_ERR to both handlers should be
-         * okay, since handlers need to be ready for spurious wakeups.
-         */
-        revents = node->pfd.revents & node->pfd.events;
-        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
-            return true;
-        }
-        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
+        if (node->pfd.revents && node->io_notify) {
             return true;
         }
     }
@@ -125,12 +88,10 @@ bool aio_pending(AioContext *ctx)
 
 bool aio_poll(AioContext *ctx, bool blocking)
 {
-    static struct timeval tv0;
     AioHandler *node;
-    fd_set rdfds, wrfds;
-    int max_fd = -1;
-    int ret;
+    HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
     bool busy, progress;
+    int count;
 
     progress = false;
 
@@ -153,20 +114,12 @@ bool aio_poll(AioContext *ctx, bool blocking)
     node = QLIST_FIRST(&ctx->aio_handlers);
     while (node) {
         AioHandler *tmp;
-        int revents;
 
         ctx->walking_handlers++;
 
-        revents = node->pfd.revents & node->pfd.events;
-        node->pfd.revents = 0;
-
-        /* See comment in aio_pending.  */
-        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
-            node->io_read(node->opaque);
-            progress = true;
-        }
-        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
-            node->io_write(node->opaque);
+        if (node->pfd.revents && node->io_notify) {
+            node->pfd.revents = 0;
+            node->io_notify(node->e);
             progress = true;
         }
 
@@ -187,29 +140,22 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers++;
 
-    FD_ZERO(&rdfds);
-    FD_ZERO(&wrfds);
-
     /* fill fd sets */
     busy = false;
+    count = 0;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
         /* If there aren't pending AIO operations, don't invoke callbacks.
          * Otherwise, if there are no AIO requests, qemu_aio_wait() would
          * wait indefinitely.
          */
         if (!node->deleted && node->io_flush) {
-            if (node->io_flush(node->opaque) == 0) {
+            if (node->io_flush(node->e) == 0) {
                 continue;
             }
             busy = true;
         }
-        if (!node->deleted && node->io_read) {
-            FD_SET(node->pfd.fd, &rdfds);
-            max_fd = MAX(max_fd, node->pfd.fd + 1);
-        }
-        if (!node->deleted && node->io_write) {
-            FD_SET(node->pfd.fd, &wrfds);
-            max_fd = MAX(max_fd, node->pfd.fd + 1);
+        if (!node->deleted && node->io_notify) {
+            events[count++] = event_notifier_get_handle(node->e);
         }
     }
 
@@ -221,10 +167,17 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     /* wait until next event */
-    ret = select(max_fd, &rdfds, &wrfds, NULL, blocking ? NULL : &tv0);
+    for (;;) {
+        int timeout = blocking ? INFINITE : 0;
+        int ret = WaitForMultipleObjects(count, events, FALSE, timeout);
+
+        /* if we have any signaled events, dispatch event */
+        if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
+            break;
+        }
+
+        blocking = false;
 
-    /* if we have any readable fds, dispatch event */
-    if (ret > 0) {
         /* we have to walk very carefully in case
          * qemu_aio_set_fd_handler is called while we're walking */
         node = QLIST_FIRST(&ctx->aio_handlers);
@@ -234,15 +187,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
             ctx->walking_handlers++;
 
             if (!node->deleted &&
-                FD_ISSET(node->pfd.fd, &rdfds) &&
-                node->io_read) {
-                node->io_read(node->opaque);
-                progress = true;
-            }
-            if (!node->deleted &&
-                FD_ISSET(node->pfd.fd, &wrfds) &&
-                node->io_write) {
-                node->io_write(node->opaque);
+                event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
+                node->io_notify) {
+                node->io_notify(node->e);
                 progress = true;
             }
diff --git a/block/Makefile.objs b/block/Makefile.objs
index 554f429..684765b 100644
--- a/block/Makefile.objs
+++ b/block/Makefile.objs
@@ -2,13 +2,17 @@ block-obj-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat
 block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
 block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
 block-obj-y += qed-check.o
-block-obj-y += parallels.o nbd.o blkdebug.o sheepdog.o blkverify.o
+block-obj-y += parallels.o blkdebug.o blkverify.o
 block-obj-$(CONFIG_WIN32) += raw-win32.o
 block-obj-$(CONFIG_POSIX) += raw-posix.o
+
+ifeq ($(CONFIG_POSIX),y)
+block-obj-y += nbd.o sheepdog.o
 block-obj-$(CONFIG_LIBISCSI) += iscsi.o
 block-obj-$(CONFIG_CURL) += curl.o
 block-obj-$(CONFIG_RBD) += rbd.o
 block-obj-$(CONFIG_GLUSTERFS) += gluster.o
+endif
 
 common-obj-y += stream.o
 common-obj-y += commit.o
diff --git a/main-loop.c b/main-loop.c
index 67800fe..b290c79 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -534,6 +534,7 @@ bool qemu_aio_wait(void)
     return aio_poll(qemu_aio_context, true);
 }
 
+#ifdef CONFIG_POSIX
 void qemu_aio_set_fd_handler(int fd,
                              IOHandler *io_read,
                              IOHandler *io_write,
@@ -546,7 +547,6 @@ void qemu_aio_set_fd_handler(int fd,
     qemu_set_fd_handler2(fd, NULL, io_read, io_write, opaque);
 }
 
-#ifdef CONFIG_POSIX
 void qemu_aio_set_event_notifier(EventNotifier *notifier,
                                  EventNotifierHandler *io_read,
                                  AioFlushEventNotifierHandler *io_flush)
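The aio-win32.c hunks above keep aio.c's careful-walk discipline: removal during iteration only marks a node (`deleted`) while `ctx->walking_handlers` is non-zero, and the walker reaps marked nodes once it is the last one out. A platform-neutral sketch of that deferred-deletion pattern, with simplified stand-in types (a plain singly linked list, not QEMU's QLIST):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for AioContext/AioHandler. */
typedef struct Handler {
    int deleted;
    struct Handler *next;
} Handler;

typedef struct {
    Handler *head;
    int walking_handlers;
} Ctx;

/* Removal during iteration: if someone is walking the list, only mark
 * the node as deleted; the walker reaps it later.  Otherwise unlink
 * and free immediately. */
static void remove_handler(Ctx *ctx, Handler *h)
{
    if (ctx->walking_handlers) {
        h->deleted = 1;             /* defer: a walker still holds a pointer */
    } else {
        Handler **p = &ctx->head;
        while (*p != h) {
            p = &(*p)->next;
        }
        *p = h->next;
        free(h);
    }
}

/* Walker in the shape of aio_poll: bump walking_handlers around each
 * visit, skip deleted nodes, and reap a deleted node only after
 * advancing past it and only when no other walker remains. */
static int walk(Ctx *ctx)
{
    int visited = 0;
    Handler *node = ctx->head;
    while (node) {
        Handler *tmp;
        ctx->walking_handlers++;
        if (!node->deleted) {
            visited++;              /* a real walker dispatches io_notify here */
        }
        tmp = node;
        node = node->next;
        ctx->walking_handlers--;
        if (!ctx->walking_handlers && tmp->deleted) {
            remove_handler(ctx, tmp);
        }
    }
    return visited;
}
```

Because the node is reaped only after the walker has advanced past it, a handler can safely unregister itself (or a neighbor) from inside its own callback, which is exactly the situation the "walk very carefully" comment in the patch guards against.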