From patchwork Mon Jan 14 16:34:15 2019
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Date: Mon, 14 Jan 2019 16:34:15 +0000
Message-Id: <20190114163416.1094-2-stefanha@redhat.com>
In-Reply-To: <20190114163416.1094-1-stefanha@redhat.com>
References: <20190114163416.1094-1-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL 1/2] aio-posix: Unregister fd from ctx epoll
 when removing fd_handler.
Cc: Fam Zheng, Peter Maydell, qemu-block@nongnu.org, Stefan Weil,
 Max Reitz, Remy Noel, Stefan Hajnoczi, Paolo Bonzini, Kevin Wolf

From: Remy Noel

Clearing the events mask will cause aio_epoll_update to unregister the
fd. Otherwise, the fd is kept registered until it is destroyed.
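To illustrate the mechanism (a minimal standalone sketch, not QEMU's
actual aio_epoll_update; the helper name and parameters here are
hypothetical): an epoll update routine can treat an empty event mask as
a request to drop the fd from the interest list:

#include <stdbool.h>
#include <stdint.h>
#include <sys/epoll.h>

/* Hypothetical helper: add or modify the fd while events are
 * requested, and delete it from the epoll set once the event mask is
 * cleared, rather than leaving a stale registration behind.
 */
static void epoll_update_fd(int epollfd, int fd, uint32_t events,
                            bool registered)
{
    struct epoll_event ev = { .events = events, .data.fd = fd };

    if (events == 0) {
        /* Empty mask: unregister instead of keeping a stale entry */
        epoll_ctl(epollfd, EPOLL_CTL_DEL, fd, NULL);
    } else if (registered) {
        epoll_ctl(epollfd, EPOLL_CTL_MOD, fd, &ev);
    } else {
        epoll_ctl(epollfd, EPOLL_CTL_ADD, fd, &ev);
    }
}

Without the DEL branch being reachable (i.e. without clearing
node->pfd.events on removal), a removed handler's fd would stay in the
epoll set until the fd is destroyed, as the commit message notes.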
Signed-off-by: Remy Noel
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Paolo Bonzini
Message-id: 20181220152030.28035-2-remy.noel@blade-group.com
Signed-off-by: Stefan Hajnoczi
---
 util/aio-posix.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index 51c41ed3c9..a927319d2c 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -245,6 +245,9 @@ void aio_set_fd_handler(AioContext *ctx,
             QLIST_REMOVE(node, node);
             deleted = true;
         }
+        /* Clean events in order to unregister fd from the ctx epoll. */
+        node->pfd.events = 0;
+
         poll_disable_change = -!node->io_poll;
     } else {
         poll_disable_change = !io_poll - (node && !node->io_poll);

From patchwork Mon Jan 14 16:34:16 2019
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Date: Mon, 14 Jan 2019 16:34:16 +0000
Message-Id: <20190114163416.1094-3-stefanha@redhat.com>
In-Reply-To: <20190114163416.1094-1-stefanha@redhat.com>
References: <20190114163416.1094-1-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL 2/2] aio-posix: Fix concurrent
 aio_poll/set_fd_handler.
Cc: Fam Zheng, Peter Maydell, qemu-block@nongnu.org, Stefan Weil,
 Max Reitz, Remy Noel, Stefan Hajnoczi, Paolo Bonzini, Kevin Wolf

From: Remy Noel

An io_poll callback can run concurrently with aio_set_fd_handler. This
can cause all sorts of problems, such as a NULL callback or a bad
opaque pointer.

Change aio_set_fd_handler so that it no longer modifies existing
handler entries; instead, it always inserts a fully initialised
replacement entry and then retires the old one.
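In outline this is a publish-then-retire scheme. A generic sketch of
the idea (the types and names below are illustrative, not QEMU's
AioHandler code; QEMU publishes with QLIST_INSERT_HEAD_RCU and defers
reclamation via the list_lock lockcnt):

#include <stdlib.h>

/* Readers walk the list locklessly, so a published entry is treated
 * as immutable: its callbacks and opaque are never updated in place.
 */
typedef struct Handler Handler;
struct Handler {
    int fd;
    void (*io_read)(void *opaque);
    void *opaque;
    int deleted;
    Handler *next;
};

static void replace_handler(Handler **head, Handler *old, int fd,
                            void (*io_read)(void *), void *opaque)
{
    Handler *new_node = calloc(1, sizeof(*new_node));

    /* 1. Fully initialise the entry before it becomes reachable. */
    new_node->fd = fd;
    new_node->io_read = io_read;
    new_node->opaque = opaque;

    /* 2. Publish it; the release store (GCC/Clang builtin) orders the
     * initialisation above before the pointer is visible to readers.
     */
    new_node->next = *head;
    __atomic_store_n(head, new_node, __ATOMIC_RELEASE);

    /* 3. Only then retire the old entry. Freeing must be deferred
     * until no reader can still hold a pointer to it.
     */
    if (old) {
        old->deleted = 1;
    }
}

A reader racing with the update sees either the old entry or the fully
initialised new one, never a half-updated mix of the two.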
Tested-by: Stefan Hajnoczi
Signed-off-by: Remy Noel
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Paolo Bonzini
Message-id: 20181220152030.28035-3-remy.noel@blade-group.com
Signed-off-by: Stefan Hajnoczi
---
 util/aio-posix.c | 89 ++++++++++++++++++++++++++++--------------------
 util/aio-win32.c | 67 ++++++++++++++++--------------------
 2 files changed, 82 insertions(+), 74 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index a927319d2c..8640dfde9f 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -200,6 +200,31 @@ static AioHandler *find_aio_handler(AioContext *ctx, int fd)
     return NULL;
 }
 
+static bool aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
+{
+    /* If the GSource is in the process of being destroyed then
+     * g_source_remove_poll() causes an assertion failure.  Skip
+     * removal in that case, because glib cleans up its state during
+     * destruction anyway.
+     */
+    if (!g_source_is_destroyed(&ctx->source)) {
+        g_source_remove_poll(&ctx->source, &node->pfd);
+    }
+
+    /* If a read is in progress, just mark the node as deleted */
+    if (qemu_lockcnt_count(&ctx->list_lock)) {
+        node->deleted = 1;
+        node->pfd.revents = 0;
+        return false;
+    }
+    /* Otherwise, delete it for real.  We can't just mark it as
+     * deleted because deleted nodes are only cleaned up while
+     * no one is walking the handlers list.
+     */
+    QLIST_REMOVE(node, node);
+    return true;
+}
+
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
                         bool is_external,
@@ -209,6 +234,7 @@ void aio_set_fd_handler(AioContext *ctx,
                         void *opaque)
 {
     AioHandler *node;
+    AioHandler *new_node = NULL;
     bool is_new = false;
     bool deleted = false;
     int poll_disable_change;
@@ -223,28 +249,6 @@ void aio_set_fd_handler(AioContext *ctx,
             qemu_lockcnt_unlock(&ctx->list_lock);
             return;
         }
-
-        /* If the GSource is in the process of being destroyed then
-         * g_source_remove_poll() causes an assertion failure.  Skip
-         * removal in that case, because glib cleans up its state during
-         * destruction anyway.
-         */
-        if (!g_source_is_destroyed(&ctx->source)) {
-            g_source_remove_poll(&ctx->source, &node->pfd);
-        }
-
-        /* If a read is in progress, just mark the node as deleted */
-        if (qemu_lockcnt_count(&ctx->list_lock)) {
-            node->deleted = 1;
-            node->pfd.revents = 0;
-        } else {
-            /* Otherwise, delete it for real.  We can't just mark it as
-             * deleted because deleted nodes are only cleaned up while
-             * no one is walking the handlers list.
-             */
-            QLIST_REMOVE(node, node);
-            deleted = true;
-        }
         /* Clean events in order to unregister fd from the ctx epoll. */
         node->pfd.events = 0;
 
@@ -252,24 +256,32 @@ void aio_set_fd_handler(AioContext *ctx,
     } else {
         poll_disable_change = !io_poll - (node && !node->io_poll);
         if (node == NULL) {
-            /* Alloc and insert if it's not already there */
-            node = g_new0(AioHandler, 1);
-            node->pfd.fd = fd;
-            QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node);
-
-            g_source_add_poll(&ctx->source, &node->pfd);
             is_new = true;
         }
+        /* Alloc and insert if it's not already there */
+        new_node = g_new0(AioHandler, 1);
 
         /* Update handler with latest information */
-        node->io_read = io_read;
-        node->io_write = io_write;
-        node->io_poll = io_poll;
-        node->opaque = opaque;
-        node->is_external = is_external;
+        new_node->io_read = io_read;
+        new_node->io_write = io_write;
+        new_node->io_poll = io_poll;
+        new_node->opaque = opaque;
+        new_node->is_external = is_external;
 
-        node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
-        node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
+        if (is_new) {
+            new_node->pfd.fd = fd;
+        } else {
+            new_node->pfd = node->pfd;
+        }
+        g_source_add_poll(&ctx->source, &new_node->pfd);
+
+        new_node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
+        new_node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
+
+        QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, new_node, node);
+    }
+    if (node) {
+        deleted = aio_remove_fd_handler(ctx, node);
     }
 
     /* No need to order poll_disable_cnt writes against other updates;
@@ -281,7 +293,12 @@ void aio_set_fd_handler(AioContext *ctx,
     atomic_set(&ctx->poll_disable_cnt,
                atomic_read(&ctx->poll_disable_cnt) + poll_disable_change);
 
-    aio_epoll_update(ctx, node, is_new);
+    if (new_node) {
+        aio_epoll_update(ctx, new_node, is_new);
+    } else if (node) {
+        /* Unregister deleted fd_handler */
+        aio_epoll_update(ctx, node, false);
+    }
     qemu_lockcnt_unlock(&ctx->list_lock);
     aio_notify(ctx);
 
diff --git a/util/aio-win32.c b/util/aio-win32.c
index c58957cc4b..a23b9c364d 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -35,6 +35,22 @@ struct AioHandler {
     QLIST_ENTRY(AioHandler) node;
 };
 
+static void aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
+{
+    /* If aio_poll is in progress, just mark the node as deleted */
+    if (qemu_lockcnt_count(&ctx->list_lock)) {
+        node->deleted = 1;
+        node->pfd.revents = 0;
+    } else {
+        /* Otherwise, delete it for real.  We can't just mark it as
+         * deleted because deleted nodes are only cleaned up after
+         * releasing the list_lock.
+         */
+        QLIST_REMOVE(node, node);
+        g_free(node);
+    }
+}
+
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
                         bool is_external,
@@ -44,41 +60,23 @@ void aio_set_fd_handler(AioContext *ctx,
                         void *opaque)
 {
     /* fd is a SOCKET in our case */
-    AioHandler *node;
+    AioHandler *old_node;
+    AioHandler *node = NULL;
 
     qemu_lockcnt_lock(&ctx->list_lock);
-    QLIST_FOREACH(node, &ctx->aio_handlers, node) {
-        if (node->pfd.fd == fd && !node->deleted) {
+    QLIST_FOREACH(old_node, &ctx->aio_handlers, node) {
+        if (old_node->pfd.fd == fd && !old_node->deleted) {
             break;
         }
     }
 
-    /* Are we deleting the fd handler? */
-    if (!io_read && !io_write) {
-        if (node) {
-            /* If aio_poll is in progress, just mark the node as deleted */
-            if (qemu_lockcnt_count(&ctx->list_lock)) {
-                node->deleted = 1;
-                node->pfd.revents = 0;
-            } else {
-                /* Otherwise, delete it for real.  We can't just mark it as
-                 * deleted because deleted nodes are only cleaned up after
-                 * releasing the list_lock.
-                 */
-                QLIST_REMOVE(node, node);
-                g_free(node);
-            }
-        }
-    } else {
+    if (io_read || io_write) {
         HANDLE event;
         long bitmask = 0;
 
-        if (node == NULL) {
-            /* Alloc and insert if it's not already there */
-            node = g_new0(AioHandler, 1);
-            node->pfd.fd = fd;
-            QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node);
-        }
+        /* Alloc and insert if it's not already there */
+        node = g_new0(AioHandler, 1);
+        node->pfd.fd = fd;
 
         node->pfd.events = 0;
         if (node->io_read) {
@@ -104,9 +102,13 @@ void aio_set_fd_handler(AioContext *ctx,
             bitmask |= FD_WRITE | FD_CONNECT;
         }
 
+        QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node);
         event = event_notifier_get_handle(&ctx->notifier);
         WSAEventSelect(node->pfd.fd, event, bitmask);
     }
+    if (old_node) {
+        aio_remove_fd_handler(ctx, old_node);
+    }
 
     qemu_lockcnt_unlock(&ctx->list_lock);
     aio_notify(ctx);
@@ -139,18 +141,7 @@ void aio_set_event_notifier(AioContext *ctx,
         if (node) {
             g_source_remove_poll(&ctx->source, &node->pfd);
 
-            /* aio_poll is in progress, just mark the node as deleted */
-            if (qemu_lockcnt_count(&ctx->list_lock)) {
-                node->deleted = 1;
-                node->pfd.revents = 0;
-            } else {
-                /* Otherwise, delete it for real.  We can't just mark it as
-                 * deleted because deleted nodes are only cleaned up after
-                 * releasing the list_lock.
-                 */
-                QLIST_REMOVE(node, node);
-                g_free(node);
-            }
+            aio_remove_fd_handler(ctx, node);
         }
     } else {
         if (node == NULL) {
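For reference, the deletion path both patches touch is the one callers
take when unregistering an fd by passing NULL callbacks (a usage
sketch assuming the seven-argument aio_set_fd_handler of this era of
the API, with ctx and fd supplied by the caller):

/* Unregister fd: NULL io_read/io_write select the delete path.
 * With this series the fd also leaves the AioContext's epoll set
 * immediately instead of staying registered until it is destroyed.
 */
aio_set_fd_handler(ctx, fd, false /* is_external */,
                   NULL /* io_read */, NULL /* io_write */,
                   NULL /* io_poll */, NULL /* opaque */);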