From patchwork Wed Sep 26 02:55:22 2018
X-Patchwork-Submitter: Fam Zheng
X-Patchwork-Id: 974786
From: Fam Zheng
To: qemu-devel@nongnu.org
Cc: Peter Maydell
Date: Wed, 26 Sep 2018 10:55:22 +0800
Message-Id: <20180926025526.26961-2-famz@redhat.com>
In-Reply-To: <20180926025526.26961-1-famz@redhat.com>
References: <20180926025526.26961-1-famz@redhat.com>
Subject: [Qemu-devel] [PULL 1/5] aio-posix: fix concurrent access to poll_disable_cnt

From: Paolo Bonzini

It is valid for an aio_set_fd_handler to happen concurrently with
aio_poll.  In that case, poll_disable_cnt can change out from under
aio_poll, and the assertion on poll_disable_cnt can fail in
run_poll_handlers.

Therefore, this patch simply checks the counter on every polling
iteration.  There is no particular need for ordering, since the polling
loop is terminated anyway by aio_notify at the end of
aio_set_fd_handler.
Signed-off-by: Paolo Bonzini
Message-Id: <20180912171040.1732-2-pbonzini@redhat.com>
Reviewed-by: Fam Zheng
Signed-off-by: Fam Zheng
---
 util/aio-posix.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index 131ba6b4a8..5c29380575 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -211,6 +211,7 @@ void aio_set_fd_handler(AioContext *ctx,
     AioHandler *node;
     bool is_new = false;
     bool deleted = false;
+    int poll_disable_change;
 
     qemu_lockcnt_lock(&ctx->list_lock);
 
@@ -244,11 +245,9 @@ void aio_set_fd_handler(AioContext *ctx,
             QLIST_REMOVE(node, node);
             deleted = true;
         }
-
-        if (!node->io_poll) {
-            ctx->poll_disable_cnt--;
-        }
+        poll_disable_change = -!node->io_poll;
     } else {
+        poll_disable_change = !io_poll - (node && !node->io_poll);
         if (node == NULL) {
             /* Alloc and insert if it's not already there */
             node = g_new0(AioHandler, 1);
@@ -257,10 +256,6 @@ void aio_set_fd_handler(AioContext *ctx,
 
             g_source_add_poll(&ctx->source, &node->pfd);
             is_new = true;
-
-            ctx->poll_disable_cnt += !io_poll;
-        } else {
-            ctx->poll_disable_cnt += !io_poll - !node->io_poll;
         }
 
         /* Update handler with latest information */
@@ -274,6 +269,15 @@ void aio_set_fd_handler(AioContext *ctx,
         node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
     }
 
+    /* No need to order poll_disable_cnt writes against other updates;
+     * the counter is only used to avoid wasting time and latency on
+     * iterated polling when the system call will be ultimately necessary.
+     * Changing handlers is a rare event, and a little wasted polling until
+     * the aio_notify below is not an issue.
+     */
+    atomic_set(&ctx->poll_disable_cnt,
+               atomic_read(&ctx->poll_disable_cnt) + poll_disable_change);
+
     aio_epoll_update(ctx, node, is_new);
     qemu_lockcnt_unlock(&ctx->list_lock);
     aio_notify(ctx);
@@ -525,7 +529,6 @@ static bool run_poll_handlers(AioContext *ctx, int64_t max_ns)
 
     assert(ctx->notify_me);
     assert(qemu_lockcnt_count(&ctx->list_lock) > 0);
-    assert(ctx->poll_disable_cnt == 0);
 
     trace_run_poll_handlers_begin(ctx, max_ns);
 
@@ -533,7 +536,8 @@ static bool run_poll_handlers(AioContext *ctx, int64_t max_ns)
 
     do {
         progress = run_poll_handlers_once(ctx);
-    } while (!progress && qemu_clock_get_ns(QEMU_CLOCK_REALTIME) < end_time);
+    } while (!progress && qemu_clock_get_ns(QEMU_CLOCK_REALTIME) < end_time
+             && !atomic_read(&ctx->poll_disable_cnt));
 
     trace_run_poll_handlers_end(ctx, progress);
 
@@ -552,7 +556,7 @@ static bool run_poll_handlers(AioContext *ctx, int64_t max_ns)
  */
 static bool try_poll_mode(AioContext *ctx, bool blocking)
 {
-    if (blocking && ctx->poll_max_ns && ctx->poll_disable_cnt == 0) {
+    if (blocking && ctx->poll_max_ns && !atomic_read(&ctx->poll_disable_cnt)) {
         /* See qemu_soonest_timeout() uint64_t hack */
         int64_t max_ns = MIN((uint64_t)aio_compute_timeout(ctx),
                              (uint64_t)ctx->poll_ns);
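
[Editor's illustration, not part of the patch.]  The sketch below shows the
same pattern in a minimal, self-contained form: one thread changes handlers
and bumps a "poll disable" counter with relaxed atomics, while the polling
thread re-reads the counter on every iteration instead of asserting that it
is zero.  It is not QEMU code: it uses C11 atomics and pthreads rather than
QEMU's atomic_read()/atomic_set() macros, and the names poll_thread,
set_fd_handler_without_io_poll, notified and run_poll_handlers_once here are
made-up stand-ins for the QEMU machinery referenced in the commit message.

/* build (hypothetical file name): cc -std=c11 -pthread poll_disable_sketch.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

static atomic_int poll_disable_cnt;     /* how many handlers cannot poll */
static atomic_bool notified;            /* stand-in for aio_notify() */

/* Stand-in for run_poll_handlers_once(): pretend no progress is ever made,
 * so the loop only exits via the notification or the counter. */
static bool run_poll_handlers_once(void)
{
    return false;
}

/* Polling loop: keep spinning until progress, a notification, or until some
 * handler disables polling.  The counter is re-checked on every iteration,
 * mirroring the run_poll_handlers() hunk in the patch above. */
static void *poll_thread(void *arg)
{
    (void)arg;
    bool progress;

    do {
        progress = run_poll_handlers_once();
    } while (!progress &&
             !atomic_load_explicit(&notified, memory_order_acquire) &&
             !atomic_load_explicit(&poll_disable_cnt, memory_order_relaxed));

    printf("polling stopped (poll_disable_cnt=%d)\n",
           atomic_load(&poll_disable_cnt));
    return NULL;
}

/* Stand-in for aio_set_fd_handler() registering a handler without an
 * io_poll callback, which disables busy polling, then notifying. */
static void set_fd_handler_without_io_poll(void)
{
    /* Relaxed is enough: the counter only steers the busy-wait loop. */
    atomic_fetch_add_explicit(&poll_disable_cnt, 1, memory_order_relaxed);
    /* aio_notify() in QEMU; here a simple release-store flag. */
    atomic_store_explicit(&notified, true, memory_order_release);
}

int main(void)
{
    pthread_t poller;

    pthread_create(&poller, NULL, poll_thread, NULL);
    set_fd_handler_without_io_poll();   /* concurrent with the poll loop */
    pthread_join(poller, NULL);
    return 0;
}

The relaxed ordering mirrors the comment added by the patch: a stale read of
the counter only costs one extra wasted polling iteration, and the notify
that follows the handler update is what actually guarantees the poller stops
busy-waiting and falls back to the blocking system call.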