From patchwork Wed Sep 26 02:55:23 2018
X-Patchwork-Submitter: Fam Zheng
X-Patchwork-Id: 974793
From: Fam Zheng
To: qemu-devel@nongnu.org
Cc: Peter Maydell
Date: Wed, 26 Sep 2018 10:55:23 +0800
Message-Id: <20180926025526.26961-3-famz@redhat.com>
In-Reply-To: <20180926025526.26961-1-famz@redhat.com>
References: <20180926025526.26961-1-famz@redhat.com>
Subject: [Qemu-devel] [PULL 2/5] aio-posix: compute timeout before polling

From: Paolo Bonzini

This is a preparation for the next patch, and also a very small
optimization.  Compute the timeout only once, before invoking
try_poll_mode, and adjust it in run_poll_handlers.  The adjustment is
the polling time when polling fails, or zero (non-blocking) if polling
succeeds.
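In other words, the adjustment rule can be sketched as follows (a
hand-written, standalone illustration of the rule described above, not
code from the patch itself; the helper name adjust_timeout and its
signature are made up here):

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative only: a timeout of -1 means "block indefinitely"
     * and is left untouched; otherwise the nanoseconds already spent
     * polling are subtracted so the deadline eventually passed to
     * ppoll(2)/epoll_wait(2) stays the same.
     */
    static int64_t adjust_timeout(int64_t timeout, int64_t elapsed_ns,
                                  bool poll_succeeded)
    {
        if (poll_succeeded) {
            return 0;           /* progress made: do not block at all */
        }
        if (timeout != -1) {
            /* equivalent to: timeout -= MIN(timeout, elapsed_ns) */
            timeout -= (elapsed_ns < timeout) ? elapsed_ns : timeout;
        }
        return timeout;
    }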
Fixes: 70232b5253a3c4e03ed1ac47ef9246a8ac66c6fa
Signed-off-by: Paolo Bonzini
Message-Id: <20180912171040.1732-3-pbonzini@redhat.com>
Reviewed-by: Fam Zheng
Signed-off-by: Fam Zheng
---
 util/aio-posix.c  | 59 +++++++++++++++++++++++++++--------------------
 util/trace-events |  4 ++--
 2 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index 5c29380575..c54d1ebfb1 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -490,7 +490,7 @@ static void add_pollfd(AioHandler *node)
     npfd++;
 }
 
-static bool run_poll_handlers_once(AioContext *ctx)
+static bool run_poll_handlers_once(AioContext *ctx, int64_t *timeout)
 {
     bool progress = false;
     AioHandler *node;
@@ -500,6 +500,7 @@ static bool run_poll_handlers_once(AioContext *ctx)
             aio_node_check(ctx, node->is_external) &&
             node->io_poll(node->opaque) &&
             node->opaque != &ctx->notifier) {
+            *timeout = 0;
             progress = true;
         }
 
@@ -522,31 +523,38 @@ static bool run_poll_handlers_once(AioContext *ctx)
  *
  * Returns: true if progress was made, false otherwise
  */
-static bool run_poll_handlers(AioContext *ctx, int64_t max_ns)
+static bool run_poll_handlers(AioContext *ctx, int64_t max_ns, int64_t *timeout)
 {
     bool progress;
-    int64_t end_time;
+    int64_t start_time, elapsed_time;
 
     assert(ctx->notify_me);
     assert(qemu_lockcnt_count(&ctx->list_lock) > 0);
 
-    trace_run_poll_handlers_begin(ctx, max_ns);
-
-    end_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + max_ns;
+    trace_run_poll_handlers_begin(ctx, max_ns, *timeout);
 
+    start_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
     do {
-        progress = run_poll_handlers_once(ctx);
-    } while (!progress && qemu_clock_get_ns(QEMU_CLOCK_REALTIME) < end_time
+        progress = run_poll_handlers_once(ctx, timeout);
+        elapsed_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - start_time;
+    } while (!progress && elapsed_time < max_ns
             && !atomic_read(&ctx->poll_disable_cnt));
 
-    trace_run_poll_handlers_end(ctx, progress);
+    /* If time has passed with no successful polling, adjust *timeout to
+     * keep the same ending time.
+     */
+    if (*timeout != -1) {
+        *timeout -= MIN(*timeout, elapsed_time);
+    }
 
+    trace_run_poll_handlers_end(ctx, progress, *timeout);
     return progress;
 }
 
 /* try_poll_mode:
  * @ctx: the AioContext
- * @blocking: busy polling is only attempted when blocking is true
+ * @timeout: timeout for blocking wait, computed by the caller and updated if
+ *           polling succeeds.
  *
  * ctx->notify_me must be non-zero so this function can detect aio_notify().
  *
@@ -554,19 +562,16 @@ static bool run_poll_handlers(AioContext *ctx, int64_t max_ns)
  *
  * Returns: true if progress was made, false otherwise
  */
-static bool try_poll_mode(AioContext *ctx, bool blocking)
+static bool try_poll_mode(AioContext *ctx, int64_t *timeout)
 {
-    if (blocking && ctx->poll_max_ns && !atomic_read(&ctx->poll_disable_cnt)) {
-        /* See qemu_soonest_timeout() uint64_t hack */
-        int64_t max_ns = MIN((uint64_t)aio_compute_timeout(ctx),
-                             (uint64_t)ctx->poll_ns);
+    /* See qemu_soonest_timeout() uint64_t hack */
+    int64_t max_ns = MIN((uint64_t)*timeout, (uint64_t)ctx->poll_ns);
 
-        if (max_ns) {
-            poll_set_started(ctx, true);
+    if (max_ns && !atomic_read(&ctx->poll_disable_cnt)) {
+        poll_set_started(ctx, true);
 
-            if (run_poll_handlers(ctx, max_ns)) {
-                return true;
-            }
+        if (run_poll_handlers(ctx, max_ns, timeout)) {
+            return true;
         }
     }
 
@@ -575,7 +580,7 @@ static bool try_poll_mode(AioContext *ctx, bool blocking)
     /* Even if we don't run busy polling, try polling once in case it can make
      * progress and the caller will be able to avoid ppoll(2)/epoll_wait(2).
     */
-    return run_poll_handlers_once(ctx);
+    return run_poll_handlers_once(ctx, timeout);
 }
 
 bool aio_poll(AioContext *ctx, bool blocking)
@@ -605,8 +610,14 @@ bool aio_poll(AioContext *ctx, bool blocking)
         start = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
     }
 
-    progress = try_poll_mode(ctx, blocking);
-    if (!progress) {
+    timeout = blocking ? aio_compute_timeout(ctx) : 0;
+    progress = try_poll_mode(ctx, &timeout);
+    assert(!(timeout && progress));
+
+    /* If polling is allowed, non-blocking aio_poll does not need the
+     * system call---a single round of run_poll_handlers_once suffices.
+     */
+    if (timeout || atomic_read(&ctx->poll_disable_cnt)) {
         assert(npfd == 0);
 
         /* fill pollfds */
@@ -620,8 +631,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
-    timeout = blocking ? aio_compute_timeout(ctx) : 0;
-
     /* wait until next event */
     if (aio_epoll_check_poll(ctx, pollfds, npfd, timeout)) {
         AioHandler epoll_handler;
diff --git a/util/trace-events b/util/trace-events
index 4822434c89..79569b7fdf 100644
--- a/util/trace-events
+++ b/util/trace-events
@@ -1,8 +1,8 @@
 # See docs/devel/tracing.txt for syntax documentation.
 
 # util/aio-posix.c
-run_poll_handlers_begin(void *ctx, int64_t max_ns) "ctx %p max_ns %"PRId64
-run_poll_handlers_end(void *ctx, bool progress) "ctx %p progress %d"
+run_poll_handlers_begin(void *ctx, int64_t max_ns, int64_t timeout) "ctx %p max_ns %"PRId64 " timeout %"PRId64
+run_poll_handlers_end(void *ctx, bool progress, int64_t timeout) "ctx %p progress %d new timeout %"PRId64
 poll_shrink(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
 poll_grow(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new %"PRId64
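For reference, with the two updated trace points a round in which
polling succeeds immediately would be expected to log along these
lines, following the format strings above (hand-written illustrative
values; the exact line prefix depends on the configured trace backend):

    run_poll_handlers_begin ctx 0x55d46017e8d0 max_ns 32768 timeout 250000
    run_poll_handlers_end ctx 0x55d46017e8d0 progress 1 new timeout 0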