From patchwork Mon Mar 25 13:34:36 2019
X-Patchwork-Submitter: Sergio Lopez
X-Patchwork-Id: 1064341
From: Sergio Lopez
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, fam@euphon.net, Sergio Lopez, qemu-devel@nongnu.org,
    mreitz@redhat.com, stefanha@redhat.com
Date: Mon, 25 Mar 2019 14:34:36 +0100
Message-Id: <20190325133435.27953-1-slp@redhat.com>
Subject: [Qemu-devel] [PATCH] thread-pool: Use an EventNotifier to coordinate
 with AioContext

Our current ThreadPool implementation lacks support for AioContext's event
notifications. This not only means that it can't take advantage of the
IOThread's adaptive polling, but also that the latter works against it, as
it delays the execution of the bottom-half completions.

Here we implement handlers for both io_poll and io_read, and an
EventNotifier which is signaled from the worker threads after returning
from the asynchronous function.
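For reviewers unfamiliar with the pattern, here is a minimal, condensed
sketch (not part of the patch) of how the pieces fit together: worker
threads signal an EventNotifier when a request is done, and the AioContext
reaps completions either from the notifier's io_read handler or, while the
IOThread is busy-polling, directly from the io_poll handler. The MyPool
type and the my_pool_* names are made up for illustration; the real
implementation is in util/thread-pool.c below.

    /* Illustration only; a fragment meant to live inside the QEMU tree. */
    #include "qemu/osdep.h"
    #include "qemu/event_notifier.h"
    #include "qemu/thread.h"
    #include "block/aio.h"

    typedef struct MyPool {
        AioContext *ctx;
        EventNotifier e;
        QemuMutex lock;
        int done;            /* completed requests not yet reaped */
    } MyPool;

    /* Reap completed requests; return true if any progress was made. */
    static bool my_pool_process_completions(MyPool *pool)
    {
        bool progress;

        qemu_mutex_lock(&pool->lock);
        progress = pool->done > 0;
        pool->done = 0;      /* real code would run completion callbacks here */
        qemu_mutex_unlock(&pool->lock);

        return progress;
    }

    /* io_read handler: runs in the AioContext once the notifier fires. */
    static void my_pool_notifier_read(EventNotifier *e)
    {
        MyPool *pool = container_of(e, MyPool, e);

        if (event_notifier_test_and_clear(&pool->e)) {
            my_pool_process_completions(pool);
        }
    }

    /* io_poll handler: called repeatedly while adaptive polling is active. */
    static bool my_pool_poll(void *opaque)
    {
        EventNotifier *e = opaque;
        MyPool *pool = container_of(e, MyPool, e);

        return my_pool_process_completions(pool);
    }

    static void my_pool_init(MyPool *pool, AioContext *ctx)
    {
        pool->ctx = ctx;
        qemu_mutex_init(&pool->lock);
        if (event_notifier_init(&pool->e, false) >= 0) {
            aio_set_event_notifier(ctx, &pool->e, false,
                                   my_pool_notifier_read, my_pool_poll);
        }
    }

    /* Worker-thread side: mark a request done and wake up the AioContext. */
    static void my_pool_request_done(MyPool *pool)
    {
        qemu_mutex_lock(&pool->lock);
        pool->done++;
        qemu_mutex_unlock(&pool->lock);
        event_notifier_set(&pool->e);
    }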
The original issue and the improvement obtained from this patch can be
illustrated by the following fio test:

 * Host: Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz

 * pseudo-storage: null_blk gb=10 nr_devices=2 irqmode=2
   completion_nsec=30000 (latency ~= NVMe)

 * fio cmdline: fio --time_based --runtime=30 --rw=randread
   --name=randread --filename=/dev/vdb --direct=1 --ioengine=libaio
   --iodepth=1

==============================================
|                             | latency (us) |
| master (poll-max-ns=50000)  |        69.87 |
| master (poll-max-ns=0)      |        56.11 |
| patched (poll-max-ns=50000) |        49.45 |
==============================================

Signed-off-by: Sergio Lopez
---
 util/thread-pool.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 46 insertions(+), 2 deletions(-)

diff --git a/util/thread-pool.c b/util/thread-pool.c
index 610646d131..058ed7f0ae 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -61,6 +61,7 @@ struct ThreadPool {
     QemuSemaphore sem;
     int max_threads;
     QEMUBH *new_thread_bh;
+    EventNotifier e;
 
     /* The following variables are only accessed from one AioContext. */
     QLIST_HEAD(, ThreadPoolElement) head;
@@ -72,6 +73,7 @@ struct ThreadPool {
     int new_threads;     /* backlog of threads we need to create */
     int pending_threads; /* threads created but not running yet */
     bool stopping;
+    bool event_notifier_enabled;
 };
 
 static void *worker_thread(void *opaque)
@@ -108,6 +110,9 @@ static void *worker_thread(void *opaque)
         /* Write ret before state. */
         smp_wmb();
         req->state = THREAD_DONE;
+        if (pool->event_notifier_enabled) {
+            event_notifier_set(&pool->e);
+        }
 
         qemu_mutex_lock(&pool->lock);
 
@@ -160,10 +165,10 @@ static void spawn_thread(ThreadPool *pool)
     }
 }
 
-static void thread_pool_completion_bh(void *opaque)
+static bool thread_pool_process_completions(ThreadPool *pool)
 {
-    ThreadPool *pool = opaque;
     ThreadPoolElement *elem, *next;
+    bool progress = false;
 
     aio_context_acquire(pool->ctx);
 restart:
@@ -172,6 +177,8 @@ restart:
             continue;
         }
 
+        progress = true;
+
         trace_thread_pool_complete(pool, elem, elem->common.opaque,
                                    elem->ret);
         QLIST_REMOVE(elem, all);
@@ -202,6 +209,32 @@ restart:
         }
     }
     aio_context_release(pool->ctx);
+
+    return progress;
+}
+
+static void thread_pool_completion_bh(void *opaque)
+{
+    ThreadPool *pool = opaque;
+
+    thread_pool_process_completions(pool);
+}
+
+static void thread_pool_completion_cb(EventNotifier *e)
+{
+    ThreadPool *pool = container_of(e, ThreadPool, e);
+
+    if (event_notifier_test_and_clear(&pool->e)) {
+        thread_pool_completion_bh(pool);
+    }
+}
+
+static bool thread_pool_poll_cb(void *opaque)
+{
+    EventNotifier *e = opaque;
+    ThreadPool *pool = container_of(e, ThreadPool, e);
+
+    return thread_pool_process_completions(pool);
 }
 
 static void thread_pool_cancel(BlockAIOCB *acb)
@@ -311,6 +344,13 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
     pool->max_threads = 64;
     pool->new_thread_bh = aio_bh_new(ctx, spawn_thread_bh_fn, pool);
 
+    if (event_notifier_init(&pool->e, false) >= 0) {
+        aio_set_event_notifier(ctx, &pool->e, false,
+                               thread_pool_completion_cb,
+                               thread_pool_poll_cb);
+        pool->event_notifier_enabled = true;
+    }
+
     QLIST_INIT(&pool->head);
     QTAILQ_INIT(&pool->request_list);
 }
@@ -346,6 +386,10 @@ void thread_pool_free(ThreadPool *pool)
 
     qemu_mutex_unlock(&pool->lock);
 
+    if (pool->event_notifier_enabled) {
+        aio_set_event_notifier(pool->ctx, &pool->e, false, NULL, NULL);
+        event_notifier_cleanup(&pool->e);
+    }
     qemu_bh_delete(pool->completion_bh);
     qemu_sem_destroy(&pool->sem);
     qemu_cond_destroy(&pool->worker_stopped);
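Usage note (illustration only, not part of the patch): callers keep using
the existing thread pool API; what changes is how their completion
callbacks get scheduled. Since the patch does not remove the completion BH,
a callback can now be reached from the BH, from the EventNotifier's io_read
handler, or directly from the io_poll handler while the IOThread is
busy-polling. The blocking_work()/work_done() names below are hypothetical;
aio_get_thread_pool() and thread_pool_submit_aio() are the existing APIs.

    #include "qemu/osdep.h"
    #include "block/aio.h"
    #include "block/thread-pool.h"

    /* Runs in a worker thread; may block (e.g. a synchronous syscall). */
    static int blocking_work(void *opaque)
    {
        return 0;
    }

    /* Runs in the AioContext that owns the pool once the work completes. */
    static void work_done(void *opaque, int ret)
    {
    }

    static void submit_example(AioContext *ctx, void *opaque)
    {
        ThreadPool *pool = aio_get_thread_pool(ctx);

        thread_pool_submit_aio(pool, blocking_work, opaque, work_done, opaque);
    }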