From patchwork Fri Sep 13 11:50:59 2013
X-Patchwork-Submitter: Kevin Wolf <kwolf@redhat.com>
X-Patchwork-Id: 274750
From: Kevin Wolf <kwolf@redhat.com>
To: anthony@codemonkey.ws
Cc: kwolf@redhat.com, qemu-devel@nongnu.org
Date: Fri, 13 Sep 2013 13:50:59 +0200
Message-Id: <1379073063-14963-30-git-send-email-kwolf@redhat.com>
In-Reply-To: <1379073063-14963-1-git-send-email-kwolf@redhat.com>
References: <1379073063-14963-1-git-send-email-kwolf@redhat.com>
Subject: [Qemu-devel] [PULL 29/33] coroutine: add ./configure --disable-coroutine-pool

From: Stefan Hajnoczi

The 'gthread' coroutine backend was written before the freelist (aka
pool) existed in qemu-coroutine.c.

This means that every thread is expected to exit when its coroutine
terminates.  It is not possible to reuse threads from a pool.

This patch automatically disables the pool when 'gthread' is used.
This allows the 'gthread' backend to work again (for example,
tests/test-coroutine completes successfully instead of hanging).

I considered implementing thread reuse but I don't want quirks like
CPU affinity differences due to coroutine threads being recycled.
The 'gthread' backend is a reference backend and it's therefore okay
to skip the pool optimization.
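[Editor's note: the "freelist (aka pool)" above is just a bounded stack
of terminated coroutines kept around for reuse. A minimal standalone
sketch of the idea follows; the names are illustrative, not the exact
qemu-coroutine.c internals, and the pool_lock mutex that the real code
takes around every pool access is omitted for brevity:

    /* Illustrative sketch of a coroutine freelist; hypothetical names. */
    #define POOL_MAX_SIZE 64

    struct coroutine {
        struct coroutine *pool_next;   /* freelist linkage */
    };

    static struct coroutine *pool_head;
    static unsigned int pool_size;

    /* Pop a recycled coroutine, or NULL if the pool is empty. */
    static struct coroutine *pool_get(void)
    {
        struct coroutine *co = pool_head;
        if (co) {
            pool_head = co->pool_next;
            pool_size--;
        }
        return co;
    }

    /* Stash a terminated coroutine for reuse; returns 0 when the pool
     * is full and the caller must destroy the coroutine for real. */
    static int pool_put(struct coroutine *co)
    {
        if (pool_size >= POOL_MAX_SIZE) {
            return 0;
        }
        co->pool_next = pool_head;
        pool_head = co;
        pool_size++;
        return 1;
    }

    int main(void)
    {
        static struct coroutine c1;
        pool_put(&c1);                 /* recycle one terminated coroutine */
        return pool_get() == &c1 ? 0 : 1;
    }

This is exactly why 'gthread' cannot use the pool: a popped coroutine
must still be runnable, but a gthread coroutine's backing thread has
already exited by the time it would be stashed for reuse.]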
Note this patch also makes it easy to toggle the pool for benchmarking
purposes:

  ./configure --with-coroutine=ucontext \
              --disable-coroutine-pool

Reported-by: Gabriel Kerneis
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Gabriel Kerneis
Signed-off-by: Kevin Wolf
---
 configure        | 24 ++++++++++++++++++++++++
 qemu-coroutine.c | 34 +++++++++++++++++++---------------
 2 files changed, 43 insertions(+), 15 deletions(-)

diff --git a/configure b/configure
index b499028..1b6f68b 100755
--- a/configure
+++ b/configure
@@ -238,6 +238,7 @@ win_sdk="no"
 want_tools="yes"
 libiscsi=""
 coroutine=""
+coroutine_pool=""
 seccomp=""
 glusterfs=""
 glusterfs_discard="no"
@@ -888,6 +889,10 @@ for opt do
   ;;
   --with-coroutine=*) coroutine="$optarg"
   ;;
+  --disable-coroutine-pool) coroutine_pool="no"
+  ;;
+  --enable-coroutine-pool) coroutine_pool="yes"
+  ;;
   --disable-docs) docs="no"
   ;;
   --enable-docs) docs="yes"
@@ -1189,6 +1194,8 @@ echo "  --disable-seccomp        disable seccomp support"
 echo "  --enable-seccomp         enables seccomp support"
 echo "  --with-coroutine=BACKEND coroutine backend. Supported options:"
 echo "                           gthread, ucontext, sigaltstack, windows"
+echo "  --disable-coroutine-pool disable coroutine freelist (worse performance)"
+echo "  --enable-coroutine-pool  enable coroutine freelist (better performance)"
 echo "  --enable-glusterfs       enable GlusterFS backend"
 echo "  --disable-glusterfs      disable GlusterFS backend"
 echo "  --enable-gcov            enable test coverage analysis with gcov"
@@ -3362,6 +3369,17 @@ else
   esac
 fi
 
+if test "$coroutine_pool" = ""; then
+  if test "$coroutine" = "gthread"; then
+    coroutine_pool=no
+  else
+    coroutine_pool=yes
+  fi
+fi
+if test "$coroutine" = "gthread" -a "$coroutine_pool" = "yes"; then
+  error_exit "'gthread' coroutine backend does not support pool (use --disable-coroutine-pool)"
+fi
+
 ##########################################
 # check if we have open_by_handle_at
 
@@ -3733,6 +3751,7 @@ echo "build guest agent $guest_agent"
 echo "QGA VSS support   $guest_agent_with_vss"
 echo "seccomp support   $seccomp"
 echo "coroutine backend $coroutine"
+echo "coroutine pool    $coroutine_pool"
 echo "GlusterFS support $glusterfs"
 echo "virtio-blk-data-plane $virtio_blk_data_plane"
 echo "gcov              $gcov_tool"
@@ -4092,6 +4111,11 @@ if test "$rbd" = "yes" ; then
 fi
 
 echo "CONFIG_COROUTINE_BACKEND=$coroutine" >> $config_host_mak
+if test "$coroutine_pool" = "yes" ; then
+  echo "CONFIG_COROUTINE_POOL=1" >> $config_host_mak
+else
+  echo "CONFIG_COROUTINE_POOL=0" >> $config_host_mak
+fi
 
 if test "$open_by_handle_at" = "yes" ; then
   echo "CONFIG_OPEN_BY_HANDLE=y" >> $config_host_mak
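[Editor's note: the last configure hunk writes CONFIG_COROUTINE_POOL into
$config_host_mak as a literal 0 or 1; QEMU's create_config script turns
that into a matching #define, so the C code in the next diff can test it
with a plain `if` and let the compiler discard the dead branch, instead of
hiding one path behind #ifdef. A minimal standalone sketch of that
pattern; try_recycle() is a made-up stand-in, not a real QEMU function:

    #include <stdio.h>

    #ifndef CONFIG_COROUTINE_POOL
    #define CONFIG_COROUTINE_POOL 1    /* normally from config-host.h */
    #endif

    static int try_recycle(void)
    {
        return 0;                      /* pretend the pool was empty */
    }

    int main(void)
    {
        if (CONFIG_COROUTINE_POOL) {   /* constant-folded at compile time */
            if (try_recycle()) {
                puts("reused a pooled coroutine");
                return 0;
            }
        }
        puts("allocated a fresh coroutine");
        return 0;
    }

Both branches are still parsed and type-checked in either configuration,
which keeps the disabled path from bit-rotting.]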
diff --git a/qemu-coroutine.c b/qemu-coroutine.c
index 423430d..4708521 100644
--- a/qemu-coroutine.c
+++ b/qemu-coroutine.c
@@ -30,15 +30,17 @@ static unsigned int pool_size;
 
 Coroutine *qemu_coroutine_create(CoroutineEntry *entry)
 {
-    Coroutine *co;
-
-    qemu_mutex_lock(&pool_lock);
-    co = QSLIST_FIRST(&pool);
-    if (co) {
-        QSLIST_REMOVE_HEAD(&pool, pool_next);
-        pool_size--;
+    Coroutine *co = NULL;
+
+    if (CONFIG_COROUTINE_POOL) {
+        qemu_mutex_lock(&pool_lock);
+        co = QSLIST_FIRST(&pool);
+        if (co) {
+            QSLIST_REMOVE_HEAD(&pool, pool_next);
+            pool_size--;
+        }
+        qemu_mutex_unlock(&pool_lock);
     }
-    qemu_mutex_unlock(&pool_lock);
 
     if (!co) {
         co = qemu_coroutine_new();
@@ -51,15 +53,17 @@ Coroutine *qemu_coroutine_create(CoroutineEntry *entry)
 
 static void coroutine_delete(Coroutine *co)
 {
-    qemu_mutex_lock(&pool_lock);
-    if (pool_size < POOL_MAX_SIZE) {
-        QSLIST_INSERT_HEAD(&pool, co, pool_next);
-        co->caller = NULL;
-        pool_size++;
+    if (CONFIG_COROUTINE_POOL) {
+        qemu_mutex_lock(&pool_lock);
+        if (pool_size < POOL_MAX_SIZE) {
+            QSLIST_INSERT_HEAD(&pool, co, pool_next);
+            co->caller = NULL;
+            pool_size++;
+            qemu_mutex_unlock(&pool_lock);
+            return;
+        }
         qemu_mutex_unlock(&pool_lock);
-        return;
     }
-    qemu_mutex_unlock(&pool_lock);
 
     qemu_coroutine_delete(co);
 }
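[Editor's note: in the coroutine_delete() hunk above, pool_lock is
released before the early return on the recycle path, so no lock is ever
held across qemu_coroutine_delete(). To measure the pool's effect on a
given backend, rebuild twice, once with ./configure --enable-coroutine-pool
and once with ./configure --disable-coroutine-pool, and rerun
tests/test-coroutine (assumed here to be invoked by make check) after
each build.]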