From patchwork Wed Oct 31 15:34:36 2012
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 195989
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Stefan Hajnoczi
Date: Wed, 31 Oct 2012 16:34:36 +0100
Message-Id: <1351697677-31598-3-git-send-email-stefanha@redhat.com>
In-Reply-To: <1351697677-31598-1-git-send-email-stefanha@redhat.com>
References: <1351697677-31598-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH v2 2/3] aio: use g_slice_alloc() for AIOCB pooling

AIO control blocks are frequently acquired and released because each aio
request involves at least one AIOCB.  Therefore, we pool them to avoid heap
allocation overhead.

The problem with the freelist approach in AIOPool is thread-safety.  If we
want BlockDriverStates to associate with AioContexts that execute in
multiple threads, then a global freelist becomes a problem.

This patch drops the freelist and instead uses g_slice_alloc(), which is
tuned for per-thread fixed-size object pools.  qemu_aio_get() and
qemu_aio_release() are now thread-safe.

Note that the change from g_malloc0() to g_slice_alloc() should be safe
since the freelist reuse case doesn't zero the AIOCB either.
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Paolo Bonzini
---
 block.c    | 15 ++++-----------
 qemu-aio.h |  2 --
 2 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/block.c b/block.c
index da1fdca..ea0f7d8 100644
--- a/block.c
+++ b/block.c
@@ -3909,13 +3909,8 @@ void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
 {
     BlockDriverAIOCB *acb;
 
-    if (pool->free_aiocb) {
-        acb = pool->free_aiocb;
-        pool->free_aiocb = acb->next;
-    } else {
-        acb = g_malloc0(pool->aiocb_size);
-        acb->pool = pool;
-    }
+    acb = g_slice_alloc(pool->aiocb_size);
+    acb->pool = pool;
     acb->bs = bs;
     acb->cb = cb;
     acb->opaque = opaque;
@@ -3924,10 +3919,8 @@ void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
 
 void qemu_aio_release(void *p)
 {
-    BlockDriverAIOCB *acb = (BlockDriverAIOCB *)p;
-    AIOPool *pool = acb->pool;
-    acb->next = pool->free_aiocb;
-    pool->free_aiocb = acb;
+    BlockDriverAIOCB *acb = p;
+    g_slice_free1(acb->pool->aiocb_size, acb);
 }
 
 /**************************************************************/
diff --git a/qemu-aio.h b/qemu-aio.h
index 111b0b3..b29c509 100644
--- a/qemu-aio.h
+++ b/qemu-aio.h
@@ -24,7 +24,6 @@ typedef void BlockDriverCompletionFunc(void *opaque, int ret);
 typedef struct AIOPool {
     void (*cancel)(BlockDriverAIOCB *acb);
     size_t aiocb_size;
-    BlockDriverAIOCB *free_aiocb;
 } AIOPool;
 
 struct BlockDriverAIOCB {
@@ -32,7 +31,6 @@ struct BlockDriverAIOCB {
     BlockDriverState *bs;
     BlockDriverCompletionFunc *cb;
     void *opaque;
-    BlockDriverAIOCB *next;
 };
 
 void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
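
For reference, below is a minimal, self-contained sketch of the g_slice
allocation pattern the patch switches to.  It is illustrative only and not
part of the patch: MyAIOCB, aiocb_get() and aiocb_release() are made-up names
for this example, while the patch itself operates on BlockDriverAIOCB and
pool->aiocb_size.

/* Illustrative sketch only -- MyAIOCB, aiocb_get() and aiocb_release() are
 * invented for this example; the patch uses BlockDriverAIOCB and
 * pool->aiocb_size.  Build with:
 *   gcc example.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>
#include <stddef.h>

typedef struct {
    void (*cb)(void *opaque, int ret);
    void *opaque;
} MyAIOCB;

static MyAIOCB *aiocb_get(size_t aiocb_size)
{
    /* g_slice_alloc() serves fixed-size blocks from GLib's slice allocator,
     * which keeps per-thread caches, so callers in different threads do not
     * contend on a shared freelist.  Like the old freelist reuse path, it
     * does not zero the memory (g_slice_alloc0() would). */
    return g_slice_alloc(aiocb_size);
}

static void aiocb_release(MyAIOCB *acb, size_t aiocb_size)
{
    /* The block size must be passed back on free, which is why
     * qemu_aio_release() reads acb->pool->aiocb_size before freeing. */
    g_slice_free1(aiocb_size, acb);
}

int main(void)
{
    MyAIOCB *acb = aiocb_get(sizeof(MyAIOCB));
    acb->cb = NULL;
    acb->opaque = NULL;
    aiocb_release(acb, sizeof(MyAIOCB));
    return 0;
}

Because each thread allocates and frees from its own slice caches, no global
lock or shared freelist pointer is needed, which is what makes qemu_aio_get()
and qemu_aio_release() safe to call from multiple AioContext threads.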