From patchwork Wed Oct 31 13:03:29 2012
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 195869
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, Stefan Hajnoczi
Date: Wed, 31 Oct 2012 14:03:29 +0100
Message-Id: <1351688609-18984-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH] aio: Use g_slice_alloc() for AIOCB pooling

AIO control blocks are frequently acquired and released because each aio
request involves at least one AIOCB.  Therefore pool them to avoid heap
allocation overhead.

The problem with the freelist approach in AIOPool is thread-safety.  If
we want BlockDriverStates to associate with AioContexts that execute in
multiple threads, then a global freelist becomes a problem.

This patch drops the freelist and instead uses g_slice_alloc(), which is
tuned for per-thread fixed-size object pools.  qemu_aio_get() and
qemu_aio_release() are now thread-safe.

Note that the change from g_malloc0() to g_slice_alloc() should be safe
since the freelist reuse case doesn't zero the AIOCB either.
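For illustration only (not part of the patch), here is a minimal sketch
of the g_slice usage pattern; the ExampleCB type and helper names are
made up, while g_slice_alloc() and g_slice_free1() are the real GLib
calls:

    #include <glib.h>

    typedef struct {
        int ret;            /* stands in for a driver's AIOCB fields */
    } ExampleCB;

    static ExampleCB *example_get(void)
    {
        /* Served from a per-thread magazine, so no global lock or
         * freelist is needed.  Note the block is not zeroed. */
        ExampleCB *cb = g_slice_alloc(sizeof(ExampleCB));
        cb->ret = 0;
        return cb;
    }

    static void example_release(ExampleCB *cb)
    {
        /* The size passed to g_slice_free1() must match the size
         * passed to g_slice_alloc(). */
        g_slice_free1(sizeof(ExampleCB), cb);
    }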
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block.c           | 31 ++++++++++++-------------------
 block/blkdebug.c  |  4 ++--
 block/blkverify.c |  4 ++--
 block/curl.c      |  4 ++--
 block/gluster.c   |  6 +++---
 block/iscsi.c     | 12 ++++++------
 block/linux-aio.c |  4 ++--
 block/qed.c       |  4 ++--
 block/rbd.c       |  4 ++--
 block/sheepdog.c  |  4 ++--
 block/win32-aio.c |  4 ++--
 dma-helpers.c     |  4 ++--
 hw/ide/core.c     |  4 ++--
 qemu-aio.h        | 15 +++++++--------
 thread-pool.c     |  4 ++--
 15 files changed, 50 insertions(+), 58 deletions(-)

diff --git a/block.c b/block.c
index da1fdca..854ebd6 100644
--- a/block.c
+++ b/block.c
@@ -3521,7 +3521,7 @@ int bdrv_aio_multiwrite(BlockDriverState *bs, BlockRequest *reqs, int num_reqs)
 
 void bdrv_aio_cancel(BlockDriverAIOCB *acb)
 {
-    acb->pool->cancel(acb);
+    acb->aiocb_info->cancel(acb);
 }
 
 /* block I/O throttling */
@@ -3711,7 +3711,7 @@ static void bdrv_aio_cancel_em(BlockDriverAIOCB *blockacb)
     qemu_aio_release(acb);
 }
 
-static AIOPool bdrv_em_aio_pool = {
+static const AIOCBInfo bdrv_em_aiocb_info = {
     .aiocb_size = sizeof(BlockDriverAIOCBSync),
     .cancel = bdrv_aio_cancel_em,
 };
@@ -3740,7 +3740,7 @@ static BlockDriverAIOCB *bdrv_aio_rw_vector(BlockDriverState *bs,
 {
     BlockDriverAIOCBSync *acb;
 
-    acb = qemu_aio_get(&bdrv_em_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&bdrv_em_aiocb_info, bs, cb, opaque);
     acb->is_write = is_write;
     acb->qiov = qiov;
     acb->bounce = qemu_blockalign(bs, qiov->size);
@@ -3785,7 +3785,7 @@ static void bdrv_aio_co_cancel_em(BlockDriverAIOCB *blockacb)
     qemu_aio_flush();
 }
 
-static AIOPool bdrv_em_co_aio_pool = {
+static const AIOCBInfo bdrv_em_co_aiocb_info = {
     .aiocb_size = sizeof(BlockDriverAIOCBCoroutine),
     .cancel = bdrv_aio_co_cancel_em,
 };
@@ -3828,7 +3828,7 @@ static BlockDriverAIOCB *bdrv_co_aio_rw_vector(BlockDriverState *bs,
     Coroutine *co;
     BlockDriverAIOCBCoroutine *acb;
 
-    acb = qemu_aio_get(&bdrv_em_co_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&bdrv_em_co_aiocb_info, bs, cb, opaque);
     acb->req.sector = sector_num;
     acb->req.nb_sectors = nb_sectors;
     acb->req.qiov = qiov;
@@ -3858,7 +3858,7 @@ BlockDriverAIOCB *bdrv_aio_flush(BlockDriverState *bs,
     Coroutine *co;
     BlockDriverAIOCBCoroutine *acb;
 
-    acb = qemu_aio_get(&bdrv_em_co_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&bdrv_em_co_aiocb_info, bs, cb, opaque);
     co = qemu_coroutine_create(bdrv_aio_flush_co_entry);
     qemu_coroutine_enter(co, acb);
 
@@ -3884,7 +3884,7 @@ BlockDriverAIOCB *bdrv_aio_discard(BlockDriverState *bs,
 
     trace_bdrv_aio_discard(bs, sector_num, nb_sectors, opaque);
 
-    acb = qemu_aio_get(&bdrv_em_co_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&bdrv_em_co_aiocb_info, bs, cb, opaque);
     acb->req.sector = sector_num;
     acb->req.nb_sectors = nb_sectors;
     co = qemu_coroutine_create(bdrv_aio_discard_co_entry);
@@ -3904,18 +3904,13 @@ void bdrv_init_with_whitelist(void)
     bdrv_init();
 }
 
-void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
+void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
                    BlockDriverCompletionFunc *cb, void *opaque)
 {
     BlockDriverAIOCB *acb;
 
-    if (pool->free_aiocb) {
-        acb = pool->free_aiocb;
-        pool->free_aiocb = acb->next;
-    } else {
-        acb = g_malloc0(pool->aiocb_size);
-        acb->pool = pool;
-    }
+    acb = g_slice_alloc(aiocb_info->aiocb_size);
+    acb->aiocb_info = aiocb_info;
     acb->bs = bs;
     acb->cb = cb;
     acb->opaque = opaque;
@@ -3924,10 +3919,8 @@ void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
 
 void qemu_aio_release(void *p)
 {
-    BlockDriverAIOCB *acb = (BlockDriverAIOCB *)p;
-    AIOPool *pool = acb->pool;
-    acb->next = pool->free_aiocb;
-    pool->free_aiocb = acb;
+    BlockDriverAIOCB *acb = p;
+    g_slice_free1(acb->aiocb_info->aiocb_size, acb);
 }
 
 /**************************************************************/
diff --git a/block/blkdebug.c b/block/blkdebug.c
index 1206d52..d61ece8 100644
--- a/block/blkdebug.c
+++ b/block/blkdebug.c
@@ -41,7 +41,7 @@ typedef struct BlkdebugAIOCB {
 
 static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb);
 
-static AIOPool blkdebug_aio_pool = {
+static const AIOCBInfo blkdebug_aiocb_info = {
     .aiocb_size = sizeof(BlkdebugAIOCB),
     .cancel = blkdebug_aio_cancel,
 };
@@ -335,7 +335,7 @@ static BlockDriverAIOCB *inject_error(BlockDriverState *bs,
         return NULL;
     }
 
-    acb = qemu_aio_get(&blkdebug_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&blkdebug_aiocb_info, bs, cb, opaque);
     acb->ret = -error;
 
     bh = qemu_bh_new(error_callback_bh, acb);
diff --git a/block/blkverify.c b/block/blkverify.c
index 9d5f1ec..4beede7 100644
--- a/block/blkverify.c
+++ b/block/blkverify.c
@@ -48,7 +48,7 @@ static void blkverify_aio_cancel(BlockDriverAIOCB *blockacb)
     }
 }
 
-static AIOPool blkverify_aio_pool = {
+static const AIOCBInfo blkverify_aiocb_info = {
     .aiocb_size = sizeof(BlkverifyAIOCB),
     .cancel = blkverify_aio_cancel,
 };
@@ -233,7 +233,7 @@ static BlkverifyAIOCB *blkverify_aio_get(BlockDriverState *bs, bool is_write,
                                          BlockDriverCompletionFunc *cb,
                                          void *opaque)
 {
-    BlkverifyAIOCB *acb = qemu_aio_get(&blkverify_aio_pool, bs, cb, opaque);
+    BlkverifyAIOCB *acb = qemu_aio_get(&blkverify_aiocb_info, bs, cb, opaque);
 
     acb->bh = NULL;
     acb->is_write = is_write;
diff --git a/block/curl.c b/block/curl.c
index c1074cd..1179484 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -438,7 +438,7 @@ static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
     // Do we have to implement canceling? Seems to work without...
 }
 
-static AIOPool curl_aio_pool = {
+static const AIOCBInfo curl_aiocb_info = {
     .aiocb_size = sizeof(CURLAIOCB),
     .cancel = curl_aio_cancel,
 };
@@ -505,7 +505,7 @@ static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
 {
     CURLAIOCB *acb;
 
-    acb = qemu_aio_get(&curl_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&curl_aiocb_info, bs, cb, opaque);
 
     acb->qiov = qiov;
     acb->sector_num = sector_num;
diff --git a/block/gluster.c b/block/gluster.c
index 3588d73..1c90174 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -388,7 +388,7 @@ static void qemu_gluster_aio_cancel(BlockDriverAIOCB *blockacb)
     }
 }
 
-static AIOPool gluster_aio_pool = {
+static const AIOCBInfo gluster_aiocb_info = {
     .aiocb_size = sizeof(GlusterAIOCB),
     .cancel = qemu_gluster_aio_cancel,
 };
@@ -439,7 +439,7 @@ static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
     size = nb_sectors * BDRV_SECTOR_SIZE;
     s->qemu_aio_count++;
 
-    acb = qemu_aio_get(&gluster_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
     acb->size = size;
     acb->ret = 0;
     acb->finished = NULL;
@@ -484,7 +484,7 @@ static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
     GlusterAIOCB *acb;
     BDRVGlusterState *s = bs->opaque;
 
-    acb = qemu_aio_get(&gluster_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
     acb->size = 0;
     acb->ret = 0;
     acb->finished = NULL;
diff --git a/block/iscsi.c b/block/iscsi.c
index d0b1a10..a6a819d 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -133,7 +133,7 @@ iscsi_aio_cancel(BlockDriverAIOCB *blockacb)
     }
 }
 
-static AIOPool iscsi_aio_pool = {
+static const AIOCBInfo iscsi_aiocb_info = {
     .aiocb_size = sizeof(IscsiAIOCB),
     .cancel = iscsi_aio_cancel,
 };
@@ -234,7 +234,7 @@ iscsi_aio_writev(BlockDriverState *bs, int64_t sector_num,
     uint64_t lba;
     struct iscsi_data data;
 
-    acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&iscsi_aiocb_info, bs, cb, opaque);
     trace_iscsi_aio_writev(iscsi, sector_num, nb_sectors, opaque, acb);
 
     acb->iscsilun = iscsilun;
@@ -325,7 +325,7 @@ iscsi_aio_readv(BlockDriverState *bs, int64_t sector_num,
 
     qemu_read_size = BDRV_SECTOR_SIZE * (size_t)nb_sectors;
 
-    acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&iscsi_aiocb_info, bs, cb, opaque);
     trace_iscsi_aio_readv(iscsi, sector_num, nb_sectors, opaque, acb);
 
     acb->iscsilun = iscsilun;
@@ -430,7 +430,7 @@ iscsi_aio_flush(BlockDriverState *bs,
     struct iscsi_context *iscsi = iscsilun->iscsi;
     IscsiAIOCB *acb;
 
-    acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&iscsi_aiocb_info, bs, cb, opaque);
 
     acb->iscsilun = iscsilun;
     acb->canceled = 0;
@@ -483,7 +483,7 @@ iscsi_aio_discard(BlockDriverState *bs,
     IscsiAIOCB *acb;
     struct unmap_list list[1];
 
-    acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&iscsi_aiocb_info, bs, cb, opaque);
 
     acb->iscsilun = iscsilun;
     acb->canceled = 0;
@@ -558,7 +558,7 @@ static BlockDriverAIOCB *iscsi_aio_ioctl(BlockDriverState *bs,
 
     assert(req == SG_IO);
 
-    acb = qemu_aio_get(&iscsi_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&iscsi_aiocb_info, bs, cb, opaque);
 
     acb->iscsilun = iscsilun;
     acb->canceled = 0;
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 6ca984d..91ef863 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -140,7 +140,7 @@ static void laio_cancel(BlockDriverAIOCB *blockacb)
     }
 }
 
-static AIOPool laio_pool = {
+static const AIOCBInfo laio_aiocb_info = {
     .aiocb_size = sizeof(struct qemu_laiocb),
     .cancel = laio_cancel,
 };
@@ -154,7 +154,7 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
     struct iocb *iocbs;
     off_t offset = sector_num * 512;
 
-    laiocb = qemu_aio_get(&laio_pool, bs, cb, opaque);
+    laiocb = qemu_aio_get(&laio_aiocb_info, bs, cb, opaque);
     laiocb->nbytes = nb_sectors * 512;
     laiocb->ctx = s;
     laiocb->ret = -EINPROGRESS;
diff --git a/block/qed.c b/block/qed.c
index 6c182ca..0b5374a 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -30,7 +30,7 @@ static void qed_aio_cancel(BlockDriverAIOCB *blockacb)
     }
 }
 
-static AIOPool qed_aio_pool = {
+static const AIOCBInfo qed_aiocb_info = {
     .aiocb_size = sizeof(QEDAIOCB),
     .cancel = qed_aio_cancel,
 };
@@ -1311,7 +1311,7 @@ static BlockDriverAIOCB *qed_aio_setup(BlockDriverState *bs,
                                        BlockDriverCompletionFunc *cb,
                                        void *opaque, int flags)
 {
-    QEDAIOCB *acb = qemu_aio_get(&qed_aio_pool, bs, cb, opaque);
+    QEDAIOCB *acb = qemu_aio_get(&qed_aiocb_info, bs, cb, opaque);
 
     trace_qed_aio_setup(bs->opaque, acb, sector_num, nb_sectors, opaque, flags);
 
diff --git a/block/rbd.c b/block/rbd.c
index 015a9db..0aaacaf 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -570,7 +570,7 @@ static void qemu_rbd_aio_cancel(BlockDriverAIOCB *blockacb)
     acb->cancelled = 1;
 }
 
-static AIOPool rbd_aio_pool = {
+static const AIOCBInfo rbd_aiocb_info = {
     .aiocb_size = sizeof(RBDAIOCB),
     .cancel = qemu_rbd_aio_cancel,
 };
@@ -672,7 +672,7 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
 
     BDRVRBDState *s = bs->opaque;
 
-    acb = qemu_aio_get(&rbd_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&rbd_aiocb_info, bs, cb, opaque);
     acb->cmd = cmd;
     acb->qiov = qiov;
     if (cmd == RBD_AIO_DISCARD) {
diff --git a/block/sheepdog.c b/block/sheepdog.c
index 9306174..a48f58c 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -420,7 +420,7 @@ static void sd_aio_cancel(BlockDriverAIOCB *blockacb)
     acb->canceled = true;
 }
 
-static AIOPool sd_aio_pool = {
+static const AIOCBInfo sd_aiocb_info = {
     .aiocb_size = sizeof(SheepdogAIOCB),
     .cancel = sd_aio_cancel,
 };
@@ -431,7 +431,7 @@ static SheepdogAIOCB *sd_aio_setup(BlockDriverState *bs, QEMUIOVector *qiov,
 {
     SheepdogAIOCB *acb;
 
-    acb = qemu_aio_get(&sd_aio_pool, bs, cb, opaque);
+    acb = qemu_aio_get(&sd_aiocb_info, bs, cb, opaque);
 
     acb->qiov = qiov;
 
diff --git a/block/win32-aio.c b/block/win32-aio.c
index c34dc73..444c228 100644
--- a/block/win32-aio.c
+++ b/block/win32-aio.c
@@ -131,7 +131,7 @@ static void win32_aio_cancel(BlockDriverAIOCB *blockacb)
     }
 }
 
-static AIOPool win32_aio_pool = {
+static const AIOCBInfo win32_aiocb_info = {
     .aiocb_size = sizeof(QEMUWin32AIOCB),
     .cancel = win32_aio_cancel,
 };
@@ -145,7 +145,7 @@ BlockDriverAIOCB *win32_aio_submit(BlockDriverState *bs,
     uint64_t offset = sector_num * 512;
     DWORD rc;
 
-    waiocb = qemu_aio_get(&win32_aio_pool, bs, cb, opaque);
+    waiocb = qemu_aio_get(&win32_aiocb_info, bs, cb, opaque);
     waiocb->nbytes = nb_sectors * 512;
     waiocb->qiov = qiov;
     waiocb->is_read = (type == QEMU_AIO_READ);
diff --git a/dma-helpers.c b/dma-helpers.c
index 0c18e9e..4f5fb64 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -195,7 +195,7 @@ static void dma_aio_cancel(BlockDriverAIOCB *acb)
     dma_complete(dbs, 0);
 }
 
-static AIOPool dma_aio_pool = {
+static const AIOCBInfo dma_aiocb_info = {
     .aiocb_size = sizeof(DMAAIOCB),
     .cancel = dma_aio_cancel,
 };
@@ -205,7 +205,7 @@ BlockDriverAIOCB *dma_bdrv_io(
     DMAIOFunc *io_func, BlockDriverCompletionFunc *cb,
     void *opaque, DMADirection dir)
 {
-    DMAAIOCB *dbs = qemu_aio_get(&dma_aio_pool, bs, cb, opaque);
+    DMAAIOCB *dbs = qemu_aio_get(&dma_aiocb_info, bs, cb, opaque);
 
     trace_dma_bdrv_io(dbs, bs, sector_num, (dir == DMA_DIRECTION_TO_DEVICE));
 
diff --git a/hw/ide/core.c b/hw/ide/core.c
index d683a8c..7d6b0fa 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -336,7 +336,7 @@ static void trim_aio_cancel(BlockDriverAIOCB *acb)
     qemu_aio_release(iocb);
 }
 
-static AIOPool trim_aio_pool = {
+static const AIOCBInfo trim_aiocb_info = {
     .aiocb_size = sizeof(TrimAIOCB),
     .cancel = trim_aio_cancel,
 };
@@ -360,7 +360,7 @@ BlockDriverAIOCB *ide_issue_trim(BlockDriverState *bs,
     TrimAIOCB *iocb;
     int i, j, ret;
 
-    iocb = qemu_aio_get(&trim_aio_pool, bs, cb, opaque);
+    iocb = qemu_aio_get(&trim_aiocb_info, bs, cb, opaque);
     iocb->bh = qemu_bh_new(ide_trim_bh_cb, iocb);
     iocb->ret = 0;
 
diff --git a/qemu-aio.h b/qemu-aio.h
index 1b7eb6e..a29264b 100644
--- a/qemu-aio.h
+++ b/qemu-aio.h
@@ -20,22 +20,21 @@
 typedef struct BlockDriverAIOCB BlockDriverAIOCB;
 typedef void BlockDriverCompletionFunc(void *opaque, int ret);
+typedef void BlockDriverCancelFunc(BlockDriverAIOCB *acb);
 
-typedef struct AIOPool {
-    void (*cancel)(BlockDriverAIOCB *acb);
-    int aiocb_size;
-    BlockDriverAIOCB *free_aiocb;
-} AIOPool;
+typedef struct AIOCBInfo {
+    size_t aiocb_size;
+    BlockDriverCancelFunc *cancel;
+} AIOCBInfo;
 
 struct BlockDriverAIOCB {
-    AIOPool *pool;
+    const AIOCBInfo *aiocb_info;
     BlockDriverState *bs;
     BlockDriverCompletionFunc *cb;
     void *opaque;
-    BlockDriverAIOCB *next;
 };
 
-void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
+void *qemu_aio_get(const AIOCBInfo *aiocb_info, BlockDriverState *bs,
                    BlockDriverCompletionFunc *cb, void *opaque);
 void qemu_aio_release(void *p);
 
diff --git a/thread-pool.c b/thread-pool.c
index 266f12f..894cde6 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -206,7 +206,7 @@ static void thread_pool_cancel(BlockDriverAIOCB *acb)
     qemu_mutex_unlock(&lock);
 }
 
-static AIOPool thread_pool_cb_pool = {
+static const AIOCBInfo thread_pool_aiocb_info = {
     .aiocb_size = sizeof(ThreadPoolElement),
     .cancel = thread_pool_cancel,
 };
@@ -216,7 +216,7 @@ BlockDriverAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
 {
     ThreadPoolElement *req;
 
-    req = qemu_aio_get(&thread_pool_cb_pool, NULL, cb, opaque);
+    req = qemu_aio_get(&thread_pool_aiocb_info, NULL, cb, opaque);
     req->func = func;
     req->arg = arg;
     req->state = THREAD_QUEUED;
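
For reference, a driver converted to the new API follows this shape; the
ExampleAIOCB type and function names below are hypothetical, but the
AIOCBInfo/qemu_aio_get()/qemu_aio_release() usage mirrors the hunks above:

    typedef struct {
        BlockDriverAIOCB common;   /* must be first so the allocation
                                      returned by qemu_aio_get() can be
                                      used as the driver's AIOCB type */
        int ret;
    } ExampleAIOCB;

    static void example_aio_cancel(BlockDriverAIOCB *blockacb)
    {
        /* driver-specific cancellation */
    }

    static const AIOCBInfo example_aiocb_info = {
        .aiocb_size = sizeof(ExampleAIOCB),
        .cancel = example_aio_cancel,
    };

    static BlockDriverAIOCB *example_aio_submit(BlockDriverState *bs,
                                                BlockDriverCompletionFunc *cb,
                                                void *opaque)
    {
        ExampleAIOCB *acb = qemu_aio_get(&example_aiocb_info, bs, cb, opaque);

        acb->ret = 0;
        /* ... start the request; the completion path invokes
         * acb->common.cb(acb->common.opaque, ret) and then calls
         * qemu_aio_release(acb) ... */
        return &acb->common;
    }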