From patchwork Wed Jul 30 11:39:42 2014
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 374817
From: Ming Lei <ming.lei@canonical.com>
To: qemu-devel@nongnu.org, Peter Maydell, Paolo Bonzini, Stefan Hajnoczi
Date: Wed, 30 Jul 2014 19:39:42 +0800
Message-Id: <1406720388-18671-10-git-send-email-ming.lei@canonical.com>
In-Reply-To: <1406720388-18671-1-git-send-email-ming.lei@canonical.com>
References: <1406720388-18671-1-git-send-email-ming.lei@canonical.com>
Cc: Kevin Wolf, Ming Lei, Fam Zheng, "Michael S. Tsirkin"
Tsirkin" Subject: [Qemu-devel] [PATCH 09/15] linux-aio: fix submit aio as a batch X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org In the enqueue path, we can't complete request, otherwise "Co-routine re-entered recursively" may be caused, so this patch fixes the issue with below ideas: - for -EAGAIN or partial completion, retry the submission by an introduced event handler - for part of completion, also update the io queue - for other failure, return the failure if in enqueue path, otherwise, abort all queued I/O Signed-off-by: Ming Lei --- block/linux-aio.c | 90 ++++++++++++++++++++++++++++++++++++++++------------- 1 file changed, 68 insertions(+), 22 deletions(-) diff --git a/block/linux-aio.c b/block/linux-aio.c index 7ac7e8c..5eb9c92 100644 --- a/block/linux-aio.c +++ b/block/linux-aio.c @@ -51,6 +51,7 @@ struct qemu_laio_state { /* io queue for submit at batch */ LaioQueue io_q; + EventNotifier retry; /* handle -EAGAIN and partial completion */ }; static inline ssize_t io_event_ret(struct io_event *ev) @@ -154,45 +155,80 @@ static void ioq_init(LaioQueue *io_q) io_q->plugged = 0; } -static int ioq_submit(struct qemu_laio_state *s) +static void abort_queue(struct qemu_laio_state *s) +{ + int i; + for (i = 0; i < s->io_q.idx; i++) { + struct qemu_laiocb *laiocb = container_of(s->io_q.iocbs[i], + struct qemu_laiocb, + iocb); + laiocb->ret = -EIO; + qemu_laio_process_completion(s, laiocb); + } +} + +static int ioq_submit(struct qemu_laio_state *s, bool enqueue) { int ret, i = 0; int len = s->io_q.idx; + int j = 0; - do { - ret = io_submit(s->ctx, len, s->io_q.iocbs); - } while (i++ < 3 && ret == -EAGAIN); + if (!len) { + return 0; + } - /* empty io queue */ - s->io_q.idx = 0; + ret = io_submit(s->ctx, len, s->io_q.iocbs); + if (ret == -EAGAIN) { + event_notifier_set(&s->retry); + return 0; + } else if (ret < 0) { + if (enqueue) { + return ret; + } - if (ret < 0) { - i = 0; - } else { - i = ret; + /* in non-queue path, all IOs have to be completed */ + abort_queue(s); + ret = len; + } else if (ret == 0) { + goto out; } - for (; i < len; i++) { - struct qemu_laiocb *laiocb = - container_of(s->io_q.iocbs[i], struct qemu_laiocb, iocb); - - laiocb->ret = (ret < 0) ? 
-        qemu_laio_process_completion(s, laiocb);
+    for (i = ret; i < len; i++) {
+        s->io_q.iocbs[j++] = s->io_q.iocbs[i];
     }
+
+ out:
+    /* update io queue */
+    s->io_q.idx -= ret;
+
     return ret;
 }
 
-static void ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
+static void ioq_submit_retry(EventNotifier *e)
+{
+    struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, retry);
+
+    event_notifier_test_and_clear(e);
+    ioq_submit(s, false);
+}
+
+static int ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
 {
     unsigned int idx = s->io_q.idx;
 
+    if (unlikely(idx == s->io_q.size)) {
+        return -1;
+    }
+
     s->io_q.iocbs[idx++] = iocb;
     s->io_q.idx = idx;
 
-    /* submit immediately if queue is full */
-    if (idx == s->io_q.size) {
-        ioq_submit(s);
+    /* submit immediately if queue depth is above 2/3 */
+    if (idx > s->io_q.size * 2 / 3) {
+        return ioq_submit(s, true);
     }
+
+    return 0;
 }
 
 void laio_io_plug(BlockDriverState *bs, void *aio_ctx)
@@ -214,7 +250,7 @@ int laio_io_unplug(BlockDriverState *bs, void *aio_ctx, bool unplug)
     }
 
     if (s->io_q.idx > 0) {
-        ret = ioq_submit(s);
+        ret = ioq_submit(s, false);
     }
 
     return ret;
@@ -258,7 +294,9 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
             goto out_free_aiocb;
         }
     } else {
-        ioq_enqueue(s, iocbs);
+        if (ioq_enqueue(s, iocbs) < 0) {
+            goto out_free_aiocb;
+        }
     }
 
     return &laiocb->common;
@@ -272,6 +310,7 @@ void laio_detach_aio_context(void *s_, AioContext *old_context)
     struct qemu_laio_state *s = s_;
 
     aio_set_event_notifier(old_context, &s->e, NULL);
+    aio_set_event_notifier(old_context, &s->retry, NULL);
 }
 
 void laio_attach_aio_context(void *s_, AioContext *new_context)
@@ -279,6 +318,7 @@ void laio_attach_aio_context(void *s_, AioContext *new_context)
    struct qemu_laio_state *s = s_;
 
    aio_set_event_notifier(new_context, &s->e, qemu_laio_completion_cb);
+    aio_set_event_notifier(new_context, &s->retry, ioq_submit_retry);
 }
 
 void *laio_init(void)
@@ -295,9 +335,14 @@ void *laio_init(void)
     }
 
     ioq_init(&s->io_q);
+    if (event_notifier_init(&s->retry, false) < 0) {
+        goto out_notifer_init;
+    }
 
     return s;
 
+out_notifer_init:
+    io_destroy(s->ctx);
 out_close_efd:
     event_notifier_cleanup(&s->e);
 out_free_state:
@@ -310,6 +355,7 @@ void laio_cleanup(void *s_)
     struct qemu_laio_state *s = s_;
 
     event_notifier_cleanup(&s->e);
+    event_notifier_cleanup(&s->retry);
 
     if (io_destroy(s->ctx) != 0) {
         fprintf(stderr, "%s: destroy AIO context %p failed\n",
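
For readers who want to poke at the queue bookkeeping outside QEMU, below is a
minimal, self-contained sketch (not part of the patch) of the submission flow
described in the commit message: defer on -EAGAIN, shift the unsubmitted tail
to the head of the queue on partial completion, and propagate or abort on other
errors. The sketch_queue type, fake_io_submit() helper and retry_pending flag
are hypothetical stand-ins for LaioQueue, io_submit() and the retry
EventNotifier; only the control flow mirrors the patch.

/*
 * Standalone model of the batched-submission bookkeeping.  A plain int
 * array and a fake submit function replace libaio and QEMU's queue type;
 * the simplifications are noted in the comments.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define SKETCH_Q_SIZE 8

struct sketch_queue {
    int iocbs[SKETCH_Q_SIZE];  /* stand-in for struct iocb pointers */
    int idx;                   /* number of queued requests */
    bool retry_pending;        /* stand-in for the retry EventNotifier */
};

/* Pretend the kernel accepts only 'accept' requests, or fails outright. */
static int fake_io_submit(struct sketch_queue *q, int accept)
{
    if (accept == -EAGAIN || accept == -EIO) {
        return accept;
    }
    return accept < q->idx ? accept : q->idx;
}

static int sketch_ioq_submit(struct sketch_queue *q, bool enqueue, int accept)
{
    int ret, i, j = 0;

    if (!q->idx) {
        return 0;
    }

    ret = fake_io_submit(q, accept);
    if (ret == -EAGAIN) {
        /* Defer: the real code kicks an EventNotifier and retries later. */
        q->retry_pending = true;
        return 0;
    } else if (ret < 0) {
        if (enqueue) {
            return ret;        /* let the caller fail the new request */
        }
        q->idx = 0;            /* abort: the real code completes all with -EIO */
        return ret;
    }

    /* Partial submission: keep the unsubmitted tail at the queue head. */
    for (i = ret; i < q->idx; i++) {
        q->iocbs[j++] = q->iocbs[i];
    }
    q->idx -= ret;
    return ret;
}

int main(void)
{
    struct sketch_queue q = { .iocbs = { 1, 2, 3, 4, 5 }, .idx = 5 };

    sketch_ioq_submit(&q, false, 3);       /* only 3 of 5 accepted */
    printf("left after partial submit: %d (head=%d)\n", q.idx, q.iocbs[0]);

    sketch_ioq_submit(&q, false, -EAGAIN); /* kernel is busy */
    printf("retry pending: %d, still queued: %d\n", q.retry_pending, q.idx);
    return 0;
}

Running it prints the two requests left over after the partial submission and
the pending-retry state after the simulated -EAGAIN, which is exactly the
state the retry event notifier handler would later drain.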