From patchwork Mon Nov 24 11:31:31 2014
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 413624
From: Ming Lei
To: qemu-devel@nongnu.org, Paolo Bonzini, Stefan Hajnoczi, Kevin Wolf
Cc: Ming Lei
Date: Mon, 24 Nov 2014 19:31:31 +0800
Message-Id: <1416828693-30767-2-git-send-email-ming.lei@canonical.com>
In-Reply-To: <1416828693-30767-1-git-send-email-ming.lei@canonical.com>
References: <1416828693-30767-1-git-send-email-ming.lei@canonical.com>
Subject: [Qemu-devel] [PATCH v4 1/3] linux-aio: fix submit aio as a batch

In the submit path we can't complete requests directly, otherwise
"Co-routine re-entered recursively" may be triggered, so this patch
fixes the issue with the ideas below:

- for -EAGAIN or partial completion, retry the submission in the
  following completion cb, which runs in BH context
- for partial completion, update the io queue too
- when the io queue is full, submit queued requests immediately
  and return failure to the caller
- for other failures, abort all queued requests in BH context

Signed-off-by: Ming Lei
---
 block/linux-aio.c | 93 +++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 73 insertions(+), 20 deletions(-)

diff --git a/block/linux-aio.c b/block/linux-aio.c
index d92513b..11fcedb 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -38,11 +38,19 @@ struct qemu_laiocb {
     QLIST_ENTRY(qemu_laiocb) node;
 };
 
+/*
+ * TODO: support to batch I/O from multiple bs in one same
+ * AIO context, one important use case is multi-lun scsi,
+ * so in future the IO queue should be per AIO context.
+ */
 typedef struct {
     struct iocb *iocbs[MAX_QUEUED_IO];
     int plugged;
     unsigned int size;
     unsigned int idx;
+
+    /* abort queued requests in BH context */
+    QEMUBH *abort_bh;
 } LaioQueue;
 
 struct qemu_laio_state {
@@ -59,6 +67,8 @@ struct qemu_laio_state {
     int event_max;
 };
 
+static int ioq_submit(struct qemu_laio_state *s);
+
 static inline ssize_t io_event_ret(struct io_event *ev)
 {
     return (ssize_t)(((uint64_t)ev->res2 << 32) | ev->res);
}
@@ -91,6 +101,13 @@ static void qemu_laio_process_completion(struct qemu_laio_state *s,
     qemu_aio_unref(laiocb);
 }
 
+static void qemu_laio_start_retry(struct qemu_laio_state *s)
+{
+    if (s->io_q.idx) {
+        ioq_submit(s);
+    }
+}
+
 /* The completion BH fetches completed I/O requests and invokes their
  * callbacks.
  *
@@ -135,6 +152,8 @@ static void qemu_laio_completion_bh(void *opaque)
         qemu_laio_process_completion(s, laiocb);
     }
+
+    qemu_laio_start_retry(s);
 }
 
 static void qemu_laio_completion_cb(EventNotifier *e)
@@ -177,45 +196,75 @@ static void ioq_init(LaioQueue *io_q)
     io_q->plugged = 0;
 }
 
+/* always return >= 0 */
 static int ioq_submit(struct qemu_laio_state *s)
 {
-    int ret, i = 0;
+    int ret;
     int len = s->io_q.idx;
 
-    do {
-        ret = io_submit(s->ctx, len, s->io_q.iocbs);
-    } while (i++ < 3 && ret == -EAGAIN);
-
-    /* empty io queue */
-    s->io_q.idx = 0;
+    if (!len) {
+        return 0;
+    }
 
+    ret = io_submit(s->ctx, len, s->io_q.iocbs);
     if (ret < 0) {
-        i = 0;
-    } else {
-        i = ret;
+        /* retry in following completion cb */
+        if (ret == -EAGAIN) {
+            return 0;
+        }
+
+        /* abort in BH context for avoiding Co-routine re-entered */
+        qemu_bh_schedule(s->io_q.abort_bh);
+        ret = len;
     }
 
-    for (; i < len; i++) {
-        struct qemu_laiocb *laiocb =
-            container_of(s->io_q.iocbs[i], struct qemu_laiocb, iocb);
+    /*
+     * update io queue, and retry will be started automatically
+     * in following completion cb for the remainder
+     */
+    if (ret > 0) {
+        if (ret < len) {
+            memmove(&s->io_q.iocbs[0], &s->io_q.iocbs[ret],
+                    (len - ret) * sizeof(struct iocb *));
+        }
+        s->io_q.idx -= ret;
+    }
+
+    return ret;
+}
+
+static void ioq_abort_bh(void *opaque)
+{
+    struct qemu_laio_state *s = opaque;
+    int i;
 
-        laiocb->ret = (ret < 0) ? ret : -EIO;
+    for (i = 0; i < s->io_q.idx; i++) {
+        struct qemu_laiocb *laiocb = container_of(s->io_q.iocbs[i],
+                                                  struct qemu_laiocb,
+                                                  iocb);
+        laiocb->ret = -EIO;
         qemu_laio_process_completion(s, laiocb);
     }
-    return ret;
 }
 
-static void ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
+static int ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
 {
     unsigned int idx = s->io_q.idx;
 
+    if (unlikely(idx == s->io_q.size)) {
+        ioq_submit(s);
+        return -EAGAIN;
+    }
+
     s->io_q.iocbs[idx++] = iocb;
     s->io_q.idx = idx;
 
-    /* submit immediately if queue is full */
-    if (idx == s->io_q.size) {
-        ioq_submit(s);
+    /* submit immediately if queue depth is above 2/3 */
+    if (idx > s->io_q.size * 2 / 3) {
+        return ioq_submit(s);
     }
+
+    return 0;
 }
 
 void laio_io_plug(BlockDriverState *bs, void *aio_ctx)
@@ -281,7 +330,9 @@ BlockAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
         goto out_free_aiocb;
         }
     } else {
-        ioq_enqueue(s, iocbs);
+        if (ioq_enqueue(s, iocbs) < 0) {
+            goto out_free_aiocb;
+        }
     }
 
     return &laiocb->common;
@@ -296,12 +347,14 @@ void laio_detach_aio_context(void *s_, AioContext *old_context)
     aio_set_event_notifier(old_context, &s->e, NULL);
     qemu_bh_delete(s->completion_bh);
+    qemu_bh_delete(s->io_q.abort_bh);
 }
 
 void laio_attach_aio_context(void *s_, AioContext *new_context)
 {
     struct qemu_laio_state *s = s_;
 
+    s->io_q.abort_bh = aio_bh_new(new_context, ioq_abort_bh, s);
     s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
     aio_set_event_notifier(new_context, &s->e, qemu_laio_completion_cb);
 }