From patchwork Tue Sep 1 13:51:51 2009
X-Patchwork-Submitter: Kevin Wolf <kwolf@redhat.com>
X-Patchwork-Id: 32747
From: Kevin Wolf <kwolf@redhat.com>
To: qemu-devel@nongnu.org
Date: Tue, 1 Sep 2009 15:51:51 +0200
Message-Id: <1251813112-17408-3-git-send-email-kwolf@redhat.com>
In-Reply-To: <1251813112-17408-1-git-send-email-kwolf@redhat.com>
References: <1251813112-17408-1-git-send-email-kwolf@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 2/3] virtio-blk: Use bdrv_aio_multiwrite

It is quite common for virtio-blk to submit more than one write request in a
row to the qemu block layer. Use bdrv_aio_multiwrite to allow block drivers
to optimize their handling of the requests.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 hw/virtio-blk.c |   50 ++++++++++++++++++++++++++++++++++++++++++--------
 1 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/hw/virtio-blk.c b/hw/virtio-blk.c
index c160246..5c88c12 100644
--- a/hw/virtio-blk.c
+++ b/hw/virtio-blk.c
@@ -252,15 +252,40 @@ static void virtio_blk_handle_scsi(VirtIOBlockReq *req)
 }
 #endif /* __linux__ */
 
-static void virtio_blk_handle_write(VirtIOBlockReq *req)
+static void do_multiwrite(BlockDriverState *bs, BlockRequest *blkreq,
+    int num_writes)
 {
-    BlockDriverAIOCB *acb;
+    int i, ret;
+
+    ret = bdrv_aio_multiwrite(bs, blkreq, num_writes);
+
+    if (ret != 0) {
+        for (i = 0; i < num_writes; i++) {
+            if (blkreq[i].error) {
+                virtio_blk_req_complete(blkreq[i].opaque, VIRTIO_BLK_S_IOERR);
+            }
+        }
+    }
+}
 
-    acb = bdrv_aio_writev(req->dev->bs, req->out->sector, &req->qiov,
-                          req->qiov.size / 512, virtio_blk_rw_complete, req);
-    if (!acb) {
-        virtio_blk_req_complete(req, VIRTIO_BLK_S_IOERR);
+static void virtio_blk_handle_write(BlockRequest *blkreq, int *num_writes,
+    VirtIOBlockReq *req, BlockDriverState **old_bs)
+{
+    if (req->dev->bs != *old_bs || *num_writes == 32) {
+        if (*old_bs != NULL) {
+            do_multiwrite(*old_bs, blkreq, *num_writes);
+        }
+        *num_writes = 0;
+        *old_bs = req->dev->bs;
     }
+
+    blkreq[*num_writes].sector = req->out->sector;
+    blkreq[*num_writes].nb_sectors = req->qiov.size / 512;
+    blkreq[*num_writes].qiov = &req->qiov;
+    blkreq[*num_writes].cb = virtio_blk_rw_complete;
+    blkreq[*num_writes].opaque = req;
+    blkreq[*num_writes].error = 0;
+
+    (*num_writes)++;
 }
 
 static void virtio_blk_handle_read(VirtIOBlockReq *req)
@@ -278,6 +303,9 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
 {
     VirtIOBlock *s = to_virtio_blk(vdev);
     VirtIOBlockReq *req;
+    BlockRequest blkreq[32];
+    int num_writes = 0;
+    BlockDriverState *old_bs = NULL;
 
     while ((req = virtio_blk_get_request(s))) {
         if (req->elem.out_num < 1 || req->elem.in_num < 1) {
@@ -299,13 +327,18 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
         } else if (req->out->type & VIRTIO_BLK_T_OUT) {
             qemu_iovec_init_external(&req->qiov, &req->elem.out_sg[1],
                                      req->elem.out_num - 1);
-            virtio_blk_handle_write(req);
+            virtio_blk_handle_write(blkreq, &num_writes, req, &old_bs);
         } else {
             qemu_iovec_init_external(&req->qiov, &req->elem.in_sg[0],
                                      req->elem.in_num - 1);
             virtio_blk_handle_read(req);
         }
     }
+
+    if (num_writes > 0) {
+        do_multiwrite(old_bs, blkreq, num_writes);
+    }
+
     /*
      * FIXME: Want to check for completions before returning to guest mode,
      * so cached reads and writes are reported as quickly as possible. But
@@ -324,7 +357,8 @@ static void virtio_blk_dma_restart_bh(void *opaque)
     s->rq = NULL;
 
     while (req) {
-        virtio_blk_handle_write(req);
+        bdrv_aio_writev(req->dev->bs, req->out->sector, &req->qiov,
+            req->qiov.size / 512, virtio_blk_rw_complete, req);
         req = req->next;
     }
 }