From patchwork Mon Jan 11 13:11:19 2010
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 42617
Date: Mon, 11 Jan 2010 14:11:19 +0100
From: Christoph Hellwig
To: Dor Laor
Cc: Vadim Rozenfeld, qemu-devel, Avi Kivity
Subject: Re: [Qemu-devel] Re: [RFC][PATCH] performance improvement for windows guests, running on top of virtio block device
Message-ID: <20100111131119.GB24241@lst.de>
In-Reply-To: <4B4AED19.3060401@redhat.com>
References: <1263195647.2005.44.camel@localhost> <4B4AE1BD.4000400@redhat.com> <4B4AE95D.7080305@redhat.com> <4B4AED19.3060401@redhat.com>

On Mon, Jan 11, 2010 at 11:19:21AM +0200, Dor Laor wrote:
> > Attached results with rhel5.4 (qemu 0.11) for a win2k8 32bit guest.
> > Note the drastic reduction in cpu consumption.
>
> The attachment did not survive the email server, so you'll have to
> trust me when I say that cpu consumption went down from 65% to 40% for
> reads and from 80% to 30% for writes.

For what kind of workload, using what qemu parameters, and is that cpu
consumption in the host or in the guest?  Either way this is an awful
lot.  Did you use kernel AIO on the host?  Any chance you could publish
the benchmark, guest and host configs so we have meaningful numbers?
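(For reference, since the kernel AIO question keeps coming up: on qemu
trees that carry the aio= drive suboption -- it is in the qemu 0.12-era
code, and whether the rhel5.4 qemu 0.11 build has it is an assumption to
verify -- Linux native AIO is selected per drive, roughly like this:

    qemu -drive file=disk.img,if=virtio,cache=none,aio=native

disk.img is a placeholder image path; cache=none matters because Linux
native AIO is only truly asynchronous on O_DIRECT file descriptors.)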
FYI below is the manually applied patch without all the wrapping:

Index: qemu/hw/virtio-blk.c
===================================================================
--- qemu.orig/hw/virtio-blk.c	2010-01-11 14:05:09.619254004 +0100
+++ qemu/hw/virtio-blk.c	2010-01-11 14:06:54.385013004 +0100
@@ -28,6 +28,7 @@ typedef struct VirtIOBlock
     char serial_str[BLOCK_SERIAL_STRLEN + 1];
     QEMUBH *bh;
     size_t config_size;
+    unsigned int pending;
 } VirtIOBlock;
 
 static VirtIOBlock *to_virtio_blk(VirtIODevice *vdev)
@@ -87,6 +88,8 @@ typedef struct VirtIOBlockReq
     struct VirtIOBlockReq *next;
 } VirtIOBlockReq;
 
+static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq);
+
 static void virtio_blk_req_complete(VirtIOBlockReq *req, int status)
 {
     VirtIOBlock *s = req->dev;
@@ -95,6 +98,12 @@ static void virtio_blk_req_complete(Virt
     virtqueue_push(s->vq, &req->elem, req->qiov.size + sizeof(*req->in));
     virtio_notify(&s->vdev, s->vq);
 
+    if (--s->pending == 0) {
+        virtio_queue_set_notification(s->vq, 1);
+        virtio_blk_handle_output(&s->vdev, s->vq);
+    }
+
     qemu_free(req);
 }
 
@@ -340,6 +349,9 @@ static void virtio_blk_handle_output(Vir
             exit(1);
         }
 
+        if (++s->pending == 1)
+            virtio_queue_set_notification(s->vq, 0);
+
         req->out = (void *)req->elem.out_sg[0].iov_base;
         req->in = (void *)req->elem.in_sg[req->elem.in_num - 1].iov_base;
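To make the control flow easier to follow outside the diff context, here
is a minimal stand-alone sketch of the same pending-counter idea.  This
is NOT qemu code: the virtqueue_sim/blockdev_sim types and the function
names are invented for illustration, and a plain bool stands in for the
virtqueue's notification flag; only the increment/decrement logic around
the notification toggle mirrors the patch above.

    #include <stdio.h>
    #include <stdbool.h>

    struct virtqueue_sim {
        bool notify_enabled;   /* guest->host kick notifications on/off */
        int  queued;           /* requests the guest has queued */
    };

    struct blockdev_sim {
        struct virtqueue_sim *vq;
        unsigned int pending;  /* submitted but not yet completed */
    };

    static void set_notification(struct virtqueue_sim *vq, bool on)
    {
        vq->notify_enabled = on;
        printf("notifications %s\n", on ? "enabled" : "disabled");
    }

    /* Mirrors virtio_blk_handle_output(): drain the queue, disabling
     * notifications as soon as the first request goes in flight. */
    static void handle_output(struct blockdev_sim *s)
    {
        while (s->vq->queued > 0) {
            s->vq->queued--;
            if (++s->pending == 1)
                set_notification(s->vq, false);
            printf("submitted request (pending=%u)\n", s->pending);
        }
    }

    /* Mirrors virtio_blk_req_complete(): when the last in-flight
     * request finishes, re-enable notifications and re-poll the queue,
     * so requests queued while notifications were off are not lost. */
    static void req_complete(struct blockdev_sim *s)
    {
        printf("completed request (pending=%u)\n", s->pending - 1);
        if (--s->pending == 0) {
            set_notification(s->vq, true);
            handle_output(s);
        }
    }

    int main(void)
    {
        struct virtqueue_sim vq = { .notify_enabled = true, .queued = 3 };
        struct blockdev_sim dev = { .vq = &vq, .pending = 0 };

        handle_output(&dev);   /* one guest kick submits 3 requests */
        vq.queued = 2;         /* guest queues more while we are busy;
                                  no kick arrives, notifications are off */
        req_complete(&dev);
        req_complete(&dev);
        req_complete(&dev);    /* last completion re-polls, finds 2 more */
        req_complete(&dev);
        req_complete(&dev);
        return 0;
    }

The re-poll on the last completion is the important detail: re-enabling
notifications alone would leave a window where the guest queued requests
while they were disabled, and those requests would then sit unprocessed
until the next kick.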