From patchwork Mon Jan 11 13:42:48 2010
Date: Mon, 11 Jan 2010 14:42:48 +0100
From: Christoph Hellwig
To: Avi Kivity
Cc: Dor Laor, Vadim Rozenfeld, qemu-devel
Subject: Re: [Qemu-devel] Re: [RFC][PATCH] performance improvement for
 windows guests, running on top of virtio block device
Message-ID: <20100111134248.GA25622@lst.de>
References: <1263195647.2005.44.camel@localhost> <4B4AE1BD.4000400@redhat.com>
In-Reply-To: <4B4AE1BD.4000400@redhat.com>

On Mon, Jan 11, 2010 at 10:30:53AM +0200, Avi Kivity wrote:
> The patch has potential to reduce performance on volumes with multiple
> spindles. Consider two processes issuing sequential reads into a RAID
> array. With this patch, the reads will be executed sequentially rather
> than in parallel, so I think a follow-on patch to make the minimum depth
> a parameter (set by the guest? the host?) would be helpful.

Let's think about the life cycle of I/O requests a bit.

We have an idle virtqueue (aka one virtio-blk device). The first (read)
request comes in, and we get the virtio notify from the guest, which
calls into virtio_blk_handle_output. With the new code we now disable
the notify once we start processing the first request.
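Roughly, the flow then looks like this (a simplified sketch with a
made-up function name, not the actual hw/virtio-blk.c code; request
submission and completion are elided):

    /* sketch only: what the patched code does for an idle queue */
    static void handle_output_sketch(VirtIOBlock *s)
    {
        VirtIOBlockReq *req;

        /* the guest notify got us here; suppress further notifies */
        virtio_queue_set_notification(s->vq, 0);

        while ((req = virtio_blk_get_request(s))) {
            /* submit req asynchronously; it completes later */
        }

        /*
         * We return with notification still disabled until the
         * in-flight requests complete, so a request queued in the
         * meantime sits in the ring without triggering a new call.
         */
    }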
If the second request hits the queue before we call into
virtio_blk_get_request for the second time, we're fine even with the
new code, as we keep picking it up. If, however, it hits after we leave
virtio_blk_handle_output but before we complete the first request, we
do indeed introduce additional latency.

So instead of disabling the notify while requests are active, we might
want to disable it only while we are inside virtio_blk_handle_output.
Something like the following minimally tested patch:

Index: qemu/hw/virtio-blk.c
===================================================================
--- qemu.orig/hw/virtio-blk.c	2010-01-11 14:28:42.896010503 +0100
+++ qemu/hw/virtio-blk.c	2010-01-11 14:40:13.535256353 +0100
@@ -328,7 +328,15 @@ static void virtio_blk_handle_output(Vir
     int num_writes = 0;
     BlockDriverState *old_bs = NULL;
 
+    /*
+     * While we are processing requests there is no need to get further
+     * notifications from the guest - it'll just burn cpu cycles doing
+     * useless context switches into the host.
+     */
+    virtio_queue_set_notification(s->vq, 0);
+
     while ((req = virtio_blk_get_request(s))) {
+handle_request:
         if (req->elem.out_num < 1 || req->elem.in_num < 1) {
             fprintf(stderr, "virtio-blk missing headers\n");
             exit(1);
@@ -358,6 +366,18 @@ static void virtio_blk_handle_output(Vir
         }
     }
 
+    /*
+     * Once we're done processing all pending requests re-enable the queue
+     * notification.  If there's an entry pending after we enabled
+     * notification again we hit a race and need to process it before
+     * returning.
+     */
+    virtio_queue_set_notification(s->vq, 1);
+    req = virtio_blk_get_request(s);
+    if (req) {
+        goto handle_request;
+    }
+
     if (num_writes > 0) {
         do_multiwrite(old_bs, blkreq, num_writes);
     }
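For clarity, here is the idiom the second hunk implements, boiled down
to its bare shape (hypothetical names, nothing from the QEMU tree):

    /* disable/drain/re-enable/recheck, in isolation */
    void drain(Queue *q)
    {
        Item *it;

        notify_enable(q, false);        /* suppress guest kicks */
        while ((it = pop(q)))
            process_one(it);

        notify_enable(q, true);
        /*
         * An item queued while kicks were suppressed never generated
         * a notify, so poll once more after re-enabling to close the
         * window.
         */
        while ((it = pop(q)))
            process_one(it);
    }

The goto in the patch does the same recheck while reusing the existing
loop body instead of duplicating it in a second loop.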