From patchwork Fri Apr  5 15:23:56 2013
Subject: Re: [Qemu-devel] [PATCH v5 7/7] Use qemu_put_buffer_async for
 guest memory pages
From: Paolo Bonzini
Date: Fri, 05 Apr 2013 17:23:56 +0200
To: Kevin Wolf
Cc: Orit Wasserman, quintela@redhat.com, chegu_vinod@hp.com,
 qemu-devel@nongnu.org, mst@redhat.com
Message-ID: <515EEC8C.8010203@redhat.com>
In-Reply-To: <20130405134445.GD2351@dhcp-200-207.str.redhat.com>
References: <1363963683-26157-1-git-send-email-owasserm@redhat.com>
 <1363963683-26157-8-git-send-email-owasserm@redhat.com>
 <20130405134445.GD2351@dhcp-200-207.str.redhat.com>

On 05/04/2013 15:44, Kevin Wolf wrote:
> This seems to have killed savevm performance. I noticed that
> qemu-iotests case 007 took forever on my test box (882 seconds instead
> of something like 10 seconds). It can be reproduced by this script:
>
> export MALLOC_PERTURB_=11
> qemu-img create -f qcow2 -o compat=1.1 test.qcow2 1M
> time qemu-system-x86_64 -nographic -hda $TEST_IMG -serial none -monitor stdio <<EOF
> savevm test
> quit
> EOF
>
> This used to take about 0.6s for me, after this patch it's around 10s.

The solution could be to make bdrv_load_vmstate take an iov/iovcnt pair;
a sketch of what that interface might look like follows the patch below.
Alternatively, you can try the attached patch.
I haven't yet tested it though, and won't be able to do so today.

Paolo

diff --git a/savevm.c b/savevm.c
index b1d8988..af99d64 100644
--- a/savevm.c
+++ b/savevm.c
@@ -525,27 +525,24 @@ static void qemu_file_set_error(QEMUFile *f, int ret)
 static void qemu_fflush(QEMUFile *f)
 {
     ssize_t ret = 0;
-    int i = 0;
 
     if (!f->ops->writev_buffer && !f->ops->put_buffer) {
         return;
     }
 
-    if (f->is_write && f->iovcnt > 0) {
+    if (f->is_write) {
         if (f->ops->writev_buffer) {
-            ret = f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
-            if (ret >= 0) {
-                f->pos += ret;
+            if (f->iovcnt > 0) {
+                ret = f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
             }
         } else {
-            for (i = 0; i < f->iovcnt && ret >= 0; i++) {
-                ret = f->ops->put_buffer(f->opaque, f->iov[i].iov_base, f->pos,
-                                         f->iov[i].iov_len);
-                if (ret >= 0) {
-                    f->pos += ret;
-                }
+            if (f->buf_index > 0) {
+                ret = f->ops->put_buffer(f->opaque, f->buf, f->pos, f->buf_index);
             }
         }
+        if (ret >= 0) {
+            f->pos += ret;
+        }
         f->buf_index = 0;
         f->iovcnt = 0;
     }
@@ -631,6 +628,11 @@ static void add_to_iovec(QEMUFile *f, const uint8_t *buf, int size)
         f->iov[f->iovcnt].iov_base = (uint8_t *)buf;
         f->iov[f->iovcnt++].iov_len = size;
     }
+
+    f->is_write = 1;
+    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
+        qemu_fflush(f);
+    }
 }
 
 void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, int size)
@@ -645,13 +647,11 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, int size)
         abort();
     }
 
-    add_to_iovec(f, buf, size);
-
-    f->is_write = 1;
-    f->bytes_xfer += size;
-
-    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
-        qemu_fflush(f);
+    if (f->ops->writev_buffer) {
+        f->bytes_xfer += size;
+        add_to_iovec(f, buf, size);
+    } else {
+        qemu_put_buffer(f, buf, size);
     }
 }
 
@@ -674,9 +674,17 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, int size)
         if (l > size)
             l = size;
         memcpy(f->buf + f->buf_index, buf, l);
-        f->is_write = 1;
-        f->buf_index += l;
-        qemu_put_buffer_async(f, f->buf + (f->buf_index - l), l);
+        f->bytes_xfer += size;
+        if (f->ops->writev_buffer) {
+            add_to_iovec(f, f->buf + f->buf_index, l);
+            f->buf_index += l;
+        } else {
+            f->is_write = 1;
+            f->buf_index += l;
+            if (f->buf_index == IO_BUF_SIZE) {
+                qemu_fflush(f);
+            }
+        }
         if (qemu_file_get_error(f)) {
             break;
         }
@@ -697,14 +705,16 @@ void qemu_put_byte(QEMUFile *f, int v)
         abort();
     }
 
-    f->buf[f->buf_index++] = v;
-    f->is_write = 1;
+    f->buf[f->buf_index] = v;
     f->bytes_xfer++;
-
-    add_to_iovec(f, f->buf + (f->buf_index - 1), 1);
-
-    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
-        qemu_fflush(f);
+    if (f->ops->writev_buffer) {
+        add_to_iovec(f, f->buf + f->buf_index, 1);
+        f->buf_index++;
+    } else {
+        f->buf_index++;
+        if (f->buf_index == IO_BUF_SIZE) {
+            qemu_fflush(f);
+        }
     }
 }
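
For illustration, here is a minimal sketch of the iov/iovcnt alternative
mentioned above. It is a hypothetical reading, not part of the attached
patch: the bdrv_save_vmstate_iov name, parameter order, and flattening
fallback are all assumptions made for discussion; only the byte-buffer
bdrv_save_vmstate prototype matches the existing block.h interface.

/* Hypothetical sketch only; not QEMU API. */
#include <stdint.h>
#include <sys/uio.h>        /* struct iovec */

typedef struct BlockDriverState BlockDriverState;   /* opaque here */

/* Existing byte-buffer interface from block.h: */
int bdrv_save_vmstate(BlockDriverState *bs, const uint8_t *buf,
                      int64_t pos, int size);

/* Vectored variant: QEMUFile's writev_buffer callback could hand its
 * iovec straight down instead of draining it through put_buffer one
 * element at a time. */
int bdrv_save_vmstate_iov(BlockDriverState *bs, const struct iovec *iov,
                          int iovcnt, int64_t pos)
{
    /* Naive fallback that flattens the vector; a real implementation
     * would submit one vectored request so that tiny elements (such as
     * the 1-byte entries qemu_put_byte used to generate) are coalesced
     * instead of each costing a block-layer write. */
    int ret = 0;
    for (int i = 0; i < iovcnt; i++) {
        ret = bdrv_save_vmstate(bs, iov[i].iov_base, pos,
                                (int)iov[i].iov_len);
        if (ret < 0) {
            return ret;
        }
        pos += iov[i].iov_len;
    }
    return ret;
}

Note that the attached patch takes the opposite route for backends that
lack writev_buffer: it stops routing their data through the iovec at all
and flushes f->buf with a single put_buffer call per flush, so savevm
(whose put_buffer maps to bdrv_save_vmstate) no longer issues one write
per buffered element.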