From patchwork Fri Apr 5 15:42:29 2013
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 234183
Message-ID: <515EF0E5.9090205@redhat.com>
In-Reply-To: <20130405153927.GE2351@dhcp-200-207.str.redhat.com>
Date: Fri, 05 Apr 2013 17:42:29 +0200
From: Paolo Bonzini
To: Kevin Wolf
Cc: Orit Wasserman, quintela@redhat.com, chegu_vinod@hp.com,
 qemu-devel@nongnu.org, mst@redhat.com
Subject: Re: [Qemu-devel] [PATCH v5 7/7] Use qemu_put_buffer_async for guest
 memory pages

On 05/04/2013 17:39, Kevin Wolf wrote:
>> > The solution could be to make bdrv_load_vmstate take an iov/iovcnt pair.
> Ah, so you're saying that instead of linearising the buffer it breaks up
> the requests in tiny pieces?

Only for RAM (header/page/header/page...), because the page comes straight
from the guest memory.  Device state is still buffered and fast.

> Implementing vectored bdrv_load/save_vmstate should be easy in theory.
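A minimal sketch of what such a vectored pair might look like, assuming the
linear vmstate prototypes from block.h of this period; the *_iov names and
signatures below are hypothetical, invented for illustration, not an API
anyone proposed verbatim in this thread:

    /* Hypothetical sketch only: the *_iov entry points are invented here to
     * illustrate the "iov/iovcnt pair" idea; they are not QEMU API. */
    #include <stdint.h>
    #include <sys/uio.h>            /* struct iovec */

    typedef struct BlockDriverState BlockDriverState;

    /* Existing linear interface: copy size bytes to/from vmstate offset pos. */
    int bdrv_save_vmstate(BlockDriverState *bs, const uint8_t *buf,
                          int64_t pos, int size);
    int bdrv_load_vmstate(BlockDriverState *bs, uint8_t *buf,
                          int64_t pos, int size);

    /* Possible vectored counterparts: savevm could hand over a whole
     * header/page/header/page... sequence, with the page elements pointing
     * straight into guest RAM, in one call instead of many tiny ones. */
    int bdrv_save_vmstate_iov(BlockDriverState *bs, const struct iovec *iov,
                              int iovcnt, int64_t pos);
    int bdrv_load_vmstate_iov(BlockDriverState *bs, const struct iovec *iov,
                              int iovcnt, int64_t pos);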
>> > Alternatively, you can try the attached patch. I haven't yet tested it
>> > though, and won't be able to do so today.
>
> Attempted to write to buffer while read buffer is not empty
>
> Program received signal SIGABRT, Aborted.

Second try.

Paolo

diff --git a/savevm.c b/savevm.c
index b1d8988..5871642 100644
--- a/savevm.c
+++ b/savevm.c
@@ -525,27 +525,24 @@ static void qemu_file_set_error(QEMUFile *f, int ret)
 static void qemu_fflush(QEMUFile *f)
 {
     ssize_t ret = 0;
-    int i = 0;
 
     if (!f->ops->writev_buffer && !f->ops->put_buffer) {
         return;
     }
 
-    if (f->is_write && f->iovcnt > 0) {
+    if (f->is_write) {
         if (f->ops->writev_buffer) {
-            ret = f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
-            if (ret >= 0) {
-                f->pos += ret;
+            if (f->iovcnt > 0) {
+                ret = f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
             }
         } else {
-            for (i = 0; i < f->iovcnt && ret >= 0; i++) {
-                ret = f->ops->put_buffer(f->opaque, f->iov[i].iov_base, f->pos,
-                                         f->iov[i].iov_len);
-                if (ret >= 0) {
-                    f->pos += ret;
-                }
+            if (f->buf_index > 0) {
+                ret = f->ops->put_buffer(f->opaque, f->buf, f->pos, f->buf_index);
             }
         }
+        if (ret >= 0) {
+            f->pos += ret;
+        }
         f->buf_index = 0;
         f->iovcnt = 0;
     }
@@ -631,6 +628,11 @@ static void add_to_iovec(QEMUFile *f, const uint8_t *buf, int size)
         f->iov[f->iovcnt].iov_base = (uint8_t *)buf;
         f->iov[f->iovcnt++].iov_len = size;
     }
+
+    f->is_write = 1;
+    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
+        qemu_fflush(f);
+    }
 }
 
 void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, int size)
@@ -645,13 +647,11 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, int size)
         abort();
     }
 
-    add_to_iovec(f, buf, size);
-
-    f->is_write = 1;
-    f->bytes_xfer += size;
-
-    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
-        qemu_fflush(f);
+    if (f->ops->writev_buffer) {
+        f->bytes_xfer += size;
+        add_to_iovec(f, buf, size);
+    } else {
+        qemu_put_buffer(f, buf, size);
     }
 }
 
@@ -674,9 +674,17 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, int size)
         if (l > size)
             l = size;
         memcpy(f->buf + f->buf_index, buf, l);
-        f->is_write = 1;
-        f->buf_index += l;
-        qemu_put_buffer_async(f, f->buf + (f->buf_index - l), l);
+        f->bytes_xfer += size;
+        if (f->ops->writev_buffer) {
+            add_to_iovec(f, f->buf + f->buf_index, l);
+            f->buf_index += l;
+        } else {
+            f->is_write = 1;
+            f->buf_index += l;
+            if (f->buf_index == IO_BUF_SIZE) {
+                qemu_fflush(f);
+            }
+        }
         if (qemu_file_get_error(f)) {
             break;
         }
@@ -697,14 +705,17 @@ void qemu_put_byte(QEMUFile *f, int v)
         abort();
     }
 
-    f->buf[f->buf_index++] = v;
-    f->is_write = 1;
+    f->buf[f->buf_index] = v;
     f->bytes_xfer++;
-
-    add_to_iovec(f, f->buf + (f->buf_index - 1), 1);
-
-    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
-        qemu_fflush(f);
+    if (f->ops->writev_buffer) {
+        add_to_iovec(f, f->buf + f->buf_index, 1);
+        f->buf_index++;
+    } else {
+        f->is_write = 1;
+        f->buf_index++;
+        if (f->buf_index == IO_BUF_SIZE) {
+            qemu_fflush(f);
+        }
     }
 }
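The gist of this second attempt, as one reading of the code above (not text
from the thread): qemu_fflush() now updates f->pos in one place for whichever
backend ran; add_to_iovec() takes over the is_write flag and the flush check;
and every put path branches on whether f->ops->writev_buffer exists. Vectored
backends only collect references (guest pages directly, small puts via regions
of f->buf), while put_buffer-only backends funnel everything through the linear
buffer. A standalone toy model of the two paths, with invented names and
simplified error handling, not QEMU code:

    /* Toy model of the buffering scheme: a vectored backend receives an
     * iovec whose elements reference caller memory directly (no copies for
     * guest pages); a plain backend stages everything in a scratch buffer. */
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #define SCRATCH_SIZE 32768
    #define MAX_IOV      64

    struct wfile {
        int vectored;                    /* backend supports writev()? */
        unsigned char buf[SCRATCH_SIZE]; /* scratch for small puts */
        int buf_index;
        struct iovec iov[MAX_IOV];
        int iovcnt;
    };

    static void wflush(struct wfile *f)
    {
        ssize_t ret = 0;
        if (f->vectored) {
            if (f->iovcnt > 0) {
                ret = writev(STDOUT_FILENO, f->iov, f->iovcnt); /* one syscall */
            }
        } else if (f->buf_index > 0) {
            ret = write(STDOUT_FILENO, f->buf, f->buf_index);
        }
        (void)ret;                       /* error handling elided in this toy */
        f->buf_index = 0;
        f->iovcnt = 0;
    }

    static void add_iov(struct wfile *f, const void *base, size_t len)
    {
        f->iov[f->iovcnt].iov_base = (void *)base;
        f->iov[f->iovcnt++].iov_len = len;
        /* Flush when the iovec or the scratch buffer is exhausted. */
        if (f->iovcnt == MAX_IOV || f->buf_index == SCRATCH_SIZE) {
            wflush(f);
        }
    }

    /* Large, stable caller buffer (e.g. a guest page): referenced, not
     * copied, on the vectored path; it must stay valid until the flush. */
    static void put_async(struct wfile *f, const unsigned char *buf, size_t size)
    {
        if (f->vectored) {
            add_iov(f, buf, size);
        } else {
            while (size > 0) {           /* copy through the scratch buffer */
                size_t l = SCRATCH_SIZE - (size_t)f->buf_index;
                if (l > size) {
                    l = size;
                }
                memcpy(f->buf + f->buf_index, buf, l);
                f->buf_index += (int)l;
                if (f->buf_index == SCRATCH_SIZE) {
                    wflush(f);
                }
                buf += l;
                size -= l;
            }
        }
    }

    /* Single byte: staged in the scratch buffer on both paths; the vectored
     * path merely *references* the staged byte from the iovec. */
    static void put_byte(struct wfile *f, unsigned char v)
    {
        f->buf[f->buf_index++] = v;
        if (f->vectored) {
            add_iov(f, f->buf + f->buf_index - 1, 1);
        } else if (f->buf_index == SCRATCH_SIZE) {
            wflush(f);
        }
    }

    int main(void)
    {
        static unsigned char page[4096];    /* stands in for a guest page */
        struct wfile f = { .vectored = 1 };

        put_byte(&f, 0xFE);                 /* a small "header" byte */
        put_async(&f, page, sizeof(page));  /* the page itself, by reference */
        wflush(&f);                         /* drain whatever is pending */
        return 0;
    }

The point of referencing regions of the scratch buffer from the iovec, rather
than flushing them separately, is that headers and pages stay in stream order
while still reaching the file descriptor in a single writev() call.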