From patchwork Mon Apr 27 14:18:36 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 465041
Date: Mon, 27 Apr 2015 16:18:36 +0200
From: "Michael S. Tsirkin"
To: qemu-devel@nongnu.org
Message-ID: <1430144304-13514-1-git-send-email-mst@redhat.com>
Cc: Paolo Bonzini
Subject: [Qemu-devel] [PATCH RFC] virtio: add virtqueue_fill_partial

On error, virtio-blk dirties guest memory but doesn't want to tell the
guest about it.  Add virtqueue_fill_partial to cover this use case.  It
takes two length parameters: host_len is >= the amount of guest memory
actually written; guest_len is how much we guarantee to the guest.

Cc: Paolo Bonzini
Signed-off-by: Michael S. Tsirkin
---
 include/hw/virtio/virtio.h |  3 +++
 hw/virtio/virtio.c         | 25 ++++++++++++++++++++-----
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index e3adb1d..9957aae 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -135,6 +135,9 @@ void virtio_del_queue(VirtIODevice *vdev, int n);
 void virtqueue_push(VirtQueue *vq, const VirtQueueElement *elem,
                     unsigned int len);
 void virtqueue_flush(VirtQueue *vq, unsigned int count);
+void virtqueue_fill_partial(VirtQueue *vq, const VirtQueueElement *elem,
+                            unsigned int host_len, unsigned int guest_len,
+                            unsigned int idx);
 void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
                     unsigned int len, unsigned int idx);

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 159e5c6..111b0db 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -241,17 +241,26 @@ int virtio_queue_empty(VirtQueue *vq)
     return vring_avail_idx(vq) == vq->last_avail_idx;
 }

-void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
-                    unsigned int len, unsigned int idx)
+/*
+ * Some devices dirty guest memory but don't want to tell the guest about
+ * it.  In that case, use virtqueue_fill_partial: host_len is >= the amount
+ * of guest memory actually written, guest_len is how much we guarantee to
+ * the guest.  If you know exactly how much was written, use virtqueue_fill
+ * instead.
+ */
+void virtqueue_fill_partial(VirtQueue *vq, const VirtQueueElement *elem,
+                            unsigned int host_len, unsigned int guest_len,
+                            unsigned int idx)
 {
     unsigned int offset;
     int i;

-    trace_virtqueue_fill(vq, elem, len, idx);
+    assert(host_len >= guest_len);
+
+    trace_virtqueue_fill(vq, elem, guest_len, idx);

     offset = 0;
     for (i = 0; i < elem->in_num; i++) {
-        size_t size = MIN(len - offset, elem->in_sg[i].iov_len);
+        size_t size = MIN(host_len - offset, elem->in_sg[i].iov_len);

         cpu_physical_memory_unmap(elem->in_sg[i].iov_base,
                                   elem->in_sg[i].iov_len,
@@ -269,7 +278,13 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,

     /* Get a pointer to the next entry in the used ring.  */
     vring_used_ring_id(vq, idx, elem->index);
-    vring_used_ring_len(vq, idx, len);
+    vring_used_ring_len(vq, idx, guest_len);
+}
+
+void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
+                    unsigned int len, unsigned int idx)
+{
+    virtqueue_fill_partial(vq, elem, len, len, idx);
 }

 void virtqueue_flush(VirtQueue *vq, unsigned int count)