From patchwork Wed Sep 21 15:52:21 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 672960
From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Date: Wed, 21 Sep 2016 16:52:21 +0100
Message-Id: <1474473146-19337-5-git-send-email-stefanha@redhat.com>
In-Reply-To: <1474473146-19337-1-git-send-email-stefanha@redhat.com>
References: <1474473146-19337-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH v5 4/9] virtio: handle virtqueue_map_desc() errors
Cc: Cornelia Huck, Fam Zheng, Stefan Hajnoczi, "Michael S. Tsirkin"

Errors can occur during virtqueue_pop(), especially in
virtqueue_map_desc().  In order to handle this we must unmap iov[]
before returning NULL.  The caller will consider the virtqueue empty
and the virtio_error() call will have marked the device broken.
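
As an aside (not part of the patch itself), the caller-side behaviour
described above looks roughly like the sketch below.  The device and
handler names are made up for illustration; the calls are the usual
QEMU virtqueue API:

/* Illustrative sketch only, not part of this patch.  "my_dev" and
 * my_dev_handle_output() are hypothetical names.  After this patch a
 * NULL return from virtqueue_pop() can mean either "queue empty" or
 * "device marked broken by virtio_error()"; in both cases the handler
 * simply stops, and no mappings are left behind because the error
 * paths inside virtqueue_pop() have already unmapped them.
 */
static void my_dev_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
    VirtQueueElement *elem;

    for (;;) {
        elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
        if (!elem) {
            break;          /* empty or broken: nothing to clean up */
        }

        /* ... process elem->out_sg[] / elem->in_sg[] ... */

        virtqueue_push(vq, elem, 0);
        g_free(elem);
    }

    virtio_notify(vdev, vq);
}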
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Cornelia Huck
---
 hw/virtio/virtio.c | 74 ++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 55 insertions(+), 19 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index bac6b51..f2d6c3c 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -484,30 +484,33 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
     return in_bytes <= in_total && out_bytes <= out_total;
 }
 
-static void virtqueue_map_desc(unsigned int *p_num_sg, hwaddr *addr, struct iovec *iov,
+static bool virtqueue_map_desc(VirtIODevice *vdev, unsigned int *p_num_sg,
+                               hwaddr *addr, struct iovec *iov,
                                unsigned int max_num_sg, bool is_write, hwaddr pa,
                                size_t sz)
 {
+    bool ok = false;
     unsigned num_sg = *p_num_sg;
     assert(num_sg <= max_num_sg);
 
     if (!sz) {
-        error_report("virtio: zero sized buffers are not allowed");
-        exit(1);
+        virtio_error(vdev, "virtio: zero sized buffers are not allowed");
+        goto out;
     }
 
     while (sz) {
         hwaddr len = sz;
 
         if (num_sg == max_num_sg) {
-            error_report("virtio: too many write descriptors in indirect table");
-            exit(1);
+            virtio_error(vdev, "virtio: too many write descriptors in "
+                               "indirect table");
+            goto out;
         }
 
         iov[num_sg].iov_base = cpu_physical_memory_map(pa, &len, is_write);
         if (!iov[num_sg].iov_base) {
-            error_report("virtio: bogus descriptor or out of resources");
-            exit(1);
+            virtio_error(vdev, "virtio: bogus descriptor or out of resources");
+            goto out;
         }
 
         iov[num_sg].iov_len = len;
@@ -517,7 +520,28 @@ static void virtqueue_map_desc(unsigned int *p_num_sg, hwaddr *addr, struct iove
         pa += len;
         num_sg++;
     }
+    ok = true;
+
+out:
     *p_num_sg = num_sg;
+    return ok;
+}
+
+/* Only used by error code paths before we have a VirtQueueElement (therefore
+ * virtqueue_unmap_sg() can't be used).  Assumes buffers weren't written to
+ * yet.
+ */
+static void virtqueue_undo_map_desc(unsigned int out_num, unsigned int in_num,
+                                    struct iovec *iov)
+{
+    unsigned int i;
+
+    for (i = 0; i < out_num + in_num; i++) {
+        int is_write = i >= out_num;
+
+        cpu_physical_memory_unmap(iov->iov_base, iov->iov_len, is_write, 0);
+        iov++;
+    }
 }
 
 static void virtqueue_map_iovec(struct iovec *sg, hwaddr *addr,
@@ -609,8 +633,8 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
     max = vq->vring.num;
 
     if (vq->inuse >= vq->vring.num) {
-        error_report("Virtqueue size exceeded");
-        exit(1);
+        virtio_error(vdev, "Virtqueue size exceeded");
+        return NULL;
     }
 
     i = head = virtqueue_get_head(vq, vq->last_avail_idx++);
@@ -621,8 +645,8 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
     vring_desc_read(vdev, &desc, desc_pa, i);
     if (desc.flags & VRING_DESC_F_INDIRECT) {
         if (desc.len % sizeof(VRingDesc)) {
-            error_report("Invalid size for indirect buffer table");
-            exit(1);
+            virtio_error(vdev, "Invalid size for indirect buffer table");
+            return NULL;
         }
 
         /* loop over the indirect descriptor table */
@@ -634,22 +658,30 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
 
     /* Collect all the descriptors */
     do {
+        bool map_ok;
+
         if (desc.flags & VRING_DESC_F_WRITE) {
-            virtqueue_map_desc(&in_num, addr + out_num, iov + out_num,
-                               VIRTQUEUE_MAX_SIZE - out_num, true, desc.addr, desc.len);
+            map_ok = virtqueue_map_desc(vdev, &in_num, addr + out_num,
+                                        iov + out_num,
+                                        VIRTQUEUE_MAX_SIZE - out_num, true,
+                                        desc.addr, desc.len);
         } else {
             if (in_num) {
-                error_report("Incorrect order for descriptors");
-                exit(1);
+                virtio_error(vdev, "Incorrect order for descriptors");
+                goto err_undo_map;
             }
-            virtqueue_map_desc(&out_num, addr, iov,
-                               VIRTQUEUE_MAX_SIZE, false, desc.addr, desc.len);
+            map_ok = virtqueue_map_desc(vdev, &out_num, addr, iov,
+                                        VIRTQUEUE_MAX_SIZE, false,
+                                        desc.addr, desc.len);
+        }
+        if (!map_ok) {
+            goto err_undo_map;
         }
 
         /* If we've got too many, that implies a descriptor loop. */
         if ((in_num + out_num) > max) {
-            error_report("Looped descriptor");
-            exit(1);
+            virtio_error(vdev, "Looped descriptor");
+            goto err_undo_map;
         }
     } while ((i = virtqueue_read_next_desc(vdev, &desc, desc_pa, max)) != max);
 
@@ -669,6 +701,10 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
 
     trace_virtqueue_pop(vq, elem, elem->in_num, elem->out_num);
     return elem;
+
+err_undo_map:
+    virtqueue_undo_map_desc(out_num, in_num, iov);
+    return NULL;
 }
 
 /* Reading and writing a structure directly to QEMUFile is *awful*, but