From patchwork Wed May 17 21:45:15 2017
X-Patchwork-Submitter: "Michael S. Tsirkin" <mst@redhat.com>
X-Patchwork-Id: 763778
Date: Thu, 18 May 2017 00:45:15 +0300
From: "Michael S. Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell, Philippe Mathieu-Daudé, "Dr. David Alan Gilbert",
 Felipe Franciosi, Marc-André Lureau
Message-ID: <1495057396-13387-5-git-send-email-mst@redhat.com>
In-Reply-To: <1495057396-13387-1-git-send-email-mst@redhat.com>
References: <1495057396-13387-1-git-send-email-mst@redhat.com>
Subject: [Qemu-devel] [PULL 04/13] libvhost-user: fix crash when rings aren't ready

From: Marc-André Lureau

Calling libvhost-user functions like vu_queue_get_avail_bytes() when the
queue doesn't yet have addresses will result in crashes like the
following:

Program received signal SIGSEGV, Segmentation fault.
0x000055c414112ce4 in vring_avail_idx (vq=0x55c41582fd68, vq=0x55c41582fd68)
    at /home/dgilbert/git/qemu/contrib/libvhost-user/libvhost-user.c:940
940         vq->shadow_avail_idx = vq->vring.avail->idx;
(gdb) p vq
$1 = (VuVirtq *) 0x55c41582fd68
(gdb) p vq->vring
$2 = {num = 0, desc = 0x0, avail = 0x0, used = 0x0, log_guest_addr = 0, flags = 0}
(gdb) bt full
#0  vring_avail_idx (vq=0x55c41582fd68, vq=0x55c41582fd68)
    at /home/dgilbert/git/qemu/contrib/libvhost-user/libvhost-user.c:940
No locals.
#1  virtqueue_num_heads (...)
    at /home/dgilbert/git/qemu/contrib/libvhost-user/libvhost-user.c:960
        num_heads = <optimized out>
#2  vu_queue_get_avail_bytes (..., out_bytes=out_bytes@entry=0x7fffd035d7c4,
    max_in_bytes=max_in_bytes@entry=0, max_out_bytes=max_out_bytes@entry=0)
    at /home/dgilbert/git/qemu/contrib/libvhost-user/libvhost-user.c:1034

Add pre-condition checks on vring.avail before accessing it. Fix
documentation and return type of vu_queue_empty() while at it.

Signed-off-by: Marc-André Lureau
Tested-by: Dr. David Alan Gilbert
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Michael S. Tsirkin
---
 contrib/libvhost-user/libvhost-user.h |  6 +++---
 contrib/libvhost-user/libvhost-user.c | 26 ++++++++++++++++++++------
 2 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index 156b50e..af02a31 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -327,13 +327,13 @@ void vu_queue_set_notification(VuDev *dev, VuVirtq *vq, int enable);
 bool vu_queue_enabled(VuDev *dev, VuVirtq *vq);
 
 /**
- * vu_queue_enabled:
+ * vu_queue_empty:
  * @dev: a VuDev context
  * @vq: a VuVirtq queue
  *
- * Returns: whether the queue is empty.
+ * Returns: true if the queue is empty or not ready.
  */
-int vu_queue_empty(VuDev *dev, VuVirtq *vq);
+bool vu_queue_empty(VuDev *dev, VuVirtq *vq);
 
 /**
  * vu_queue_notify:
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index 61e1657..9efb9da 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -1031,6 +1031,11 @@ vu_queue_get_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int *in_bytes,
     idx = vq->last_avail_idx;
     total_bufs = in_total = out_total = 0;
 
+    if (unlikely(dev->broken) ||
+        unlikely(!vq->vring.avail)) {
+        goto done;
+    }
+
     while ((rc = virtqueue_num_heads(dev, vq, idx)) > 0) {
         unsigned int max, num_bufs, indirect = 0;
         struct vring_desc *desc;
@@ -1121,11 +1126,16 @@ vu_queue_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int in_bytes,
 
 /* Fetch avail_idx from VQ memory only when we really need to know if
  * guest has added some buffers. */
-int
+bool
 vu_queue_empty(VuDev *dev, VuVirtq *vq)
 {
+    if (unlikely(dev->broken) ||
+        unlikely(!vq->vring.avail)) {
+        return true;
+    }
+
     if (vq->shadow_avail_idx != vq->last_avail_idx) {
-        return 0;
+        return false;
     }
 
     return vring_avail_idx(vq) == vq->last_avail_idx;
@@ -1174,7 +1184,8 @@ vring_notify(VuDev *dev, VuVirtq *vq)
 void
 vu_queue_notify(VuDev *dev, VuVirtq *vq)
 {
-    if (unlikely(dev->broken)) {
+    if (unlikely(dev->broken) ||
+        unlikely(!vq->vring.avail)) {
         return;
     }
 
@@ -1291,7 +1302,8 @@ vu_queue_pop(VuDev *dev, VuVirtq *vq, size_t sz)
     struct vring_desc *desc;
     int rc;
 
-    if (unlikely(dev->broken)) {
+    if (unlikely(dev->broken) ||
+        unlikely(!vq->vring.avail)) {
         return NULL;
     }
 
@@ -1445,7 +1457,8 @@ vu_queue_fill(VuDev *dev, VuVirtq *vq,
 {
     struct vring_used_elem uelem;
 
-    if (unlikely(dev->broken)) {
+    if (unlikely(dev->broken) ||
+        unlikely(!vq->vring.avail)) {
         return;
     }
 
@@ -1474,7 +1487,8 @@ vu_queue_flush(VuDev *dev, VuVirtq *vq, unsigned int count)
 {
     uint16_t old, new;
 
-    if (unlikely(dev->broken)) {
+    if (unlikely(dev->broken) ||
+        unlikely(!vq->vring.avail)) {
         return;
     }
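
For context only (not part of the patch): a minimal sketch of the caller-side
pattern these guards protect. A backend's queue handler can run before
VHOST_USER_SET_VRING_ADDR has been processed, so vq->vring.avail may still be
NULL; with this change the library calls below report the queue as empty (or
no-op) instead of dereferencing a NULL avail ring. The handler registration
and the handle_request() helper are illustrative assumptions, not code from
this series.

#include <stdlib.h>
#include "libvhost-user.h"

/* Hypothetical per-request handler: consumes elem->out_sg, fills
 * elem->in_sg, returns the number of bytes written back. */
static unsigned int
handle_request(VuVirtqElement *elem)
{
    (void)elem;
    return 0;
}

/* Illustrative kick handler, e.g. as it could be registered with
 * vu_set_queue_handler(). */
static void
my_queue_handler(VuDev *dev, int qidx)
{
    VuVirtq *vq = vu_get_queue(dev, qidx);

    /* If the ring is not yet configured, vu_queue_empty() now reports
     * "empty" and vu_queue_pop() returns NULL instead of crashing. */
    while (!vu_queue_empty(dev, vq)) {
        VuVirtqElement *elem = vu_queue_pop(dev, vq, sizeof(*elem));
        if (!elem) {
            break;
        }

        unsigned int len = handle_request(elem);

        vu_queue_push(dev, vq, elem, len);
        vu_queue_notify(dev, vq);
        free(elem);
    }
}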