From patchwork Mon Mar 15 19:48:38 2021
From: Eugenio Pérez <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Subject: [RFC v2 09/13] virtio: Add virtio_queue_full
Date: Mon, 15 Mar 2021 20:48:38 +0100
Message-Id: <20210315194842.277740-10-eperezma@redhat.com>
In-Reply-To: <20210315194842.277740-1-eperezma@redhat.com>
References: <20210315194842.277740-1-eperezma@redhat.com>
Cc: Parav Pandit, "Michael S. Tsirkin", Guru Prasad, Jason Wang,
    Juan Quintela, Markus Armbruster, virtualization@lists.linux-foundation.org,
    Harpreet Singh Anand, Xiao W Wang, Eli Cohen, Stefano Garzarella,
    Michael Lilja, Jim Harford, Rob Miller

Check whether all descriptors of the queue are available. In other words,
this is the exact opposite of virtio_queue_empty: if the queue is full,
the driver cannot transfer more buffers to the device until the latter
marks some of them as used.

In the Shadow Virtqueue (SVQ) this situation happens even with a correct
guest network driver, since the rx queue is kept filled with buffers for
the device to write into. Because SVQ forwards the available descriptors
blindly, it keeps fetching them from the driver until no more descriptors
are available.

While a straightforward solution is to keep a counter of them in SVQ,
this specific issue would be the only use of that counter. Exposing this
check instead helps to keep SVQ simpler, storing as little state as
possible.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/virtio.h |  2 ++
 hw/virtio/virtio.c         | 18 ++++++++++++++++--
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index c2c7cee993..899c5e3506 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -232,6 +232,8 @@ int virtio_queue_ready(VirtQueue *vq);
 
 int virtio_queue_empty(VirtQueue *vq);
 
+bool virtio_queue_full(const VirtQueue *vq);
+
 /* Host binding interface.
  */
 uint32_t virtio_config_readb(VirtIODevice *vdev, uint32_t addr);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index a86b3f9c26..e9a4d9ffae 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -670,6 +670,20 @@ int virtio_queue_empty(VirtQueue *vq)
     }
 }
 
+/*
+ * virtio_queue_full:
+ * @vq: The #VirtQueue
+ *
+ * Check whether all descriptors of the queue are available. In other words,
+ * this is the exact opposite of virtio_queue_empty: if the queue is full,
+ * the driver cannot transfer more buffers to the device until the latter
+ * marks some of them as used.
+ */
+bool virtio_queue_full(const VirtQueue *vq)
+{
+    return vq->inuse >= vq->vring.num;
+}
+
 static void virtqueue_unmap_sg(VirtQueue *vq, const VirtQueueElement *elem,
                                unsigned int len)
 {
@@ -1439,7 +1453,7 @@ static void *virtqueue_split_pop(VirtQueue *vq, size_t sz)
 
     max = vq->vring.num;
 
-    if (vq->inuse >= vq->vring.num) {
+    if (unlikely(virtio_queue_full(vq))) {
         virtio_error(vdev, "Virtqueue size exceeded");
         goto done;
     }
@@ -1574,7 +1588,7 @@ static void *virtqueue_packed_pop(VirtQueue *vq, size_t sz)
 
     max = vq->vring.num;
 
-    if (vq->inuse >= vq->vring.num) {
+    if (unlikely(virtio_queue_full(vq))) {
         virtio_error(vdev, "Virtqueue size exceeded");
         goto done;
     }