From patchwork Mon Jul 3 19:44:32 2017
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 783599
Date: Mon, 3 Jul 2017 22:44:32 +0300
From: "Michael S. Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell, Wei Wang, Jason Wang
Message-ID: <1499111049-13721-2-git-send-email-mst@redhat.com>
In-Reply-To: <1499111049-13721-1-git-send-email-mst@redhat.com>
References: <1499111049-13721-1-git-send-email-mst@redhat.com>
Subject: [Qemu-devel] [PULL 01/21] virtio-net: enable configurable tx queue size

From: Wei Wang

This patch enables the virtio-net tx queue size to be configurable
between 256 (the default queue size) and 1024 by the user when the
vhost-user backend is used (an illustrative command line follows the
sign-offs below).

Currently, the maximum tx queue size for other backends is 512 due to
the following limitations:

- QEMU backend: the QEMU backend implementation in some cases may send
  1024+1 iovs to writev (a standalone illustration of this writev()
  limit is appended at the end of this mail).
- Vhost_net backend: there are possibilities that the guest sends a
  vring_desc of memory which crosses a MemoryRegion, thereby generating
  more than 1024 iovs after translation from guest physical to host
  virtual addresses in the backend.

Signed-off-by: Wei Wang
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Michael S. Tsirkin
---
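(Usage note, not part of the commit: with this patch applied, the tx
queue size can be set on the QEMU command line through the new
"tx_queue_size" property; valid values are powers of 2 between 256 and
1024, and 1024 only takes effect with a vhost-user backend. A minimal
sketch follows; the socket path and ids are made-up placeholders, and
the shared-memory setup that vhost-user additionally requires is
elided:

    qemu-system-x86_64 ... \
        -chardev socket,id=chr0,path=/tmp/vhost-user.sock \
        -netdev type=vhost-user,id=net0,chardev=chr0 \
        -device virtio-net-pci,netdev=net0,tx_queue_size=1024

With any other backend, a request for 1024 is silently lowered to the
256 default by the realize code below.)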
 include/hw/virtio/virtio-net.h |  1 +
 hw/net/virtio-net.c            | 32 ++++++++++++++++++++++++++++++--
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 602b486..b81b6a4 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -36,6 +36,7 @@ typedef struct virtio_net_conf
     int32_t txburst;
     char *tx;
     uint16_t rx_queue_size;
+    uint16_t tx_queue_size;
     uint16_t mtu;
 } virtio_net_conf;
 
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 91eddaf..a1fc0db 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -34,8 +34,11 @@
 
 /* previously fixed value */
 #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
+#define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
+
 /* for now, only allow larger queues; with virtio-1, guest can downsize */
 #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
+#define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
 
 /*
  * Calculate the number of bytes up to and including the given 'field' of
@@ -1508,15 +1511,18 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
 
     n->vqs[index].rx_vq = virtio_add_queue(vdev, n->net_conf.rx_queue_size,
                                            virtio_net_handle_rx);
+
     if (n->net_conf.tx && !strcmp(n->net_conf.tx, "timer")) {
         n->vqs[index].tx_vq =
-            virtio_add_queue(vdev, 256, virtio_net_handle_tx_timer);
+            virtio_add_queue(vdev, n->net_conf.tx_queue_size,
+                             virtio_net_handle_tx_timer);
         n->vqs[index].tx_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
                                               virtio_net_tx_timer,
                                               &n->vqs[index]);
     } else {
         n->vqs[index].tx_vq =
-            virtio_add_queue(vdev, 256, virtio_net_handle_tx_bh);
+            virtio_add_queue(vdev, n->net_conf.tx_queue_size,
+                             virtio_net_handle_tx_bh);
         n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
     }
 
@@ -1927,6 +1933,17 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
+    if (n->net_conf.tx_queue_size < VIRTIO_NET_TX_QUEUE_MIN_SIZE ||
+        n->net_conf.tx_queue_size > VIRTQUEUE_MAX_SIZE ||
+        !is_power_of_2(n->net_conf.tx_queue_size)) {
+        error_setg(errp, "Invalid tx_queue_size (= %" PRIu16 "), "
+                   "must be a power of 2 between %d and %d",
+                   n->net_conf.tx_queue_size, VIRTIO_NET_TX_QUEUE_MIN_SIZE,
+                   VIRTQUEUE_MAX_SIZE);
+        virtio_cleanup(vdev);
+        return;
+    }
+
     n->max_queues = MAX(n->nic_conf.peers.queues, 1);
     if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
         error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
@@ -1947,6 +1964,15 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         error_report("Defaulting to \"bh\"");
     }
 
+    /*
+     * Currently, backends other than vhost-user don't support 1024 queue
+     * size.
+     */
+    if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
+        n->nic_conf.peers.ncs[0]->info->type != NET_CLIENT_DRIVER_VHOST_USER) {
+        n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
+    }
+
     for (i = 0; i < n->max_queues; i++) {
         virtio_net_add_queue(n, i);
     }
@@ -2106,6 +2132,8 @@ static Property virtio_net_properties[] = {
     DEFINE_PROP_STRING("tx", VirtIONet, net_conf.tx),
     DEFINE_PROP_UINT16("rx_queue_size", VirtIONet, net_conf.rx_queue_size,
                        VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE),
+    DEFINE_PROP_UINT16("tx_queue_size", VirtIONet, net_conf.tx_queue_size,
+                       VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE),
     DEFINE_PROP_UINT16("host_mtu", VirtIONet, net_conf.mtu, 0),
     DEFINE_PROP_BOOL("x-mtu-bypass-backend", VirtIONet, mtu_bypass_backend,
                      true),
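(Aside, not part of the patch: the writev() limit behind the "1024+1
iovs" bullet in the commit message can be reproduced with a small
standalone C program. POSIX permits writev() to fail with EINVAL when
it is passed more than IOV_MAX iovecs, and IOV_MAX is typically 1024
on Linux:

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
        /* One iovec more than a single writev() call may accept. */
        int iovcnt = IOV_MAX + 1;
        struct iovec *iov = calloc(iovcnt, sizeof(*iov));
        char byte = 'x';

        if (!iov) {
            return 1;
        }
        for (int i = 0; i < iovcnt; i++) {
            iov[i].iov_base = &byte;
            iov[i].iov_len = 1;
        }

        /* Expect -1 with errno == EINVAL, since iovcnt > IOV_MAX. */
        if (writev(STDOUT_FILENO, iov, iovcnt) < 0) {
            fprintf(stderr, "writev: %s\n", strerror(errno));
        }
        free(iov);
        return 0;
    }

A descriptor chain of 1024 entries can expand past this limit once a
guest buffer crossing a MemoryRegion is split into several iovecs,
which is why backends that flush the whole chain with one writev()
call must cap the queue size below 1024.)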