From patchwork Sat Sep 23 19:16:05 2017
X-Patchwork-Submitter: Jan Scheurich
X-Patchwork-Id: 817815
X-Patchwork-Delegate: ian.stokes@intel.com
From: Jan Scheurich
To: "'dev@dpdk.org'"
Cc: "'dev@openvswitch.org'"
Date: Sat, 23 Sep 2017 19:16:05 +0000
Subject: [ovs-dev] [PATCH] vhost: Expose virtio interrupt requirement on rte_vhost API

Performance tests with the OVS DPDK datapath have shown that the tx throughput over a vhostuser port into a VM with an interrupt-based virtio driver is limited by the overhead incurred by virtio interrupts. The OVS PMD spends up to 30% of its cycles in system calls kicking the eventfd.
Also, the core running the vCPU is heavily loaded with generating the virtio interrupts in KVM on the host and handling these interrupts in the virtio-net driver in the guest. This limits the throughput to about 500-700 Kpps with a single vCPU.

OVS is trying to address this issue by batching packets to a vhostuser port for some time to limit the virtio interrupt frequency. With a 50 us batching period we have measured an iperf3 throughput increase of 15% and a PMD utilization decrease from 45% to 30%.

On the other hand, guests using virtio PMDs do not benefit from time-based tx batching. Instead they experience a 2-3% performance penalty and an average latency increase of 30-40 us. OVS therefore intends to apply time-based tx batching only for vhostuser tx queues that need to trigger virtio interrupts.

Today this information is hidden inside the rte_vhost library and not accessible to users of the API. This patch adds a function to the API to query it.

Signed-off-by: Jan Scheurich
---
 lib/librte_vhost/rte_vhost.h | 12 ++++++++++++
 lib/librte_vhost/vhost.c     | 19 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 8c974eb..d62338b 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -444,6 +444,18 @@ int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
  */
 uint32_t rte_vhost_rx_queue_count(int vid, uint16_t qid);
 
+/**
+ * Does the virtio driver request interrupts for a vhost tx queue?
+ *
+ * @param vid
+ *  vhost device ID
+ * @param qid
+ *  virtio queue index in mq case
+ * @return
+ *  1 if true, 0 if false
+ */
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 0b6aa1c..bd1ebf9 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -503,3 +503,22 @@ struct virtio_net *
 
 	return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
 }
+
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid)
+{
+	struct virtio_net *dev;
+	struct vhost_virtqueue *vq;
+
+	dev = get_device(vid);
+	if (dev == NULL)
+		return 0;
+
+	vq = dev->virtqueue[qid];
+	if (vq == NULL)
+		return 0;
+
+	if (unlikely(vq->enabled == 0 || vq->avail == NULL))
+		return 0;
+
+	return !(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
+}