From patchwork Mon Oct 19 13:06:26 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Flavio Leitner
X-Patchwork-Id: 532301
From: Flavio Leitner <fbl@sysclose.org>
To: dev@openvswitch.org
Cc: Flavio Leitner
Date: Mon, 19 Oct 2015 11:06:26 -0200
Message-Id: <1445259986-12022-1-git-send-email-fbl@sysclose.org>
X-Mailer: git-send-email 2.4.3
Subject: [ovs-dev] [RFC PATCH v2] netdev-dpdk: Add vhost-user multiqueue support

This patch depends on the vhost-user multiple queues patchset posted
at DPDK upstream:
http://dpdk.org/ml/archives/dev/2015-October/024783.html

The intention of this patch is to allow others to review and test it.
It shouldn't be applied until the DPDK patchset is merged; I will
follow up with a proposal when that happens.

Signed-off-by: Flavio Leitner <fbl@sysclose.org>
---
 INSTALL.DPDK.md   |  12 ++++++
 NEWS              |   1 +
 lib/netdev-dpdk.c | 115 ++++++++++++++++++++++++++++++++++++++++++------------
 3 files changed, 104 insertions(+), 24 deletions(-)

Changelog:
v2:
 - queue mapping to allow a different number of queues
 - updated INSTALL.DPDK.md
 - updated NEWS
 - rebased on top of DPDK vhost-user multiple queues V6

diff --git a/INSTALL.DPDK.md b/INSTALL.DPDK.md
index 7bf110c..35542f8 100644
--- a/INSTALL.DPDK.md
+++ b/INSTALL.DPDK.md
@@ -567,6 +567,18 @@ Follow the steps below to attach vhost-user port(s) to a VM.
    -numa node,memdev=mem -mem-prealloc
    ```
 
+3. Optional: Enable multiqueue support
+   QEMU needs to be configured with multiple queues, and the number of queues
+   must be less than or equal to Open vSwitch other_config:n-dpdk-rxqs.
+   The $q below is the number of queues.
+   The $v is the number of vectors, which is '$q x 2 + 2'.
+
+   ```
+   -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
+   -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce,queues=$q
+   -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mq=on,vectors=$v
+   ```
+
 DPDK vhost-cuse:
 ----------------

diff --git a/NEWS b/NEWS
index 9b9dff2..a9a8e88 100644
--- a/NEWS
+++ b/NEWS
@@ -27,6 +27,7 @@ Post-v2.4.0
    - Add support for connection tracking through the new "ct" action and
      "ct_state"/"ct_zone"/"ct_mark"/"ct_label" match fields.  Only available
      on Linux kernels with the connection tracking module loaded.
+   - Added multiqueue support to vhost-user
 
 v2.4.0 - 20 Aug 2015

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 93b0589..278aeb5 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -221,12 +221,9 @@ struct netdev_dpdk {
      * If the numbers match, 'txq_needs_locking' is false, otherwise it is
      * true and we will take a spinlock on transmission */
     int real_n_txq;
+    int real_n_rxq;
     bool txq_needs_locking;
 
-    /* Spinlock for vhost transmission.  Other DPDK devices use spinlocks in
-     * dpdk_tx_queue */
-    rte_spinlock_t vhost_tx_lock;
-
     /* virtio-net structure for vhost device */
     OVSRCU_TYPE(struct virtio_net *) virtio_dev;
@@ -654,13 +651,10 @@ dpdk_dev_parse_name(const char dev_name[], const char prefix[],
 static int
 vhost_construct_helper(struct netdev *netdev_) OVS_REQUIRES(dpdk_mutex)
 {
-    struct netdev_dpdk *netdev = netdev_dpdk_cast(netdev_);
-
     if (rte_eal_init_ret) {
         return rte_eal_init_ret;
     }
 
-    rte_spinlock_init(&netdev->vhost_tx_lock);
     return netdev_dpdk_init(netdev_, -1, DPDK_DEV_VHOST);
 }
@@ -830,7 +824,7 @@ netdev_dpdk_set_multiq(struct netdev *netdev_, unsigned int n_txq,
 }
 
 static int
-netdev_dpdk_vhost_set_multiq(struct netdev *netdev_, unsigned int n_txq,
+netdev_dpdk_vhost_cuse_set_multiq(struct netdev *netdev_, unsigned int n_txq,
                              unsigned int n_rxq)
 {
     struct netdev_dpdk *netdev = netdev_dpdk_cast(netdev_);
@@ -846,6 +840,32 @@ netdev_dpdk_vhost_set_multiq(struct netdev *netdev_, unsigned int n_txq,
     netdev->up.n_txq = n_txq;
     netdev->real_n_txq = 1;
     netdev->up.n_rxq = 1;
+    netdev->txq_needs_locking = netdev->real_n_txq != netdev->up.n_txq;
+
+    ovs_mutex_unlock(&netdev->mutex);
+    ovs_mutex_unlock(&dpdk_mutex);
+
+    return err;
+}
+
+static int
+netdev_dpdk_vhost_set_multiq(struct netdev *netdev_, unsigned int n_txq,
+                             unsigned int n_rxq)
+{
+    struct netdev_dpdk *netdev = netdev_dpdk_cast(netdev_);
+    int err = 0;
+
+    if (netdev->up.n_txq == n_txq && netdev->up.n_rxq == n_rxq) {
+        return err;
+    }
+
+    ovs_mutex_lock(&dpdk_mutex);
+    ovs_mutex_lock(&netdev->mutex);
+
+    rte_free(netdev->tx_q);
+    netdev->up.n_txq = n_txq;
+    netdev->up.n_rxq = n_rxq;
+    netdev_dpdk_alloc_txq(netdev, netdev->up.n_txq);
 
     ovs_mutex_unlock(&netdev->mutex);
     ovs_mutex_unlock(&dpdk_mutex);
@@ -985,14 +1005,18 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq_,
     struct netdev *netdev = rx->up.netdev;
     struct netdev_dpdk *vhost_dev = netdev_dpdk_cast(netdev);
     struct virtio_net *virtio_dev = netdev_dpdk_get_virtio(vhost_dev);
-    int qid = 1;
+    int qid = rxq_->queue_id;
     uint16_t nb_rx = 0;
 
     if (OVS_UNLIKELY(!is_vhost_running(virtio_dev))) {
         return EAGAIN;
     }
 
-    nb_rx = rte_vhost_dequeue_burst(virtio_dev, qid,
+    if (rxq_->queue_id >= vhost_dev->real_n_rxq) {
+        return EOPNOTSUPP;
+    }
+
+    nb_rx = rte_vhost_dequeue_burst(virtio_dev, qid * VIRTIO_QNUM + VIRTIO_TXQ,
                                     vhost_dev->dpdk_mp->mp,
                                     (struct rte_mbuf **)packets,
                                     NETDEV_MAX_BURST);
@@ -1056,8 +1080,9 @@ netdev_dpdk_vhost_update_tx_counters(struct netdev_stats *stats,
 }
 
 static void
-__netdev_dpdk_vhost_send(struct netdev *netdev, struct dp_packet **pkts,
-                         int cnt, bool may_steal)
+__netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
+                         struct dp_packet **pkts, int cnt,
+                         bool may_steal)
 {
     struct netdev_dpdk *vhost_dev = netdev_dpdk_cast(netdev);
     struct virtio_net *virtio_dev = netdev_dpdk_get_virtio(vhost_dev);
@@ -1072,13 +1097,16 @@ __netdev_dpdk_vhost_send(struct netdev *netdev, struct dp_packet **pkts,
         goto out;
     }
 
-    /* There is vHost TX single queue, So we need to lock it for TX. */
-    rte_spinlock_lock(&vhost_dev->vhost_tx_lock);
+    if (vhost_dev->txq_needs_locking) {
+        qid = qid % vhost_dev->real_n_txq;
+        rte_spinlock_lock(&vhost_dev->tx_q[qid].tx_lock);
+    }
 
     do {
+        int vhost_qid = qid * VIRTIO_QNUM + VIRTIO_RXQ;
         unsigned int tx_pkts;
 
-        tx_pkts = rte_vhost_enqueue_burst(virtio_dev, VIRTIO_RXQ,
+        tx_pkts = rte_vhost_enqueue_burst(virtio_dev, vhost_qid,
                                           cur_pkts, cnt);
         if (OVS_LIKELY(tx_pkts)) {
             /* Packets have been sent.*/
@@ -1097,7 +1125,7 @@ __netdev_dpdk_vhost_send(struct netdev *netdev, struct dp_packet **pkts,
              * Unable to enqueue packets to vhost interface.
              * Check available entries before retrying.
              */
-            while (!rte_vring_available_entries(virtio_dev, VIRTIO_RXQ)) {
+            while (!rte_vring_available_entries(virtio_dev, vhost_qid)) {
                 if (OVS_UNLIKELY((rte_get_timer_cycles() - start) > timeout)) {
                     expired = 1;
                     break;
@@ -1109,7 +1137,10 @@ __netdev_dpdk_vhost_send(struct netdev *netdev, struct dp_packet **pkts,
             }
         }
     } while (cnt);
-    rte_spinlock_unlock(&vhost_dev->vhost_tx_lock);
+
+    if (vhost_dev->txq_needs_locking) {
+        rte_spinlock_unlock(&vhost_dev->tx_q[qid].tx_lock);
+    }
 
     rte_spinlock_lock(&vhost_dev->stats_lock);
     netdev_dpdk_vhost_update_tx_counters(&vhost_dev->stats, pkts, total_pkts,
@@ -1214,7 +1245,7 @@ dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dp_packet **pkts,
     }
 
     if (dev->type == DPDK_DEV_VHOST) {
-        __netdev_dpdk_vhost_send(netdev, (struct dp_packet **) mbufs, newcnt, true);
+        __netdev_dpdk_vhost_send(netdev, qid, (struct dp_packet **) mbufs, newcnt, true);
     } else {
         dpdk_queue_pkts(dev, qid, mbufs, newcnt);
         dpdk_queue_flush(dev, qid);
@@ -1226,7 +1257,7 @@ dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dp_packet **pkts,
 }
 
 static int
-netdev_dpdk_vhost_send(struct netdev *netdev, int qid OVS_UNUSED, struct dp_packet **pkts,
+netdev_dpdk_vhost_send(struct netdev *netdev, int qid, struct dp_packet **pkts,
                        int cnt, bool may_steal)
 {
     if (OVS_UNLIKELY(pkts[0]->source != DPBUF_DPDK)) {
@@ -1239,7 +1270,7 @@ netdev_dpdk_vhost_send(struct netdev *netdev, int qid OVS_UNUSED, struct dp_pack
             }
         }
     } else {
-        __netdev_dpdk_vhost_send(netdev, pkts, cnt, may_steal);
+        __netdev_dpdk_vhost_send(netdev, qid, pkts, cnt, may_steal);
     }
     return 0;
 }
@@ -1755,8 +1786,39 @@ netdev_dpdk_set_admin_state(struct unixctl_conn *conn, int argc,
 static void
 set_irq_status(struct virtio_net *dev)
 {
-    dev->virtqueue[VIRTIO_RXQ]->used->flags = VRING_USED_F_NO_NOTIFY;
-    dev->virtqueue[VIRTIO_TXQ]->used->flags = VRING_USED_F_NO_NOTIFY;
+    uint32_t i;
+    uint64_t idx;
+
+    for (i = 0; i < dev->virt_qp_nb; i++) {
+        idx = i * VIRTIO_QNUM;
+        dev->virtqueue[idx + VIRTIO_RXQ]->used->flags = VRING_USED_F_NO_NOTIFY;
+        dev->virtqueue[idx + VIRTIO_TXQ]->used->flags = VRING_USED_F_NO_NOTIFY;
+    }
+}
+
+
+static int
+netdev_dpdk_vhost_set_queues(struct netdev_dpdk *netdev, struct virtio_net *dev)
+{
+    uint32_t qp_num;
+
+    qp_num = dev->virt_qp_nb;
+    if (qp_num > netdev->up.n_rxq) {
+        VLOG_ERR("vHost Device '%s' %"PRIu64" can't be added - "
+                 "too many queues %d > %d", dev->ifname, dev->device_fh,
+                 qp_num, netdev->up.n_rxq);
+        return -1;
+    }
+
+    netdev->real_n_rxq = qp_num;
+    netdev->real_n_txq = qp_num;
+    if (netdev->up.n_txq > netdev->real_n_txq) {
+        netdev->txq_needs_locking = true;
+    } else {
+        netdev->txq_needs_locking = false;
+    }
+
+    return 0;
 }
 
 /*
@@ -1773,12 +1835,17 @@ new_device(struct virtio_net *dev)
     LIST_FOR_EACH(netdev, list_node, &dpdk_list) {
         if (strncmp(dev->ifname, netdev->vhost_id, IF_NAME_SZ) == 0) {
             ovs_mutex_lock(&netdev->mutex);
+            if (netdev_dpdk_vhost_set_queues(netdev, dev)) {
+                ovs_mutex_unlock(&netdev->mutex);
+                ovs_mutex_unlock(&dpdk_mutex);
+                return -1;
+            }
             ovsrcu_set(&netdev->virtio_dev, dev);
-            ovs_mutex_unlock(&netdev->mutex);
             exists = true;
             dev->flags |= VIRTIO_DEV_RUNNING;
             /* Disable notifications. */
             set_irq_status(dev);
+            ovs_mutex_unlock(&netdev->mutex);
             break;
         }
     }
@@ -2233,7 +2300,7 @@ static const struct netdev_class OVS_UNUSED dpdk_vhost_cuse_class =
     dpdk_vhost_cuse_class_init,
     netdev_dpdk_vhost_cuse_construct,
     netdev_dpdk_vhost_destruct,
-    netdev_dpdk_vhost_set_multiq,
+    netdev_dpdk_vhost_cuse_set_multiq,
     netdev_dpdk_vhost_send,
     netdev_dpdk_vhost_get_carrier,
    netdev_dpdk_vhost_get_stats,
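
To exercise the multiqueue path described in the INSTALL.DPDK.md hunk above, the host and the guest have to agree on the queue count. The sketch below strings the documented pieces together under stated assumptions: the vhost-user port path and MAC follow the example in the documentation, the guest interface name (eth0) and queue count of 2 are illustrative, and the guest-side ethtool step is standard virtio-net multiqueue practice rather than something this patch adds.

```
# Host: size the PMD rx queues; 'n-dpdk-rxqs' is the other_config key that the
# documentation above compares the QEMU queue count against.
ovs-vsctl --no-wait set Open_vSwitch . other_config:n-dpdk-rxqs=2

# Host: attach the vhost-user NIC with $q=2 queues, so vectors = 2*2 + 2 = 6.
qemu-system-x86_64 ... \
    -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2 \
    -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce,queues=2 \
    -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mq=on,vectors=6

# Guest: virtio-net comes up with a single queue pair; enable the rest.
ethtool -L eth0 combined 2
```

With this in place, new_device() should see virt_qp_nb == 2 and map both rx queues, instead of rejecting the device for requesting more queues than n-dpdk-rxqs allows.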