From patchwork Mon Dec 19 15:03:05 2022
From: David Marchand <david.marchand@redhat.com>
To: dev@openvswitch.org
Cc: maxime.coquelin@redhat.com, i.maximets@ovn.org
Date: Mon, 19 Dec 2022 16:03:05 +0100
Message-Id: <20221219150306.20839-1-david.marchand@redhat.com>
Subject: [ovs-dev] [PATCH v5 1/2] netdev-dpdk: Add per virtqueue statistics.

The DPDK vhost-user library maintains more granular per-queue stats
which can replace what OVS was providing for vhost-user ports.

The benefits for OVS:
- OVS can skip parsing packet sizes on the rx side,
- vhost-user is aware of which packets are actually transmitted to the
  guest, so per *transmitted* packet size stats can be reported,
- more internal stats from vhost-user may be exposed, without OVS
  needing to understand them.

Note: the vhost-user library does not provide global stats for a port.
The proposed implementation computes the global stats (exposed via
netdev_get_stats()) by querying and aggregating all per-queue stats.
Since per-queue stats are exposed via another netdev op
(netdev_get_custom_stats()), this may lead to races and small
discrepancies between the two views.  This issue might already affect
other netdev classes.
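For illustration only (not part of the patch): a minimal sketch of the
aggregation idea against the DPDK vhost stats API used below,
rte_vhost_vring_stats_get_names() and rte_vhost_vring_stats_get().  It
assumes the port was registered with RTE_VHOST_USER_NET_STATS_ENABLE
(as this patch does); the helper name sum_vring_stat() and the
suffix matching are illustrative assumptions.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #include <rte_vhost.h>

    /* Add the value of every per-virtqueue counter whose name ends in
     * 'suffix' to '*total'.  From OVS's point of view, the rx side of
     * a vhost-user port maps to the guest TX vrings
     * (qid = q * VIRTIO_QNUM + VIRTIO_TXQ) and the tx side to the
     * guest RX vrings (qid = q * VIRTIO_QNUM + VIRTIO_RXQ). */
    static int
    sum_vring_stat(int vid, uint16_t qid, const char *suffix,
                   uint64_t *total)
    {
        int count = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0);
        struct rte_vhost_stat_name *names;
        struct rte_vhost_stat *stats;
        size_t suffix_len = strlen(suffix);
        int ret = -1;

        if (count <= 0) {
            /* Negative on error, 0 if this vring exposes no stats. */
            return count;
        }

        names = calloc(count, sizeof *names);
        stats = calloc(count, sizeof *stats);
        if (names && stats
            && rte_vhost_vring_stats_get_names(vid, qid, names,
                                               count) == count
            && rte_vhost_vring_stats_get(vid, qid, stats,
                                         count) == count) {
            for (int i = 0; i < count; i++) {
                size_t len = strlen(names[i].name);

                if (len >= suffix_len
                    && !strcmp(names[i].name + len - suffix_len,
                               suffix)) {
                    *total += stats[i].value;
                }
            }
            ret = 0;
        }

        free(names);
        free(stats);
        return ret;
    }

A global rx_packets equivalent for a port with n_rxq queues is then the
sum of "good_packets" over the guest TX vrings:

    uint64_t rx_packets = 0;

    for (int q = 0; q < n_rxq; q++) {
        sum_vring_stat(vid, q * VIRTIO_QNUM + VIRTIO_TXQ,
                       "good_packets", &rx_packets);
    }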
Example:
$ ovs-vsctl get interface vhost4 statistics |
    sed -e 's#[{}]##g' -e 's#, #\n#g' | grep -v =0$
rx_1_to_64_packets=12
rx_256_to_511_packets=15
rx_65_to_127_packets=21
rx_broadcast_packets=15
rx_bytes=7497
rx_multicast_packets=33
rx_packets=48
rx_q0_good_bytes=242
rx_q0_good_packets=3
rx_q0_guest_notifications=3
rx_q0_multicast_packets=3
rx_q0_size_65_127_packets=2
rx_q0_undersize_packets=1
rx_q1_broadcast_packets=15
rx_q1_good_bytes=7255
rx_q1_good_packets=45
rx_q1_guest_notifications=45
rx_q1_multicast_packets=30
rx_q1_size_256_511_packets=15
rx_q1_size_65_127_packets=19
rx_q1_undersize_packets=11
tx_1_to_64_packets=36
tx_256_to_511_packets=45
tx_65_to_127_packets=63
tx_broadcast_packets=45
tx_bytes=22491
tx_multicast_packets=99
tx_packets=144
tx_q0_broadcast_packets=30
tx_q0_good_bytes=14994
tx_q0_good_packets=96
tx_q0_guest_notifications=96
tx_q0_multicast_packets=66
tx_q0_size_256_511_packets=30
tx_q0_size_65_127_packets=42
tx_q0_undersize_packets=24
tx_q1_broadcast_packets=15
tx_q1_good_bytes=7497
tx_q1_good_packets=48
tx_q1_guest_notifications=48
tx_q1_multicast_packets=33
tx_q1_size_256_511_packets=15
tx_q1_size_65_127_packets=21
tx_q1_undersize_packets=12

Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
Changes since v3:
- rebased to master now that v22.11 landed,
- fixed error code in stats helper when vhost port is not "running",
- shortened rx/tx stats macro names,

Changes since RFC v2:
- dropped the experimental api check (now that the feature is marked
  stable in DPDK),
- moved netdev_dpdk_get_carrier() forward declaration next to the
  function needing it,
- used per q stats for netdev_get_stats() and removed OVS per packet
  size accounting logic,
- fixed small packets counter (see rx_undersized_errors hack),
- added more Tx stats,
- added unit tests,
---
 lib/netdev-dpdk.c    | 398 ++++++++++++++++++++++++++++++++-----------
 tests/system-dpdk.at |  33 +++-
 2 files changed, 332 insertions(+), 99 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index fff57f7827..659f53cadc 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -2363,66 +2363,11 @@ is_vhost_running(struct netdev_dpdk *dev)
     return (netdev_dpdk_get_vid(dev) >= 0 && dev->vhost_reconfigured);
 }
 
-static inline void
-netdev_dpdk_vhost_update_rx_size_counters(struct netdev_stats *stats,
-                                          unsigned int packet_size)
-{
-    /* Hard-coded search for the size bucket. */
-    if (packet_size < 256) {
-        if (packet_size >= 128) {
-            stats->rx_128_to_255_packets++;
-        } else if (packet_size <= 64) {
-            stats->rx_1_to_64_packets++;
-        } else {
-            stats->rx_65_to_127_packets++;
-        }
-    } else {
-        if (packet_size >= 1523) {
-            stats->rx_1523_to_max_packets++;
-        } else if (packet_size >= 1024) {
-            stats->rx_1024_to_1522_packets++;
-        } else if (packet_size < 512) {
-            stats->rx_256_to_511_packets++;
-        } else {
-            stats->rx_512_to_1023_packets++;
-        }
-    }
-}
-
 static inline void
 netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev,
-                                     struct dp_packet **packets, int count,
                                      int qos_drops)
 {
-    struct netdev_stats *stats = &dev->stats;
-    struct dp_packet *packet;
-    unsigned int packet_size;
-    int i;
-
-    stats->rx_packets += count;
-    stats->rx_dropped += qos_drops;
-    for (i = 0; i < count; i++) {
-        packet = packets[i];
-        packet_size = dp_packet_size(packet);
-
-        if (OVS_UNLIKELY(packet_size < ETH_HEADER_LEN)) {
-            /* This only protects the following multicast counting from
-             * too short packets, but it does not stop the packet from
-             * further processing. */
-            stats->rx_errors++;
-            stats->rx_length_errors++;
-            continue;
-        }
-
-        netdev_dpdk_vhost_update_rx_size_counters(stats, packet_size);
-
-        struct eth_header *eh = (struct eth_header *) dp_packet_data(packet);
-        if (OVS_UNLIKELY(eth_addr_is_multicast(eh->eth_dst))) {
-            stats->multicast++;
-        }
-
-        stats->rx_bytes += packet_size;
-    }
+    dev->stats.rx_dropped += qos_drops;
 
     if (OVS_UNLIKELY(qos_drops)) {
         dev->sw_stats->rx_qos_drops += qos_drops;
@@ -2474,8 +2419,7 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
     }
 
     rte_spinlock_lock(&dev->stats_lock);
-    netdev_dpdk_vhost_update_rx_counters(dev, batch->packets,
-                                         nb_rx, qos_drops);
+    netdev_dpdk_vhost_update_rx_counters(dev, qos_drops);
     rte_spinlock_unlock(&dev->stats_lock);
 
     batch->count = nb_rx;
@@ -2589,24 +2533,14 @@ netdev_dpdk_filter_packet_len(struct netdev_dpdk *dev, struct rte_mbuf **pkts,
 
 static inline void
 netdev_dpdk_vhost_update_tx_counters(struct netdev_dpdk *dev,
-                                     struct dp_packet **packets,
-                                     int attempted,
                                      struct netdev_dpdk_sw_stats *sw_stats_add)
 {
     int dropped = sw_stats_add->tx_mtu_exceeded_drops +
                   sw_stats_add->tx_qos_drops +
                   sw_stats_add->tx_failure_drops +
                   sw_stats_add->tx_invalid_hwol_drops;
-    struct netdev_stats *stats = &dev->stats;
-    int sent = attempted - dropped;
-    int i;
-
-    stats->tx_packets += sent;
-    stats->tx_dropped += dropped;
-
-    for (i = 0; i < sent; i++) {
-        stats->tx_bytes += dp_packet_size(packets[i]);
-    }
+    dev->stats.tx_dropped += dropped;
 
     if (OVS_UNLIKELY(dropped || sw_stats_add->tx_retries)) {
         struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;
@@ -2795,13 +2729,13 @@ netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
 {
     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
     int max_retries = VHOST_ENQ_RETRY_MIN;
-    int cnt, batch_cnt, vhost_batch_cnt;
     int vid = netdev_dpdk_get_vid(dev);
     struct netdev_dpdk_sw_stats stats;
+    int cnt, vhost_batch_cnt;
     struct rte_mbuf **pkts;
     int retries;
 
-    batch_cnt = cnt = dp_packet_batch_size(batch);
+    cnt = dp_packet_batch_size(batch);
     qid = dev->tx_q[qid % netdev->n_txq].map;
     if (OVS_UNLIKELY(vid < 0 || !dev->vhost_reconfigured || qid < 0
                      || !(dev->flags & NETDEV_UP))) {
@@ -2851,8 +2785,7 @@ netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
     stats.tx_retries = MIN(retries, max_retries);
 
     rte_spinlock_lock(&dev->stats_lock);
-    netdev_dpdk_vhost_update_tx_counters(dev, batch->packets, batch_cnt,
-                                         &stats);
+    netdev_dpdk_vhost_update_tx_counters(dev, &stats);
     rte_spinlock_unlock(&dev->stats_lock);
 
     pkts = (struct rte_mbuf **) batch->packets;
@@ -3007,41 +2940,304 @@ netdev_dpdk_set_mtu(struct netdev *netdev, int mtu)
     return 0;
 }
 
-static int
-netdev_dpdk_get_carrier(const struct netdev *netdev, bool *carrier);
-
 static int
 netdev_dpdk_vhost_get_stats(const struct netdev *netdev,
                             struct netdev_stats *stats)
 {
+    struct rte_vhost_stat_name *vhost_stats_names = NULL;
     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct rte_vhost_stat *vhost_stats = NULL;
+    int vhost_stats_count;
+    int err;
+    int qid;
+    int vid;
 
     ovs_mutex_lock(&dev->mutex);
-
     rte_spinlock_lock(&dev->stats_lock);
-    /* Supported Stats */
-    stats->rx_packets = dev->stats.rx_packets;
-    stats->tx_packets = dev->stats.tx_packets;
+
+    if (!is_vhost_running(dev)) {
+        err = EPROTO;
+        goto out;
+    }
+
+    vid = netdev_dpdk_get_vid(dev);
+
+    /* We expect all rxqs have the same number of stats, only query rxq0. */
+    qid = 0 * VIRTIO_QNUM + VIRTIO_TXQ;
+    err = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0);
+    if (err < 0) {
+        err = EPROTO;
+        goto out;
+    }
+
+    vhost_stats_count = err;
+    vhost_stats_names = xcalloc(vhost_stats_count, sizeof *vhost_stats_names);
+    vhost_stats = xcalloc(vhost_stats_count, sizeof *vhost_stats);
+
+    err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names,
+                                          vhost_stats_count);
+    if (err != vhost_stats_count) {
+        err = EPROTO;
+        goto out;
+    }
+
+#define VHOST_RXQ_STATS                                              \
+    VHOST_RXQ_STAT(rx_packets,              "good_packets")          \
+    VHOST_RXQ_STAT(rx_bytes,                "good_bytes")            \
+    VHOST_RXQ_STAT(rx_broadcast_packets,    "broadcast_packets")     \
+    VHOST_RXQ_STAT(multicast,               "multicast_packets")     \
+    VHOST_RXQ_STAT(rx_undersized_errors,    "undersize_packets")     \
+    VHOST_RXQ_STAT(rx_1_to_64_packets,      "size_64_packets")       \
+    VHOST_RXQ_STAT(rx_65_to_127_packets,    "size_65_127_packets")   \
+    VHOST_RXQ_STAT(rx_128_to_255_packets,   "size_128_255_packets")  \
+    VHOST_RXQ_STAT(rx_256_to_511_packets,   "size_256_511_packets")  \
+    VHOST_RXQ_STAT(rx_512_to_1023_packets,  "size_512_1023_packets") \
+    VHOST_RXQ_STAT(rx_1024_to_1522_packets, "size_1024_1518_packets") \
+    VHOST_RXQ_STAT(rx_1523_to_max_packets,  "size_1519_max_packets")
+
+#define VHOST_RXQ_STAT(MEMBER, NAME) dev->stats.MEMBER = 0;
+    VHOST_RXQ_STATS;
+#undef VHOST_RXQ_STAT
+
+    for (int q = 0; q < dev->up.n_rxq; q++) {
+        qid = q * VIRTIO_QNUM + VIRTIO_TXQ;
+
+        err = rte_vhost_vring_stats_get(vid, qid, vhost_stats,
+                                        vhost_stats_count);
+        if (err != vhost_stats_count) {
+            err = EPROTO;
+            goto out;
+        }
+
+        for (int i = 0; i < vhost_stats_count; i++) {
+#define VHOST_RXQ_STAT(MEMBER, NAME)                                    \
+            if (string_ends_with(vhost_stats_names[i].name, NAME)) {    \
+                dev->stats.MEMBER += vhost_stats[i].value;              \
+                continue;                                               \
+            }
+            VHOST_RXQ_STATS;
+#undef VHOST_RXQ_STAT
+        }
+    }
+
+    /* OVS reports 64 bytes and smaller packets into "rx_1_to_64_packets".
+     * Since vhost only reports good packets and has no error counter,
+     * rx_undersized_errors is highjacked (see above) to retrieve
+     * "undersize_packets". */
+    dev->stats.rx_1_to_64_packets += dev->stats.rx_undersized_errors;
+    memset(&dev->stats.rx_undersized_errors, 0xff,
+           sizeof dev->stats.rx_undersized_errors);
+
+#define VHOST_RXQ_STAT(MEMBER, NAME) stats->MEMBER = dev->stats.MEMBER;
+    VHOST_RXQ_STATS;
+#undef VHOST_RXQ_STAT
+
+    free(vhost_stats_names);
+    vhost_stats_names = NULL;
+    free(vhost_stats);
+    vhost_stats = NULL;
+
+    /* We expect all txqs have the same number of stats, only query txq0. */
+    qid = 0 * VIRTIO_QNUM;
+    err = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0);
+    if (err < 0) {
+        err = EPROTO;
+        goto out;
+    }
+
+    vhost_stats_count = err;
+    vhost_stats_names = xcalloc(vhost_stats_count, sizeof *vhost_stats_names);
+    vhost_stats = xcalloc(vhost_stats_count, sizeof *vhost_stats);
+
+    err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names,
+                                          vhost_stats_count);
+    if (err != vhost_stats_count) {
+        err = EPROTO;
+        goto out;
+    }
+
+#define VHOST_TXQ_STATS                                              \
+    VHOST_TXQ_STAT(tx_packets,              "good_packets")          \
+    VHOST_TXQ_STAT(tx_bytes,                "good_bytes")            \
+    VHOST_TXQ_STAT(tx_broadcast_packets,    "broadcast_packets")     \
+    VHOST_TXQ_STAT(tx_multicast_packets,    "multicast_packets")     \
+    VHOST_TXQ_STAT(rx_undersized_errors,    "undersize_packets")     \
+    VHOST_TXQ_STAT(tx_1_to_64_packets,      "size_64_packets")       \
+    VHOST_TXQ_STAT(tx_65_to_127_packets,    "size_65_127_packets")   \
+    VHOST_TXQ_STAT(tx_128_to_255_packets,   "size_128_255_packets")  \
+    VHOST_TXQ_STAT(tx_256_to_511_packets,   "size_256_511_packets")  \
+    VHOST_TXQ_STAT(tx_512_to_1023_packets,  "size_512_1023_packets") \
+    VHOST_TXQ_STAT(tx_1024_to_1522_packets, "size_1024_1518_packets") \
+    VHOST_TXQ_STAT(tx_1523_to_max_packets,  "size_1519_max_packets")
+
+#define VHOST_TXQ_STAT(MEMBER, NAME) dev->stats.MEMBER = 0;
+    VHOST_TXQ_STATS;
+#undef VHOST_TXQ_STAT
+
+    for (int q = 0; q < dev->up.n_txq; q++) {
+        qid = q * VIRTIO_QNUM;
+
+        err = rte_vhost_vring_stats_get(vid, qid, vhost_stats,
+                                        vhost_stats_count);
+        if (err != vhost_stats_count) {
+            err = EPROTO;
+            goto out;
+        }
+
+        for (int i = 0; i < vhost_stats_count; i++) {
+#define VHOST_TXQ_STAT(MEMBER, NAME)                                    \
+            if (string_ends_with(vhost_stats_names[i].name, NAME)) {    \
+                dev->stats.MEMBER += vhost_stats[i].value;              \
+                continue;                                               \
+            }
+            VHOST_TXQ_STATS;
+#undef VHOST_TXQ_STAT
+        }
+    }
+
+    /* OVS reports 64 bytes and smaller packets into "tx_1_to_64_packets".
+     * Same as for rx, rx_undersized_errors is highjacked. */
+    dev->stats.tx_1_to_64_packets += dev->stats.rx_undersized_errors;
+    memset(&dev->stats.rx_undersized_errors, 0xff,
+           sizeof dev->stats.rx_undersized_errors);
+
+#define VHOST_TXQ_STAT(MEMBER, NAME) stats->MEMBER = dev->stats.MEMBER;
+    VHOST_TXQ_STATS;
+#undef VHOST_TXQ_STAT
+
     stats->rx_dropped = dev->stats.rx_dropped;
     stats->tx_dropped = dev->stats.tx_dropped;
-    stats->multicast = dev->stats.multicast;
-    stats->rx_bytes = dev->stats.rx_bytes;
-    stats->tx_bytes = dev->stats.tx_bytes;
-    stats->rx_errors = dev->stats.rx_errors;
-    stats->rx_length_errors = dev->stats.rx_length_errors;
-
-    stats->rx_1_to_64_packets = dev->stats.rx_1_to_64_packets;
-    stats->rx_65_to_127_packets = dev->stats.rx_65_to_127_packets;
-    stats->rx_128_to_255_packets = dev->stats.rx_128_to_255_packets;
-    stats->rx_256_to_511_packets = dev->stats.rx_256_to_511_packets;
-    stats->rx_512_to_1023_packets = dev->stats.rx_512_to_1023_packets;
-    stats->rx_1024_to_1522_packets = dev->stats.rx_1024_to_1522_packets;
-    stats->rx_1523_to_max_packets = dev->stats.rx_1523_to_max_packets;
 
+    err = 0;
+out:
     rte_spinlock_unlock(&dev->stats_lock);
     ovs_mutex_unlock(&dev->mutex);
 
+    free(vhost_stats);
+    free(vhost_stats_names);
+
+    return err;
+}
+
+static int
+netdev_dpdk_vhost_get_custom_stats(const struct netdev *netdev,
+                                   struct netdev_custom_stats *custom_stats)
+{
+    struct rte_vhost_stat_name *vhost_stats_names = NULL;
+    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+    struct rte_vhost_stat *vhost_stats = NULL;
+    int vhost_rxq_stats_count;
+    int vhost_txq_stats_count;
+    int stat_offset;
+    int err;
+    int qid;
+    int vid;
+
+    netdev_dpdk_get_sw_custom_stats(netdev, custom_stats);
+    stat_offset = custom_stats->size;
+
+    ovs_mutex_lock(&dev->mutex);
+
+    if (!is_vhost_running(dev)) {
+        goto out;
+    }
+
+    vid = netdev_dpdk_get_vid(dev);
+
+    qid = 0 * VIRTIO_QNUM + VIRTIO_TXQ;
+    err = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0);
+    if (err < 0) {
+        goto out;
+    }
+    vhost_rxq_stats_count = err;
+
+    qid = 0 * VIRTIO_QNUM;
+    err = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0);
+    if (err < 0) {
+        goto out;
+    }
+    vhost_txq_stats_count = err;
+
+    stat_offset += dev->up.n_rxq * vhost_rxq_stats_count;
+    stat_offset += dev->up.n_txq * vhost_txq_stats_count;
+    custom_stats->counters = xrealloc(custom_stats->counters,
+                                      stat_offset *
+                                      sizeof *custom_stats->counters);
+    stat_offset = custom_stats->size;
+
+    vhost_stats_names = xcalloc(vhost_rxq_stats_count,
+                                sizeof *vhost_stats_names);
+    vhost_stats = xcalloc(vhost_rxq_stats_count, sizeof *vhost_stats);
+
+    for (int q = 0; q < dev->up.n_rxq; q++) {
+        qid = q * VIRTIO_QNUM + VIRTIO_TXQ;
+
+        err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names,
+                                              vhost_rxq_stats_count);
+        if (err != vhost_rxq_stats_count) {
+            goto out;
+        }
+
+        err = rte_vhost_vring_stats_get(vid, qid, vhost_stats,
+                                        vhost_rxq_stats_count);
+        if (err != vhost_rxq_stats_count) {
+            goto out;
+        }
+
+        for (int i = 0; i < vhost_rxq_stats_count; i++) {
+            ovs_strlcpy(custom_stats->counters[stat_offset + i].name,
+                        vhost_stats_names[i].name,
+                        NETDEV_CUSTOM_STATS_NAME_SIZE);
+            custom_stats->counters[stat_offset + i].value =
+                vhost_stats[i].value;
+        }
+        stat_offset += vhost_rxq_stats_count;
+    }
+
+    free(vhost_stats_names);
+    vhost_stats_names = NULL;
+    free(vhost_stats);
+    vhost_stats = NULL;
+
+    vhost_stats_names = xcalloc(vhost_txq_stats_count,
+                                sizeof *vhost_stats_names);
+    vhost_stats = xcalloc(vhost_txq_stats_count, sizeof *vhost_stats);
+
+    for (int q = 0; q < dev->up.n_txq; q++) {
+        qid = q * VIRTIO_QNUM;
+
+        err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names,
+                                              vhost_txq_stats_count);
+        if (err != vhost_txq_stats_count) {
+            goto out;
+        }
+
+        err = rte_vhost_vring_stats_get(vid, qid, vhost_stats,
+                                        vhost_txq_stats_count);
+        if (err != vhost_txq_stats_count) {
+            goto out;
+        }
+
+        for (int i = 0; i < vhost_txq_stats_count; i++) {
+            ovs_strlcpy(custom_stats->counters[stat_offset + i].name,
+                        vhost_stats_names[i].name,
+                        NETDEV_CUSTOM_STATS_NAME_SIZE);
+            custom_stats->counters[stat_offset + i].value =
+                vhost_stats[i].value;
+        }
+        stat_offset += vhost_txq_stats_count;
+    }
+
+    free(vhost_stats_names);
+    vhost_stats_names = NULL;
+    free(vhost_stats);
+    vhost_stats = NULL;
+
+out:
+    ovs_mutex_unlock(&dev->mutex);
+
+    custom_stats->size = stat_offset;
+
     return 0;
 }
@@ -3088,6 +3284,9 @@ netdev_dpdk_convert_xstats(struct netdev_stats *stats,
 #undef DPDK_XSTATS
 }
 
+static int
+netdev_dpdk_get_carrier(const struct netdev *netdev, bool *carrier);
+
 static int
 netdev_dpdk_get_stats(const struct netdev *netdev, struct netdev_stats *stats)
 {
@@ -3536,6 +3735,7 @@ netdev_dpdk_update_flags__(struct netdev_dpdk *dev,
         if (NETDEV_UP & on) {
             rte_spinlock_lock(&dev->stats_lock);
             memset(&dev->stats, 0, sizeof dev->stats);
+            memset(dev->sw_stats, 0, sizeof *dev->sw_stats);
             rte_spinlock_unlock(&dev->stats_lock);
         }
     }
@@ -5036,6 +5236,11 @@ dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)
         dev->tx_q[0].map = 0;
     }
 
+    rte_spinlock_lock(&dev->stats_lock);
+    memset(&dev->stats, 0, sizeof dev->stats);
+    memset(dev->sw_stats, 0, sizeof *dev->sw_stats);
+    rte_spinlock_unlock(&dev->stats_lock);
+
     if (userspace_tso_enabled()) {
         dev->hw_ol_features |= NETDEV_TX_TSO_OFFLOAD;
         VLOG_DBG("%s: TSO enabled on vhost port", netdev_get_name(&dev->up));
@@ -5096,6 +5301,9 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
         /* Register client-mode device. */
         vhost_flags |= RTE_VHOST_USER_CLIENT;
 
+        /* Extended per vq statistics. */
+        vhost_flags |= RTE_VHOST_USER_NET_STATS_ENABLE;
+
        /* There is no support for multi-segments buffers. */
         vhost_flags |= RTE_VHOST_USER_LINEARBUF_SUPPORT;
 
@@ -5574,7 +5782,7 @@ static const struct netdev_class dpdk_vhost_class = {
     .send = netdev_dpdk_vhost_send,
     .get_carrier = netdev_dpdk_vhost_get_carrier,
     .get_stats = netdev_dpdk_vhost_get_stats,
-    .get_custom_stats = netdev_dpdk_get_sw_custom_stats,
+    .get_custom_stats = netdev_dpdk_vhost_get_custom_stats,
     .get_status = netdev_dpdk_vhost_user_get_status,
     .reconfigure = netdev_dpdk_vhost_reconfigure,
     .rxq_recv = netdev_dpdk_vhost_rxq_recv,
@@ -5590,7 +5798,7 @@ static const struct netdev_class dpdk_vhost_client_class = {
     .send = netdev_dpdk_vhost_send,
     .get_carrier = netdev_dpdk_vhost_get_carrier,
     .get_stats = netdev_dpdk_vhost_get_stats,
-    .get_custom_stats = netdev_dpdk_get_sw_custom_stats,
+    .get_custom_stats = netdev_dpdk_vhost_get_custom_stats,
     .get_status = netdev_dpdk_vhost_user_get_status,
     .reconfigure = netdev_dpdk_vhost_client_reconfigure,
     .rxq_recv = netdev_dpdk_vhost_rxq_recv,
diff --git a/tests/system-dpdk.at b/tests/system-dpdk.at
index 8dc187a61d..5ef7f8ccdc 100644
--- a/tests/system-dpdk.at
+++ b/tests/system-dpdk.at
@@ -200,9 +200,10 @@ ADD_VETH(tap1, ns2, br10, "172.31.110.12/24")
 dnl Execute testpmd in background
 on_exit "pkill -f -x -9 'tail -f /dev/null'"
 tail -f /dev/null | dpdk-testpmd --socket-mem="$(cat NUMA_NODE)" --no-pci\
-    --vdev="net_virtio_user,path=$OVS_RUNDIR/dpdkvhostclient0,server=1" \
-    --vdev="net_tap0,iface=tap0" --file-prefix page0 \
-    --single-file-segments -- -a >$OVS_RUNDIR/testpmd-dpdkvhostuserclient0.log 2>&1 &
+    --vdev="net_virtio_user,path=$OVS_RUNDIR/dpdkvhostclient0,queues=2,server=1" \
+    --vdev="net_tap0,iface=tap0" --file-prefix page0 \
+    --single-file-segments -- -a --nb-cores 2 --rxq 2 --txq 2 \
+    >$OVS_RUNDIR/testpmd-dpdkvhostuserclient0.log 2>&1 &
 
 OVS_WAIT_UNTIL([grep "virtio is now ready for processing" ovs-vswitchd.log])
 OVS_WAIT_UNTIL([ip link show dev tap0 | grep -qw LOWER_UP])
@@ -220,9 +221,33 @@ AT_CHECK([ip netns exec ns1 ip addr add 172.31.110.11/24 dev tap0], [],
 AT_CHECK([ip netns exec ns1 ip link show], [], [stdout], [stderr])
 AT_CHECK([ip netns exec ns2 ip link show], [], [stdout], [stderr])
 
-AT_CHECK([ip netns exec ns1 ping -c 4 -I tap0 172.31.110.12], [], [stdout],
+AT_CHECK([ip netns exec ns1 ping -i 0.1 -c 10 -I tap0 172.31.110.12], [], [stdout],
          [stderr])
+AT_CHECK([ip netns exec ns1 ip link set tap0 down], [], [stdout], [stderr])
+
+# Wait for stats to be queried ("stats-update-interval")
+sleep 5
+AT_CHECK([ovs-vsctl get interface dpdkvhostuserclient0 statistics], [], [stdout], [stderr])
+
+AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_packets` -gt 0 -a dnl
+          `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_bytes` -gt 0])
+AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_packets` -eq dnl
+          $((`ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_q0_good_packets` + dnl
+             `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_q1_good_packets`))])
+AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_bytes` -eq dnl
+          $((`ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_q0_good_bytes` + dnl
+             `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_q1_good_bytes`))])
+
+AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_packets` -gt 0 -a dnl
+          `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_bytes` -gt 0])
+AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_packets` -eq dnl
+          $((`ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_q0_good_packets` + dnl
+             `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_q1_good_packets`))])
+AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_bytes` -eq dnl
+          $((`ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_q0_good_bytes` + dnl
+             `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_q1_good_bytes`))])
+
 dnl Clean up the testpmd now
 pkill -f -x -9 'tail -f /dev/null'

From patchwork Mon Dec 19 15:03:06 2022
From: David Marchand <david.marchand@redhat.com>
To: dev@openvswitch.org
Cc: maxime.coquelin@redhat.com, i.maximets@ovn.org
Date: Mon, 19 Dec 2022 16:03:06 +0100
Message-Id: <20221219150306.20839-2-david.marchand@redhat.com>
In-Reply-To: <20221219150306.20839-1-david.marchand@redhat.com>
References: <20221219150306.20839-1-david.marchand@redhat.com>
Subject: [ovs-dev] [PATCH v5 2/2] netdev-dpdk: Drop coverage counter for vhost IRQs.

The vhost library now provides fine-grained statistics for guest
notifications:
- notifications for buffer reclaim by the guest,
- notifications for buffer availability to the guest.

Example before this patch:
$ ovs-appctl coverage/show | grep vhost_notification
vhost_notification    0.0/sec    0.000/sec    2.0283/sec   total: 7302
$ ovs-vsctl get interface vhost4 statistics |
    sed -e 's#[{}]##g' -e 's#, #\n#g' | grep guest_notifications
rx_q0_guest_notifications=66
tx_q0_guest_notifications=7236

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 lib/netdev-dpdk.c | 9 ---------
 1 file changed, 9 deletions(-)
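For illustration only (not part of the patch): reusing the hypothetical
sum_vring_stat() helper sketched with patch 1/2, an aggregate similar
to the dropped coverage counter can still be derived from the per-vring
stats:

    /* Total guest notifications across every vring of device 'vid';
     * rte_vhost_get_vring_num() gives the vring count. */
    uint64_t notifications = 0;
    uint16_t n_vrings = rte_vhost_get_vring_num(vid);

    for (uint16_t qid = 0; qid < n_vrings; qid++) {
        sum_vring_stat(vid, qid, "guest_notifications", &notifications);
    }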
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 659f53cadc..d7e852facf 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -78,7 +78,6 @@ VLOG_DEFINE_THIS_MODULE(netdev_dpdk);
 static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
 
 COVERAGE_DEFINE(vhost_tx_contention);
-COVERAGE_DEFINE(vhost_notification);
 
 static char *vhost_sock_dir = NULL;   /* Location of vhost-user sockets */
 static bool vhost_iommu_enabled = false; /* Status of vHost IOMMU support */
@@ -188,7 +187,6 @@ static int new_device(int vid);
 static void destroy_device(int vid);
 static int vring_state_changed(int vid, uint16_t queue_id, int enable);
 static void destroy_connection(int vid);
-static void vhost_guest_notified(int vid);
 
 static const struct rte_vhost_device_ops virtio_net_device_ops =
 {
@@ -198,7 +196,6 @@ static const struct rte_vhost_device_ops virtio_net_device_ops =
     .features_changed = NULL,
     .new_connection = NULL,
     .destroy_connection = destroy_connection,
-    .guest_notified = vhost_guest_notified,
 };
 
 /* Custom software stats for dpdk ports */
@@ -4367,12 +4364,6 @@ destroy_connection(int vid)
     }
 }
 
-static
-void vhost_guest_notified(int vid OVS_UNUSED)
-{
-    COVERAGE_INC(vhost_notification);
-}
-
 /*
  * Retrieve the DPDK virtio device ID (vid) associated with a vhostuser
  * or vhostuserclient netdev.