From patchwork Thu Jan 5 20:24:24 2023
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 1722142
From: David Marchand
To: dev@openvswitch.org
Cc: maxime.coquelin@redhat.com, i.maximets@ovn.org
Date: Thu, 5 Jan 2023 21:24:24 +0100
Message-Id: <20230105202425.4187792-1-david.marchand@redhat.com>
Subject: [ovs-dev] [PATCH v6 1/2] netdev-dpdk: Add per virtqueue statistics.

The DPDK vhost-user library maintains more granular per-queue stats which
can replace what OVS was providing for vhost-user ports.

The benefits for OVS:
- OVS can skip parsing packet sizes on the rx side,
- dev->stats_lock is no longer taken in the rx/tx code unless some packet
  is dropped,
- vhost-user knows which packets were actually transmitted to the guest,
  so per *transmitted* packet size stats can be reported,
- more internal stats from vhost-user may be exposed without OVS needing
  to understand them.

Note: the vhost-user library does not provide global stats for a port.
The proposed implementation computes the global stats (exposed via
netdev_get_stats()) by querying and aggregating all per-queue stats.
Since the per-queue stats are exposed via another netdev op
(netdev_get_custom_stats()), this may lead to races and small
discrepancies. This issue might already affect other netdev classes.
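For readers unfamiliar with the vhost stats API, the query-and-aggregate
scheme described above boils down to: for each virtqueue, look up the
counter names, fetch the values, and sum the counters of interest. The
sketch below is illustrative only and is not the patch code: it assumes
DPDK >= 22.11 (where rte_vhost_vring_stats_get_names() and
rte_vhost_vring_stats_get() are stable), a device registered with
RTE_VHOST_USER_NET_STATS_ENABLE, and a hypothetical helper name
sum_rx_counter().

/* Illustrative sketch, not part of the patch: sum one per-virtqueue
 * counter across all guest TX rings (i.e. OVS rx queues) of a vhost
 * device.  Error handling is trimmed down; the patch itself reports
 * EPROTO and frees its buffers on the error path. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#include <rte_vhost.h>

static uint64_t
sum_rx_counter(int vid, int n_rxq, const char *suffix)
{
    uint64_t total = 0;

    for (int q = 0; q < n_rxq; q++) {
        /* OVS rxq 'q' is backed by the guest TX ring of virtqueue pair 'q'. */
        uint16_t qid = q * VIRTIO_QNUM + VIRTIO_TXQ;
        int count = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0);
        struct rte_vhost_stat_name *names;
        struct rte_vhost_stat *stats;

        if (count <= 0) {
            continue;
        }

        names = calloc(count, sizeof *names);
        stats = calloc(count, sizeof *stats);
        if (names && stats
            && rte_vhost_vring_stats_get_names(vid, qid, names, count) == count
            && rte_vhost_vring_stats_get(vid, qid, stats, count) == count) {
            for (int i = 0; i < count; i++) {
                size_t len = strlen(names[i].name);
                size_t slen = strlen(suffix);

                /* Names look like "rx_q0_good_packets": match on the
                 * suffix, as the patch does with string_ends_with(). */
                if (len >= slen
                    && !strcmp(names[i].name + len - slen, suffix)) {
                    total += stats[i].value;
                }
            }
        }
        free(names);
        free(stats);
    }

    return total;
}

For example, sum_rx_counter(vid, n_rxq, "good_packets") would return what
the patch reports as rx_packets. The patch proper goes further: it resets
the relevant netdev_stats members first and maps each vhost counter name
onto an OVS counter through the VHOST_RXQ_STATS / VHOST_TXQ_STATS macro
tables.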
Example:

$ ovs-vsctl get interface vhost4 statistics | sed -e 's#[{}]##g' -e 's#, #\n#g' | grep -v =0$
rx_1_to_64_packets=12
rx_256_to_511_packets=15
rx_65_to_127_packets=21
rx_broadcast_packets=15
rx_bytes=7497
rx_multicast_packets=33
rx_packets=48
rx_q0_good_bytes=242
rx_q0_good_packets=3
rx_q0_guest_notifications=3
rx_q0_multicast_packets=3
rx_q0_size_65_127_packets=2
rx_q0_undersize_packets=1
rx_q1_broadcast_packets=15
rx_q1_good_bytes=7255
rx_q1_good_packets=45
rx_q1_guest_notifications=45
rx_q1_multicast_packets=30
rx_q1_size_256_511_packets=15
rx_q1_size_65_127_packets=19
rx_q1_undersize_packets=11
tx_1_to_64_packets=36
tx_256_to_511_packets=45
tx_65_to_127_packets=63
tx_broadcast_packets=45
tx_bytes=22491
tx_multicast_packets=99
tx_packets=144
tx_q0_broadcast_packets=30
tx_q0_good_bytes=14994
tx_q0_good_packets=96
tx_q0_guest_notifications=96
tx_q0_multicast_packets=66
tx_q0_size_256_511_packets=30
tx_q0_size_65_127_packets=42
tx_q0_undersize_packets=24
tx_q1_broadcast_packets=15
tx_q1_good_bytes=7497
tx_q1_good_packets=48
tx_q1_guest_notifications=48
tx_q1_multicast_packets=33
tx_q1_size_256_511_packets=15
tx_q1_size_65_127_packets=21
tx_q1_undersize_packets=12

Reviewed-by: Maxime Coquelin
Signed-off-by: David Marchand
---
Changes since v5:
- added missing dev->stats_lock acquire in netdev_dpdk_vhost_get_stats,
- changed netdev_dpdk_vhost_update_[rt]x_counters to take dev->stats_lock
  only when some packets got dropped in OVS. Since the rx side won't take
  the lock unless some QoS configuration is in place, this change will
  likely have the same effect as separating stats_lock into rx/tx
  dedicated locks. Testing shows a slight (around 1%) performance
  improvement for some V2V setups,

Changes since v3:
- rebased to master now that v22.11 landed,
- fixed error code in stats helper when vhost port is not "running",
- shortened rx/tx stats macro names,

Changes since RFC v2:
- dropped the experimental api check (now that the feature is marked
  stable in DPDK),
- moved netdev_dpdk_get_carrier() forward declaration next to the
  function needing it,
- used per q stats for netdev_get_stats() and removed OVS per packet
  size accounting logic,
- fixed small packets counter (see rx_undersized_errors hack),
- added more Tx stats,
- added unit tests,
---
 lib/netdev-dpdk.c    | 429 ++++++++++++++++++++++++++++++++-----------
 tests/system-dpdk.at |  33 +++-
 2 files changed, 349 insertions(+), 113 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c index fff57f7827..80ba650032 100644 --- a/lib/netdev-dpdk.c +++ b/lib/netdev-dpdk.c @@ -2363,70 +2363,18 @@ is_vhost_running(struct netdev_dpdk *dev) return (netdev_dpdk_get_vid(dev) >= 0 && dev->vhost_reconfigured); } -static inline void -netdev_dpdk_vhost_update_rx_size_counters(struct netdev_stats *stats, - unsigned int packet_size) -{ - /* Hard-coded search for the size bucket.
*/ - if (packet_size < 256) { - if (packet_size >= 128) { - stats->rx_128_to_255_packets++; - } else if (packet_size <= 64) { - stats->rx_1_to_64_packets++; - } else { - stats->rx_65_to_127_packets++; - } - } else { - if (packet_size >= 1523) { - stats->rx_1523_to_max_packets++; - } else if (packet_size >= 1024) { - stats->rx_1024_to_1522_packets++; - } else if (packet_size < 512) { - stats->rx_256_to_511_packets++; - } else { - stats->rx_512_to_1023_packets++; - } - } -} - static inline void netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev, - struct dp_packet **packets, int count, int qos_drops) { - struct netdev_stats *stats = &dev->stats; - struct dp_packet *packet; - unsigned int packet_size; - int i; - - stats->rx_packets += count; - stats->rx_dropped += qos_drops; - for (i = 0; i < count; i++) { - packet = packets[i]; - packet_size = dp_packet_size(packet); - - if (OVS_UNLIKELY(packet_size < ETH_HEADER_LEN)) { - /* This only protects the following multicast counting from - * too short packets, but it does not stop the packet from - * further processing. */ - stats->rx_errors++; - stats->rx_length_errors++; - continue; - } - - netdev_dpdk_vhost_update_rx_size_counters(stats, packet_size); - - struct eth_header *eh = (struct eth_header *) dp_packet_data(packet); - if (OVS_UNLIKELY(eth_addr_is_multicast(eh->eth_dst))) { - stats->multicast++; - } - - stats->rx_bytes += packet_size; + if (OVS_LIKELY(!qos_drops)) { + return; } - if (OVS_UNLIKELY(qos_drops)) { - dev->sw_stats->rx_qos_drops += qos_drops; - } + rte_spinlock_lock(&dev->stats_lock); + dev->stats.rx_dropped += qos_drops; + dev->sw_stats->rx_qos_drops += qos_drops; + rte_spinlock_unlock(&dev->stats_lock); } /* @@ -2473,10 +2421,7 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq, qos_drops -= nb_rx; } - rte_spinlock_lock(&dev->stats_lock); - netdev_dpdk_vhost_update_rx_counters(dev, batch->packets, - nb_rx, qos_drops); - rte_spinlock_unlock(&dev->stats_lock); + netdev_dpdk_vhost_update_rx_counters(dev, qos_drops); batch->count = nb_rx; dp_packet_batch_init_packet_fields(batch); @@ -2589,34 +2534,27 @@ netdev_dpdk_filter_packet_len(struct netdev_dpdk *dev, struct rte_mbuf **pkts, static inline void netdev_dpdk_vhost_update_tx_counters(struct netdev_dpdk *dev, - struct dp_packet **packets, - int attempted, struct netdev_dpdk_sw_stats *sw_stats_add) { int dropped = sw_stats_add->tx_mtu_exceeded_drops + sw_stats_add->tx_qos_drops + sw_stats_add->tx_failure_drops + sw_stats_add->tx_invalid_hwol_drops; - struct netdev_stats *stats = &dev->stats; - int sent = attempted - dropped; - int i; - - stats->tx_packets += sent; - stats->tx_dropped += dropped; + struct netdev_dpdk_sw_stats *sw_stats; - for (i = 0; i < sent; i++) { - stats->tx_bytes += dp_packet_size(packets[i]); + if (OVS_LIKELY(!dropped)) { + return; } - if (OVS_UNLIKELY(dropped || sw_stats_add->tx_retries)) { - struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats; - - sw_stats->tx_retries += sw_stats_add->tx_retries; - sw_stats->tx_failure_drops += sw_stats_add->tx_failure_drops; - sw_stats->tx_mtu_exceeded_drops += sw_stats_add->tx_mtu_exceeded_drops; - sw_stats->tx_qos_drops += sw_stats_add->tx_qos_drops; - sw_stats->tx_invalid_hwol_drops += sw_stats_add->tx_invalid_hwol_drops; - } + rte_spinlock_lock(&dev->stats_lock); + sw_stats = dev->sw_stats; + dev->stats.tx_dropped += dropped; + sw_stats->tx_retries += sw_stats_add->tx_retries; + sw_stats->tx_failure_drops += sw_stats_add->tx_failure_drops; + sw_stats->tx_mtu_exceeded_drops += 
sw_stats_add->tx_mtu_exceeded_drops; + sw_stats->tx_qos_drops += sw_stats_add->tx_qos_drops; + sw_stats->tx_invalid_hwol_drops += sw_stats_add->tx_invalid_hwol_drops; + rte_spinlock_unlock(&dev->stats_lock); } static void @@ -2795,13 +2733,13 @@ netdev_dpdk_vhost_send(struct netdev *netdev, int qid, { struct netdev_dpdk *dev = netdev_dpdk_cast(netdev); int max_retries = VHOST_ENQ_RETRY_MIN; - int cnt, batch_cnt, vhost_batch_cnt; int vid = netdev_dpdk_get_vid(dev); struct netdev_dpdk_sw_stats stats; + int cnt, vhost_batch_cnt; struct rte_mbuf **pkts; int retries; - batch_cnt = cnt = dp_packet_batch_size(batch); + cnt = dp_packet_batch_size(batch); qid = dev->tx_q[qid % netdev->n_txq].map; if (OVS_UNLIKELY(vid < 0 || !dev->vhost_reconfigured || qid < 0 || !(dev->flags & NETDEV_UP))) { @@ -2850,10 +2788,7 @@ netdev_dpdk_vhost_send(struct netdev *netdev, int qid, stats.tx_failure_drops += cnt; stats.tx_retries = MIN(retries, max_retries); - rte_spinlock_lock(&dev->stats_lock); - netdev_dpdk_vhost_update_tx_counters(dev, batch->packets, batch_cnt, - &stats); - rte_spinlock_unlock(&dev->stats_lock); + netdev_dpdk_vhost_update_tx_counters(dev, &stats); pkts = (struct rte_mbuf **) batch->packets; for (int i = 0; i < vhost_batch_cnt; i++) { @@ -3007,41 +2942,305 @@ netdev_dpdk_set_mtu(struct netdev *netdev, int mtu) return 0; } -static int -netdev_dpdk_get_carrier(const struct netdev *netdev, bool *carrier); - static int netdev_dpdk_vhost_get_stats(const struct netdev *netdev, struct netdev_stats *stats) { + struct rte_vhost_stat_name *vhost_stats_names = NULL; struct netdev_dpdk *dev = netdev_dpdk_cast(netdev); + struct rte_vhost_stat *vhost_stats = NULL; + int vhost_stats_count; + int err; + int qid; + int vid; ovs_mutex_lock(&dev->mutex); + if (!is_vhost_running(dev)) { + err = EPROTO; + goto out; + } + + vid = netdev_dpdk_get_vid(dev); + + /* We expect all rxqs have the same number of stats, only query rxq0. 
*/ + qid = 0 * VIRTIO_QNUM + VIRTIO_TXQ; + err = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0); + if (err < 0) { + err = EPROTO; + goto out; + } + + vhost_stats_count = err; + vhost_stats_names = xcalloc(vhost_stats_count, sizeof *vhost_stats_names); + vhost_stats = xcalloc(vhost_stats_count, sizeof *vhost_stats); + + err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names, + vhost_stats_count); + if (err != vhost_stats_count) { + err = EPROTO; + goto out; + } + +#define VHOST_RXQ_STATS \ + VHOST_RXQ_STAT(rx_packets, "good_packets") \ + VHOST_RXQ_STAT(rx_bytes, "good_bytes") \ + VHOST_RXQ_STAT(rx_broadcast_packets, "broadcast_packets") \ + VHOST_RXQ_STAT(multicast, "multicast_packets") \ + VHOST_RXQ_STAT(rx_undersized_errors, "undersize_packets") \ + VHOST_RXQ_STAT(rx_1_to_64_packets, "size_64_packets") \ + VHOST_RXQ_STAT(rx_65_to_127_packets, "size_65_127_packets") \ + VHOST_RXQ_STAT(rx_128_to_255_packets, "size_128_255_packets") \ + VHOST_RXQ_STAT(rx_256_to_511_packets, "size_256_511_packets") \ + VHOST_RXQ_STAT(rx_512_to_1023_packets, "size_512_1023_packets") \ + VHOST_RXQ_STAT(rx_1024_to_1522_packets, "size_1024_1518_packets") \ + VHOST_RXQ_STAT(rx_1523_to_max_packets, "size_1519_max_packets") + +#define VHOST_RXQ_STAT(MEMBER, NAME) dev->stats.MEMBER = 0; + VHOST_RXQ_STATS; +#undef VHOST_RXQ_STAT + + for (int q = 0; q < dev->up.n_rxq; q++) { + qid = q * VIRTIO_QNUM + VIRTIO_TXQ; + + err = rte_vhost_vring_stats_get(vid, qid, vhost_stats, + vhost_stats_count); + if (err != vhost_stats_count) { + err = EPROTO; + goto out; + } + + for (int i = 0; i < vhost_stats_count; i++) { +#define VHOST_RXQ_STAT(MEMBER, NAME) \ + if (string_ends_with(vhost_stats_names[i].name, NAME)) { \ + dev->stats.MEMBER += vhost_stats[i].value; \ + continue; \ + } + VHOST_RXQ_STATS; +#undef VHOST_RXQ_STAT + } + } + + /* OVS reports 64 bytes and smaller packets into "rx_1_to_64_packets". + * Since vhost only reports good packets and has no error counter, + * rx_undersized_errors is highjacked (see above) to retrieve + * "undersize_packets". */ + dev->stats.rx_1_to_64_packets += dev->stats.rx_undersized_errors; + memset(&dev->stats.rx_undersized_errors, 0xff, + sizeof dev->stats.rx_undersized_errors); + +#define VHOST_RXQ_STAT(MEMBER, NAME) stats->MEMBER = dev->stats.MEMBER; + VHOST_RXQ_STATS; +#undef VHOST_RXQ_STAT + + free(vhost_stats_names); + vhost_stats_names = NULL; + free(vhost_stats); + vhost_stats = NULL; + + /* We expect all txqs have the same number of stats, only query txq0. 
*/ + qid = 0 * VIRTIO_QNUM; + err = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0); + if (err < 0) { + err = EPROTO; + goto out; + } + + vhost_stats_count = err; + vhost_stats_names = xcalloc(vhost_stats_count, sizeof *vhost_stats_names); + vhost_stats = xcalloc(vhost_stats_count, sizeof *vhost_stats); + + err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names, + vhost_stats_count); + if (err != vhost_stats_count) { + err = EPROTO; + goto out; + } + +#define VHOST_TXQ_STATS \ + VHOST_TXQ_STAT(tx_packets, "good_packets") \ + VHOST_TXQ_STAT(tx_bytes, "good_bytes") \ + VHOST_TXQ_STAT(tx_broadcast_packets, "broadcast_packets") \ + VHOST_TXQ_STAT(tx_multicast_packets, "multicast_packets") \ + VHOST_TXQ_STAT(rx_undersized_errors, "undersize_packets") \ + VHOST_TXQ_STAT(tx_1_to_64_packets, "size_64_packets") \ + VHOST_TXQ_STAT(tx_65_to_127_packets, "size_65_127_packets") \ + VHOST_TXQ_STAT(tx_128_to_255_packets, "size_128_255_packets") \ + VHOST_TXQ_STAT(tx_256_to_511_packets, "size_256_511_packets") \ + VHOST_TXQ_STAT(tx_512_to_1023_packets, "size_512_1023_packets") \ + VHOST_TXQ_STAT(tx_1024_to_1522_packets, "size_1024_1518_packets") \ + VHOST_TXQ_STAT(tx_1523_to_max_packets, "size_1519_max_packets") + +#define VHOST_TXQ_STAT(MEMBER, NAME) dev->stats.MEMBER = 0; + VHOST_TXQ_STATS; +#undef VHOST_TXQ_STAT + + for (int q = 0; q < dev->up.n_txq; q++) { + qid = q * VIRTIO_QNUM; + + err = rte_vhost_vring_stats_get(vid, qid, vhost_stats, + vhost_stats_count); + if (err != vhost_stats_count) { + err = EPROTO; + goto out; + } + + for (int i = 0; i < vhost_stats_count; i++) { +#define VHOST_TXQ_STAT(MEMBER, NAME) \ + if (string_ends_with(vhost_stats_names[i].name, NAME)) { \ + dev->stats.MEMBER += vhost_stats[i].value; \ + continue; \ + } + VHOST_TXQ_STATS; +#undef VHOST_TXQ_STAT + } + } + + /* OVS reports 64 bytes and smaller packets into "tx_1_to_64_packets". + * Same as for rx, rx_undersized_errors is highjacked. 
*/ + dev->stats.tx_1_to_64_packets += dev->stats.rx_undersized_errors; + memset(&dev->stats.rx_undersized_errors, 0xff, + sizeof dev->stats.rx_undersized_errors); + +#define VHOST_TXQ_STAT(MEMBER, NAME) stats->MEMBER = dev->stats.MEMBER; + VHOST_TXQ_STATS; +#undef VHOST_TXQ_STAT + rte_spinlock_lock(&dev->stats_lock); - /* Supported Stats */ - stats->rx_packets = dev->stats.rx_packets; - stats->tx_packets = dev->stats.tx_packets; stats->rx_dropped = dev->stats.rx_dropped; stats->tx_dropped = dev->stats.tx_dropped; - stats->multicast = dev->stats.multicast; - stats->rx_bytes = dev->stats.rx_bytes; - stats->tx_bytes = dev->stats.tx_bytes; - stats->rx_errors = dev->stats.rx_errors; - stats->rx_length_errors = dev->stats.rx_length_errors; - - stats->rx_1_to_64_packets = dev->stats.rx_1_to_64_packets; - stats->rx_65_to_127_packets = dev->stats.rx_65_to_127_packets; - stats->rx_128_to_255_packets = dev->stats.rx_128_to_255_packets; - stats->rx_256_to_511_packets = dev->stats.rx_256_to_511_packets; - stats->rx_512_to_1023_packets = dev->stats.rx_512_to_1023_packets; - stats->rx_1024_to_1522_packets = dev->stats.rx_1024_to_1522_packets; - stats->rx_1523_to_max_packets = dev->stats.rx_1523_to_max_packets; - rte_spinlock_unlock(&dev->stats_lock); + err = 0; +out: + ovs_mutex_unlock(&dev->mutex); + free(vhost_stats); + free(vhost_stats_names); + + return err; +} + +static int +netdev_dpdk_vhost_get_custom_stats(const struct netdev *netdev, + struct netdev_custom_stats *custom_stats) +{ + struct rte_vhost_stat_name *vhost_stats_names = NULL; + struct netdev_dpdk *dev = netdev_dpdk_cast(netdev); + struct rte_vhost_stat *vhost_stats = NULL; + int vhost_rxq_stats_count; + int vhost_txq_stats_count; + int stat_offset; + int err; + int qid; + int vid; + + netdev_dpdk_get_sw_custom_stats(netdev, custom_stats); + stat_offset = custom_stats->size; + + ovs_mutex_lock(&dev->mutex); + + if (!is_vhost_running(dev)) { + goto out; + } + + vid = netdev_dpdk_get_vid(dev); + + qid = 0 * VIRTIO_QNUM + VIRTIO_TXQ; + err = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0); + if (err < 0) { + goto out; + } + vhost_rxq_stats_count = err; + + qid = 0 * VIRTIO_QNUM; + err = rte_vhost_vring_stats_get_names(vid, qid, NULL, 0); + if (err < 0) { + goto out; + } + vhost_txq_stats_count = err; + + stat_offset += dev->up.n_rxq * vhost_rxq_stats_count; + stat_offset += dev->up.n_txq * vhost_txq_stats_count; + custom_stats->counters = xrealloc(custom_stats->counters, + stat_offset * + sizeof *custom_stats->counters); + stat_offset = custom_stats->size; + + vhost_stats_names = xcalloc(vhost_rxq_stats_count, + sizeof *vhost_stats_names); + vhost_stats = xcalloc(vhost_rxq_stats_count, sizeof *vhost_stats); + + for (int q = 0; q < dev->up.n_rxq; q++) { + qid = q * VIRTIO_QNUM + VIRTIO_TXQ; + + err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names, + vhost_rxq_stats_count); + if (err != vhost_rxq_stats_count) { + goto out; + } + + err = rte_vhost_vring_stats_get(vid, qid, vhost_stats, + vhost_rxq_stats_count); + if (err != vhost_rxq_stats_count) { + goto out; + } + + for (int i = 0; i < vhost_rxq_stats_count; i++) { + ovs_strlcpy(custom_stats->counters[stat_offset + i].name, + vhost_stats_names[i].name, + NETDEV_CUSTOM_STATS_NAME_SIZE); + custom_stats->counters[stat_offset + i].value = + vhost_stats[i].value; + } + stat_offset += vhost_rxq_stats_count; + } + + free(vhost_stats_names); + vhost_stats_names = NULL; + free(vhost_stats); + vhost_stats = NULL; + + vhost_stats_names = xcalloc(vhost_txq_stats_count, + sizeof 
*vhost_stats_names); + vhost_stats = xcalloc(vhost_txq_stats_count, sizeof *vhost_stats); + + for (int q = 0; q < dev->up.n_txq; q++) { + qid = q * VIRTIO_QNUM; + + err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names, + vhost_txq_stats_count); + if (err != vhost_txq_stats_count) { + goto out; + } + + err = rte_vhost_vring_stats_get(vid, qid, vhost_stats, + vhost_txq_stats_count); + if (err != vhost_txq_stats_count) { + goto out; + } + + for (int i = 0; i < vhost_txq_stats_count; i++) { + ovs_strlcpy(custom_stats->counters[stat_offset + i].name, + vhost_stats_names[i].name, + NETDEV_CUSTOM_STATS_NAME_SIZE); + custom_stats->counters[stat_offset + i].value = + vhost_stats[i].value; + } + stat_offset += vhost_txq_stats_count; + } + + free(vhost_stats_names); + vhost_stats_names = NULL; + free(vhost_stats); + vhost_stats = NULL; + +out: + ovs_mutex_unlock(&dev->mutex); + + custom_stats->size = stat_offset; + return 0; } @@ -3088,6 +3287,9 @@ netdev_dpdk_convert_xstats(struct netdev_stats *stats, #undef DPDK_XSTATS } +static int +netdev_dpdk_get_carrier(const struct netdev *netdev, bool *carrier); + static int netdev_dpdk_get_stats(const struct netdev *netdev, struct netdev_stats *stats) { @@ -3536,6 +3738,7 @@ netdev_dpdk_update_flags__(struct netdev_dpdk *dev, if (NETDEV_UP & on) { rte_spinlock_lock(&dev->stats_lock); memset(&dev->stats, 0, sizeof dev->stats); + memset(dev->sw_stats, 0, sizeof *dev->sw_stats); rte_spinlock_unlock(&dev->stats_lock); } } @@ -5036,6 +5239,11 @@ dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev) dev->tx_q[0].map = 0; } + rte_spinlock_lock(&dev->stats_lock); + memset(&dev->stats, 0, sizeof dev->stats); + memset(dev->sw_stats, 0, sizeof *dev->sw_stats); + rte_spinlock_unlock(&dev->stats_lock); + if (userspace_tso_enabled()) { dev->hw_ol_features |= NETDEV_TX_TSO_OFFLOAD; VLOG_DBG("%s: TSO enabled on vhost port", netdev_get_name(&dev->up)); @@ -5096,6 +5304,9 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev) /* Register client-mode device. */ vhost_flags |= RTE_VHOST_USER_CLIENT; + /* Extended per vq statistics. */ + vhost_flags |= RTE_VHOST_USER_NET_STATS_ENABLE; + /* There is no support for multi-segments buffers. 
*/ vhost_flags |= RTE_VHOST_USER_LINEARBUF_SUPPORT; @@ -5574,7 +5785,7 @@ static const struct netdev_class dpdk_vhost_class = { .send = netdev_dpdk_vhost_send, .get_carrier = netdev_dpdk_vhost_get_carrier, .get_stats = netdev_dpdk_vhost_get_stats, - .get_custom_stats = netdev_dpdk_get_sw_custom_stats, + .get_custom_stats = netdev_dpdk_vhost_get_custom_stats, .get_status = netdev_dpdk_vhost_user_get_status, .reconfigure = netdev_dpdk_vhost_reconfigure, .rxq_recv = netdev_dpdk_vhost_rxq_recv, @@ -5590,7 +5801,7 @@ static const struct netdev_class dpdk_vhost_client_class = { .send = netdev_dpdk_vhost_send, .get_carrier = netdev_dpdk_vhost_get_carrier, .get_stats = netdev_dpdk_vhost_get_stats, - .get_custom_stats = netdev_dpdk_get_sw_custom_stats, + .get_custom_stats = netdev_dpdk_vhost_get_custom_stats, .get_status = netdev_dpdk_vhost_user_get_status, .reconfigure = netdev_dpdk_vhost_client_reconfigure, .rxq_recv = netdev_dpdk_vhost_rxq_recv, diff --git a/tests/system-dpdk.at b/tests/system-dpdk.at index 8dc187a61d..5ef7f8ccdc 100644 --- a/tests/system-dpdk.at +++ b/tests/system-dpdk.at @@ -200,9 +200,10 @@ ADD_VETH(tap1, ns2, br10, "172.31.110.12/24") dnl Execute testpmd in background on_exit "pkill -f -x -9 'tail -f /dev/null'" tail -f /dev/null | dpdk-testpmd --socket-mem="$(cat NUMA_NODE)" --no-pci\ - --vdev="net_virtio_user,path=$OVS_RUNDIR/dpdkvhostclient0,server=1" \ - --vdev="net_tap0,iface=tap0" --file-prefix page0 \ - --single-file-segments -- -a >$OVS_RUNDIR/testpmd-dpdkvhostuserclient0.log 2>&1 & + --vdev="net_virtio_user,path=$OVS_RUNDIR/dpdkvhostclient0,queues=2,server=1" \ + --vdev="net_tap0,iface=tap0" --file-prefix page0 \ + --single-file-segments -- -a --nb-cores 2 --rxq 2 --txq 2 \ + >$OVS_RUNDIR/testpmd-dpdkvhostuserclient0.log 2>&1 & OVS_WAIT_UNTIL([grep "virtio is now ready for processing" ovs-vswitchd.log]) OVS_WAIT_UNTIL([ip link show dev tap0 | grep -qw LOWER_UP]) @@ -220,9 +221,33 @@ AT_CHECK([ip netns exec ns1 ip addr add 172.31.110.11/24 dev tap0], [], AT_CHECK([ip netns exec ns1 ip link show], [], [stdout], [stderr]) AT_CHECK([ip netns exec ns2 ip link show], [], [stdout], [stderr]) -AT_CHECK([ip netns exec ns1 ping -c 4 -I tap0 172.31.110.12], [], [stdout], +AT_CHECK([ip netns exec ns1 ping -i 0.1 -c 10 -I tap0 172.31.110.12], [], [stdout], [stderr]) +AT_CHECK([ip netns exec ns1 ip link set tap0 down], [], [stdout], [stderr]) + +# Wait for stats to be queried ("stats-update-interval") +sleep 5 +AT_CHECK([ovs-vsctl get interface dpdkvhostuserclient0 statistics], [], [stdout], [stderr]) + +AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_packets` -gt 0 -a dnl + `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_bytes` -gt 0]) +AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_packets` -eq dnl + $((`ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_q0_good_packets` + dnl + `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_q1_good_packets`))]) +AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_bytes` -eq dnl + $((`ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_q0_good_bytes` + dnl + `ovs-vsctl get interface dpdkvhostuserclient0 statistics:rx_q1_good_bytes`))]) + +AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_packets` -gt 0 -a dnl + `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_bytes` -gt 0]) +AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_packets` -eq dnl + $((`ovs-vsctl get 
interface dpdkvhostuserclient0 statistics:tx_q0_good_packets` + dnl + `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_q1_good_packets`))]) +AT_CHECK([test `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_bytes` -eq dnl + $((`ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_q0_good_bytes` + dnl + `ovs-vsctl get interface dpdkvhostuserclient0 statistics:tx_q1_good_bytes`))]) + dnl Clean up the testpmd now pkill -f -x -9 'tail -f /dev/null'

From patchwork Thu Jan 5 20:24:25 2023
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 1722143
From: David Marchand
To: dev@openvswitch.org
Cc: maxime.coquelin@redhat.com, i.maximets@ovn.org
Date: Thu, 5 Jan 2023 21:24:25 +0100
Message-Id: <20230105202425.4187792-2-david.marchand@redhat.com>
In-Reply-To: <20230105202425.4187792-1-david.marchand@redhat.com>
References: <20230105202425.4187792-1-david.marchand@redhat.com>
Subject: [ovs-dev] [PATCH v6 2/2] netdev-dpdk: Drop coverage counter for vhost IRQs.
The vhost library now provides fine-grained statistics for guest
notifications:
- notifications for buffer reclaim by the guest,
- notifications for buffer availability to the guest.

Example before this patch:

$ ovs-appctl coverage/show | grep vhost_notification
vhost_notification   0.0/sec   0.000/sec   2.0283/sec   total: 7302

$ ovs-vsctl get interface vhost4 statistics | sed -e 's#[{}]##g' -e 's#, #\n#g' | grep guest_notifications
rx_q0_guest_notifications=66
tx_q0_guest_notifications=7236

Signed-off-by: David Marchand
---
 lib/netdev-dpdk.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 80ba650032..2b256df2b9 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -78,7 +78,6 @@ VLOG_DEFINE_THIS_MODULE(netdev_dpdk);
 static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
 
 COVERAGE_DEFINE(vhost_tx_contention);
-COVERAGE_DEFINE(vhost_notification);
 
 static char *vhost_sock_dir = NULL;   /* Location of vhost-user sockets */
 static bool vhost_iommu_enabled = false; /* Status of vHost IOMMU support */
@@ -188,7 +187,6 @@ static int new_device(int vid);
 static void destroy_device(int vid);
 static int vring_state_changed(int vid, uint16_t queue_id, int enable);
 static void destroy_connection(int vid);
-static void vhost_guest_notified(int vid);
 
 static const struct rte_vhost_device_ops virtio_net_device_ops =
 {
@@ -198,7 +196,6 @@ static const struct rte_vhost_device_ops virtio_net_device_ops =
     .features_changed = NULL,
     .new_connection = NULL,
     .destroy_connection = destroy_connection,
-    .guest_notified = vhost_guest_notified,
 };
 
 /* Custom software stats for dpdk ports */
@@ -4370,12 +4367,6 @@ destroy_connection(int vid)
     }
 }
 
-static
-void vhost_guest_notified(int vid OVS_UNUSED)
-{
-    COVERAGE_INC(vhost_notification);
-}
-
 /*
  * Retrieve the DPDK virtio device ID (vid) associated with a vhostuser
  * or vhostuserclient netdev.
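If an aggregate notification count is still wanted after this change, it
can be recomputed from the per-queue counters that patch 1/2 exposes
through netdev_get_custom_stats(). The sketch below is illustrative and
not OVS code: it assumes the struct netdev_custom_stats layout (size,
counters[].name, counters[].value) that patch 1/2 manipulates, declared
in OVS's lib/netdev.h, and count_guest_notifications() is a hypothetical
helper name.

#include <stdint.h>
#include <string.h>

#include "netdev.h"   /* struct netdev_custom_stats, as used in patch 1/2. */

/* Illustrative only: rebuild the dropped aggregate by summing every
 * per-queue "guest_notifications" counter reported for a vhost port. */
static uint64_t
count_guest_notifications(const struct netdev_custom_stats *stats)
{
    static const char suffix[] = "guest_notifications";
    size_t slen = strlen(suffix);
    uint64_t total = 0;

    for (uint16_t i = 0; i < stats->size; i++) {
        const char *name = stats->counters[i].name;
        size_t len = strlen(name);

        /* Counter names look like "rx_q0_guest_notifications". */
        if (len >= slen && !strcmp(name + len - slen, suffix)) {
            total += stats->counters[i].value;
        }
    }

    return total;
}

From the shell, the same information is available directly with the
ovs-vsctl command shown in the commit message above.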