
[ovs-dev,v4] Detailed packet drop statistics per dpdk and vhostuser ports

Message ID: 1563283949-25003-1-git-send-email-sriram.v@altencalsoftlabs.com
State: Changes Requested
Delegated to: Ilya Maximets
Series: [ovs-dev,v4] Detailed packet drop statistics per dpdk and vhostuser ports

Commit Message

Sriram Vatala via dev July 16, 2019, 1:32 p.m. UTC
OVS may be unable to transmit packets for multiple reasons and
today there is a single counter to track packets dropped due to
any of those reasons. The most common reason is that a VM is
unable to read packets fast enough causing the vhostuser port
transmit queue on the OVS side to become full. This manifests
as a problem with VNFs not receiving all packets. Having a
separate drop counter to track packets dropped because the
transmit queue is full will clearly indicate that the problem
is on the VM side and not in OVS. Similarly, maintaining separate
counters for each possible drop reason helps pinpoint the actual
cause of packet drops.

This patch adds counters to track packets dropped due to all
possible reasons, and these counters are displayed along with
other stats in the "ovs-vsctl get interface <iface> statistics"
output. The detailed stats will be available for both dpdk and
vhostuser ports.
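
For illustration, the new counters would appear alongside the
existing entries in the interface statistics column; the port name
and the values shown below are hypothetical:

    $ ovs-vsctl get interface dpdk0 statistics
    {..., rx_qos_drops=0, tx_failure_drops=1024,
     tx_mtu_exceeded_drops=0, tx_qos_drops=0, ...}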

The details of the new counters are as follows:

tx_failure_drops : Sometimes the DPDK physical/vHost port transmit
API fails to send some or all of the packets. These untransmitted
packets are dropped. The most likely reason for this is a transmit
queue overrun. Besides transmit queue overrun, there are other,
less likely reasons such as an invalid queue id.

tx_mtu_exceeded_drops : These are the packets dropped because they
exceed the device MTU (i.e. packet length > max device MTU).

tx_qos_drops/rx_qos_drops : These are the packets dropped due
to transmission/reception rate exceeding the configured
Egress/Ingress policer rate on the interface.
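
As a rough illustration of how a send path can attribute drops to
each stage, here is a simplified sketch with made-up names; it is
not the actual OVS code (see the patch below for the real changes):

    #include <stdint.h>

    struct pkt;                               /* opaque packet type */

    struct example_stats {
        uint64_t tx_mtu_drops;
        uint64_t tx_qos_drops;
        uint64_t tx_failure_drops;
        uint64_t tx_dropped;
    };

    struct example_dev {
        struct example_stats stats;
    };

    /* Hypothetical helpers: each returns how many packets survive. */
    int example_filter_oversized(struct example_dev *, struct pkt **, int);
    int example_qos_run(struct example_dev *, struct pkt **, int);
    int example_tx_burst(struct example_dev *, struct pkt **, int);

    void
    example_send(struct example_dev *dev, struct pkt **pkts, int n_pkts)
    {
        int cnt = n_pkts;
        int mtu_drops, qos_drops, failure_drops, before_qos, sent;

        cnt = example_filter_oversized(dev, pkts, cnt); /* drop pkts > MTU */
        mtu_drops = n_pkts - cnt;

        before_qos = cnt;
        cnt = example_qos_run(dev, pkts, cnt);          /* egress policer */
        qos_drops = before_qos - cnt;

        sent = example_tx_burst(dev, pkts, cnt);        /* device/vhost tx */
        failure_drops = cnt - sent;

        /* Per-reason counters plus the existing aggregate counter. */
        dev->stats.tx_mtu_drops     += mtu_drops;
        dev->stats.tx_qos_drops     += qos_drops;
        dev->stats.tx_failure_drops += failure_drops;
        dev->stats.tx_dropped       += mtu_drops + qos_drops + failure_drops;
    }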

Signed-off-by: Sriram Vatala <sriram.v@altencalsoftlabs.com>
---
 Documentation/topics/dpdk/bridge.rst               | 28 +++++++++++
 include/openvswitch/netdev.h                       |  9 ++++
 lib/netdev-dpdk.c                                  | 56 ++++++++++++++++++----
 utilities/bugtool/automake.mk                      |  3 +-
 utilities/bugtool/ovs-bugtool-get-iface-stats      | 25 ++++++++++
 .../bugtool/plugins/network-status/openvswitch.xml |  1 +
 vswitchd/bridge.c                                  |  6 ++-
 7 files changed, 118 insertions(+), 10 deletions(-)
 create mode 100755 utilities/bugtool/ovs-bugtool-get-iface-stats

Comments

0-day Robot July 16, 2019, 2:01 p.m. UTC | #1
Bleep bloop.  Greetings Sriram Vatala via dev, I am a robot and I have tried out your patch.
Thanks for your contribution.

I encountered some error that I wasn't expecting.  See the details below.


checkpatch:
WARNING: Line is 138 characters long (recommended limit is 79)
#347 FILE: utilities/bugtool/plugins/network-status/openvswitch.xml:37:
    <command label="ovs-vsctl-get-interface-statistics" filters="ovs">/usr/share/openvswitch/scripts/ovs-bugtool-get-iface-stats</command>

Lines checked: 370, Warnings: 1, Errors: 0


Please check this out.  If you feel there has been an error, please email aconole@bytheb.org

Thanks,
0-day Robot
Ben Pfaff July 16, 2019, 7:51 p.m. UTC | #2
On Tue, Jul 16, 2019 at 07:02:29PM +0530, Sriram Vatala via dev wrote:
> OVS may be unable to transmit packets for multiple reasons and
> today there is a single counter to track packets dropped due to
> any of those reasons. The most common reason is that a VM is
> unable to read packets fast enough causing the vhostuser port
> transmit queue on the OVS side to become full. This manifests
> as a problem with VNFs not receiving all packets. Having a
> separate drop counter to track packets dropped because the
> transmit queue is full will clearly indicate that the problem
> is on the VM side and not in OVS. Similarly maintaining separate
> counters for all possible drops helps in indicating sensible
> cause for packet drops.
> 
> This patch adds counters to track packets dropped due to all
> possible reasons and these counters are displayed along with
> other stats in "ovs-vsctl get interface <iface> statistics"
> command. The detailed stats will be available for both dpdk and
> vhostuser ports.

Did you consider using netdev_get_custom_stats() for these counters?  I
do not know whether they are likely to be implemented by other network
devices, and custom stats are an appropriate way to implement
specialized statistics.

> Following are the details of the new counters :
> 
> tx_failed_drops : Sometimes DPDK physical/vHost port transmit
> API fails to send all/some packets. These untransmited packets
> are dropped.The most likely reason for this to happen is
> because of transmit queue overrun. Besides transmit queue
> overrun, there are other unlikely reasons such as invalid
> queue id etc.
> 
> tx_mtu_exceeded_drops : These are the packets dropped due
> to MTU mismatch (i.e Pkt len > Max Dev MTU).
> 
> tx_qos_drops/rx_qos_drops : These are the packets dropped due
> to transmission/reception rate exceeding the configured
> Egress/Ingress policer rate on the interface.

It would make sense to include the above descriptions in
vswitchd/vswitch.xml alongside the other statistics under the
"Statistics" group for the Interface table.

Thanks for working to make OVS better!
Sriram Vatala via dev July 18, 2019, 5:49 a.m. UTC | #3
Hi Ben,

Thanks for reviewing the patch. As per your suggestion, I will use
netdev_get_custom_stats() to fetch the new statistics counters and
document them in vswitchd/vswitch.xml.
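
For illustration only, below is a minimal sketch of how these
counters could be exposed through a get_custom_stats callback. It
assumes the netdev_custom_stats layout from lib/netdev.h (a count
plus an array of named counters); the function name and the exact
wiring are placeholders, not the actual v5 implementation:

    /* Sketch only: expose the detailed drop counters as custom stats. */
    static int
    netdev_dpdk_get_sw_custom_stats(const struct netdev *netdev,
                                    struct netdev_custom_stats *cstats)
    {
        struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
        const char *names[] = {
            "tx_failure_drops", "tx_mtu_exceeded_drops",
            "tx_qos_drops", "rx_qos_drops",
        };
        uint64_t values[ARRAY_SIZE(names)];
        uint16_t i;

        rte_spinlock_lock(&dev->stats_lock);
        values[0] = dev->stats.tx_failure_drops;
        values[1] = dev->stats.tx_mtu_drops;
        values[2] = dev->stats.tx_qos_drops;
        values[3] = dev->stats.rx_qos_drops;
        rte_spinlock_unlock(&dev->stats_lock);

        cstats->size = ARRAY_SIZE(names);
        cstats->counters = xcalloc(cstats->size, sizeof *cstats->counters);
        for (i = 0; i < cstats->size; i++) {
            ovs_strlcpy(cstats->counters[i].name, names[i],
                        NETDEV_CUSTOM_STATS_NAME_SIZE);
            cstats->counters[i].value = values[i];
        }
        return 0;
    }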

Thanks,
Sriram.

-----Original Message-----
From: Ben Pfaff <blp@ovn.org> 
Sent: 17 July 2019 01:21
To: Sriram Vatala <sriram.v@altencalsoftlabs.com>
Cc: ovs-dev@openvswitch.org
Subject: Re: [ovs-dev] [PATCH v4] Detailed packet drop statistics per dpdk
and vhostuser ports

On Tue, Jul 16, 2019 at 07:02:29PM +0530, Sriram Vatala via dev wrote:
> OVS may be unable to transmit packets for multiple reasons and today 
> there is a single counter to track packets dropped due to any of those 
> reasons. The most common reason is that a VM is unable to read packets 
> fast enough causing the vhostuser port transmit queue on the OVS side 
> to become full. This manifests as a problem with VNFs not receiving 
> all packets. Having a separate drop counter to track packets dropped 
> because the transmit queue is full will clearly indicate that the 
> problem is on the VM side and not in OVS. Similarly maintaining 
> separate counters for all possible drops helps in indicating sensible 
> cause for packet drops.
> 
> This patch adds counters to track packets dropped due to all possible 
> reasons and these counters are displayed along with other stats in 
> "ovs-vsctl get interface <iface> statistics"
> command. The detailed stats will be available for both dpdk and 
> vhostuser ports.

Did you consider using netdev_get_custom_stats() for these counters?  I do
not know whether they are likely to be implemented by other network devices,
and custom stats are an appropriate way to implement specialized statistics.

> Following are the details of the new counters :
> 
> tx_failed_drops : Sometimes DPDK physical/vHost port transmit API 
> fails to send all/some packets. These untransmited packets are 
> dropped.The most likely reason for this to happen is because of 
> transmit queue overrun. Besides transmit queue overrun, there are 
> other unlikely reasons such as invalid queue id etc.
> 
> tx_mtu_exceeded_drops : These are the packets dropped due to MTU 
> mismatch (i.e Pkt len > Max Dev MTU).
> 
> tx_qos_drops/rx_qos_drops : These are the packets dropped due to 
> transmission/reception rate exceeding the configured Egress/Ingress 
> policer rate on the interface.

It would make sense to include the above descriptions in
vswitchd/vswitch.xml alongside the other statistics under the "Statistics"
group for the Interface table.

Thanks for working to make OVS better!

Patch

diff --git a/Documentation/topics/dpdk/bridge.rst b/Documentation/topics/dpdk/bridge.rst
index a3ed926..43080c2 100644
--- a/Documentation/topics/dpdk/bridge.rst
+++ b/Documentation/topics/dpdk/bridge.rst
@@ -74,6 +74,34 @@  OpenFlow14`` option::
 
     $ ovs-ofctl -O OpenFlow14 dump-ports br0
 
+Stats counter to track packet drops at Bridge port level
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Detailed statistics counters for dropped transmit and dropped
+receive packets are implemented and supported only for DPDK
+physical and vHost ports.
+
+The details of these counters are as follows:
+
+tx_failure_drops : Sometimes the DPDK transmit API for physical/vHost
+ports fails to transmit some or all of the packets in the buffer
+that is passed to it as an input argument. These untransmitted
+packets are dropped. The most likely reason for this is that the
+transmit queue is full. There are other, less likely reasons,
+such as an invalid Tx queue id.
+
+tx_mtu_exceeded_drops : These are the packets dropped because they
+exceed the device MTU (i.e. packet length > max device MTU).
+
+tx_qos_drops/rx_qos_drops : These are the packets dropped due to the
+transmission/reception rate exceeding the configured Egress/Ingress
+policer rate on the interface.
+
+These statistics counters can be viewed with the following commands::
+
+    $ ovs-vsctl get interface <iface> statistics
+    $ ovs-vsctl list interface
+
 EMC Insertion Probability
 -------------------------
 
diff --git a/include/openvswitch/netdev.h b/include/openvswitch/netdev.h
index 0c10f7b..7851736 100644
--- a/include/openvswitch/netdev.h
+++ b/include/openvswitch/netdev.h
@@ -61,6 +61,15 @@  struct netdev_stats {
     uint64_t tx_heartbeat_errors;
     uint64_t tx_window_errors;
 
+    /* Detailed receive drops. */
+    uint64_t rx_qos_drops;      /* rx rate exceeded conf'd qos rate */
+
+    /* Detailed transmit drops. */
+    uint64_t tx_failure_drops;  /* Dpdk tx failure, probably tx
+                                 * queue is full. */
+    uint64_t tx_qos_drops;      /* tx rate exceeded conf'd qos rate */
+    uint64_t tx_mtu_drops;      /* tx pkt len exceeded max dev MTU */
+
     /* Extended statistics based on RFC2819. */
     uint64_t rx_1_to_64_packets;
     uint64_t rx_65_to_127_packets;
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 4805783..3f22701 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -68,6 +68,7 @@ 
 #include "uuid.h"
 
 enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
+enum {QOS_DROPS, MTU_DROPS, TX_FAILURE_DROPS, MAX_DROPS};
 
 VLOG_DEFINE_THIS_MODULE(netdev_dpdk);
 static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
@@ -2170,6 +2171,7 @@  netdev_dpdk_vhost_update_rx_counters(struct netdev_stats *stats,
 
     stats->rx_packets += count;
     stats->rx_dropped += dropped;
+    stats->rx_qos_drops += dropped;
     for (i = 0; i < count; i++) {
         packet = packets[i];
         packet_size = dp_packet_size(packet);
@@ -2290,6 +2292,7 @@  netdev_dpdk_rxq_recv(struct netdev_rxq *rxq, struct dp_packet_batch *batch,
     if (OVS_UNLIKELY(dropped)) {
         rte_spinlock_lock(&dev->stats_lock);
         dev->stats.rx_dropped += dropped;
+        dev->stats.rx_qos_drops += dropped;
         rte_spinlock_unlock(&dev->stats_lock);
     }
 
@@ -2352,13 +2355,21 @@  static inline void
 netdev_dpdk_vhost_update_tx_counters(struct netdev_stats *stats,
                                      struct dp_packet **packets,
                                      int attempted,
-                                     int dropped)
+                                     int *pkt_drops)
 {
     int i;
+    int dropped = 0;
+
+    for (i = 0; i < MAX_DROPS; i++) {
+        dropped += pkt_drops[i];
+    }
     int sent = attempted - dropped;
 
     stats->tx_packets += sent;
     stats->tx_dropped += dropped;
+    stats->tx_mtu_drops += pkt_drops[MTU_DROPS];
+    stats->tx_qos_drops += pkt_drops[QOS_DROPS];
+    stats->tx_failure_drops += pkt_drops[TX_FAILURE_DROPS];
 
     for (i = 0; i < sent; i++) {
         stats->tx_bytes += dp_packet_size(packets[i]);
@@ -2376,6 +2387,7 @@  __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
     int i, retries = 0;
     int max_retries = VHOST_ENQ_RETRY_MIN;
     int vid = netdev_dpdk_get_vid(dev);
+    int pkt_drops[MAX_DROPS] = {0};
 
     qid = dev->tx_q[qid % netdev->n_txq].map;
 
@@ -2390,9 +2402,11 @@  __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
     rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
 
     cnt = netdev_dpdk_filter_packet_len(dev, cur_pkts, cnt);
+    pkt_drops[MTU_DROPS] = total_pkts - cnt;
+    pkt_drops[QOS_DROPS] = cnt;
     /* Check has QoS has been configured for the netdev */
     cnt = netdev_dpdk_qos_run(dev, cur_pkts, cnt, true);
-    dropped = total_pkts - cnt;
+    pkt_drops[QOS_DROPS] -= cnt;
 
     do {
         int vhost_qid = qid * VIRTIO_QNUM + VIRTIO_RXQ;
@@ -2419,10 +2433,13 @@  __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
 
     rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
 
+    pkt_drops[TX_FAILURE_DROPS] = cnt;
     rte_spinlock_lock(&dev->stats_lock);
+    dropped = dev->stats.tx_dropped;
     netdev_dpdk_vhost_update_tx_counters(&dev->stats, pkts, total_pkts,
-                                         cnt + dropped);
+                                         pkt_drops);
     dev->tx_retries += MIN(retries, max_retries);
+    dropped = dev->stats.tx_dropped - dropped;
     rte_spinlock_unlock(&dev->stats_lock);
 
 out:
@@ -2447,12 +2464,15 @@  dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dp_packet_batch *batch)
     struct rte_mbuf *pkts[PKT_ARRAY_SIZE];
     uint32_t cnt = batch_cnt;
     uint32_t dropped = 0;
+    uint32_t mtu_drops = 0;
+    uint32_t qos_drops = 0;
+    uint32_t tx_failed = 0;
 
     if (dev->type != DPDK_DEV_VHOST) {
         /* Check if QoS has been configured for this netdev. */
         cnt = netdev_dpdk_qos_run(dev, (struct rte_mbuf **) batch->packets,
                                   batch_cnt, false);
-        dropped += batch_cnt - cnt;
+        qos_drops = batch_cnt - cnt;
     }
 
     uint32_t txcnt = 0;
@@ -2465,7 +2485,7 @@  dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dp_packet_batch *batch)
             VLOG_WARN_RL(&rl, "Too big size %u max_packet_len %d",
                          size, dev->max_packet_len);
 
-            dropped++;
+            mtu_drops++;
             continue;
         }
 
@@ -2488,13 +2508,17 @@  dpdk_do_tx_copy(struct netdev *netdev, int qid, struct dp_packet_batch *batch)
             __netdev_dpdk_vhost_send(netdev, qid, (struct dp_packet **) pkts,
                                      txcnt);
         } else {
-            dropped += netdev_dpdk_eth_tx_burst(dev, qid, pkts, txcnt);
+            tx_failed = netdev_dpdk_eth_tx_burst(dev, qid, pkts, txcnt);
         }
     }
 
+    dropped += mtu_drops + qos_drops + tx_failed;
     if (OVS_UNLIKELY(dropped)) {
         rte_spinlock_lock(&dev->stats_lock);
         dev->stats.tx_dropped += dropped;
+        dev->stats.tx_mtu_drops += mtu_drops;
+        dev->stats.tx_qos_drops += qos_drops;
+        dev->stats.tx_failure_drops += tx_failed;
         rte_spinlock_unlock(&dev->stats_lock);
     }
 }
@@ -2536,18 +2560,25 @@  netdev_dpdk_send__(struct netdev_dpdk *dev, int qid,
         dp_packet_delete_batch(batch, true);
     } else {
         int tx_cnt, dropped;
+        int mtu_drops, tx_failed, qos_drops;
         int batch_cnt = dp_packet_batch_size(batch);
         struct rte_mbuf **pkts = (struct rte_mbuf **) batch->packets;
 
         tx_cnt = netdev_dpdk_filter_packet_len(dev, pkts, batch_cnt);
+        mtu_drops = batch_cnt - tx_cnt;
+        qos_drops = tx_cnt;
         tx_cnt = netdev_dpdk_qos_run(dev, pkts, tx_cnt, true);
-        dropped = batch_cnt - tx_cnt;
+        qos_drops -= tx_cnt;
 
-        dropped += netdev_dpdk_eth_tx_burst(dev, qid, pkts, tx_cnt);
+        tx_failed = netdev_dpdk_eth_tx_burst(dev, qid, pkts, tx_cnt);
 
+        dropped = mtu_drops + qos_drops + tx_failed;
         if (OVS_UNLIKELY(dropped)) {
             rte_spinlock_lock(&dev->stats_lock);
             dev->stats.tx_dropped += dropped;
+            dev->stats.tx_mtu_drops += mtu_drops;
+            dev->stats.tx_qos_drops += qos_drops;
+            dev->stats.tx_failure_drops += tx_failed;
             rte_spinlock_unlock(&dev->stats_lock);
         }
     }
@@ -2661,6 +2692,11 @@  netdev_dpdk_vhost_get_stats(const struct netdev *netdev,
     stats->rx_errors = dev->stats.rx_errors;
     stats->rx_length_errors = dev->stats.rx_length_errors;
 
+    stats->tx_mtu_drops = dev->stats.tx_mtu_drops;
+    stats->tx_qos_drops = dev->stats.tx_qos_drops;
+    stats->tx_failure_drops = dev->stats.tx_failure_drops;
+    stats->rx_qos_drops = dev->stats.rx_qos_drops;
+
     stats->rx_1_to_64_packets = dev->stats.rx_1_to_64_packets;
     stats->rx_65_to_127_packets = dev->stats.rx_65_to_127_packets;
     stats->rx_128_to_255_packets = dev->stats.rx_128_to_255_packets;
@@ -2796,6 +2832,10 @@  out:
     rte_spinlock_lock(&dev->stats_lock);
     stats->tx_dropped = dev->stats.tx_dropped;
     stats->rx_dropped = dev->stats.rx_dropped;
+    stats->tx_mtu_drops = dev->stats.tx_mtu_drops;
+    stats->tx_qos_drops = dev->stats.tx_qos_drops;
+    stats->tx_failure_drops = dev->stats.tx_failure_drops;
+    stats->rx_qos_drops = dev->stats.rx_qos_drops;
     rte_spinlock_unlock(&dev->stats_lock);
 
     /* These are the available DPDK counters for packets not received due to
diff --git a/utilities/bugtool/automake.mk b/utilities/bugtool/automake.mk
index 18fa347..9657468 100644
--- a/utilities/bugtool/automake.mk
+++ b/utilities/bugtool/automake.mk
@@ -22,7 +22,8 @@  bugtool_scripts = \
 	utilities/bugtool/ovs-bugtool-ovs-bridge-datapath-type \
 	utilities/bugtool/ovs-bugtool-ovs-vswitchd-threads-affinity \
 	utilities/bugtool/ovs-bugtool-qos-configs \
-	utilities/bugtool/ovs-bugtool-get-dpdk-nic-numa
+	utilities/bugtool/ovs-bugtool-get-dpdk-nic-numa \
+	utilities/bugtool/ovs-bugtool-get-iface-stats
 
 scripts_SCRIPTS += $(bugtool_scripts)
 
diff --git a/utilities/bugtool/ovs-bugtool-get-iface-stats b/utilities/bugtool/ovs-bugtool-get-iface-stats
new file mode 100755
index 0000000..0fe175e
--- /dev/null
+++ b/utilities/bugtool/ovs-bugtool-get-iface-stats
@@ -0,0 +1,25 @@ 
+#! /bin/bash
+
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of version 2.1 of the GNU Lesser General
+# Public License as published by the Free Software Foundation.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# Copyright (C) 2019 Ericsson AB
+
+for bridge in `ovs-vsctl -- --real list-br`
+do
+    echo -e "\nBridge : ${bridge}\n"
+    for iface in `ovs-vsctl list-ifaces ${bridge}`
+    do
+        echo -e "iface : ${iface}"
+        ovs-vsctl get interface ${iface} statistics
+        echo -e "\n"
+    done
+    echo -e "iface : ${bridge}"
+    ovs-vsctl get interface ${bridge} statistics
+done
diff --git a/utilities/bugtool/plugins/network-status/openvswitch.xml b/utilities/bugtool/plugins/network-status/openvswitch.xml
index d39867c..f8c4ff0 100644
--- a/utilities/bugtool/plugins/network-status/openvswitch.xml
+++ b/utilities/bugtool/plugins/network-status/openvswitch.xml
@@ -34,6 +34,7 @@ 
     <command label="ovs-appctl-dpctl-dump-flows-netdev" filters="ovs" repeat="2">ovs-appctl dpctl/dump-flows netdev@ovs-netdev</command>
     <command label="ovs-appctl-dpctl-dump-flows-system" filters="ovs" repeat="2">ovs-appctl dpctl/dump-flows system@ovs-system</command>
     <command label="ovs-appctl-dpctl-show-s" filters="ovs" repeat="2">ovs-appctl dpctl/show -s</command>
+    <command label="ovs-vsctl-get-interface-statistics" filters="ovs">/usr/share/openvswitch/scripts/ovs-bugtool-get-iface-stats</command>
     <command label="ovs-ofctl-show" filters="ovs">/usr/share/openvswitch/scripts/ovs-bugtool-ovs-ofctl-loop-over-bridges "show"</command>
     <command label="ovs-ofctl-dump-flows" filters="ovs" repeat="2">/usr/share/openvswitch/scripts/ovs-bugtool-ovs-ofctl-loop-over-bridges "dump-flows"</command>
     <command label="ovs-ofctl-dump-ports" filters="ovs" repeat="2">/usr/share/openvswitch/scripts/ovs-bugtool-ovs-ofctl-loop-over-bridges "dump-ports"</command>
diff --git a/vswitchd/bridge.c b/vswitchd/bridge.c
index 2976771..bddb5d1 100644
--- a/vswitchd/bridge.c
+++ b/vswitchd/bridge.c
@@ -2408,7 +2408,11 @@  iface_refresh_stats(struct iface *iface)
     IFACE_STAT(rx_undersized_errors,    "rx_undersized_errors")     \
     IFACE_STAT(rx_oversize_errors,      "rx_oversize_errors")       \
     IFACE_STAT(rx_fragmented_errors,    "rx_fragmented_errors")     \
-    IFACE_STAT(rx_jabber_errors,        "rx_jabber_errors")
+    IFACE_STAT(rx_jabber_errors,        "rx_jabber_errors")         \
+    IFACE_STAT(rx_qos_drops,            "rx_qos_drops")             \
+    IFACE_STAT(tx_failure_drops,        "tx_failure_drops")         \
+    IFACE_STAT(tx_qos_drops,            "tx_qos_drops")             \
+    IFACE_STAT(tx_mtu_drops,            "tx_mtu_exceeded_drops")
 
 #define IFACE_STAT(MEMBER, NAME) + 1
     enum { N_IFACE_STATS = IFACE_STATS };