[ovs-dev,v3,1/3] dpif-netdev: Make pmd-rxq-show time configurable.

Message ID 20221130173954.1043885-1-ktraynor@redhat.com
State Accepted
Commit 526230bfab09095cf0214c7033382463b9d506cf
Series [ovs-dev,v3,1/3] dpif-netdev: Make pmd-rxq-show time configurable.

Checks

Context Check Description
ovsrobot/apply-robot success apply and check: success
ovsrobot/github-robot-_Build_and_Test success github build: passed
ovsrobot/intel-ovs-compilation success test: success

Commit Message

Kevin Traynor Nov. 30, 2022, 5:39 p.m. UTC
pmd-rxq-show shows the Rx queue to pmd assignments as well as the
pmd usage of each Rx queue.

Up until now, pmd usage was shown for each Rx queue over a tail
length of 60 seconds, as this is the value used during rebalance
to avoid any spike effects.

When debugging or tuning, it is also convenient to display the
pmd usage of an Rx queue over a shorter time frame, so any changes
in config or traffic that impact pmd usage can be evaluated more
quickly.

A parameter is added that allows the pmd usage stats in pmd-rxq-show
to be shown for a shorter time frame. Values are rounded up to the
nearest 5 seconds, as that is the measurement granularity, and the
value actually used is displayed. e.g.

$ ovs-appctl dpif-netdev/pmd-rxq-show -secs 5
 Displaying last 5 seconds pmd usage %
 pmd thread numa_id 0 core_id 4:
   isolated : false
   port: dpdk0            queue-id:  0 (enabled)   pmd usage: 95 %
   overhead:  4 %

The default time frame has not changed, and the maximum value
is limited to the maximum stored tail length (60 seconds).
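
As an illustration (not part of the patch), here is a minimal
standalone sketch of the rounding and clamping described above,
reusing the macro values that the patch introduces:

    #include <stdio.h>

    /* Values from the patch: 5-second measurement intervals,
     * 12 of which are stored. */
    #define PMD_INTERVAL_LEN     5000000LL   /* Microseconds. */
    #define INTERVAL_USEC_TO_SEC 1000000LL
    #define PMD_INTERVAL_MAX     12

    /* Round X up to the next multiple of Y, as OVS's ROUND_UP does. */
    #define ROUND_UP(X, Y) ((((X) + (Y) - 1) / (Y)) * (Y))

    int
    main(void)
    {
        long long granularity = PMD_INTERVAL_LEN / INTERVAL_USEC_TO_SEC;
        long long max_secs = granularity * PMD_INTERVAL_MAX;  /* 60s. */
        long long requested[] = {0, 1, 5, 6, 51, 56, 61};

        for (size_t i = 0; i < sizeof requested / sizeof *requested; i++) {
            long long secs = requested[i];

            if (!secs || secs > max_secs) {
                secs = max_secs;
            } else {
                secs = ROUND_UP(secs, granularity);
            }
            printf("-secs %lld -> displaying last %lld seconds\n",
                   requested[i], secs);
        }
        return 0;
    }

The outputs (0 and 61 clamp to 60; 1 rounds to 5; 6 to 10; 51 to 55;
56 to 60) match the unit tests added to tests/pmd.at below.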

Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
---
v2:
- fixed comments from David's review
- Squashed new unit tests into this patch
- docs can be squashed later
---
 lib/dpif-netdev-private-thread.h |  2 +-
 lib/dpif-netdev.c                | 98 ++++++++++++++++++++++++--------
 tests/pmd.at                     | 62 ++++++++++++++++++++
 3 files changed, 138 insertions(+), 24 deletions(-)

Comments

David Marchand Dec. 2, 2022, 12:29 p.m. UTC | #1
On Wed, Nov 30, 2022 at 6:40 PM Kevin Traynor <ktraynor@redhat.com> wrote:
>
> [...]
>
> Signed-off-by: Kevin Traynor <ktraynor@redhat.com>

Reviewed-by: David Marchand <david.marchand@redhat.com>
Ilya Maximets Dec. 21, 2022, 8:42 p.m. UTC | #2
On 12/2/22 13:29, David Marchand wrote:
> On Wed, Nov 30, 2022 at 6:40 PM Kevin Traynor <ktraynor@redhat.com> wrote:
>>
>> [...]
>>
>> Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
> 
> Reviewed-by: David Marchand <david.marchand@redhat.com>
> 

Thanks, Kevin and David!

I fixed a typo in the name of the second patch and applied the series.

Best regards, Ilya Maximets.
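
The core of the diff below is a small circular-buffer helper,
get_interval_values(), that sums the newest N interval samples.
Here is a simplified standalone sketch of the same technique
(function and variable names are illustrative, and the OVS atomic
types are replaced with plain integers for brevity; this is not
the patch code itself):

    #include <inttypes.h>
    #include <stdio.h>

    #define PMD_INTERVAL_MAX 12

    /* Sums the 'num_to_read' newest values in a circular buffer.
     * 'cur_idx' is the slot the next write would use, so reading
     * starts one slot behind it and walks backwards, wrapping at
     * index 0. */
    static uint64_t
    sum_newest_intervals(const uint64_t *source, unsigned int cur_idx,
                         int num_to_read)
    {
        unsigned int i = cur_idx % PMD_INTERVAL_MAX;
        uint64_t total = 0;

        for (int read = 0; read < num_to_read; read++) {
            i = i ? i - 1 : PMD_INTERVAL_MAX - 1;
            total += source[i];
        }
        return total;
    }

    int
    main(void)
    {
        /* Twelve samples; the next write would land at index 3. */
        uint64_t cycles[PMD_INTERVAL_MAX] = {
            10, 20, 30, 0, 0, 0, 0, 0, 0, 0, 0, 40,
        };

        /* Newest two samples are at indices 2 and 1: 30 + 20 = 50. */
        printf("%" PRIu64 "\n", sum_newest_intervals(cycles, 3, 2));
        return 0;
    }

Reading backwards from the write index means a shorter requested
time frame naturally covers only the most recent samples, with
wrap-around handled when the index reaches 0.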

Patch

diff --git a/lib/dpif-netdev-private-thread.h b/lib/dpif-netdev-private-thread.h
index 4472b199d..1ec3cd794 100644
--- a/lib/dpif-netdev-private-thread.h
+++ b/lib/dpif-netdev-private-thread.h
@@ -115,5 +115,5 @@  struct dp_netdev_pmd_thread {
 
     /* Write index for 'busy_cycles_intrvl'. */
-    unsigned int intrvl_idx;
+    atomic_count intrvl_idx;
     /* Busy cycles in last PMD_INTERVAL_MAX intervals. */
     atomic_ullong *busy_cycles_intrvl;
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 2c08a71c8..74d265a0b 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -161,9 +161,11 @@  static struct odp_support dp_netdev_support = {
 /* Time in microseconds of the interval in which rxq processing cycles used
  * in rxq to pmd assignments is measured and stored. */
-#define PMD_INTERVAL_LEN 10000000LL
+#define PMD_INTERVAL_LEN 5000000LL
+/* For converting PMD_INTERVAL_LEN to secs. */
+#define INTERVAL_USEC_TO_SEC 1000000LL
 
 /* Number of intervals for which cycles are stored
  * and used during rxq to pmd assignment. */
-#define PMD_INTERVAL_MAX 6
+#define PMD_INTERVAL_MAX 12
 
 /* Time in microseconds to try RCU quiescing. */
@@ -429,5 +431,5 @@  struct dp_netdev_rxq {
                                           queue doesn't need to be pinned to a
                                           particular core. */
-    unsigned intrvl_idx;               /* Write index for 'cycles_intrvl'. */
+    atomic_count intrvl_idx;           /* Write index for 'cycles_intrvl'. */
     struct dp_netdev_pmd_thread *pmd;  /* pmd thread that polls this queue. */
     bool is_vhost;                     /* Is rxq of a vhost port. */
@@ -616,4 +618,7 @@  dp_netdev_rxq_set_intrvl_cycles(struct dp_netdev_rxq *rx,
 static uint64_t
 dp_netdev_rxq_get_intrvl_cycles(struct dp_netdev_rxq *rx, unsigned idx);
+static uint64_t
+get_interval_values(atomic_ullong *source, atomic_count *cur_idx,
+                    int num_to_read);
 static void
 dpif_netdev_xps_revalidate_pmd(const struct dp_netdev_pmd_thread *pmd,
@@ -870,5 +875,6 @@  sorted_poll_list(struct dp_netdev_pmd_thread *pmd, struct rxq_poll **list,
 
 static void
-pmd_info_show_rxq(struct ds *reply, struct dp_netdev_pmd_thread *pmd)
+pmd_info_show_rxq(struct ds *reply, struct dp_netdev_pmd_thread *pmd,
+                  int secs)
 {
     if (pmd->core_id != NON_PMD_CORE_ID) {
@@ -878,4 +884,5 @@  pmd_info_show_rxq(struct ds *reply, struct dp_netdev_pmd_thread *pmd)
         uint64_t busy_cycles = 0;
         uint64_t total_rxq_proc_cycles = 0;
+        unsigned int intervals;
 
         ds_put_format(reply,
@@ -889,13 +896,12 @@  pmd_info_show_rxq(struct ds *reply, struct dp_netdev_pmd_thread *pmd)
         /* Get the total pmd cycles for an interval. */
         atomic_read_relaxed(&pmd->intrvl_cycles, &total_cycles);
+        /* Calculate how many intervals are to be used. */
+        intervals = DIV_ROUND_UP(secs,
+                                 PMD_INTERVAL_LEN / INTERVAL_USEC_TO_SEC);
         /* Estimate the cycles to cover all intervals. */
-        total_cycles *= PMD_INTERVAL_MAX;
-
-        for (int j = 0; j < PMD_INTERVAL_MAX; j++) {
-            uint64_t cycles;
-
-            atomic_read_relaxed(&pmd->busy_cycles_intrvl[j], &cycles);
-            busy_cycles += cycles;
-        }
+        total_cycles *= intervals;
+        busy_cycles = get_interval_values(pmd->busy_cycles_intrvl,
+                                          &pmd->intrvl_idx,
+                                          intervals);
         if (busy_cycles > total_cycles) {
             busy_cycles = total_cycles;
@@ -907,7 +913,7 @@  pmd_info_show_rxq(struct ds *reply, struct dp_netdev_pmd_thread *pmd)
             uint64_t rxq_proc_cycles = 0;
 
-            for (int j = 0; j < PMD_INTERVAL_MAX; j++) {
-                rxq_proc_cycles += dp_netdev_rxq_get_intrvl_cycles(rxq, j);
-            }
+            rxq_proc_cycles = get_interval_values(rxq->cycles_intrvl,
+                                                  &rxq->intrvl_idx,
+                                                  intervals);
             total_rxq_proc_cycles += rxq_proc_cycles;
             ds_put_format(reply, "  port: %-16s  queue-id: %2d", name,
@@ -1423,4 +1429,8 @@  dpif_netdev_pmd_info(struct unixctl_conn *conn, int argc, const char *argv[],
     bool filter_on_pmd = false;
     size_t n;
+    unsigned int secs = 0;
+    unsigned long long max_secs = (PMD_INTERVAL_LEN * PMD_INTERVAL_MAX)
+                                      / INTERVAL_USEC_TO_SEC;
+    bool first_show_rxq = true;
 
     ovs_mutex_lock(&dp_netdev_mutex);
@@ -1433,4 +1443,12 @@  dpif_netdev_pmd_info(struct unixctl_conn *conn, int argc, const char *argv[],
             argc -= 2;
             argv += 2;
+        } else if (type == PMD_INFO_SHOW_RXQ &&
+                       !strcmp(argv[1], "-secs") &&
+                       argc > 2) {
+            if (!str_to_uint(argv[2], 10, &secs)) {
+                secs = max_secs;
+            }
+            argc -= 2;
+            argv += 2;
         } else {
             dp = shash_find_data(&dp_netdevs, argv[1]);
@@ -1462,5 +1480,16 @@  dpif_netdev_pmd_info(struct unixctl_conn *conn, int argc, const char *argv[],
         }
         if (type == PMD_INFO_SHOW_RXQ) {
-            pmd_info_show_rxq(&reply, pmd);
+            if (first_show_rxq) {
+                if (!secs || secs > max_secs) {
+                    secs = max_secs;
+                } else {
+                    secs = ROUND_UP(secs,
+                                    PMD_INTERVAL_LEN / INTERVAL_USEC_TO_SEC);
+                }
+                ds_put_format(&reply, "Displaying last %u seconds "
+                              "pmd usage %%\n", secs);
+                first_show_rxq = false;
+            }
+            pmd_info_show_rxq(&reply, pmd, secs);
         } else if (type == PMD_INFO_CLEAR_STATS) {
             pmd_perf_stats_clear(&pmd->perf_stats);
@@ -1577,6 +1606,7 @@  dpif_netdev_init(void)
                              0, 3, dpif_netdev_pmd_info,
                              (void *)&clear_aux);
-    unixctl_command_register("dpif-netdev/pmd-rxq-show", "[-pmd core] [dp]",
-                             0, 3, dpif_netdev_pmd_info,
+    unixctl_command_register("dpif-netdev/pmd-rxq-show", "[-pmd core] "
+                             "[-secs secs] [dp]",
+                             0, 5, dpif_netdev_pmd_info,
                              (void *)&poll_aux);
     unixctl_command_register("dpif-netdev/pmd-perf-show",
@@ -5150,5 +5180,5 @@  dp_netdev_rxq_set_intrvl_cycles(struct dp_netdev_rxq *rx,
                                 unsigned long long cycles)
 {
-    unsigned int idx = rx->intrvl_idx++ % PMD_INTERVAL_MAX;
+    unsigned int idx = atomic_count_inc(&rx->intrvl_idx) % PMD_INTERVAL_MAX;
     atomic_store_relaxed(&rx->cycles_intrvl[idx], cycles);
 }
@@ -6890,4 +6920,7 @@  reload:
     atomic_count_init(&pmd->pmd_overloaded, 0);
 
+    pmd->intrvl_tsc_prev = 0;
+    atomic_store_relaxed(&pmd->intrvl_cycles, 0);
+
     if (!dpdk_attached) {
         dpdk_attached = dpdk_attach_thread(pmd->core_id);
@@ -6921,10 +6954,8 @@  reload:
     }
 
-    pmd->intrvl_tsc_prev = 0;
-    atomic_store_relaxed(&pmd->intrvl_cycles, 0);
     for (i = 0; i < PMD_INTERVAL_MAX; i++) {
         atomic_store_relaxed(&pmd->busy_cycles_intrvl[i], 0);
     }
-    pmd->intrvl_idx = 0;
+    atomic_count_set(&pmd->intrvl_idx, 0);
     cycles_counter_update(s);
 
@@ -9907,5 +9938,5 @@  dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
                                  curr_tsc - pmd->intrvl_tsc_prev);
         }
-        idx = pmd->intrvl_idx++ % PMD_INTERVAL_MAX;
+        idx = atomic_count_inc(&pmd->intrvl_idx) % PMD_INTERVAL_MAX;
         atomic_store_relaxed(&pmd->busy_cycles_intrvl[idx], tot_proc);
         pmd->intrvl_tsc_prev = curr_tsc;
@@ -9930,4 +9961,25 @@  dp_netdev_pmd_try_optimize(struct dp_netdev_pmd_thread *pmd,
 }
 
+/* Returns the sum of a specified number of interval values,
+ * counting backwards from newest to oldest. 'cur_idx' is where
+ * the next write will land, so reads start just behind it and
+ * wrap around at index 0. */
+static uint64_t
+get_interval_values(atomic_ullong *source, atomic_count *cur_idx,
+                    int num_to_read) {
+    unsigned int i;
+    uint64_t total = 0;
+
+    i = atomic_count_get(cur_idx) % PMD_INTERVAL_MAX;
+    for (int read = 0; read < num_to_read; read++) {
+        uint64_t interval_value;
+
+        i = i ? i - 1 : PMD_INTERVAL_MAX - 1;
+        atomic_read_relaxed(&source[i], &interval_value);
+        total += interval_value;
+    }
+    return total;
+}
+
 /* Insert 'rule' into 'cls'. */
 static void
diff --git a/tests/pmd.at b/tests/pmd.at
index 10879a349..ed90f88c4 100644
--- a/tests/pmd.at
+++ b/tests/pmd.at
@@ -71,4 +71,5 @@  CHECK_PMD_THREADS_CREATED()
 
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | sed SED_NUMA_CORE_PATTERN], [0], [dnl
+Displaying last 60 seconds pmd usage %
 pmd thread numa_id <cleared> core_id <cleared>:
   isolated : false
@@ -103,4 +104,5 @@  dummy@ovs-dummy: hit:0 missed:0
 
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | sed SED_NUMA_CORE_PATTERN], [0], [dnl
+Displaying last 60 seconds pmd usage %
 pmd thread numa_id <cleared> core_id <cleared>:
   isolated : false
@@ -135,4 +137,5 @@  dummy@ovs-dummy: hit:0 missed:0
 
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | sed SED_NUMA_CORE_PATTERN], [0], [dnl
+Displaying last 60 seconds pmd usage %
 pmd thread numa_id <cleared> core_id <cleared>:
   isolated : false
@@ -184,4 +187,5 @@  CHECK_PMD_THREADS_CREATED([1], [], [+$TMP])
 
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | sed SED_NUMA_CORE_PATTERN], [0], [dnl
+Displaying last 60 seconds pmd usage %
 pmd thread numa_id <cleared> core_id <cleared>:
   isolated : false
@@ -216,4 +220,5 @@  dummy@ovs-dummy: hit:0 missed:0
 
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | sed SED_NUMA_CORE_PATTERN], [0], [dnl
+Displaying last 60 seconds pmd usage %
 pmd thread numa_id <cleared> core_id <cleared>:
   isolated : false
@@ -281,4 +286,5 @@  OVS_WAIT_UNTIL([tail -n +$TMP ovs-vswitchd.log | grep "Performing pmd to rx queu
 
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show], [0], [dnl
+Displaying last 60 seconds pmd usage %
 pmd thread numa_id 1 core_id 1:
   isolated : false
@@ -303,4 +309,5 @@  OVS_WAIT_UNTIL([tail -n +$TMP ovs-vswitchd.log | grep "Performing pmd to rx queu
 
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show], [0], [dnl
+Displaying last 60 seconds pmd usage %
 pmd thread numa_id 1 core_id 1:
   isolated : false
@@ -323,4 +330,5 @@  OVS_WAIT_UNTIL([tail -n +$TMP ovs-vswitchd.log | grep "Performing pmd to rx queu
 
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show], [0], [dnl
+Displaying last 60 seconds pmd usage %
 pmd thread numa_id 1 core_id 1:
   isolated : false
@@ -344,4 +352,5 @@  CHECK_PMD_THREADS_CREATED([1], [1], [+$TMP])
 
 AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show], [0], [dnl
+Displaying last 60 seconds pmd usage %
 pmd thread numa_id 1 core_id 0:
   isolated : false
@@ -472,4 +481,57 @@  OVS_VSWITCHD_STOP
 AT_CLEANUP
 
+AT_SETUP([PMD - pmd-rxq-show pmd usage time])
+OVS_VSWITCHD_START([add-port br0 p0 -- set Interface p0 type=dummy-pmd], [], [], [DUMMY_NUMA])
+
+CHECK_CPU_DISCOVERED()
+CHECK_PMD_THREADS_CREATED()
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show | grep Displaying], [0], [dnl
+Displaying last 60 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs -1 | grep Displaying], [0], [dnl
+Displaying last 60 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs 0 | grep Displaying], [0], [dnl
+Displaying last 60 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs 1 | grep Displaying], [0], [dnl
+Displaying last 5 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs 5 | grep Displaying], [0], [dnl
+Displaying last 5 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs 6 | grep Displaying], [0], [dnl
+Displaying last 10 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs 51 | grep Displaying], [0], [dnl
+Displaying last 55 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs 55 | grep Displaying], [0], [dnl
+Displaying last 55 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs 56 | grep Displaying], [0], [dnl
+Displaying last 60 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs 60 | grep Displaying], [0], [dnl
+Displaying last 60 seconds pmd usage %
+])
+
+AT_CHECK([ovs-appctl dpif-netdev/pmd-rxq-show -secs 61 | grep Displaying], [0], [dnl
+Displaying last 60 seconds pmd usage %
+])
+
+OVS_VSWITCHD_STOP
+AT_CLEANUP
+
 dnl Reconfigure the number of rx queues of a port, make sure that all the
 dnl queues are polled by the datapath and try to send a couple of packets.