
[ovs-dev,v2,5/5] dpif-netdev: Catch reloads faster.

Message ID 1561540119-23559-6-git-send-email-david.marchand@redhat.com
State Superseded
Series Quicker pmd threads reloads

Commit Message

David Marchand June 26, 2019, 9:08 a.m. UTC
Looking at the reload flag only every 1024 loops can be a long time
under load, since we might be handling 32 packets per rxq per iteration,
which means up to poll_cnt * 32 * 1024 packets between two checks.
Look at the flag on every loop instead; no major performance impact has
been seen.
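
To make the pattern concrete, here is a minimal, self-contained sketch
of the loop structure this patch moves to. The names pmd_poll_once(),
struct pmd, and pmd_main_loop() are illustrative stand-ins, not the
actual dpif-netdev code, and standard C11 atomic_load_explicit() is
used in place of OVS's atomic_read_explicit() wrapper:

    #include <stdatomic.h>
    #include <stdbool.h>

    struct pmd {
        atomic_bool reload;  /* Set by the main thread to request a reload. */
    };

    /* Stand-in for one polling pass over all of the thread's rx queues,
     * handling up to a 32-packet burst per queue. */
    static void
    pmd_poll_once(struct pmd *pmd)
    {
        (void) pmd;
    }

    static void
    pmd_main_loop(struct pmd *pmd)
    {
        for (;;) {
            pmd_poll_once(pmd);

            /* Checked on every iteration: the worst-case reaction time
             * to a reload request is now one pass over the polled
             * queues, instead of up to poll_cnt * 32 * 1024 packets. */
            bool reload = atomic_load_explicit(&pmd->reload,
                                               memory_order_acquire);
            if (reload) {
                break;
            }
        }
    }

In the actual patch the check is additionally wrapped in OVS_UNLIKELY(),
so the per-iteration cost amounts to one acquire load and a
well-predicted branch.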

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Eelco Chaudron <echaudro@redhat.com>
Acked-by: Ian Stokes <ian.stokes@intel.com>
---
Changelog since v1:
- added acks, no change

Changelog since RFC v2:
- fixed commit log on the number of packets

---
 lib/dpif-netdev.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

Patch

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index d432269..de84098 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -5471,7 +5471,6 @@  reload:
                 poll_block();
             }
         }
-        lc = UINT_MAX;
     }
 
     pmd->intrvl_tsc_prev = 0;
@@ -5515,12 +5514,13 @@  reload:
             if (!ovsrcu_try_quiesce()) {
                 emc_cache_slow_sweep(&((pmd->flow_cache).emc_cache));
             }
+        }
 
-            atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
-            if (reload) {
-                break;
-            }
+        atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
+        if (OVS_UNLIKELY(reload)) {
+            break;
         }
+
         pmd_perf_end_iteration(s, rx_packets, tx_packets,
                                pmd_perf_metrics_enabled(pmd));
     }