[ovs-dev,RFC,v2,5/5] dpif-netdev: Catch reloads faster.

Message ID: 1557851610-5602-6-git-send-email-david.marchand@redhat.com
State: Superseded
Series: Quicker pmd threads reloads

Commit Message

David Marchand May 14, 2019, 4:33 p.m. UTC
Looking at the reload flag only every 1024 loops can mean a long delay
under load, since we might be handling 32 packets per iteration, which
means 32k packets between checks.
Look at the flag every loop instead; no major performance impact seen.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 lib/dpif-netdev.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
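
For context, here is roughly where the pre-patch code reaches the reload
check. This is a simplified sketch of the pmd_thread_main() loop in
lib/dpif-netdev.c, reconstructed from the diff context below; poll_rxqs()
is a placeholder for the real per-rxq polling loop and most housekeeping
is elided:

    for (;;) {
        /* Poll every rxq assigned to this pmd; each poll can receive a
         * burst of packets. */
        poll_rxqs(pmd);                 /* placeholder */

        if (lc++ > 1024) {
            lc = 0;
            /* ... periodic housekeeping: coverage, RCU quiesce, ... */
            if (!ovsrcu_try_quiesce()) {
                emc_cache_slow_sweep(&((pmd->flow_cache).emc_cache));
            }

            /* The reload flag is only observed here, so a busy pmd can
             * run ~1024 iterations before noticing a reload request. */
            atomic_read_explicit(&pmd->reload, &reload,
                                 memory_order_acquire);
            if (reload) {
                break;
            }
        }
        pmd_perf_end_iteration(s, rx_packets, tx_packets,
                               pmd_perf_metrics_enabled(pmd));
    }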

Comments

Kevin Traynor May 22, 2019, 1:26 p.m. UTC | #1
On 14/05/2019 17:33, David Marchand wrote:
> Looking at the reload flag only every 1024 loops can mean a long delay
> under load, since we might be handling 32 packets per iteration, which
> means 32k packets between checks.

32 packets is the burst size for each poll of each rxq, but there may be
multiple rxqs polled in that loop, so the worst case is 32 * (num of rxqs
polled by this pmd) * 1024.

> Look at the flag every loop instead; no major performance impact seen.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
>  lib/dpif-netdev.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index 77e3f0c..933d91b 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -5486,7 +5486,6 @@ reload:
>                  poll_block();
>              }
>          }
> -        lc = UINT_MAX;
>      }
>  
>      pmd->intrvl_tsc_prev = 0;
> @@ -5530,12 +5529,13 @@ reload:
>              if (!ovsrcu_try_quiesce()) {
>                  emc_cache_slow_sweep(&((pmd->flow_cache).emc_cache));
>              }
> +        }
>  
> -            atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
> -            if (reload) {
> -                break;
> -            }
> +        atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
> +        if (OVS_UNLIKELY(reload)) {
> +            break;
>          }
> +
>          pmd_perf_end_iteration(s, rx_packets, tx_packets,
>                                 pmd_perf_metrics_enabled(pmd));
>      }
>
David Marchand May 23, 2019, 8:03 a.m. UTC | #2
On Wed, May 22, 2019 at 3:26 PM Kevin Traynor <ktraynor@redhat.com> wrote:

> On 14/05/2019 17:33, David Marchand wrote:
> > Looking at the reload flag only every 1024 loops can mean a long delay
> > under load, since we might be handling 32 packets per iteration, which
> > means 32k packets between checks.
>
> 32 packets is the burst size for each poll of each rxq, but there may be
> multiple rxqs polled in that loop, so the worst case is 32 * (num of rxqs
> polled by this pmd) * 1024.
>

Yes, I will fix this for the non-RFC patchset.
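
For concreteness, the per-rxq polling Kevin describes sits inside each pmd
main-loop iteration. A simplified sketch, assuming the
dp_netdev_process_rxq_port() signature and the NETDEV_MAX_BURST value (32)
of lib/dpif-netdev.c at the time of this patch; with e.g. 4 polled rxqs,
the worst case grows to 32 * 4 * 1024 = 131072 packets between reload
checks:

    /* One pmd iteration polls every rxq assigned to it; each call may
     * receive up to NETDEV_MAX_BURST (32) packets, so one iteration can
     * handle up to 32 * poll_cnt packets in total. */
    for (i = 0; i < poll_cnt; i++) {
        dp_netdev_process_rxq_port(pmd, poll_list[i].rxq,
                                   poll_list[i].port_no);
    }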

Patch

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 77e3f0c..933d91b 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -5486,7 +5486,6 @@ reload:
                 poll_block();
             }
         }
-        lc = UINT_MAX;
     }
 
     pmd->intrvl_tsc_prev = 0;
@@ -5530,12 +5529,13 @@ reload:
             if (!ovsrcu_try_quiesce()) {
                 emc_cache_slow_sweep(&((pmd->flow_cache).emc_cache));
             }
+        }
 
-            atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
-            if (reload) {
-                break;
-            }
+        atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
+        if (OVS_UNLIKELY(reload)) {
+            break;
         }
+
         pmd_perf_end_iteration(s, rx_packets, tx_packets,
                                pmd_perf_metrics_enabled(pmd));
     }
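
After the patch, the main loop looks roughly as follows. Again a
simplified sketch reconstructed from the diff above, with poll_rxqs() as a
placeholder for the per-rxq polling loop and most housekeeping elided:

    for (;;) {
        poll_rxqs(pmd);                 /* placeholder */

        if (lc++ > 1024) {
            lc = 0;
            /* Periodic housekeeping only; the reload check moved out. */
            if (!ovsrcu_try_quiesce()) {
                emc_cache_slow_sweep(&((pmd->flow_cache).emc_cache));
            }
        }

        /* Check the flag once per iteration.  The acquire load pairs with
         * the release store made by the thread requesting the reload, and
         * OVS_UNLIKELY keeps the common no-reload path cheap. */
        atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
        if (OVS_UNLIKELY(reload)) {
            break;
        }

        pmd_perf_end_iteration(s, rx_packets, tx_packets,
                               pmd_perf_metrics_enabled(pmd));
    }

This also explains the removed "lc = UINT_MAX;" line: it sat after the
poll_block() wait performed by a pmd with no rxqs, and appears to have
existed only to force the lc > 1024 slow path, and with it the reload
check, on the first iteration afterwards. With the flag read on every
iteration, that forcing is no longer needed.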