diff mbox series

[ovs-dev,5/5] dpif-netdev: Catch reloads faster.

Message ID 1558621432-13363-6-git-send-email-david.marchand@redhat.com
State Changes Requested
Headers show
Series Quicker pmd threads reloads

Commit Message

David Marchand May 23, 2019, 2:23 p.m. UTC
Looking at the reload flag only every 1024 loops can be a long time
under load, since we might be handling 32 packets per polled rxq, per
iteration, which means up to poll_cnt * 32 * 1024 packets.
Look at the flag every loop, no major performance impact seen.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Eelco Chaudron <echaudro@redhat.com>
---
 lib/dpif-netdev.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

---
Changelog since v2:
- fixed commitlog on the number of packets

Comments

Stokes, Ian June 24, 2019, 7:13 p.m. UTC | #1
On 5/23/2019 3:23 PM, David Marchand wrote:
> Looking at the reload flag only every 1024 loops can be a long time
> under load, since we might be handling 32 packets per polled rxq, per
> iteration, which means up to poll_cnt * 32 * 1024 packets.
> Look at the flag every loop, no major performance impact seen.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Eelco Chaudron <echaudro@redhat.com>

Looks good and tested OK for me. Acked.

I've tested various setups in the series (adding/removing various PMD 
configurations, isolating and distributing queues, queue re-balancing). 
I also ran a number of scaling performance tests and did not see any 
performance impact.

As I said in the previous patches, I'll wait until Ilya also has time to 
test, as it's important that we don't see any regressions in this part 
of the codebase for the various use cases.

Thanks
Ian
David Marchand June 25, 2019, 7:10 a.m. UTC | #2
On Mon, Jun 24, 2019 at 9:14 PM Ian Stokes <ian.stokes@intel.com> wrote:

> On 5/23/2019 3:23 PM, David Marchand wrote:
> > Looking at the reload flag only every 1024 loops can be a long time
> > under load, since we might be handling 32 packets per polled rxq, per
> > iteration, which means up to poll_cnt * 32 * 1024 packets.
> > Look at the flag every loop, no major performance impact seen.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > Acked-by: Eelco Chaudron <echaudro@redhat.com>
>
> Looks good and tested OK for me. Acked.
>
> I've tested various setups in the series (adding/removing various PMD
> configurations, isolating and distributing queues, queue re-balancing).
> I also ran a number of scaling performance tests and did not see any
> performance impact.
>
> As I said in the previous patches, I'll wait until Ilya also has time to
> test, as it's important that we don't see any regressions in this part
> of the codebase for the various use cases.
>

Thanks Ian!
Ilya Maximets June 25, 2019, 11:27 a.m. UTC | #3
On 24.06.2019 22:13, Ian Stokes wrote:
> On 5/23/2019 3:23 PM, David Marchand wrote:
>> Looking at the reload flag only every 1024 loops can be a long time
>> under load, since we might be handling 32 packets per polled rxq, per
>> iteration, which means up to poll_cnt * 32 * 1024 packets.
>> Look at the flag every loop, no major performance impact seen.
>>
>> Signed-off-by: David Marchand <david.marchand@redhat.com>
>> Acked-by: Eelco Chaudron <echaudro@redhat.com>
> 
> Looks good and tested OK for me. Acked.
> 
> I've tested various setups in the series (adding/removing various PMD configurations, isolating and distributing queues, queue re-balancing). I also ran a number of scaling performance tests and did not see any performance impact.
> 
> As I said in the previous patches, I'll wait until Ilya also has time to test, as it's important that we don't see any regressions in this part of the codebase for the various use cases.

Thanks, Ian, for review and testing. I'm looking at the patches
now and will reply today or tomorrow.

Best regards, Ilya Maximets.

Patch

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index b763ceb..9d79044 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -5485,7 +5485,6 @@  reload:
                 poll_block();
             }
         }
-        lc = UINT_MAX;
     }
 
     pmd->intrvl_tsc_prev = 0;
@@ -5529,12 +5528,13 @@  reload:
             if (!ovsrcu_try_quiesce()) {
                 emc_cache_slow_sweep(&((pmd->flow_cache).emc_cache));
             }
+        }
 
-            atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
-            if (reload) {
-                break;
-            }
+        atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
+        if (OVS_UNLIKELY(reload)) {
+            break;
         }
+
         pmd_perf_end_iteration(s, rx_packets, tx_packets,
                                pmd_perf_metrics_enabled(pmd));
     }