[ovs-dev,RFC,0/5] Quicker pmd threads reloads

Message ID 1556626682-28858-1-git-send-email-david.marchand@redhat.com

Message

David Marchand April 30, 2019, 12:17 p.m. UTC
We have been testing the rebalance code in different situations while
running traffic through OVS.
Those tests have shown that part of the observed packet losses is due to
time wasted signaling and waiting for the pmd threads to reload their
polling configurations.

This RFC series is an attempt at making pmd thread reloads quicker and
more deterministic.
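
For context, the handshake this series targets looks roughly like the
sketch below. This is a minimal model assuming one reload flag per pmd;
the names (pmd_ctx, request_reload, pmd_check_reload) are illustrative
and not the exact dpif-netdev code:

/* Minimal sketch of the per-pmd reload handshake (illustrative names,
 * not the exact dpif-netdev code). */
#include <stdatomic.h>
#include <stdbool.h>

struct pmd_ctx {
    atomic_bool reload;   /* set by the main thread, cleared by the pmd */
};

/* Main thread: request a reload and wait until the pmd has picked up
 * the new polling configuration.  Doing this serially, one pmd at a
 * time, is where the signaling/waiting time adds up. */
static void
request_reload(struct pmd_ctx *pmd)
{
    atomic_store_explicit(&pmd->reload, true, memory_order_release);
    while (atomic_load_explicit(&pmd->reload, memory_order_acquire)) {
        /* Wait for the pmd to acknowledge. */
    }
}

/* Pmd thread: between two iterations of its polling loop, notice the
 * request, re-read the rxq list, then acknowledge. */
static void
pmd_check_reload(struct pmd_ctx *pmd)
{
    if (atomic_load_explicit(&pmd->reload, memory_order_acquire)) {
        /* ... reload the polling configuration here ... */
        atomic_store_explicit(&pmd->reload, false, memory_order_release);
    }
}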

Comments

Ilya Maximets May 6, 2019, 3:22 p.m. UTC | #1
On 30.04.2019 15:17, David Marchand wrote:
> We have been testing the rebalance code in different situations while
> running traffic through OVS.
> Those tests have shown that part of the observed packet losses is due to
> time wasted signaling and waiting for the pmd threads to reload their
> polling configurations.
> 
> This RFC series is an attempt at making pmd thread reloads quicker and
> more deterministic.
> 

Do you have some performance data to share?

Best regards, Ilya Maximets.
David Marchand May 7, 2019, 3:07 p.m. UTC | #2
Hello Ilya,

Thanks for looking at this series.

On Mon, May 6, 2019 at 5:22 PM Ilya Maximets <i.maximets@samsung.com> wrote:

> On 30.04.2019 15:17, David Marchand wrote:
> > We have been testing the rebalance code in different situations while
> > running traffic through OVS.
> > Those tests have shown that part of the observed packet losses is due to
> > time wasted signaling and waiting for the pmd threads to reload their
> > polling configurations.
> >
> > This RFC series is an attempt at making pmd thread reloads quicker and
> > more deterministic.
> >
>
> Do you have some performance data to share?
>

During our rebalance testing, we tracked packet losses while traffic was
running through a rebalance.

I focused on the cycles spent in the transition between two polling
configurations.
I triggered 1000 rebalances on each patch of this series, with rte_rdtsc()
probes in reconfigure_datapath() / pmd_thread_main().
Between the moment a pmd stops polling its current configuration and the
moment it starts polling again with the new one:
- before the patches, a pmd would spend 140k/330k/3,000k cycles
  (minimum/average/maximum);
- after the patches, a pmd would spend 13k/20k/43k cycles.

Before the patches, the numbers were also highly volatile: I once got
20,000k cycles in a previous test run (which translated to 28k lost packets
with the OpenFlow rules I had).
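
For reference, the probes were along these lines. This is a hypothetical
reconstruction of the instrumentation (probe names and placement are my
assumptions, not the actual patch):

/* Hypothetical sketch of the rte_rdtsc() instrumentation; the probe
 * placement is an assumption, not the actual patch. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <rte_cycles.h>   /* rte_rdtsc() */

static uint64_t reload_begin_tsc;

/* Called when the pmd stops polling its current configuration
 * (e.g. on leaving the poll loop in pmd_thread_main()). */
static void
probe_reload_begin(void)
{
    reload_begin_tsc = rte_rdtsc();
}

/* Called right before the pmd starts polling the new configuration;
 * reports the cycles spent not polling during the reload. */
static void
probe_reload_end(void)
{
    printf("pmd reload: %" PRIu64 " cycles\n",
           rte_rdtsc() - reload_begin_tsc);
}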


I had a look at your comments; I will follow up with the fixes later this
week (tomorrow is a public holiday in France).