
[ovs-dev,RFC,v2,0/5] Quicker pmd threads reloads

Message ID 1557851610-5602-1-git-send-email-david.marchand@redhat.com

Message

David Marchand May 14, 2019, 4:33 p.m. UTC
We have been testing the rebalance code in different situations while
having traffic going through OVS.
Those tests have shown that part of the observed packet losses is due to
time wasted signaling and waiting for the pmd threads to reload their
polling configurations.

This RFC series is an attempt at making pmd thread reloads quicker and
more deterministic.

Example of number of cycles spent by a pmd between two polling
configurations (in cycles minimum/average/maximum of 1000 changes):
- d58b59c17c70: 126822/312103/756580
- patch1:       113658/296157/741688
- patch2:        49198/167206/466108
- patch3:        13032/120730/341163
- patch4:        12803/112964/323455
- patch5:        13633/ 20373/ 47410

Changelog since v1:
- added numbers per patch in cover letter
- added memory ordering for explicit synchronisations between threads
  in patch 1 and patch 2

Comments

Eelco Chaudron May 20, 2019, 9:26 a.m. UTC | #1
David, this patch set looks fine to me. I guess a non-RFC patch would be
next?

Acked-by: Eelco Chaudron <echaudro@redhat.com>

On 14 May 2019, at 18:33, David Marchand wrote:

> [...]
>
> David Marchand (5):
>   dpif-netdev: Convert exit latch to flag.
>   dpif-netdev: Trigger parallel pmd reloads.
>   dpif-netdev: Do not sleep when swapping queues.
>   dpif-netdev: Only reload static tx qid when needed.
>   dpif-netdev: Catch reloads faster.
>
>  lib/dpif-netdev.c | 131 +++++++++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 100 insertions(+), 31 deletions(-)
>
> -- 
> 1.8.3.1
David Marchand May 20, 2019, 9:28 a.m. UTC | #2
On Mon, May 20, 2019 at 11:26 AM Eelco Chaudron <echaudro@redhat.com> wrote:

> David this patch set looks fine by me, guess a none-RFC patch would be
> next?
>
> Acked-by: Eelco Chaudron <echaudro@redhat.com>
>

Yes, just waiting for more comments, if any :-).
Thanks for the review, Eelco.
Kevin Traynor May 22, 2019, 1:51 p.m. UTC | #3
On 14/05/2019 17:33, David Marchand wrote:
> [...]

Aside from a couple of very minor comments, the series LGTM.