Message ID: 1558621432-13363-1-git-send-email-david.marchand@redhat.com
Series: Quicker pmd threads reloads
Hello guys,

On Thu, May 23, 2019 at 4:27 PM David Marchand <david.marchand@redhat.com> wrote:
> We have been testing the rebalance code in different situations while
> having traffic going through OVS.
> Those tests have shown that part of the observed packet losses is due to
> some time wasted in signaling/waiting for the pmd threads to reload their
> polling configurations.
>
> This series is an attempt at making pmd thread reloads quicker and
> more deterministic.
>
> Example of the number of cycles spent by a pmd between two polling
> configurations (minimum/average/maximum cycles over 1000 changes):
> - d58b59c17c70: 126822/312103/756580
> - patch1: 113658/296157/741688
> - patch2:  49198/167206/466108
> - patch3:  13032/120730/341163
> - patch4:  12803/112964/323455
> - patch5:  13633/ 20373/ 47410
>
> Changelog since RFC v2:
> - added ack from Eelco
>
> Changelog since RFC v1:
> - added numbers per patch in cover letter
> - added memory ordering for explicit synchronisations between threads
>   in patch 1 and patch 2

I did not get feedback on this series apart from Eelco.
Did you have a chance to look at it?

Thanks.
On 06.06.2019 10:35, David Marchand wrote:
> Hello guys,
>
> On Thu, May 23, 2019 at 4:27 PM David Marchand <david.marchand@redhat.com> wrote:
>> We have been testing the rebalance code in different situations while
>> having traffic going through OVS.
>> Those tests have shown that part of the observed packet losses is due to
>> some time wasted in signaling/waiting for the pmd threads to reload their
>> polling configurations.
>>
>> This series is an attempt at making pmd thread reloads quicker and
>> more deterministic.
>>
>> Example of the number of cycles spent by a pmd between two polling
>> configurations (minimum/average/maximum cycles over 1000 changes):
>> - d58b59c17c70: 126822/312103/756580
>> - patch1: 113658/296157/741688
>> - patch2:  49198/167206/466108
>> - patch3:  13032/120730/341163
>> - patch4:  12803/112964/323455
>> - patch5:  13633/ 20373/ 47410
>>
>> Changelog since RFC v2:
>> - added ack from Eelco
>>
>> Changelog since RFC v1:
>> - added numbers per patch in cover letter
>> - added memory ordering for explicit synchronisations between threads
>>   in patch 1 and patch 2
>
> I did not get feedback on this series apart from Eelco.
> Did you have a chance to look at it?

Hi David. Sorry for the delay. The patches seem fine at first glance. I hope to have some time next week to look closer and, probably, test something.

Best regards, Ilya Maximets.
Hi David,
My apologies, I’ve been busy off the mailing list lately, so unfortunately I didn’t have a chance to look at this. I’ll try to take a look over the next few days.
Regards
Ian
> From: David Marchand [mailto:david.marchand@redhat.com]
> Sent: Thursday, June 6, 2019 8:36 AM
> To: Ilya Maximets <i.maximets@samsung.com>; Stokes, Ian <ian.stokes@intel.com>
> Cc: ovs dev <dev@openvswitch.org>
> Subject: Re: [ovs-dev] [PATCH 0/5] Quicker pmd threads reloads
>
> Hello guys,
>
> On Thu, May 23, 2019 at 4:27 PM David Marchand <david.marchand@redhat.com> wrote:
>> We have been testing the rebalance code in different situations while
>> having traffic going through OVS.
>> Those tests have shown that part of the observed packet losses is due to
>> some time wasted in signaling/waiting for the pmd threads to reload their
>> polling configurations.
>>
>> This series is an attempt at making pmd thread reloads quicker and
>> more deterministic.
>>
>> Example of the number of cycles spent by a pmd between two polling
>> configurations (minimum/average/maximum cycles over 1000 changes):
>> - d58b59c17c70: 126822/312103/756580
>> - patch1: 113658/296157/741688
>> - patch2:  49198/167206/466108
>> - patch3:  13032/120730/341163
>> - patch4:  12803/112964/323455
>> - patch5:  13633/ 20373/ 47410
>>
>> Changelog since RFC v2:
>> - added ack from Eelco
>>
>> Changelog since RFC v1:
>> - added numbers per patch in cover letter
>> - added memory ordering for explicit synchronisations between threads
>>   in patch 1 and patch 2
>
> I did not get feedback on this series apart from Eelco.
> Did you have a chance to look at it?
>
> Thanks.
>
> --
> David Marchand
On Thu, Jun 6, 2019 at 10:16 AM Stokes, Ian <ian.stokes@intel.com> wrote:
> Hi David,
>
> My apologies, I’ve been busy off the mailing list lately, so unfortunately
> I didn’t have a chance to look at this. I’ll try to take a look over the
> next few days.

No worries, I understand everyone is busy with their own stuff.

Thanks.