
[ovs-dev,RFC,4/4] dpif-netdev: Don't uninit emc on reload.

Message ID 1487688568-14820-5-git-send-email-i.maximets@samsung.com
State Superseded
Delegated to: Darrell Ball
Headers show

Commit Message

Ilya Maximets Feb. 21, 2017, 2:49 p.m. UTC
There are many reasons for reloading PMD threads:
	* Reconfiguration of one of the ports.
	* Adjustment of static_tx_qid.
	* Addition of new tx/rx ports.

In many cases the EMC is still useful after a reload, and
uninitializing it only leads to unnecessary upcalls/classifier
lookups.

Such behaviour slows down the datapath, and the uninit itself
slows down the reload path. All these factors lead to additional
unexpected latencies/drops on events not directly connected
to the current PMD thread.

Let's not uninitialize the EMC on the reload path.
'emc_cache_slow_sweep()' and its replacements should free all
the old/unwanted entries.
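
The "slow sweep" that this change relies on can be pictured with a minimal sketch. The structures and the entry-liveness flag below are hypothetical simplifications, not the real dpif-netdev.c types; the point is only the amortization: each call examines a single EMC entry and evicts it if its flow is dead, so stale entries left behind by a reload are reclaimed incrementally instead of all at once.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the real EMC structures.  The real cache
 * is much larger; 8 entries keeps the sketch small. */
#define EM_FLOW_HASH_ENTRIES 8

struct emc_entry {
    bool occupied;
    bool dead;      /* stands in for "the entry's flow is no longer alive" */
};

struct emc_cache {
    struct emc_entry entries[EM_FLOW_HASH_ENTRIES];
    int sweep_idx;
};

/* One step of the slow sweep: look at a single entry per call so the
 * cost is spread across polling iterations rather than paid in one
 * big uninit on the reload path. */
static void
emc_cache_slow_sweep(struct emc_cache *cache)
{
    struct emc_entry *entry = &cache->entries[cache->sweep_idx];

    if (entry->occupied && entry->dead) {
        entry->occupied = false;    /* emc_clear_entry() in the real code */
    }
    /* Table size is a power of two, so masking wraps the index. */
    cache->sweep_idx = (cache->sweep_idx + 1) & (EM_FLOW_HASH_ENTRIES - 1);
}
```

After one full pass (EM_FLOW_HASH_ENTRIES calls), every dead entry has been evicted, which is why the commit message argues the explicit uninit on reload is unnecessary.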

Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
---

In the discussion of the original patch [1], Pravin Shelar said:
'''
  emc cache flows should be free on reload, otherwise flows
  might stick around for longer time.
'''

After that, the slow sweep was introduced (for another reason).
A few reasons to move the uninit out of the reload path are
described in the commit message.

So, this patch is sent as an RFC, because I wanted to hear some
opinions about the current situation and the drawbacks of such a
solution.

Any thoughts?
Thanks.

[1] https://mail.openvswitch.org/pipermail/ovs-dev/2014-August/287852.html

 lib/dpif-netdev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Daniele Di Proietto March 10, 2017, 4:34 a.m. UTC | #1
2017-02-21 6:49 GMT-08:00 Ilya Maximets <i.maximets@samsung.com>:
> There are many reasons for reloading of pmd threads:
>         * reconfiguration of one of the ports.
>         * Adjusting of static_tx_qid.
>         * Adding new tx/rx ports.
>
> In many cases EMC is still useful after reload and uninit
> will only lead to unnecessary upcalls/classifier lookups.
>
> Such behaviour slows down the datapath. Uninit itself slows
> down the reload path. All this factors leads to additional
> unexpected latencies/drops on events not directly connected
> to current PMD thread.
>
> Lets not uninitialize emc cache on reload path.
> 'emc_cache_slow_sweep()' and replacements should free all
> the old/unwanted entries.
>
> Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
> ---
>
> In discussion of original patch[1] Pravin Shelar said:
> '''
>   emc cache flows should be free on reload, otherwise flows
>   might stick around for longer time.
> '''
>
> After that (for another reason) slow sweep was introduced.
> There are few reasons to move uninit out of reload path
> described in commit-message.
>
> So, this patch sent as RFC, because I wanted to hear some
> opinions about current situation and drawbacks of such
> solution.
>
> Any thoughts?
> Thanks.
>
> [1] https://mail.openvswitch.org/pipermail/ovs-dev/2014-August/287852.html
>

I'm in favor of this approach. Now that we have the slow sweep, we don't
need to clear the EMC at every reload.

Just curious, did you measure any difference in reload time with this patch?

We could init and uninit the EMC from the main thread in
dp_netdev_configure_pmd() and dp_netdev_del_pmd().  This way we wouldn't
need a special case for the non-pmd thread in the code, but it would be
slower.  In any case it seems fine to me.
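
Daniele's alternative could look roughly like the sketch below. The types here are hypothetical, heavily simplified stand-ins for the real OVS structures; the point is the design choice: tie the EMC's lifetime to the PMD context itself, so the reload loop in pmd_thread_main() never touches init/uninit at all.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical, heavily simplified stand-ins for the OVS types. */
struct emc_cache {
    bool initialized;
};

struct dp_netdev_pmd_thread {
    struct emc_cache flow_cache;
};

/* In this variant the main thread initializes the EMC when the PMD
 * context is created... */
static struct dp_netdev_pmd_thread *
dp_netdev_configure_pmd(void)
{
    struct dp_netdev_pmd_thread *pmd = calloc(1, sizeof *pmd);

    pmd->flow_cache.initialized = true;    /* emc_cache_init() */
    return pmd;
}

/* ...and uninitializes it when the context is destroyed, so the PMD
 * thread's reload loop never manages the cache's lifetime. */
static void
dp_netdev_del_pmd(struct dp_netdev_pmd_thread *pmd)
{
    pmd->flow_cache.initialized = false;   /* emc_cache_uninit() */
    free(pmd);
}
```

The trade-off Daniele notes is that doing this from the main thread is slower, but it removes the special case for the non-pmd thread.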

Thanks,

Daniele

>  lib/dpif-netdev.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index e2b4f39..3bd568d 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -3648,9 +3648,9 @@ pmd_thread_main(void *f_)
>      ovs_numa_thread_setaffinity_core(pmd->core_id);
>      dpdk_set_lcore_id(pmd->core_id);
>      poll_cnt = pmd_load_queues_and_ports(pmd, &poll_list);
> +    emc_cache_init(&pmd->flow_cache);
>  reload:
>      pmd_alloc_static_tx_qid(pmd);
> -    emc_cache_init(&pmd->flow_cache);
>
>      /* List port/core affinity */
>      for (i = 0; i < poll_cnt; i++) {
> @@ -3697,13 +3697,13 @@ reload:
>       * reloading the updated configuration. */
>      dp_netdev_pmd_reload_done(pmd);
>
> -    emc_cache_uninit(&pmd->flow_cache);
>      pmd_free_static_tx_qid(pmd);
>
>      if (!exiting) {
>          goto reload;
>      }
>
> +    emc_cache_uninit(&pmd->flow_cache);
>      free(poll_list);
>      pmd_free_cached_ports(pmd);
>      return NULL;
> --
> 2.7.4
>
Ferriter, Cian May 29, 2017, 3:41 p.m. UTC | #2
> -----Original Message-----
> From: ovs-dev-bounces@openvswitch.org [mailto:ovs-dev-
> bounces@openvswitch.org] On Behalf Of Daniele Di Proietto
> Sent: 10 March 2017 04:35
> To: Ilya Maximets <i.maximets@samsung.com>
> Cc: dev@openvswitch.org; Heetae Ahn <heetae82.ahn@samsung.com>
> Subject: Re: [ovs-dev] [PATCH RFC 4/4] dpif-netdev: Don't uninit emc on
> reload.
> 
> 2017-02-21 6:49 GMT-08:00 Ilya Maximets <i.maximets@samsung.com>:
> > There are many reasons for reloading of pmd threads:
> >         * reconfiguration of one of the ports.
> >         * Adjusting of static_tx_qid.
> >         * Adding new tx/rx ports.
> >
> > In many cases EMC is still useful after reload and uninit will only
> > lead to unnecessary upcalls/classifier lookups.
> >
> > Such behaviour slows down the datapath. Uninit itself slows down the
> > reload path. All this factors leads to additional unexpected
> > latencies/drops on events not directly connected to current PMD
> > thread.
> >
> > Lets not uninitialize emc cache on reload path.
> > 'emc_cache_slow_sweep()' and replacements should free all the
> > old/unwanted entries.
> >
> > Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
> > ---
> >
> > In discussion of original patch[1] Pravin Shelar said:
> > '''
> >   emc cache flows should be free on reload, otherwise flows
> >   might stick around for longer time.
> > '''
> >
> > After that (for another reason) slow sweep was introduced.
> > There are few reasons to move uninit out of reload path described in
> > commit-message.
> >
> > So, this patch sent as RFC, because I wanted to hear some opinions
> > about current situation and drawbacks of such solution.
> >
> > Any thoughts?
> > Thanks.
> >
> > [1]
> > https://mail.openvswitch.org/pipermail/ovs-dev/2014-August/287852.html
> >
> 
> I'm in favor of this approach. Now that we have slow sweep we don't need
> to clear it at every reload.
> 
> Just curious, did you measure any difference in reload time with this patch?
> 
>  We could init and uninit the EMC from the main thread in
> dp_netdev_configure_pmd() and dp_netdev_del_pmd().  This way we
> wouldn't need a special case for the non-pmd thread in the code, but it
> would be slower.
> In any case it seems fine with me
> 
> Thanks,
> 
> Daniele
> 

Hi Ilya,

This approach and the rationale behind it make sense to me too.

I tested this patch by applying it on top of patches 2/4 and 3/4 of the series (patch 1/4 has already been applied to master). The patches applied cleanly on top of the current head of master and passed the usual sanity checks (compiles with GCC and Clang, with and without DPDK). This patch also passes checkpatch.py. Patch 2/4 does not pass checkpatch.py; I'll reply to that patch separately.

Aside from sanity testing, I also carried out the following test:
While sending a single flow of packets in a simple Phy-Phy test with one PMD thread, add a dpdkvhostuser port to the same bridge as the phy ports. Measure the "megaflow hits" shown by "ovs-appctl dpif-netdev/pmd-stats-show" before and after the dpdkvhostuser port has been added. This test requires "emc-insert-inv-prob" to be set to 1 (doc: http://docs.openvswitch.org/en/latest/howto/dpdk/#emc-insertion-probability). I also used "ovs-appctl dpif-netdev/pmd-stats-clear" while sending traffic to help with counting the number of classifier lookups (megaflow hits).

On my system, before the patch is applied, there are consistently 32 megaflow hits each time a dpdkvhostuser port is added to the bridge. As you have explained, this is a result of the call to "emc_cache_uninit" and the resulting classifier lookups. After your patch, there are no megaflow hits when adding a dpdkvhostuser port. This illustrates the value of your patch.

Acked-by: Cian Ferriter <cian.ferriter@intel.com>
Tested-by: Cian Ferriter <cian.ferriter@intel.com>

> >  lib/dpif-netdev.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c index
> > e2b4f39..3bd568d 100644
> > --- a/lib/dpif-netdev.c
> > +++ b/lib/dpif-netdev.c
> > @@ -3648,9 +3648,9 @@ pmd_thread_main(void *f_)
> >      ovs_numa_thread_setaffinity_core(pmd->core_id);
> >      dpdk_set_lcore_id(pmd->core_id);
> >      poll_cnt = pmd_load_queues_and_ports(pmd, &poll_list);
> > +    emc_cache_init(&pmd->flow_cache);
> >  reload:
> >      pmd_alloc_static_tx_qid(pmd);
> > -    emc_cache_init(&pmd->flow_cache);
> >
> >      /* List port/core affinity */
> >      for (i = 0; i < poll_cnt; i++) {
> > @@ -3697,13 +3697,13 @@ reload:
> >       * reloading the updated configuration. */
> >      dp_netdev_pmd_reload_done(pmd);
> >
> > -    emc_cache_uninit(&pmd->flow_cache);
> >      pmd_free_static_tx_qid(pmd);
> >
> >      if (!exiting) {
> >          goto reload;
> >      }
> >
> > +    emc_cache_uninit(&pmd->flow_cache);
> >      free(poll_list);
> >      pmd_free_cached_ports(pmd);
> >      return NULL;
> > --
> > 2.7.4
> >
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev

Patch

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index e2b4f39..3bd568d 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -3648,9 +3648,9 @@  pmd_thread_main(void *f_)
     ovs_numa_thread_setaffinity_core(pmd->core_id);
     dpdk_set_lcore_id(pmd->core_id);
     poll_cnt = pmd_load_queues_and_ports(pmd, &poll_list);
+    emc_cache_init(&pmd->flow_cache);
 reload:
     pmd_alloc_static_tx_qid(pmd);
-    emc_cache_init(&pmd->flow_cache);
 
     /* List port/core affinity */
     for (i = 0; i < poll_cnt; i++) {
@@ -3697,13 +3697,13 @@  reload:
      * reloading the updated configuration. */
     dp_netdev_pmd_reload_done(pmd);
 
-    emc_cache_uninit(&pmd->flow_cache);
     pmd_free_static_tx_qid(pmd);
 
     if (!exiting) {
         goto reload;
     }
 
+    emc_cache_uninit(&pmd->flow_cache);
     free(poll_list);
     pmd_free_cached_ports(pmd);
     return NULL;