[ovs-dev] ovn-controller: Clean up cases that lead to duplicate OF flows.

Message ID 1469729416-10019-1-git-send-email-rmoats@us.ibm.com
State Accepted

Commit Message

Ryan Moats July 28, 2016, 6:10 p.m. UTC
In physical_run, there are multiple places where OF flows can be
produced each cycle.  Because the desired flow table may not have
been completely cleared first, remove flows created during previous
runs before creating new flows.  This avoids collisions.

Signed-off-by: Ryan Moats <rmoats@us.ibm.com>
---
 ovn/controller/physical.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
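
Condensed from the hunks in the patch below, the change applies the same
remove-then-recreate pattern at each of the three flow-producing sites in
physical_run(), keyed on the UUID that owns the flows (surrounding context
trimmed here for illustration):

    /* Port bindings: drop flows installed for this binding on a
     * previous run before recomputing them. */
    SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
        ofctrl_remove_flows(&binding->header_.uuid);
        consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
                              patched_datapaths, binding, &ofpacts);
    }

    /* Multicast groups: always reprocessed, so clear their old flows
     * first. */
    SBREC_MULTICAST_GROUP_FOR_EACH (mc, ctx->ovnsb_idl) {
        ofctrl_remove_flows(&mc->header_.uuid);
        consider_mc_group(mff_ovn_geneve, ct_zones,
                          local_datapaths, mc, &ofpacts, &remote_ofpacts);
    }

    /* Flows owned by the hard-coded uuid: recalculated each cycle, so
     * remove the old ones up front. */
    ofctrl_remove_flows(hc_uuid);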

Comments

Ben Pfaff July 28, 2016, 6:39 p.m. UTC | #1
On Thu, Jul 28, 2016 at 06:10:16PM +0000, Ryan Moats wrote:
> In physical_run, there are multiple places where OF flows can be
> produced each cycle.  Because the desired flow table may not have
> been completely cleared first, remove flows created during previous
> runs before creating new flows.  This avoids collisions.
> 
> Signed-off-by: Ryan Moats <rmoats@us.ibm.com>

Hi Ryan.

In the "full" cases, if ports or multicast groups have disappeared, what
deletes the associated flows?

Thanks,

Ben.
Ryan Moats July 28, 2016, 7:11 p.m. UTC | #2
Ben Pfaff <blp@ovn.org> wrote on 07/28/2016 01:39:53 PM:

> From: Ben Pfaff <blp@ovn.org>
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: dev@openvswitch.org
> Date: 07/28/2016 01:40 PM
> Subject: Re: [ovs-dev] [PATCH] ovn-controller: Clean up cases that
> lead to duplicate OF flows.
>
> On Thu, Jul 28, 2016 at 06:10:16PM +0000, Ryan Moats wrote:
> > In physical_run, there are multiple places where OF flows can be
> > produced each cycle.  Because the desired flow table may not have
> > been completely cleared first, remove flows created during previous
> > runs before creating new flows.  This avoids collisions.
> >
> > Signed-off-by: Ryan Moats <rmoats@us.ibm.com>
>
> Hi Ryan.
>
> In the "full" cases, if ports or multicast groups have disappeared, what
> deletes the associated flows?
>
> Thanks,
>
> Ben.
>

That's an excellent question.  Since those are created and removed by
northd (if I read the code right), right now we'll have stale entries
until the next time we need full logical flow processing, at which
point we'll dump the desired flow table and flush them out.

Since I don't really like that answer, I think code that addresses
this case would be useful. I propose to put it in a follow-on patch set,
since I don't think it quite fits here.

Ryan
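
A rough sketch of what such a follow-on could look like, purely for
illustration (it is not part of this patch or the thread): remember which
port-binding UUIDs had flows installed on the previous pass and remove the
flows of any that have since disappeared.  The persistent
"installed_bindings" set, its placement in physical_run(), and the use of
OVS's sset/uuid helpers are assumptions of this sketch:

    /* Illustrative only: the static set and string keys are assumptions
     * of this sketch, not code from the tree. */
    static struct sset installed_bindings
        = SSET_INITIALIZER(&installed_bindings);
    struct sset current = SSET_INITIALIZER(&current);

    SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
        /* Record every binding that still exists, keyed by its UUID. */
        char *uuid_s = xasprintf(UUID_FMT,
                                 UUID_ARGS(&binding->header_.uuid));
        sset_add(&current, uuid_s);
        free(uuid_s);
    }

    /* Anything installed last time but absent now has gone away, so
     * remove its flows from the desired flow table. */
    const char *key;
    SSET_FOR_EACH (key, &installed_bindings) {
        if (!sset_contains(&current, key)) {
            struct uuid uuid;
            if (uuid_from_string(&uuid, key)) {
                ofctrl_remove_flows(&uuid);
            }
        }
    }

    /* Remember the current set for the next pass. */
    sset_clear(&installed_bindings);
    SSET_FOR_EACH (key, &current) {
        sset_add(&installed_bindings, key);
    }
    sset_destroy(&current);

Multicast groups could be handled the same way, using mc->header_.uuid as
the key.
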
Ben Pfaff July 28, 2016, 7:54 p.m. UTC | #3
On Thu, Jul 28, 2016 at 02:11:52PM -0500, Ryan Moats wrote:
> Ben Pfaff <blp@ovn.org> wrote on 07/28/2016 01:39:53 PM:
> 
> > From: Ben Pfaff <blp@ovn.org>
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: dev@openvswitch.org
> > Date: 07/28/2016 01:40 PM
> > Subject: Re: [ovs-dev] [PATCH] ovn-controller: Clean up cases that
> > lead to duplicate OF flows.
> >
> > On Thu, Jul 28, 2016 at 06:10:16PM +0000, Ryan Moats wrote:
> > > In physical_run, there are multiple places where OF flows can be
> > > produced each cycle.  Because the desired flow table may not have
> > > been completely cleared first, remove flows created during previous
> > > runs before creating new flows.  This avoids collisions.
> > >
> > > Signed-off-by: Ryan Moats <rmoats@us.ibm.com>
> >
> > Hi Ryan.
> >
> > In the "full" cases, if ports or multicast groups have disappeared, what
> > deletes the associated flows?
> >
> > Thanks,
> >
> > Ben.
> >
> 
> That's an excellent question.  Since those are created and removed by
> northd (if I read the code right), right now we'll have stale entries
> until the next time we need full logical flow processing, at which
> point we'll dump the desired flow table and flush them out.
> 
> Since I don't really like that answer, I think code that addresses
> this case would be useful. I propose to put it in a follow-on patch set,
> since I don't think it quite fits here.

Thanks for answering.

I agree that it's related but different, so I pushed this.

Thanks for being persistent with improving the incremental update code
now that it's in.

Patch

diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
index e5df4f2..2e9fb73 100644
--- a/ovn/controller/physical.c
+++ b/ovn/controller/physical.c
@@ -603,6 +603,7 @@  physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
     if (!hc_uuid) {
         hc_uuid = xmalloc(sizeof(struct uuid));
         uuid_generate(hc_uuid);
+        VLOG_INFO("Hard coded uuid is "UUID_FMT, UUID_ARGS(hc_uuid));
     }
 
     /* This bool tracks physical mapping changes. */
@@ -750,6 +751,10 @@  physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
     const struct sbrec_port_binding *binding;
     if (full_binding_processing) {
         SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
+            /* Because it is possible in the above code to enter this
+             * for loop without having cleared the flow table first, we
+             * should clear the old flows to avoid collisions. */
+            ofctrl_remove_flows(&binding->header_.uuid);
             consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
                                   patched_datapaths, binding, &ofpacts);
         }
@@ -773,12 +778,21 @@  physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
     struct ofpbuf remote_ofpacts;
     ofpbuf_init(&remote_ofpacts, 0);
     SBREC_MULTICAST_GROUP_FOR_EACH (mc, ctx->ovnsb_idl) {
+        /* As multicast groups are always reprocessed each time,
+         * the first step is to clean the old flows for the group
+         * so that we avoid warning messages on collisions. */
+        ofctrl_remove_flows(&mc->header_.uuid);
         consider_mc_group(mff_ovn_geneve, ct_zones,
                           local_datapaths, mc, &ofpacts, &remote_ofpacts);
     }
 
     ofpbuf_uninit(&remote_ofpacts);
 
+    /* Because flows using the hard-coded uuid are recalculated each
+     * cycle, let's first remove the old flows to avoid duplicate flow
+     * warnings. */
+    ofctrl_remove_flows(hc_uuid);
+
     /* Table 0, priority 100.
      * ======================
      *