
[ovs-dev,6/9] ovn-controller: Handle only relevant ports and flows.

Message ID: 20161205071746.16989-7-blp@ovn.org
State: Changes Requested

Commit Message

Ben Pfaff Dec. 5, 2016, 7:17 a.m. UTC
On a particular hypervisor, ovn-controller only needs to handle ports
and datapaths that have some relationship with it, that is, the
ports that actually reside on the hypervisor, plus all the other ports on
those ports' datapaths, plus all of the ports and datapaths that are
reachable from those via logical patch ports.  Until now, ovn-controller
has done a poor job of limiting what it deals with to this set.  This
commit improves the situation.

This commit gets rid of the concept of a "patched_datapath" which until now
was used to represent any datapath that contained a logical patch port.
Previously, the concept of a "local_datapath" meant a datapath with a VIF
that resides on the local hypervisor.  This commit extends that concept to
include any other datapath that can be reached from a VIF on the local
hypervisor, which is a simplification that makes the code easier to
understand in a few places.

CC: Gurucharan Shetty <guru@ovn.org>
Signed-off-by: Ben Pfaff <blp@ovn.org>
---
 ovn/controller/binding.c        | 75 ++++++++++++++++++++++++++-----
 ovn/controller/binding.h        |  7 ++-
 ovn/controller/lflow.c          | 46 ++-----------------
 ovn/controller/lflow.h          |  1 -
 ovn/controller/ovn-controller.c | 41 ++++++-----------
 ovn/controller/ovn-controller.h | 33 +++++++-------
 ovn/controller/patch.c          | 98 ++++++++++++++++-------------------------
 ovn/controller/patch.h          |  5 +--
 ovn/controller/physical.c       | 30 ++-----------
 ovn/controller/physical.h       |  4 +-
 tests/ovn-controller.at         | 19 +++++++-
 11 files changed, 165 insertions(+), 194 deletions(-)

Comments

Mickey Spiegel Dec. 6, 2016, 1:45 a.m. UTC | #1
On Sun, Dec 4, 2016 at 11:17 PM, Ben Pfaff <blp@ovn.org> wrote:

> On a particular hypervisor, ovn-controller only needs to handle ports
> and datapaths that have some relationship with it, that is, the
> ports that actually reside on the hypervisor, plus all the other ports on
> those ports' datapaths, plus all of the ports and datapaths that are
> reachable from those via logical patch ports.  Until now, ovn-controller
> has done a poor job of limiting what it deals with to this set.  This
> commit improves the situation.
>
> This commit gets rid of the concept of a "patched_datapath" which until now
> was used to represent any datapath that contained a logical patch port.
> Previously, the concept of a "local_datapath" meant a datapath with a VIF
> that resides on the local hypervisor.  This commit extends that concept to
> include any other datapath that can be reached from a VIF on the local
> hypervisor, which is a simplification that makes the code easier to
> understand in a few places.
>

I like the change to remove "patched_datapath" and consolidate into
the concept of "local_datapath". Among other issues, it solves my
localnet problem for distributed NAT.

The big question is the relationship with datapaths of interest. Both
this and datapaths of interest trigger a computation from
add_local_datapath, to find other datapaths that can be reached on
the local hypervisor. At a high level, the difference is that this
computation is based on patch ports from a datapath, while the
datapaths of interest computation is based on a list of neighbor
datapaths that is populated by northd into each datapath_binding.
Note that northd populates related_datapaths based on patch
ports, so in the end it is the same source of information, but
different parts of the computation are done in different places at
different times.

If it were only desired to do conditional monitoring of logical_flow,
then related_datapaths would not be necessary at all.
Local_datapath, as computed in this patch, could be used to
trigger the necessary logical flow clauses.

With the desire to do conditional monitoring of port_binding,
things get more complicated. The computation in this patch
depends on having the port_bindings for other datapaths,
leading to a chicken and egg problem. It seems like northd
population of related_datapaths is necessary for conditional
monitoring of port_bindings. This leads to the question of
whether local_datapaths should be computed as in this
patch, or whether there should be a single computation
over related_datapaths to determine local_datapaths, which
would then be used to trigger conditional monitoring clauses
for datapaths. Is it safe for this computation to depend on
partial computation from northd? Or are there any timing
windows that might lead to corner cases?

Depending on the answer to this question, patches 2 and 5 in
this patch set may or may not be necessary.

One question and one comment inline.

>
> CC: Gurucharan Shetty <guru@ovn.org>
> Signed-off-by: Ben Pfaff <blp@ovn.org>
> ---
>  ovn/controller/binding.c        | 75 ++++++++++++++++++++++++++-----
>  ovn/controller/binding.h        |  7 ++-
>  ovn/controller/lflow.c          | 46 ++-----------------
>  ovn/controller/lflow.h          |  1 -
>  ovn/controller/ovn-controller.c | 41 ++++++-----------
>  ovn/controller/ovn-controller.h | 33 +++++++-------
>  ovn/controller/patch.c          | 98 ++++++++++++++++++++-------------------
>  ovn/controller/patch.h          |  5 +--
>  ovn/controller/physical.c       | 30 ++-----------
>  ovn/controller/physical.h       |  4 +-
>  tests/ovn-controller.at         | 19 +++++++-
>  11 files changed, 165 insertions(+), 194 deletions(-)
>
> diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
> index fb76032..b53af98 100644
> --- a/ovn/controller/binding.c
> +++ b/ovn/controller/binding.c
>

<snip>


> @@ -342,7 +390,8 @@ consider_local_datapath(struct controller_ctx *ctx,
>          const char *chassis = smap_get(&binding_rec->options,
>                                         "l3gateway-chassis");
>          if (!strcmp(chassis, chassis_rec->name) && ctx->ovnsb_idl_txn) {
> -            add_local_datapath(local_datapaths, binding_rec);
> +            add_local_datapath(ldatapaths, lports, binding_rec->datapath,
> +                               false, local_datapaths);
>

Why did you set has_local_l3gateway to false here, then change
it to true in patch 9?
I don't see the harm in setting it to true here.


>          }
>      } else if (chassis_rec && binding_rec->chassis == chassis_rec) {
>          if (ctx->ovnsb_idl_txn) {
>

<snip>


> --- a/ovn/controller/ovn-controller.c
> +++ b/ovn/controller/ovn-controller.c
>

There is a fragment of code to destroy patched_datapaths that
was deleted at the end of patch 5. It should not have been deleted
in patch 5, but it should be deleted here.

Mickey
Ben Pfaff Dec. 6, 2016, 5:35 p.m. UTC | #2
On Mon, Dec 05, 2016 at 05:45:15PM -0800, Mickey Spiegel wrote:
> On Sun, Dec 4, 2016 at 11:17 PM, Ben Pfaff <blp@ovn.org> wrote:
> 
> > On a particular hypervisor, ovn-controller only needs to handle ports
> > and datapaths that have some relationship with it, that is, the
> > ports that actually reside on the hypervisor, plus all the other ports on
> > those ports' datapaths, plus all of the ports and datapaths that are
> > reachable from those via logical patch ports.  Until now, ovn-controller
> > has done a poor job of limiting what it deals with to this set.  This
> > commit improves the situation.
> >
> > This commit gets rid of the concept of a "patched_datapath" which until now
> > was used to represent any datapath that contained a logical patch port.
> > Previously, the concept of a "local_datapath" meant a datapath with a VIF
> > that resides on the local hypervisor.  This commit extends that concept to
> > include any other datapath that can be reached from a VIF on the local
> > hypervisor, which is a simplification that makes the code easier to
> > understand in a few places.
> >
> 
> I like the change to remove "patched_datapath" and consolidate into
> the concept of "local_datapath". Among other issues, it solves my
> localnet problem for distributed NAT.
> 
> The big question is the relationship with datapaths of interest. Both
> this and datapaths of interest trigger a computation from
> add_local_datapath, to find other datapaths that can be reached on
> the local hypervisor. At a high level, the difference is that this
> computation is based on patch ports from a datapath, while the
> datapaths of interest computation is based on a list of neighbor
> datapaths that is populated by northd into each datapath_binding.
> Note that northd populates related_datapaths based on patch
> ports, so in the end it is the same source of information, but
> different parts of the computation are done in different places at
> different times.
> 
> If it were only desired to do conditional monitoring of logical_flow,
> then related_datapaths would not be necessary at all.
> Local_datapath, as computed in this patch, could be used to
> trigger the necessary logical flow clauses.
> 
> With the desire to do conditional monitoring of port_binding,
> things get more complicated. The computation in this patch
> depends on having the port_bindings for other datapaths,
> leading to a chicken and egg problem. It seems like northd
> population of related_datapaths is necessary for conditional
> monitoring of port_bindings. This leads to the question of
> whether local_datapaths should be computed as in this
> patch, or whether there should be a single computation
> over related_datapaths to determine local_datapaths, which
> would then be used to trigger conditional monitoring clauses
> for datapaths. Is it safe for this computation to depend on
> partial computation from northd? Or are there any timing
> windows that might lead to corner cases?
> 
> Depending on the answer to this question, patches 2 and 5 in
> this patch set may or may not be necessary.

You bring up reasonable questions.

I need to review the latest version of the "datapaths of interest"
patch, because I'm not very familiar with it anymore.  I recognize that
it probably conflicts with this series.

> > @@ -342,7 +390,8 @@ consider_local_datapath(struct controller_ctx *ctx,
> >          const char *chassis = smap_get(&binding_rec->options,
> >                                         "l3gateway-chassis");
> >          if (!strcmp(chassis, chassis_rec->name) && ctx->ovnsb_idl_txn) {
> > -            add_local_datapath(local_datapaths, binding_rec);
> > +            add_local_datapath(ldatapaths, lports, binding_rec->datapath,
> > +                               false, local_datapaths);
> >
> 
> Why did you set has_local_l3gateway to false here, then change
> it to true in patch 9?
> I don't see the harm in setting it to true here.

This was a mistake.  I've now moved this from patch 9 into patch 6.

> There is a fragment of code to destroy patched_datapaths that
> was deleted at the end of patch 5. It should not have been deleted
> in patch 5, but it should be deleted here.

Thanks, I've now fixed this too.

Patch

diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
index fb76032..b53af98 100644
--- a/ovn/controller/binding.c
+++ b/ovn/controller/binding.c
@@ -105,17 +105,61 @@  get_local_iface_ids(const struct ovsrec_bridge *br_int,
 }
 
 static void
-add_local_datapath(struct hmap *local_datapaths,
-        const struct sbrec_port_binding *binding_rec)
+add_local_datapath__(const struct ldatapath_index *ldatapaths,
+                     const struct lport_index *lports,
+                     const struct sbrec_datapath_binding *datapath,
+                     bool has_local_l3gateway, int depth,
+                     struct hmap *local_datapaths)
 {
-    if (get_local_datapath(local_datapaths,
-                           binding_rec->datapath->tunnel_key)) {
+    uint32_t dp_key = datapath->tunnel_key;
+
+    struct local_datapath *ld = get_local_datapath(local_datapaths, dp_key);
+    if (ld) {
+        if (has_local_l3gateway) {
+            ld->has_local_l3gateway = true;
+        }
         return;
     }
 
-    struct local_datapath *ld = xzalloc(sizeof *ld);
-    hmap_insert(local_datapaths, &ld->hmap_node,
-                binding_rec->datapath->tunnel_key);
+    ld = xzalloc(sizeof *ld);
+    hmap_insert(local_datapaths, &ld->hmap_node, dp_key);
+    ld->datapath = datapath;
+    ld->ldatapath = ldatapath_lookup_by_key(ldatapaths, dp_key);
+    ovs_assert(ld->ldatapath);
+    ld->localnet_port = NULL;
+    ld->has_local_l3gateway = has_local_l3gateway;
+
+    if (depth >= 100) {
+        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
+        VLOG_WARN_RL(&rl, "datapaths nested too deep");
+        return;
+    }
+
+    /* Recursively add logical datapaths to which this one patches. */
+    for (size_t i = 0; i < ld->ldatapath->n_lports; i++) {
+        const struct sbrec_port_binding *pb = ld->ldatapath->lports[i];
+        if (!strcmp(pb->type, "patch")) {
+            const char *peer_name = smap_get(&pb->options, "peer");
+            if (peer_name) {
+                const struct sbrec_port_binding *peer = lport_lookup_by_name(
+                    lports, peer_name);
+                if (peer && peer->datapath) {
+                    add_local_datapath__(ldatapaths, lports, peer->datapath,
+                                         false, depth + 1, local_datapaths);
+                }
+            }
+        }
+    }
+}
+
+static void
+add_local_datapath(const struct ldatapath_index *ldatapaths,
+                   const struct lport_index *lports,
+                   const struct sbrec_datapath_binding *datapath,
+                   bool has_local_l3gateway, struct hmap *local_datapaths)
+{
+    add_local_datapath__(ldatapaths, lports, datapath, has_local_l3gateway, 0,
+                         local_datapaths);
 }
 
 static void
@@ -276,6 +320,8 @@  setup_qos(const char *egress_iface, struct hmap *queue_map)
 
 static void
 consider_local_datapath(struct controller_ctx *ctx,
+                        const struct ldatapath_index *ldatapaths,
+                        const struct lport_index *lports,
                         const struct sbrec_chassis *chassis_rec,
                         const struct sbrec_port_binding *binding_rec,
                         struct hmap *qos_map,
@@ -293,7 +339,8 @@  consider_local_datapath(struct controller_ctx *ctx,
             /* Add child logical port to the set of all local ports. */
             sset_add(all_lports, binding_rec->logical_port);
         }
-        add_local_datapath(local_datapaths, binding_rec);
+        add_local_datapath(ldatapaths, lports, binding_rec->datapath,
+                           false, local_datapaths);
         if (iface_rec && qos_map && ctx->ovs_idl_txn) {
             get_qos_params(binding_rec, qos_map);
         }
@@ -328,7 +375,8 @@  consider_local_datapath(struct controller_ctx *ctx,
         }
 
         sset_add(all_lports, binding_rec->logical_port);
-        add_local_datapath(local_datapaths, binding_rec);
+        add_local_datapath(ldatapaths, lports, binding_rec->datapath,
+                           false, local_datapaths);
         if (binding_rec->chassis == chassis_rec) {
             return;
         }
@@ -342,7 +390,8 @@  consider_local_datapath(struct controller_ctx *ctx,
         const char *chassis = smap_get(&binding_rec->options,
                                        "l3gateway-chassis");
         if (!strcmp(chassis, chassis_rec->name) && ctx->ovnsb_idl_txn) {
-            add_local_datapath(local_datapaths, binding_rec);
+            add_local_datapath(ldatapaths, lports, binding_rec->datapath,
+                               false, local_datapaths);
         }
     } else if (chassis_rec && binding_rec->chassis == chassis_rec) {
         if (ctx->ovnsb_idl_txn) {
@@ -364,7 +413,8 @@  consider_local_datapath(struct controller_ctx *ctx,
 
 void
 binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
-            const char *chassis_id, struct hmap *local_datapaths,
+            const char *chassis_id, const struct ldatapath_index *ldatapaths,
+            const struct lport_index *lports, struct hmap *local_datapaths,
             struct sset *all_lports)
 {
     const struct sbrec_chassis *chassis_rec;
@@ -388,7 +438,8 @@  binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
      * chassis and update the binding accordingly.  This includes both
      * directly connected logical ports and children of those ports. */
     SBREC_PORT_BINDING_FOR_EACH(binding_rec, ctx->ovnsb_idl) {
-        consider_local_datapath(ctx, chassis_rec, binding_rec,
+        consider_local_datapath(ctx, ldatapaths, lports,
+                                chassis_rec, binding_rec,
                                 sset_is_empty(&egress_ifaces) ? NULL :
                                 &qos_map, local_datapaths, &lport_to_iface,
                                 all_lports);
diff --git a/ovn/controller/binding.h b/ovn/controller/binding.h
index dd764f2..0bb293d 100644
--- a/ovn/controller/binding.h
+++ b/ovn/controller/binding.h
@@ -1,4 +1,4 @@ 
-/* Copyright (c) 2015 Nicira, Inc.
+/* Copyright (c) 2015, 2016 Nicira, Inc.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
@@ -21,6 +21,8 @@ 
 
 struct controller_ctx;
 struct hmap;
+struct ldatapath_index;
+struct lport_index;
 struct ovsdb_idl;
 struct ovsrec_bridge;
 struct simap;
@@ -28,7 +30,8 @@  struct sset;
 
 void binding_register_ovs_idl(struct ovsdb_idl *);
 void binding_run(struct controller_ctx *, const struct ovsrec_bridge *br_int,
-                 const char *chassis_id, struct hmap *local_datapaths,
+                 const char *chassis_id, const struct ldatapath_index *,
+                 const struct lport_index *, struct hmap *local_datapaths,
                  struct sset *all_lports);
 bool binding_cleanup(struct controller_ctx *, const char *chassis_id);
 
diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
index 4e67365..d913998 100644
--- a/ovn/controller/lflow.c
+++ b/ovn/controller/lflow.c
@@ -68,7 +68,6 @@  static void consider_logical_flow(const struct lport_index *lports,
                                   const struct mcgroup_index *mcgroups,
                                   const struct sbrec_logical_flow *lflow,
                                   const struct hmap *local_datapaths,
-                                  const struct hmap *patched_datapaths,
                                   struct group_table *group_table,
                                   const struct simap *ct_zones,
                                   struct hmap *dhcp_opts_p,
@@ -111,7 +110,6 @@  static void
 add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports,
                   const struct mcgroup_index *mcgroups,
                   const struct hmap *local_datapaths,
-                  const struct hmap *patched_datapaths,
                   struct group_table *group_table,
                   const struct simap *ct_zones,
                   struct hmap *flow_table,
@@ -137,7 +135,7 @@  add_logical_flows(struct controller_ctx *ctx, const struct lport_index *lports,
 
     SBREC_LOGICAL_FLOW_FOR_EACH (lflow, ctx->ovnsb_idl) {
         consider_logical_flow(lports, mcgroups, lflow, local_datapaths,
-                              patched_datapaths, group_table, ct_zones,
+                              group_table, ct_zones,
                               &dhcp_opts, &dhcpv6_opts, &conj_id_ofs,
                               flow_table, expr_address_sets_p);
     }
@@ -151,7 +149,6 @@  consider_logical_flow(const struct lport_index *lports,
                       const struct mcgroup_index *mcgroups,
                       const struct sbrec_logical_flow *lflow,
                       const struct hmap *local_datapaths,
-                      const struct hmap *patched_datapaths,
                       struct group_table *group_table,
                       const struct simap *ct_zones,
                       struct hmap *dhcp_opts_p,
@@ -167,41 +164,8 @@  consider_logical_flow(const struct lport_index *lports,
     if (!ldp) {
         return;
     }
-    if (is_switch(ldp)) {
-        /* For a logical switch datapath, local_datapaths tells us if there
-         * are any local ports for this datapath.  If not, we can skip
-         * processing logical flows if that logical switch datapath is not
-         * patched to any logical router.
-         *
-         * Otherwise, we still need both ingress and egress pipeline
-         * because even if there are no local ports, we still may need to
-         * execute the ingress pipeline after a packet leaves a logical
-         * router and we need to do egress pipeline for a switch that
-         * is connected to only routers.  Further optimization is possible,
-         * but not based on what we know with local_datapaths right now.
-         *
-         * A better approach would be a kind of "flood fill" algorithm:
-         *
-         *   1. Initialize set S to the logical datapaths that have a port
-         *      located on the hypervisor.
-         *
-         *   2. For each patch port P in a logical datapath in S, add the
-         *      logical datapath of the remote end of P to S.  Iterate
-         *      until S reaches a fixed point.
-         *
-         * This can be implemented in northd, which can generate the sets and
-         * save it on each port-binding record in SB, and ovn-controller can
-         * use the information directly. However, there can be update storms
-         * when a pair of patch ports are added/removed to connect/disconnect
-         * large lrouters and lswitches. This need to be studied further.
-         */
-
-        if (!get_local_datapath(local_datapaths, ldp->tunnel_key)) {
-            if (!get_patched_datapath(patched_datapaths,
-                                      ldp->tunnel_key)) {
-                return;
-            }
-        }
+    if (!get_local_datapath(local_datapaths, ldp->tunnel_key)) {
+        return;
     }
 
     /* Determine translation of logical table IDs to physical table IDs. */
@@ -410,7 +374,6 @@  void
 lflow_run(struct controller_ctx *ctx, const struct lport_index *lports,
           const struct mcgroup_index *mcgroups,
           const struct hmap *local_datapaths,
-          const struct hmap *patched_datapaths,
           struct group_table *group_table,
           const struct simap *ct_zones,
           struct hmap *flow_table)
@@ -419,8 +382,7 @@  lflow_run(struct controller_ctx *ctx, const struct lport_index *lports,
 
     update_address_sets(ctx, &expr_address_sets);
     add_logical_flows(ctx, lports, mcgroups, local_datapaths,
-                      patched_datapaths, group_table, ct_zones, flow_table,
-                      &expr_address_sets);
+                      group_table, ct_zones, flow_table, &expr_address_sets);
     add_neighbor_flows(ctx, lports, flow_table);
 
     expr_macros_destroy(&expr_address_sets);
diff --git a/ovn/controller/lflow.h b/ovn/controller/lflow.h
index d3ca5d1..6305ce0 100644
--- a/ovn/controller/lflow.h
+++ b/ovn/controller/lflow.h
@@ -64,7 +64,6 @@  void lflow_init(void);
 void lflow_run(struct controller_ctx *, const struct lport_index *,
                const struct mcgroup_index *,
                const struct hmap *local_datapaths,
-               const struct hmap *patched_datapaths,
                struct group_table *group_table,
                const struct simap *ct_zones,
                struct hmap *flow_table);
diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
index 5b95a55..ff48a94 100644
--- a/ovn/controller/ovn-controller.c
+++ b/ovn/controller/ovn-controller.c
@@ -78,16 +78,6 @@  get_local_datapath(const struct hmap *local_datapaths, uint32_t tunnel_key)
             : NULL);
 }
 
-struct patched_datapath *
-get_patched_datapath(const struct hmap *patched_datapaths, uint32_t tunnel_key)
-{
-    struct hmap_node *node = hmap_first_with_hash(patched_datapaths,
-                                                  tunnel_key);
-    return (node
-            ? CONTAINER_OF(node, struct patched_datapath, hmap_node)
-            : NULL);
-}
-
 const struct sbrec_chassis *
 get_chassis(struct ovsdb_idl *ovnsb_idl, const char *chassis_id)
 {
@@ -230,13 +220,12 @@  get_ovnsb_remote(struct ovsdb_idl *ovs_idl)
 }
 
 static void
-update_ct_zones(struct sset *lports, struct hmap *patched_datapaths,
+update_ct_zones(struct sset *lports, const struct hmap *local_datapaths,
                 struct simap *ct_zones, unsigned long *ct_zone_bitmap,
                 struct shash *pending_ct_zones)
 {
     struct simap_node *ct_zone, *ct_zone_next;
     int scan_start = 1;
-    struct patched_datapath *pd;
     const char *user;
     struct sset all_users = SSET_INITIALIZER(&all_users);
 
@@ -245,13 +234,14 @@  update_ct_zones(struct sset *lports, struct hmap *patched_datapaths,
     }
 
     /* Local patched datapath (gateway routers) need zones assigned. */
-    HMAP_FOR_EACH(pd, hmap_node, patched_datapaths) {
-        if (!pd->local) {
+    const struct local_datapath *ld;
+    HMAP_FOR_EACH (ld, hmap_node, local_datapaths) {
+        if (!ld->has_local_l3gateway) {
             continue;
         }
 
-        char *dnat = alloc_nat_zone_key(&pd->key, "dnat");
-        char *snat = alloc_nat_zone_key(&pd->key, "snat");
+        char *dnat = alloc_nat_zone_key(&ld->datapath->header_.uuid, "dnat");
+        char *snat = alloc_nat_zone_key(&ld->datapath->header_.uuid, "snat");
         sset_add(&all_users, dnat);
         sset_add(&all_users, snat);
         free(dnat);
@@ -502,11 +492,8 @@  main(int argc, char *argv[])
 
         update_probe_interval(&ctx);
 
-        /* Contains "struct local_datapath" nodes whose hash values are the
-         * tunnel_key of datapaths with at least one local port binding. */
+        /* Contains "struct local_datapath" nodes. */
         struct hmap local_datapaths = HMAP_INITIALIZER(&local_datapaths);
-
-        struct hmap patched_datapaths = HMAP_INITIALIZER(&patched_datapaths);
         struct sset all_lports = SSET_INITIALIZER(&all_lports);
 
         const struct ovsrec_bridge *br_int = get_br_int(&ctx);
@@ -523,31 +510,29 @@  main(int argc, char *argv[])
         if (chassis_id) {
             chassis = chassis_run(&ctx, chassis_id, br_int);
             encaps_run(&ctx, br_int, chassis_id);
-            binding_run(&ctx, br_int, chassis_id, &local_datapaths,
-                        &all_lports);
+            binding_run(&ctx, br_int, chassis_id, &ldatapaths, &lports,
+                        &local_datapaths, &all_lports);
         }
 
         if (br_int && chassis) {
-            patch_run(&ctx, br_int, chassis_id, &local_datapaths,
-                      &patched_datapaths);
+            patch_run(&ctx, br_int, chassis_id, &local_datapaths);
 
             enum mf_field_id mff_ovn_geneve = ofctrl_run(br_int,
                                                          &pending_ct_zones);
 
             pinctrl_run(&ctx, &lports, br_int, chassis_id, &local_datapaths);
-            update_ct_zones(&all_lports, &patched_datapaths, &ct_zones,
+            update_ct_zones(&all_lports, &local_datapaths, &ct_zones,
                             ct_zone_bitmap, &pending_ct_zones);
             if (ctx.ovs_idl_txn) {
                 commit_ct_zones(br_int, &pending_ct_zones);
 
                 struct hmap flow_table = HMAP_INITIALIZER(&flow_table);
                 lflow_run(&ctx, &lports, &mcgroups, &local_datapaths,
-                          &patched_datapaths, &group_table, &ct_zones,
-                          &flow_table);
+                          &group_table, &ct_zones, &flow_table);
 
                 physical_run(&ctx, mff_ovn_geneve,
                              br_int, chassis_id, &ct_zones, &flow_table,
-                             &local_datapaths, &patched_datapaths);
+                             &local_datapaths);
 
                 ofctrl_put(&flow_table, &pending_ct_zones,
                            get_nb_cfg(ctx.ovnsb_idl));
diff --git a/ovn/controller/ovn-controller.h b/ovn/controller/ovn-controller.h
index 8a3e855..4bc0467 100644
--- a/ovn/controller/ovn-controller.h
+++ b/ovn/controller/ovn-controller.h
@@ -46,30 +46,31 @@  struct ct_zone_pending_entry {
     enum ct_zone_pending_state state;
 };
 
-/* Contains hmap_node whose hash values are the tunnel_key of datapaths
- * with at least one local port binding. It also stores the port binding of
- * "localnet" port if such a port exists on the datapath, which indicates
- * physical network should be used for inter-chassis communication through
- * the localnet port */
+/* A logical datapath that has some relevance to this hypervisor.  A logical
+ * datapath D is relevant to hypervisor H if:
+ *
+ *     - Some VIF or l2gateway or l3gateway port in D is located on H.
+ *
+ *     - D is reachable over a series of hops across patch ports, starting from
+ *       a datapath relevant to H.
+ *
+ * The 'hmap_node''s hash value is 'datapath->tunnel_key'. */
 struct local_datapath {
     struct hmap_node hmap_node;
+    const struct sbrec_datapath_binding *datapath;
+    const struct ldatapath *ldatapath;
+
+    /* The localnet port in this datapath, if any (at most one is allowed). */
     const struct sbrec_port_binding *localnet_port;
+
+    /* True if this datapath contains an l3gateway port located on this
+     * hypervisor. */
+    bool has_local_l3gateway;
 };
 
 struct local_datapath *get_local_datapath(const struct hmap *,
                                           uint32_t tunnel_key);
 
-/* Contains hmap_node whose hash values are the tunnel_key of datapaths
- * with at least one logical patch port binding. */
-struct patched_datapath {
-    struct hmap_node hmap_node;
-    struct uuid key;   /* UUID of the corresponding datapath. */
-    bool local; /* 'True' if the datapath is for gateway router. */
-};
-
-struct patched_datapath *get_patched_datapath(const struct hmap *,
-                                              uint32_t tunnel_key);
-
 const struct ovsrec_bridge *get_bridge(struct ovsdb_idl *,
                                        const char *br_name);
 
diff --git a/ovn/controller/patch.c b/ovn/controller/patch.c
index 79a7d81..ce82b89 100644
--- a/ovn/controller/patch.c
+++ b/ovn/controller/patch.c
@@ -184,9 +184,6 @@  add_bridge_mappings(struct controller_ctx *ctx,
                 = get_local_datapath(local_datapaths,
                                      binding->datapath->tunnel_key);
             if (!ld) {
-                /* This localnet port is on a datapath with no
-                 * logical ports bound to this chassis, so there's no need
-                 * to create patch ports for it. */
                 continue;
             }
 
@@ -250,30 +247,11 @@  add_bridge_mappings(struct controller_ctx *ctx,
     shash_destroy(&bridge_mappings);
 }
 
-static void
-add_patched_datapath(struct hmap *patched_datapaths,
-                     const struct sbrec_port_binding *binding_rec, bool local)
-{
-    struct patched_datapath *pd = get_patched_datapath(patched_datapaths,
-                                       binding_rec->datapath->tunnel_key);
-    if (pd) {
-        return;
-    }
-
-    pd = xzalloc(sizeof *pd);
-    pd->local = local;
-    pd->key = binding_rec->datapath->header_.uuid;
-    hmap_insert(patched_datapaths, &pd->hmap_node,
-                binding_rec->datapath->tunnel_key);
-}
-
 /* Add one OVS patch port for each OVN logical patch port.
  *
- * This is suboptimal for several reasons.  First, it creates an OVS port for
- * every OVN logical patch port, not just for the ones that are actually useful
- * on this hypervisor.  Second, it's wasteful to create an OVS patch port per
- * OVN logical patch port, when really there's no benefit to them beyond a way
- * to identify how a packet ingressed into a logical datapath.
+ * It's wasteful to create an OVS patch port per OVN logical patch port, when
+ * really there's no benefit to them beyond a way to identify how a packet
+ * ingressed into a logical datapath.
  *
  * There are two obvious ways to improve the situation here, by modifying
  * OVS:
@@ -293,8 +271,8 @@  static void
 add_logical_patch_ports(struct controller_ctx *ctx,
                         const struct ovsrec_bridge *br_int,
                         const char *local_chassis_id,
-                        struct shash *existing_ports,
-                        struct hmap *patched_datapaths)
+                        struct hmap *local_datapaths,
+                        struct shash *existing_ports)
 {
     const struct sbrec_chassis *chassis_rec;
     chassis_rec = get_chassis(ctx->ovnsb_idl, local_chassis_id);
@@ -302,38 +280,39 @@  add_logical_patch_ports(struct controller_ctx *ctx,
         return;
     }
 
-    const struct sbrec_port_binding *binding;
-    SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
-        const char *patch_port_id = "ovn-logical-patch-port";
-        bool local_port = false;
-        if (!strcmp(binding->type, "l3gateway")) {
-            const char *chassis = smap_get(&binding->options,
-                                           "l3gateway-chassis");
-            if (chassis && !strcmp(local_chassis_id, chassis)) {
-                local_port = true;
-                patch_port_id = "ovn-l3gateway-port";
-            }
-        }
-
-        if (!strcmp(binding->type, "patch") || local_port) {
-            const char *local = binding->logical_port;
-            const char *peer = smap_get(&binding->options, "peer");
-            if (!peer) {
-                continue;
+    const struct local_datapath *ld;
+    HMAP_FOR_EACH (ld, hmap_node, local_datapaths) {
+        for (size_t i = 0; i < ld->ldatapath->n_lports; i++) {
+            const struct sbrec_port_binding *pb = ld->ldatapath->lports[i];
+            const char *patch_port_id = "ovn-logical-patch-port";
+
+            bool is_local_l3gateway = false;
+            if (!strcmp(pb->type, "l3gateway")) {
+                const char *chassis = smap_get(&pb->options,
+                                               "l3gateway-chassis");
+                if (chassis && !strcmp(local_chassis_id, chassis)) {
+                    is_local_l3gateway = true;
+                    patch_port_id = "ovn-l3gateway-port";
+                    if (pb->chassis != chassis_rec && ctx->ovnsb_idl_txn) {
+                        sbrec_port_binding_set_chassis(pb, chassis_rec);
+                    }
+                }
             }
 
-            char *src_name = patch_port_name(local, peer);
-            char *dst_name = patch_port_name(peer, local);
-            create_patch_port(ctx, patch_port_id, local,
-                              br_int, src_name, br_int, dst_name,
-                              existing_ports);
-            free(dst_name);
-            free(src_name);
-            add_patched_datapath(patched_datapaths, binding, local_port);
-            if (local_port) {
-                if (binding->chassis != chassis_rec && ctx->ovnsb_idl_txn) {
-                    sbrec_port_binding_set_chassis(binding, chassis_rec);
+            if (!strcmp(pb->type, "patch") || is_local_l3gateway) {
+                const char *local = pb->logical_port;
+                const char *peer = smap_get(&pb->options, "peer");
+                if (!peer) {
+                    continue;
                 }
+
+                char *src_name = patch_port_name(local, peer);
+                char *dst_name = patch_port_name(peer, local);
+                create_patch_port(ctx, patch_port_id, local,
+                                  br_int, src_name, br_int, dst_name,
+                                  existing_ports);
+                free(dst_name);
+                free(src_name);
             }
         }
     }
@@ -341,8 +320,7 @@  add_logical_patch_ports(struct controller_ctx *ctx,
 
 void
 patch_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
-          const char *chassis_id, struct hmap *local_datapaths,
-          struct hmap *patched_datapaths)
+          const char *chassis_id, struct hmap *local_datapaths)
 {
     if (!ctx->ovs_idl_txn) {
         return;
@@ -365,8 +343,8 @@  patch_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
      * should be there. */
     add_bridge_mappings(ctx, br_int, &existing_ports, local_datapaths,
                         chassis_id);
-    add_logical_patch_ports(ctx, br_int, chassis_id, &existing_ports,
-                            patched_datapaths);
+    add_logical_patch_ports(ctx, br_int, chassis_id, local_datapaths,
+                            &existing_ports);
 
     /* Now 'existing_ports' only still contains patch ports that exist in the
      * database but shouldn't.  Delete them from the database. */
diff --git a/ovn/controller/patch.h b/ovn/controller/patch.h
index 7920a48..12ebeb9 100644
--- a/ovn/controller/patch.h
+++ b/ovn/controller/patch.h
@@ -1,4 +1,4 @@ 
-/* Copyright (c) 2015 Nicira, Inc.
+/* Copyright (c) 2015, 2016 Nicira, Inc.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
@@ -27,7 +27,6 @@  struct hmap;
 struct ovsrec_bridge;
 
 void patch_run(struct controller_ctx *, const struct ovsrec_bridge *br_int,
-               const char *chassis_id, struct hmap *local_datapaths,
-               struct hmap *patched_datapaths);
+               const char *chassis_id, struct hmap *local_datapaths);
 
 #endif /* ovn/patch.h */
diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
index 48adb78..41673e5 100644
--- a/ovn/controller/physical.c
+++ b/ovn/controller/physical.c
@@ -157,36 +157,13 @@  static void
 consider_port_binding(enum mf_field_id mff_ovn_geneve,
                       const struct simap *ct_zones,
                       struct hmap *local_datapaths,
-                      struct hmap *patched_datapaths,
                       const struct sbrec_port_binding *binding,
                       struct ofpbuf *ofpacts_p,
                       struct hmap *flow_table)
 {
-    /* Skip the port binding if the port is on a datapath that is neither
-     * local nor with any logical patch port connected, because local ports
-     * would never need to talk to those ports.
-     *
-     * Even with this approach there could still be unnecessary port
-     * bindings processed. A better approach would be a kind of "flood
-     * fill" algorithm:
-     *
-     *   1. Initialize set S to the logical datapaths that have a port
-     *      located on the hypervisor.
-     *
-     *   2. For each patch port P in a logical datapath in S, add the
-     *      logical datapath of the remote end of P to S.  Iterate
-     *      until S reaches a fixed point.
-     *
-     * This can be implemented in northd, which can generate the sets and
-     * save it on each port-binding record in SB, and ovn-controller can
-     * use the information directly. However, there can be update storms
-     * when a pair of patch ports are added/removed to connect/disconnect
-     * large lrouters and lswitches. This need to be studied further.
-     */
     uint32_t dp_key = binding->datapath->tunnel_key;
     uint32_t port_key = binding->tunnel_key;
-    if (!get_local_datapath(local_datapaths, dp_key)
-        && !get_patched_datapath(patched_datapaths, dp_key)) {
+    if (!get_local_datapath(local_datapaths, dp_key)) {
         return;
     }
 
@@ -631,7 +608,7 @@  void
 physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
              const struct ovsrec_bridge *br_int, const char *this_chassis_id,
              const struct simap *ct_zones, struct hmap *flow_table,
-             struct hmap *local_datapaths, struct hmap *patched_datapaths)
+             struct hmap *local_datapaths)
 {
 
     /* This bool tracks physical mapping changes. */
@@ -782,8 +759,7 @@  physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
     const struct sbrec_port_binding *binding;
     SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
         consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
-                              patched_datapaths, binding, &ofpacts,
-                              flow_table);
+                              binding, &ofpacts, flow_table);
     }
 
     /* Handle output to multicast groups, in tables 32 and 33. */
diff --git a/ovn/controller/physical.h b/ovn/controller/physical.h
index 86ce93c..ab8a9d2 100644
--- a/ovn/controller/physical.h
+++ b/ovn/controller/physical.h
@@ -1,4 +1,4 @@ 
-/* Copyright (c) 2015 Nicira, Inc.
+/* Copyright (c) 2015, 2016 Nicira, Inc.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
@@ -44,6 +44,6 @@  void physical_register_ovs_idl(struct ovsdb_idl *);
 void physical_run(struct controller_ctx *, enum mf_field_id mff_ovn_geneve,
                   const struct ovsrec_bridge *br_int, const char *chassis_id,
                   const struct simap *ct_zones, struct hmap *flow_table,
-                  struct hmap *local_datapaths, struct hmap *patched_datapaths);
+                  struct hmap *local_datapaths);
 
 #endif /* ovn/physical.h */
diff --git a/tests/ovn-controller.at b/tests/ovn-controller.at
index 60a6760..8ad8f67 100644
--- a/tests/ovn-controller.at
+++ b/tests/ovn-controller.at
@@ -86,17 +86,23 @@  check_patches \
     'br-int  patch-br-int-to-localnet2 patch-localnet2-to-br-int' \
     'br-eth0 patch-localnet2-to-br-int patch-br-int-to-localnet2'
 
-# Add logical patch ports.
+# Add logical patch ports to connect new logical datapath.
+#
+# (Also add a vif on this hypervisor on one of the datapaths,
+# otherwise the hypervisor will ignore both of them.)
 AT_CHECK([ovn-sbctl \
     -- --id=@dp1 create Datapath_Binding tunnel_key=1 \
     -- --id=@dp2 create Datapath_Binding tunnel_key=2 \
     -- create Port_Binding datapath=@dp1 logical_port=foo tunnel_key=1 type=patch options:peer=bar \
     -- create Port_Binding datapath=@dp2 logical_port=bar tunnel_key=2 type=patch options:peer=foo \
+    -- create Port_Binding datapath=@dp1 logical_port=dp1vif tunnel_key=3 \
 | ${PERL} $srcdir/uuidfilt.pl], [0], [<0>
 <1>
 <2>
 <3>
+<4>
 ])
+ovs-vsctl add-port br-int dp1vif -- set Interface dp1vif external_ids:iface-id=dp1vif
 check_patches \
     'br-int  patch-br-int-to-localnet2 patch-localnet2-to-br-int' \
     'br-eth0 patch-localnet2-to-br-int patch-br-int-to-localnet2' \
@@ -120,6 +126,17 @@  check_patches \
     'br-int patch-quux-to-baz patch-baz-to-quux' \
     'br-int patch-baz-to-quux patch-quux-to-baz'
 
+# Drop the vif from the patched-together datapaths and verify that the patch
+# ports disappear.
+AT_CHECK([ovs-vsctl del-port dp1vif])
+check_patches
+
+# Put the vif back and the patch ports reappear.
+AT_CHECK([ovs-vsctl add-port br-int dp1vif -- set Interface dp1vif external_ids:iface-id=dp1vif])
+check_patches \
+    'br-int patch-quux-to-baz patch-baz-to-quux' \
+    'br-int patch-baz-to-quux patch-quux-to-baz'
+
 # Create an empty database, serve it and switch to it
 # and verify that the OVS patch ports disappear
 # then put it back and verify that they reappear