[ovs-dev,v4] OVN localport type support

Message ID 1494405345-24762-1-git-send-email-dalvarez@redhat.com
State Superseded

Commit Message

Daniel Alvarez Sanchez May 10, 2017, 8:35 a.m. UTC
This patch introduces a new type of OVN port called "localport".
These ports will be present on every hypervisor and may have the
same IP/MAC addresses. They are not bound to any chassis, and traffic
to these ports will never go through a tunnel.

Its main use case is OpenStack metadata API support, which relies
on a local agent running on every hypervisor and serving metadata to
VMs locally. This service is described in detail at [0].

An example to illustrate the purpose of this patch:

- One logical switch sw0 with 2 ports (p1, p2) and 1 localport (lp)
- Two hypervisors: HV1 and HV2
- p1 in HV1 (OVS port with external-id:iface-id="p1")
- p2 in HV2 (OVS port with external-id:iface-id="p2")
- lp in both hypervisors (OVS port with external-id:iface-id="lp")
- p1 should be able to reach p2 and vice versa
- lp on HV1 should be able to reach p1 but not p2
- lp on HV2 should be able to reach p2 but not p1

Explicit drop rules are inserted in table 32 with priority 150
in order to prevent traffic originating at a localport from going
over a tunnel.

[0] https://review.openstack.org/#/c/452811/

Signed-off-by: Daniel Alvarez <dalvarez@redhat.com>
---
 ovn/controller/binding.c        |   3 +-
 ovn/controller/ovn-controller.c |   2 +-
 ovn/controller/physical.c       |  34 ++++++++++-
 ovn/controller/physical.h       |   4 +-
 ovn/northd/ovn-northd.8.xml     |   8 +--
 ovn/northd/ovn-northd.c         |   6 +-
 ovn/ovn-architecture.7.xml      |  25 ++++++--
 ovn/ovn-nb.xml                  |   9 +++
 ovn/ovn-sb.xml                  |  14 +++++
 tests/ovn.at                    | 122 ++++++++++++++++++++++++++++++++++++++++
 10 files changed, 210 insertions(+), 17 deletions(-)

Comments

Ben Pfaff May 18, 2017, 10:16 p.m. UTC | #1
On Wed, May 10, 2017 at 08:35:45AM +0000, Daniel Alvarez wrote:
> This patch introduces a new type of OVN port called "localport".
> These ports will be present on every hypervisor and may have the
> same IP/MAC addresses. They are not bound to any chassis, and traffic
> to these ports will never go through a tunnel.
>
> Its main use case is OpenStack metadata API support, which relies
> on a local agent running on every hypervisor and serving metadata to
> VMs locally. This service is described in detail at [0].
> 
> An example to illustrate the purpose of this patch:
> 
> - One logical switch sw0 with 2 ports (p1, p2) and 1 localport (lp)
> - Two hypervisors: HV1 and HV2
> - p1 in HV1 (OVS port with external-id:iface-id="p1")
> - p2 in HV2 (OVS port with external-id:iface-id="p2")
> - lp in both hypervisors (OVS port with external-id:iface-id="lp")
> - p1 should be able to reach p2 and vice versa
> - lp on HV1 should be able to reach p1 but not p2
> - lp on HV2 should be able to reach p2 but not p1
> 
> Explicit drop rules are inserted in table 32 with priority 150
> in order to prevent traffic originating at a localport from going
> over a tunnel.
> 
> [0] https://review.openstack.org/#/c/452811/
> 
> Signed-off-by: Daniel Alvarez <dalvarez@redhat.com>

Thanks for working on this!

This seems reasonable, but I'm torn about one aspect of it.  I'm not
sure whether my concern is a kind of premature optimization or not, so
let me just describe it and we can discuss.

This adds code that iterates over every local lport (suppose there are N
of them), which is nested inside a function that is executed for every
port relevant to the local hypervisor (suppose there are M of them).
That's O(N*M) time.  But the inner loop is only doing something useful
for localport logical ports, and normally there would only be 1 or so of
those; at any rate a constant number.  So in theory this could run in
O(M) time.

I see at least two ways to fix the problem, if it's a problem, but I
don't know whether it's worth fixing.  Daniel?  Russell?  (Others?)
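
For instance, one of them might look roughly like this (an untested
sketch in the patch's own idiom; "struct localport_hint" is a name made
up here): build a per-datapath index of the localports once per
physical_run(), which is O(N), then probe it from consider_port_binding()
by datapath key, expected O(1), for O(M + N) overall:

    struct localport_hint {
        struct hmap_node hmap_node; /* In "localport_hints", by dp_key. */
        uint32_t dp_key;            /* Datapath tunnel key. */
        uint32_t port_key;          /* Localport tunnel key. */
    };

    /* In physical_run(), before iterating over the port bindings. */
    struct hmap localport_hints = HMAP_INITIALIZER(&localport_hints);
    const char *name;
    SSET_FOR_EACH (name, local_lports) {
        const struct sbrec_port_binding *pb
            = lport_lookup_by_name(lports, name);
        if (pb && !strcmp(pb->type, "localport")) {
            struct localport_hint *h = xmalloc(sizeof *h);
            h->dp_key = pb->datapath->tunnel_key;
            h->port_key = pb->tunnel_key;
            hmap_insert(&localport_hints, &h->hmap_node,
                        hash_int(h->dp_key, 0));
        }
    }

    /* In consider_port_binding(), replacing the loop over every local
     * lport. */
    struct localport_hint *h;
    HMAP_FOR_EACH_WITH_HASH (h, hmap_node, hash_int(dp_key, 0),
                             &localport_hints) {
        if (h->dp_key == dp_key) {
            /* Insert the priority-150 drop flow for h->port_key. */
        }
    }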

Thanks,

Ben.
Daniel Alvarez Sanchez May 19, 2017, 2:26 p.m. UTC | #2
Thanks a lot Ben for your review!

On Fri, May 19, 2017 at 12:16 AM, Ben Pfaff <blp@ovn.org> wrote:

> On Wed, May 10, 2017 at 08:35:45AM +0000, Daniel Alvarez wrote:
> > This patch introduces a new type of OVN port called "localport".
> > These ports will be present on every hypervisor and may have the
> > same IP/MAC addresses. They are not bound to any chassis, and traffic
> > to these ports will never go through a tunnel.
> >
> > Its main use case is OpenStack metadata API support, which relies
> > on a local agent running on every hypervisor and serving metadata to
> > VMs locally. This service is described in detail at [0].
> >
> > An example to illustrate the purpose of this patch:
> >
> > - One logical switch sw0 with 2 ports (p1, p2) and 1 localport (lp)
> > - Two hypervisors: HV1 and HV2
> > - p1 in HV1 (OVS port with external-id:iface-id="p1")
> > - p2 in HV2 (OVS port with external-id:iface-id="p2")
> > - lp in both hypervisors (OVS port with external-id:iface-id="lp")
> > - p1 should be able to reach p2 and vice versa
> > - lp on HV1 should be able to reach p1 but not p2
> > - lp on HV2 should be able to reach p2 but not p1
> >
> > Explicit drop rules are inserted in table 32 with priority 150
> > in order to prevent traffic originating at a localport from going
> > over a tunnel.
> >
> > [0] https://review.openstack.org/#/c/452811/
> >
> > Signed-off-by: Daniel Alvarez <dalvarez@redhat.com>
>
> Thanks for working on this!
>
> This seems reasonable, but I'm torn about one aspect of it.  I'm not
> sure whether my concern is a kind of premature optimization or not, so
> let me just describe it and we can discuss.
>
> This adds code that iterates over every local lport (suppose there are N
> of them), which is nested inside a function that is executed for every
> port relevant to the local hypervisor (suppose there are M of them).
> That's O(N*M) time.  But the inner loop is only doing something useful
> for localport logical ports, and normally there would only be 1 or so of
> those; at any rate a constant number.  So in theory this could run in
> O(M) time.
>
> I see at least two ways to fix the problem, if it's a problem, but I
> don't know whether it's worth fixing.  Daniel?  Russell?  (Others?)
>
Yes, I also thought about this, but I don't know either whether it's a
problem or not.
If we want to impose at most one logical localport per datapath, then we
would have O(M) time (mimicking localnet ports), but that's something I
didn't want to do unless you think it makes more sense.


> Thanks,
>
> Ben.
>
Miguel Angel Ajo May 23, 2017, 8:01 a.m. UTC | #3
If we foresee use cases with several localports per logical switch/chassis,
could one option be to allocate a bit in REG10 to mark localports,
and then have a single rule that matches on reg10 to drop output/forwarding
of packets?
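
A rough, untested sketch of that idea, reusing the helpers the patch
already uses (MLF_LOCALPORT and its bit position are invented here; a
real patch would need a bit that no existing MLF_* flag occupies):

    /* Hypothetical flag bit in MFF_LOG_FLAGS (OpenFlow register REG10). */
    #define MLF_LOCALPORT_BIT 4
    #define MLF_LOCALPORT (1 << MLF_LOCALPORT_BIT)

    /* When composing the ingress actions for a localport VIF, set the
     * flag (put_load() is the helper physical.c already uses to write
     * logical register fields). */
    put_load(1, MFF_LOG_FLAGS, MLF_LOCALPORT_BIT, 1, ofpacts_p);

    /* Then, in the "remote port connected by tunnel" branch, one
     * priority-150 drop flow per remote outport, with no inner loop over
     * local lports.  Matching the outport, rather than only the flag,
     * keeps table 32's fallback resubmit to table 33 working for local
     * deliveries from a localport. */
    match_init_catchall(&match);
    ofpbuf_clear(ofpacts_p);
    match_set_metadata(&match, htonll(dp_key));
    match_set_reg(&match, MFF_LOG_OUTPORT - MFF_REG0, port_key);
    match_set_reg_masked(&match, MFF_LOG_FLAGS - MFF_REG0,
                         MLF_LOCALPORT, MLF_LOCALPORT);
    ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 150, 0,
                    &match, ofpacts_p);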

Best,
Miguel Ángel Ajo



On Fri, May 19, 2017 at 4:26 PM, Daniel Alvarez Sanchez <dalvarez@redhat.com> wrote:

> Thanks a lot Ben for your review!
>
> On Fri, May 19, 2017 at 12:16 AM, Ben Pfaff <blp@ovn.org> wrote:
>
> > On Wed, May 10, 2017 at 08:35:45AM +0000, Daniel Alvarez wrote:
> > > This patch introduces a new type of OVN port called "localport".
> > > These ports will be present on every hypervisor and may have the
> > > same IP/MAC addresses. They are not bound to any chassis, and traffic
> > > to these ports will never go through a tunnel.
> > >
> > > Its main use case is OpenStack metadata API support, which relies
> > > on a local agent running on every hypervisor and serving metadata to
> > > VMs locally. This service is described in detail at [0].
> > >
> > > An example to illustrate the purpose of this patch:
> > >
> > > - One logical switch sw0 with 2 ports (p1, p2) and 1 localport (lp)
> > > - Two hypervisors: HV1 and HV2
> > > - p1 in HV1 (OVS port with external-id:iface-id="p1")
> > > - p2 in HV2 (OVS port with external-id:iface-id="p2")
> > > - lp in both hypervisors (OVS port with external-id:iface-id="lp")
> > > - p1 should be able to reach p2 and vice versa
> > > - lp on HV1 should be able to reach p1 but not p2
> > > - lp on HV2 should be able to reach p2 but not p1
> > >
> > > Explicit drop rules are inserted in table 32 with priority 150
> > > in order to prevent traffic originating at a localport from going
> > > over a tunnel.
> > >
> > > [0] https://review.openstack.org/#/c/452811/
> > >
> > > Signed-off-by: Daniel Alvarez <dalvarez@redhat.com>
> >
> > Thanks for working on this!
> >
> > This seems reasonable, but I'm torn about one aspect of it.  I'm not
> > sure whether my concern is a kind of premature optimization or not, so
> > let me just describe it and we can discuss.
> >
> > This adds code that iterates over every local lport (suppose there are N
> > of them), which is nested inside a function that is executed for every
> > port relevant to the local hypervisor (suppose there are M of them).
> > That's O(N*M) time.  But the inner loop is only doing something useful
> > for localport logical ports, and normally there would only be 1 or so of
> > those; at any rate a constant number.  So in theory this could run in
> > O(M) time.
> >
> > I see at least two ways to fix the problem, if it's a problem, but I
> > don't know whether it's worth fixing.  Daniel?  Russell?  (Others?)
> >
> Yes, I also thought about this, but I don't know either whether it's a
> problem or not.
> If we want to impose at most one logical localport per datapath, then we
> would have O(M) time (mimicking localnet ports), but that's something I
> didn't want to do unless you think it makes more sense.
>
>
> > Thanks,
> >
> > Ben.
> >
Daniel Alvarez Sanchez May 23, 2017, 1:13 p.m. UTC | #4
On Tue, May 23, 2017 at 10:01 AM, Miguel Angel Ajo Pelayo <majopela@redhat.com> wrote:

> If we foresee use cases with several localports per logical switch/chassis,
> could one option be to allocate a bit in REG10 to mark localports,
> and then have a single rule that matches on reg10 to drop output/forwarding
> of packets?
>
I like the idea... let's see what others say about this; I don't know how
strict we want to be about consuming bits from registers.
Thanks, Miguel, for the suggestion :)



> Best,
> Miguel Ángel Ajo
>
>
>
> On Fri, May 19, 2017 at 4:26 PM, Daniel Alvarez Sanchez <dalvarez@redhat.com> wrote:
>
>> Thanks a lot Ben for your review!
>>
>> On Fri, May 19, 2017 at 12:16 AM, Ben Pfaff <blp@ovn.org> wrote:
>>
>> > On Wed, May 10, 2017 at 08:35:45AM +0000, Daniel Alvarez wrote:
>> > > This patch introduces a new type of OVN port called "localport".
>> > > These ports will be present on every hypervisor and may have the
>> > > same IP/MAC addresses. They are not bound to any chassis, and traffic
>> > > to these ports will never go through a tunnel.
>> > >
>> > > Its main use case is OpenStack metadata API support, which relies
>> > > on a local agent running on every hypervisor and serving metadata to
>> > > VMs locally. This service is described in detail at [0].
>> > >
>> > > An example to illustrate the purpose of this patch:
>> > >
>> > > - One logical switch sw0 with 2 ports (p1, p2) and 1 localport (lp)
>> > > - Two hypervisors: HV1 and HV2
>> > > - p1 in HV1 (OVS port with external-id:iface-id="p1")
>> > > - p2 in HV2 (OVS port with external-id:iface-id="p2")
>> > > - lp in both hypervisors (OVS port with external-id:iface-id="lp")
>> > > - p1 should be able to reach p2 and vice versa
>> > > - lp on HV1 should be able to reach p1 but not p2
>> > > - lp on HV2 should be able to reach p2 but not p1
>> > >
>> > > Explicit drop rules are inserted in table 32 with priority 150
>> > > in order to prevent traffic originating at a localport from going
>> > > over a tunnel.
>> > >
>> > > [0] https://review.openstack.org/#/c/452811/
>> > >
>> > > Signed-off-by: Daniel Alvarez <dalvarez@redhat.com>
>> >
>> > Thanks for working on this!
>> >
>> > This seems reasonable, but I'm torn about one aspect of it.  I'm not
>> > sure whether my concern is a kind of premature optimization or not, so
>> > let me just describe it and we can discuss.
>> >
>> > This adds code that iterates over every local lport (suppose there are N
>> > of them), which is nested inside a function that is executed for every
>> > port relevant to the local hypervisor (suppose there are M of them).
>> > That's O(N*M) time.  But the inner loop is only doing something useful
>> > for localport logical ports, and normally there would only be 1 or so of
>> > those; at any rate a constant number.  So in theory this could run in
>> > O(M) time.
>> >
>> > I see at least two ways to fix the problem, if it's a problem, but I
>> > don't know whether it's worth fixing.  Daniel?  Russell?  (Others?)
>> >
>> Yes, I also thought about this, but I don't know either whether it's a
>> problem or not.
>> If we want to impose at most one logical localport per datapath, then we
>> would have O(M) time (mimicking localnet ports), but that's something I
>> didn't want to do unless you think it makes more sense.
>>
>>
>> > Thanks,
>> >
>> > Ben.
>> >

Patch

diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
index 95e9deb..83a7543 100644
--- a/ovn/controller/binding.c
+++ b/ovn/controller/binding.c
@@ -380,7 +380,8 @@  consider_local_datapath(struct controller_ctx *ctx,
         if (iface_rec && qos_map && ctx->ovs_idl_txn) {
             get_qos_params(binding_rec, qos_map);
         }
-        our_chassis = true;
+        if (strcmp(binding_rec->type, "localport"))
+            our_chassis = true;
     } else if (!strcmp(binding_rec->type, "l2gateway")) {
         const char *chassis_id = smap_get(&binding_rec->options,
                                           "l2gateway-chassis");
diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
index f22551d..0f4dd35 100644
--- a/ovn/controller/ovn-controller.c
+++ b/ovn/controller/ovn-controller.c
@@ -655,7 +655,7 @@  main(int argc, char *argv[])
 
                     physical_run(&ctx, mff_ovn_geneve,
                                  br_int, chassis, &ct_zones, &lports,
-                                 &flow_table, &local_datapaths);
+                                 &flow_table, &local_datapaths, &local_lports);
 
                     ofctrl_put(&flow_table, &pending_ct_zones,
                                get_nb_cfg(ctx.ovnsb_idl));
diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
index 457fc45..c98b305 100644
--- a/ovn/controller/physical.c
+++ b/ovn/controller/physical.c
@@ -293,7 +293,8 @@  consider_port_binding(enum mf_field_id mff_ovn_geneve,
                       const struct sbrec_port_binding *binding,
                       const struct sbrec_chassis *chassis,
                       struct ofpbuf *ofpacts_p,
-                      struct hmap *flow_table)
+                      struct hmap *flow_table,
+                      const struct sset *local_lports)
 {
     uint32_t dp_key = binding->datapath->tunnel_key;
     uint32_t port_key = binding->tunnel_key;
@@ -601,6 +602,32 @@  consider_port_binding(enum mf_field_id mff_ovn_geneve,
     } else {
         /* Remote port connected by tunnel */
 
+        /* Table 32, priority 150.
+         * =======================
+         *
+         * Drop traffic originated from a localport to a remote destination.
+         */
+        const char *localport;
+        SSET_FOR_EACH (localport, local_lports) {
+            /* Iterate over all local logical ports and insert a drop
+             * rule with higher priority for every localport in this
+             * datapath. */
+            const struct sbrec_port_binding *pb = lport_lookup_by_name(
+                lports, localport);
+            if (pb && pb->datapath->tunnel_key == dp_key &&
+                !strcmp(pb->type, "localport")) {
+                match_init_catchall(&match);
+                ofpbuf_clear(ofpacts_p);
+                /* Match localport logical in_port. */
+                match_set_reg(&match, MFF_LOG_INPORT - MFF_REG0,
+                              pb->tunnel_key);
+                /* Match MFF_LOG_DATAPATH, MFF_LOG_OUTPORT. */
+                match_set_metadata(&match, htonll(dp_key));
+                match_set_reg(&match, MFF_LOG_OUTPORT - MFF_REG0, port_key);
+                ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 150, 0,
+                                &match, ofpacts_p);
+            }
+        }
         /* Table 32, priority 100.
          * =======================
          *
@@ -769,7 +796,8 @@  physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
              const struct ovsrec_bridge *br_int,
              const struct sbrec_chassis *chassis,
              const struct simap *ct_zones, struct lport_index *lports,
-             struct hmap *flow_table, struct hmap *local_datapaths)
+             struct hmap *flow_table, struct hmap *local_datapaths,
+             const struct sset *local_lports)
 {
 
     /* This bool tracks physical mapping changes. */
@@ -891,7 +919,7 @@  physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
     SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
         consider_port_binding(mff_ovn_geneve, ct_zones, lports,
                               local_datapaths, binding, chassis,
-                              &ofpacts, flow_table);
+                              &ofpacts, flow_table, local_lports);
     }
 
     /* Handle output to multicast groups, in tables 32 and 33. */
diff --git a/ovn/controller/physical.h b/ovn/controller/physical.h
index e2debed..66aa80e 100644
--- a/ovn/controller/physical.h
+++ b/ovn/controller/physical.h
@@ -32,6 +32,7 @@  struct hmap;
 struct ovsdb_idl;
 struct ovsrec_bridge;
 struct simap;
+struct sset;
 
 /* OVN Geneve option information.
  *
@@ -45,6 +46,7 @@  void physical_run(struct controller_ctx *, enum mf_field_id mff_ovn_geneve,
                   const struct ovsrec_bridge *br_int,
                   const struct sbrec_chassis *chassis,
                   const struct simap *ct_zones, struct lport_index *,
-                  struct hmap *flow_table, struct hmap *local_datapaths);
+                  struct hmap *flow_table, struct hmap *local_datapaths,
+                  const struct sset *local_lports);
 
 #endif /* ovn/physical.h */
diff --git a/ovn/northd/ovn-northd.8.xml b/ovn/northd/ovn-northd.8.xml
index c0b4c5e..7ff5245 100644
--- a/ovn/northd/ovn-northd.8.xml
+++ b/ovn/northd/ovn-northd.8.xml
@@ -492,8 +492,8 @@  output;
         </pre>
 
         <p>
-          These flows are omitted for logical ports (other than router ports)
-          that are down.
+          These flows are omitted for logical ports (other than router ports or
+          <code>localport</code> ports) that are down.
         </p>
       </li>
 
@@ -519,8 +519,8 @@  nd_na {
         </pre>
 
         <p>
-          These flows are omitted for logical ports (other than router ports)
-          that are down.
+          These flows are omitted for logical ports (other than router ports or
+          <code>localport</code> ports) that are down.
         </p>
       </li>
 
diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
index 83db753..a3bd859 100644
--- a/ovn/northd/ovn-northd.c
+++ b/ovn/northd/ovn-northd.c
@@ -3305,9 +3305,11 @@  build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
         /*
          * Add ARP/ND reply flows if either the
          *  - port is up or
-         *  - port type is router
+         *  - port type is router or
+         *  - port type is localport
          */
-        if (!lsp_is_up(op->nbsp) && strcmp(op->nbsp->type, "router")) {
+        if (!lsp_is_up(op->nbsp) && strcmp(op->nbsp->type, "router") &&
+            strcmp(op->nbsp->type, "localport")) {
             continue;
         }
 
diff --git a/ovn/ovn-architecture.7.xml b/ovn/ovn-architecture.7.xml
index d8114f1..2d2d0cc 100644
--- a/ovn/ovn-architecture.7.xml
+++ b/ovn/ovn-architecture.7.xml
@@ -409,6 +409,20 @@ 
           logical patch ports at each such point of connectivity, one on
           each side.
         </li>
+        <li>
+          <dfn>Localport ports</dfn> represent the points of local
+          connectivity between logical switches and VIFs. These ports are
+          present in every chassis (not bound to any particular one) and
+          traffic from them will never go through a tunnel. A
+          <code>localport</code> is expected to only generate traffic destined
+          for a local destination, typically in response to a request it
+          received.
+          One use case is how OpenStack Neutron uses a <code>localport</code>
+          port for serving metadata to VMs residing on every hypervisor. A
+          metadata proxy process is attached to this port on every host, and
+          all VMs within the same network reach it at the same IP/MAC address
+          without any traffic being sent over a tunnel.
+        </li>
       </ul>
     </li>
   </ul>
@@ -986,11 +1000,12 @@ 
         hypervisor.  Each flow's actions implement sending a packet to the port
         it matches.  For unicast logical output ports on remote hypervisors,
         the actions set the tunnel key to the correct value, then send the
-        packet on the tunnel port to the correct hypervisor.  (When the remote
-        hypervisor receives the packet, table 0 there will recognize it as a
-        tunneled packet and pass it along to table 33.)  For multicast logical
-        output ports, the actions send one copy of the packet to each remote
-        hypervisor, in the same way as for unicast destinations.  If a
+        packet on the tunnel port to the correct hypervisor (unless the packet
+        comes from a localport, in which case it will be dropped). (When the
+        remote hypervisor receives the packet, table 0 there will recognize it
+        as a tunneled packet and pass it along to table 33.)  For multicast
+        logical output ports, the actions send one copy of the packet to each
+        remote hypervisor, in the same way as for unicast destinations.  If a
         multicast group includes a logical port or ports on the local
         hypervisor, then its actions also resubmit to table 33.  Table 32 also
         includes a fallback flow that resubmits to table 33 if there is no
diff --git a/ovn/ovn-nb.xml b/ovn/ovn-nb.xml
index 383b5b7..b51b95d 100644
--- a/ovn/ovn-nb.xml
+++ b/ovn/ovn-nb.xml
@@ -283,6 +283,15 @@ 
             to model direct connectivity to an existing network.
           </dd>
 
+          <dt><code>localport</code></dt>
+          <dd>
+            A connection to a local VIF. Traffic that arrives on a
+            <code>localport</code> is never forwarded over a tunnel to another
+            chassis. These ports are present on every chassis and have the same
+            address in all of them. This is used to model connectivity to local
+            services that run on every hypervisor.
+          </dd>
+
           <dt><code>l2gateway</code></dt>
           <dd>
             A connection to a physical network.
diff --git a/ovn/ovn-sb.xml b/ovn/ovn-sb.xml
index 387adb8..f3c3212 100644
--- a/ovn/ovn-sb.xml
+++ b/ovn/ovn-sb.xml
@@ -1802,6 +1802,11 @@  tcp.flags = RST;
             connectivity to the corresponding physical network.
           </dd>
 
+          <dt>localport</dt>
+          <dd>
+            Always empty.  A localport port is present on every chassis.
+          </dd>
+
           <dt>l3gateway</dt>
           <dd>
             The physical location of the L3 gateway.  To successfully identify a
@@ -1882,6 +1887,15 @@  tcp.flags = RST;
             to model direct connectivity to an existing network.
           </dd>
 
+          <dt><code>localport</code></dt>
+          <dd>
+            A connection to a local VIF. Traffic that arrives on a
+            <code>localport</code> is never forwarded over a tunnel to another
+            chassis. These ports are present on every chassis and have the same
+            address in all of them. This is used to model connectivity to local
+            services that run on every hypervisor.
+          </dd>
+
           <dt><code>l2gateway</code></dt>
           <dd>
             An L2 connection to a physical network.  The chassis this
diff --git a/tests/ovn.at b/tests/ovn.at
index e67d33b..59beccf 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -7378,3 +7378,125 @@  OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
 OVN_CLEANUP([hv1],[hv2])
 
 AT_CLEANUP
+
+AT_SETUP([ovn -- 2 HVs, 1 lport/HV, localport ports])
+AT_SKIP_IF([test $HAVE_PYTHON = no])
+ovn_start
+
+ovn-nbctl ls-add ls1
+
+# Add localport to the switch
+ovn-nbctl lsp-add ls1 lp01
+ovn-nbctl lsp-set-addresses lp01 f0:00:00:00:00:01
+ovn-nbctl lsp-set-type lp01 localport
+
+net_add n1
+
+for i in 1 2; do
+    sim_add hv$i
+    as hv$i
+    ovs-vsctl add-br br-phys
+    ovn_attach n1 br-phys 192.168.0.$i
+    ovs-vsctl add-port br-int vif01 -- \
+        set Interface vif01 external-ids:iface-id=lp01 \
+                              options:tx_pcap=hv${i}/vif01-tx.pcap \
+                              options:rxq_pcap=hv${i}/vif01-rx.pcap \
+                              ofport-request=${i}0
+
+    ovs-vsctl add-port br-int vif${i}1 -- \
+        set Interface vif${i}1 external-ids:iface-id=lp${i}1 \
+                              options:tx_pcap=hv${i}/vif${i}1-tx.pcap \
+                              options:rxq_pcap=hv${i}/vif${i}1-rx.pcap \
+                              ofport-request=${i}1
+
+    ovn-nbctl lsp-add ls1 lp${i}1
+    ovn-nbctl lsp-set-addresses lp${i}1 f0:00:00:00:00:${i}1
+    ovn-nbctl lsp-set-port-security lp${i}1 f0:00:00:00:00:${i}1
+
+        OVS_WAIT_UNTIL([test x`ovn-nbctl lsp-get-up lp${i}1` = xup])
+done
+
+ovn-nbctl --wait=sb sync
+ovn-sbctl dump-flows
+
+ovn_populate_arp
+
+# Given the name of a logical port, prints the name of the hypervisor
+# on which it is located.
+vif_to_hv() {
+    echo hv${1%?}
+}
+#
+# test_packet INPORT DST SRC ETHTYPE EOUT LOUT DEFHV
+#
+# This shell function causes a packet to be received on INPORT.  The packet's
+# content has Ethernet destination DST and source SRC (each exactly 12 hex
+# digits) and Ethernet type ETHTYPE (4 hex digits).  INPORT is specified as
+# a logical switch port number, e.g. 11 for vif11.
+#
+# EOUT is the end-to-end output port, that is, where the packet should
+# actually end up (or "drop" if it should not be delivered).  LOUT is the
+# logical output port as seen by ovn-trace, which can differ from EOUT
+# because ovn-trace does not know which hypervisor a localport packet
+# originates from and therefore cannot predict the drop.
+#
+# DEFHV is the hypervisor from which the packet is sent when the source
+# port is a localport (a localport exists on every hypervisor).
+for i in 1 2; do
+    for j in 0 1; do
+        : > $i$j.expected
+    done
+done
+test_packet() {
+    local inport=$1 dst=$2 src=$3 eth=$4 eout=$5 lout=$6 defhv=$7
+    echo "$@"
+
+    # First try tracing the packet.
+    uflow="inport==\"lp$inport\" && eth.dst==$dst && eth.src==$src && eth.type==0x$eth"
+    if test $lout != drop; then
+        echo "output(\"$lout\");"
+    fi > expout
+    AT_CAPTURE_FILE([trace])
+    AT_CHECK([ovn-trace --all ls1 "$uflow" | tee trace | sed '1,/Minimal trace/d'], [0], [expout])
+
+    # Then actually send a packet, for an end-to-end test.
+    local packet=$(echo $dst$src | sed 's/://g')${eth}
+    hv=`vif_to_hv $inport`
+    # If "hypervisor 0" (i.e. the input port is a localport), use DEFHV.
+    if test $hv = hv0; then
+        hv=$defhv
+    fi
+    vif=vif$inport
+    as $hv ovs-appctl netdev-dummy/receive $vif $packet
+    if test $eout != drop; then
+        echo $packet >> ${eout#lp}.expected
+    fi
+}
+
+
+# lp11 and lp21 are on different hypervisors
+test_packet 11 f0:00:00:00:00:21 f0:00:00:00:00:11 1121 lp21 lp21
+test_packet 21 f0:00:00:00:00:11 f0:00:00:00:00:21 2111 lp11 lp11
+
+# Both VIFs should be able to reach the localport on their own HV
+test_packet 11 f0:00:00:00:00:01 f0:00:00:00:00:11 1101 lp01 lp01
+test_packet 21 f0:00:00:00:00:01 f0:00:00:00:00:21 2101 lp01 lp01
+
+# Packet sent from localport on same hv should reach the vif
+test_packet 01 f0:00:00:00:00:11 f0:00:00:00:00:01 0111 lp11 lp11 hv1
+test_packet 01 f0:00:00:00:00:21 f0:00:00:00:00:01 0121 lp21 lp21 hv2
+
+# Packet sent from localport on different hv should be dropped
+test_packet 01 f0:00:00:00:00:21 f0:00:00:00:00:01 0121 drop lp21 hv1
+test_packet 01 f0:00:00:00:00:11 f0:00:00:00:00:01 0111 drop lp11 hv2
+
+# Now check the packets actually received against the ones expected.
+for i in 1 2; do
+    for j in 0 1; do
+        OVN_CHECK_PACKETS([hv$i/vif$i$j-tx.pcap], [$i$j.expected])
+    done
+done
+
+OVN_CLEANUP([hv1],[hv2])
+
+AT_CLEANUP