[ovs-dev,patch_v3] ovn: Add additional comments regarding arp responders.

Message ID 1475715968-129474-2-git-send-email-dlu998@gmail.com
State Superseded

Commit Message

Darrell Ball Oct. 6, 2016, 1:06 a.m. UTC
There has been enough confusion regarding logical switch datapath
ARP responders in OVN to warrant some additional comments;
hence, add a general description of why they exist and
document the special cases.

Signed-off-by: Darrell Ball <dlu998@gmail.com>
---
 ovn/northd/ovn-northd.8.xml | 51 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 45 insertions(+), 6 deletions(-)

Comments

Darrell Ball Oct. 6, 2016, 1:11 a.m. UTC | #1
This e-mail is a duplicate - ignore


Patch

diff --git a/ovn/northd/ovn-northd.8.xml b/ovn/northd/ovn-northd.8.xml
index 77eb3d1..2104302 100644
--- a/ovn/northd/ovn-northd.8.xml
+++ b/ovn/northd/ovn-northd.8.xml
@@ -415,20 +415,59 @@ 
     <h3>Ingress Table 9: ARP/ND responder</h3>
 
     <p>
-      This table implements ARP/ND responder for known IPs.  It contains these
-      logical flows:
+      This table implements an ARP/ND responder for known IPs.  The
+      advantage of the ARP responder flow is to limit ARP broadcasts by
+      responding to ARP requests locally, without sending them to other
+      hypervisors.  One common case is when the inport is a logical
+      port associated with a VIF: the request is answered on the local
+      hypervisor rather than broadcast across the whole network and
+      answered by the destination VM.  This behavior is proxy ARP.
+      Packets received by multiple hypervisors, as is the case for
+      <code>localnet</code> and <code>vtep</code> logical inports, need
+      to skip these logical switch ARP responders, because northd
+      downloads the same MAC binding rules to all hypervisors, so every
+      hypervisor would receive the ARP request from the external
+      network and respond.  These skip rules are described under the
+      priority-100 flows below.  ARP requests from VMs arrive on a
+      logical switch inport of type empty, which is the default.  In
+      this case, the logical switch proxy ARP rules can answer on
+      behalf of other VMs or of a logical router port.  To support
+      proxy ARP for logical router ports, an IP address must be
+      configured on the logical switch port of type router, with the
+      same value as on the peer logical router port.  The configured
+      MAC addresses must match as well.  If the logical switch router
+      type port has no IP address configured, ARP requests instead hit
+      another ARP responder on the logical router datapath itself,
+      which is most commonly a distributed logical router.  The
+      advantage of using the logical switch proxy ARP rule for logical
+      router ports is that this rule is hit before the logical switch
+      L2 broadcast rule, so the ARP request is not broadcast on the
+      logical switch.  Logical switch proxy ARP rules can also be hit
+      when ARP requests are received externally on an L2 gateway port.
+      In this case, the hypervisor acting as an L2 gateway responds to
+      the ARP request on behalf of a VM.  Note that ARP requests
+      received from <code>localnet</code> or <code>vtep</code> logical
+      inports can either go directly to VMs, in which case the VM
+      responds, or can hit an ARP responder for a logical router port
+      if the packet is being used to resolve a logical router port
+      next-hop address.  This table contains these logical flows:
     </p>
 
     <ul>
       <li>
-        Priority-100 flows to skip ARP responder if inport is of type
-        <code>localnet</code>, and advances directly to the next table.
+        Priority-100 flows to skip the ARP responder if inport is of
+        type <code>localnet</code> or <code>vtep</code>, advancing
+        directly to the next table.  There is no known use case for
+        these ARP responders when the inport is of type
+        <code>router</code>.  However, no skip flows are installed for
+        these packets, as the additional flow cost would bring
+        limited value.
       </li>
 
       <li>
         <p>
           Priority-50 flows that match ARP requests to each known IP address
-          <var>A</var> of every logical router port, and respond with ARP
+          <var>A</var> of every logical switch port, and respond with ARP
           replies directly with corresponding Ethernet address <var>E</var>:
         </p>
 
@@ -455,7 +494,7 @@ output;
         <p>
           Priority-50 flows that match IPv6 ND neighbor solicitations to
           each known IP address <var>A</var> (and <var>A</var>'s
-          solicited node address) of every logical router port, and
+          solicited node address) of every logical switch port, and
           respond with neighbor advertisements directly with
           corresponding Ethernet address <var>E</var>:
         </p>
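
For readers skimming the discussion, the priority-50 ARP responder flows
described above install roughly the following logical flow.  This is a
sketch based on the surrounding, unchanged portion of ovn-northd.8.xml;
A is a known IP address of the logical switch port, E is its Ethernet
address, and the exact match syntax may vary between OVN versions:

    Match:

        arp.tpa == A && arp.op == 1

    Actions:

        eth.dst = eth.src;
        eth.src = E;
        arp.op = 2; /* ARP reply */
        arp.tha = arp.sha;
        arp.sha = E;
        arp.tpa = arp.spa;
        arp.spa = A;
        outport = inport;
        flags.loopback = 1;
        output;

The priority-100 skip flows simply match on the localnet or vtep inport
and execute next;, advancing to the next table without generating a
reply on that hypervisor.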
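
Similarly, the priority-50 IPv6 ND flows answer neighbor solicitations
with the nd_na action, which builds a neighbor advertisement from the
received solicitation.  A rough sketch of the actions, under the same
A/E convention as above:

    nd_na {
        eth.src = E;
        ip6.src = A;
        nd.target = A;
        nd.tll = E;
        outport = inport;
        flags.loopback = 1;
        output;
    };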