From: Ben Pfaff <blp@nicira.com>
To: dev@openvswitch.org
Date: Sun, 18 Oct 2015 12:52:40 -0700
Message-Id: <1445197962-20874-4-git-send-email-blp@nicira.com>
X-Mailer: git-send-email 2.1.3
In-Reply-To: <1445197962-20874-1-git-send-email-blp@nicira.com>
References: <1445197962-20874-1-git-send-email-blp@nicira.com>
Subject: [ovs-dev] [PATCH v3 3/5] physical: Fix implementation of logical patch ports.

Logical patch ports do not have a physical location and effectively
reside on every hypervisor.  This is fine for unicast output to logical
patch ports.  However, when a logical patch port is part of a logical
multicast group, lumping it together with the other "local" ports in the
multicast group yields packet duplication, because every hypervisor to
which the packet is tunneled re-outputs it to the logical patch port.

This commit fixes the problem by treating logical patch ports as remote
rather than local when they are part of a logical multicast group.  This
yields exactly-once semantics.

Found while testing the implementation of ARP in the OVN logical router.
The following commit adds a test that fails without this fix.

Signed-off-by: Ben Pfaff <blp@nicira.com>
Acked-by: Justin Pettit <jpettit@nicira.com>
---
 ovn/controller/physical.c  | 43 +++++++++++++++++++++++++++++++------------
 ovn/ovn-architecture.7.xml | 16 +++++++++++++---
 2 files changed, 44 insertions(+), 15 deletions(-)

diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
index 1b2b7fc..5821c11 100644
--- a/ovn/controller/physical.c
+++ b/ovn/controller/physical.c
@@ -497,6 +497,8 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
 
     /* Handle output to multicast groups, in tables 32 and 33. */
     const struct sbrec_multicast_group *mc;
+    struct ofpbuf remote_ofpacts;
+    ofpbuf_init(&remote_ofpacts, 0);
     SBREC_MULTICAST_GROUP_FOR_EACH (mc, ctx->ovnsb_idl) {
         struct sset remote_chassis = SSET_INITIALIZER(&remote_chassis);
         struct match match;
@@ -507,11 +509,18 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
 
         /* Go through all of the ports in the multicast group:
          *
-         *    - For local ports, add actions to 'ofpacts' to set the output
-         *      port and resubmit.
+         *    - For remote ports, add the chassis to 'remote_chassis'.
          *
-         *    - For remote ports, add the chassis to 'remote_chassis'. */
+         *    - For local ports (other than logical patch ports), add actions
+         *      to 'ofpacts' to set the output port and resubmit.
+         *
+         *    - For logical patch ports, add actions to 'remote_ofpacts'
+         *      instead.  (If we put them in 'ofpacts', then the output
+         *      would happen on every hypervisor in the multicast group,
+         *      effectively duplicating the packet.)
+         */
         ofpbuf_clear(&ofpacts);
+        ofpbuf_clear(&remote_ofpacts);
         for (size_t i = 0; i < mc->n_ports; i++) {
             struct sbrec_port_binding *port = mc->ports[i];
 
@@ -528,7 +537,11 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
                 put_load(zone_id, MFF_LOG_CT_ZONE, 0, 32, &ofpacts);
             }
 
-            if (simap_contains(&localvif_to_ofport,
+            if (!strcmp(port->type, "patch")) {
+                put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32,
+                         &remote_ofpacts);
+                put_resubmit(OFTABLE_DROP_LOOPBACK, &remote_ofpacts);
+            } else if (simap_contains(&localvif_to_ofport,
                                port->parent_port
                                ? port->parent_port : port->logical_port)) {
                 put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32, &ofpacts);
@@ -568,8 +581,13 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
          *
          * Handle output to the remote chassis in the multicast group, if
          * any. */
-        if (!sset_is_empty(&remote_chassis)) {
-            ofpbuf_clear(&ofpacts);
+        if (!sset_is_empty(&remote_chassis) || remote_ofpacts.size > 0) {
+            if (remote_ofpacts.size > 0) {
+                /* Following delivery to logical patch ports, restore the
+                 * multicast group as the logical output port. */
+                put_load(mc->tunnel_key, MFF_LOG_OUTPORT, 0, 32,
+                         &remote_ofpacts);
+            }
 
             const char *chassis;
             const struct chassis_tunnel *prev = NULL;
@@ -581,23 +599,24 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
                 }
 
                 if (!prev || tun->type != prev->type) {
-                    put_encapsulation(mff_ovn_geneve, tun,
-                                      mc->datapath, mc->tunnel_key, &ofpacts);
+                    put_encapsulation(mff_ovn_geneve, tun, mc->datapath,
+                                      mc->tunnel_key, &remote_ofpacts);
                     prev = tun;
                 }
-                ofpact_put_OUTPUT(&ofpacts)->port = tun->ofport;
+                ofpact_put_OUTPUT(&remote_ofpacts)->port = tun->ofport;
             }
 
-            if (ofpacts.size) {
+            if (remote_ofpacts.size) {
                 if (local_ports) {
-                    put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts);
+                    put_resubmit(OFTABLE_LOCAL_OUTPUT, &remote_ofpacts);
                 }
                 ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 100,
-                                &match, &ofpacts);
+                                &match, &remote_ofpacts);
             }
         }
         sset_destroy(&remote_chassis);
     }
+    ofpbuf_uninit(&remote_ofpacts);
 
     /* Table 0, priority 100.
      * ======================
diff --git a/ovn/ovn-architecture.7.xml b/ovn/ovn-architecture.7.xml
index 343aa7e..318555b 100644
--- a/ovn/ovn-architecture.7.xml
+++ b/ovn/ovn-architecture.7.xml
@@ -778,6 +778,18 @@
     </p>
 
     <p>
+      Logical patch ports are a special case.  Logical patch ports do not
+      have a physical location and effectively reside on every hypervisor.
+      Thus, flow table 33, for output to ports on the local hypervisor,
+      naturally implements output to unicast logical patch ports too.
+      However, applying the same logic to a logical patch port that is part
+      of a logical multicast group yields packet duplication, because each
+      hypervisor that contains a logical port in the multicast group will
+      also output the packet to the logical patch port.  Thus, multicast
+      groups implement output to logical patch ports in table 32.
+    </p>
+
+    <p>
       Each flow in table 32 matches on a logical output port for unicast or
       multicast logical ports that include a logical port on a remote
       hypervisor.  Each flow's actions implement sending a packet to the port
@@ -796,9 +808,7 @@
 
     <p>
       Flows in table 33 resemble those in table 32 but for logical ports that
-      reside locally rather than remotely.  (This includes logical patch
-      ports, which do not have a physical location and effectively reside on
-      every hypervisor.)  For unicast logical output ports
+      reside locally rather than remotely.  For unicast logical output ports
       on the local hypervisor, the actions just resubmit to table 34.  For
       multicast output ports that include one or more logical ports on the
       local hypervisor, for each such logical port <var>P</var>, the actions
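
A note for readers tracing the logic above: the heart of the change is a
three-way split of each multicast-group member.  The following minimal,
self-contained C sketch illustrates that split.  'struct port',
'on_this_chassis', and 'classify()' are hypothetical stand-ins invented
for this example, not the real ovn-controller types or API:

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

enum bucket {
    BUCKET_LOCAL,           /* Actions appended to 'ofpacts' (table 33). */
    BUCKET_REMOTE_OFPACTS,  /* Actions appended to 'remote_ofpacts' (table 32). */
    BUCKET_REMOTE_CHASSIS,  /* Chassis added to the 'remote_chassis' set. */
};

struct port {
    const char *type;       /* "patch", "vif", ... */
    bool on_this_chassis;   /* Bound to the local hypervisor? */
};

/* Classifies one multicast-group member the way physical_run() does with
 * this patch applied: logical patch ports go into the table 32 flow, so
 * only the originating chassis outputs to them. */
static enum bucket
classify(const struct port *port)
{
    if (!strcmp(port->type, "patch")) {
        return BUCKET_REMOTE_OFPACTS;
    } else if (port->on_this_chassis) {
        return BUCKET_LOCAL;
    } else {
        return BUCKET_REMOTE_CHASSIS;
    }
}

int
main(void)
{
    static const struct port ports[] = {
        { "patch", true  },   /* Logical patch port: exists everywhere. */
        { "vif",   true  },   /* VIF on this hypervisor. */
        { "vif",   false },   /* VIF on some other hypervisor. */
    };
    static const char *buckets[] = {
        "ofpacts", "remote_ofpacts", "remote_chassis",
    };

    for (size_t i = 0; i < sizeof ports / sizeof *ports; i++) {
        printf("%s (local=%d) -> %s\n", ports[i].type,
               (int) ports[i].on_this_chassis, buckets[classify(&ports[i])]);
    }
    return 0;
}

Because the patch-port actions now live only in the table 32 flow, and a
packet arriving over a tunnel bypasses table 32, a tunneled copy never
revisits the patch ports; that is what yields the exactly-once semantics
described in the commit message.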
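The ordering of the actions within 'remote_ofpacts' is also significant:
the flow outputs to the logical patch ports first, then restores the
multicast group as the logical output port before encapsulating for the
tunnels and before the final resubmit to OFTABLE_LOCAL_OUTPUT.  The
sketch below narrates that sequence with print-only stubs standing in
for the real put_load(), put_resubmit(), and put_encapsulation() helpers
in physical.c; the key and port values are made up for illustration:

#include <stdio.h>

static void put_load(unsigned int value, const char *field)
{
    printf("load 0x%x -> %s\n", value, field);
}
static void put_resubmit(const char *table)
{
    printf("resubmit to %s\n", table);
}
static void put_encapsulation(const char *tunnel, unsigned int key)
{
    printf("set %s metadata for logical output port 0x%x\n", tunnel, key);
}
static void put_output(int ofport)
{
    printf("output to tunnel ofport %d\n", ofport);
}

int
main(void)
{
    unsigned int patch_key = 0x7;   /* Hypothetical patch port key. */
    unsigned int mc_key = 0x8001;   /* Hypothetical multicast group key. */
    int tun_ofport = 3;             /* Hypothetical tunnel OpenFlow port. */

    /* 1. Deliver to each logical patch port, on this chassis only. */
    put_load(patch_key, "MFF_LOG_OUTPORT");
    put_resubmit("OFTABLE_DROP_LOOPBACK");

    /* 2. Restore the multicast group as the logical output port, as the
     *    comment added by the patch describes. */
    put_load(mc_key, "MFF_LOG_OUTPORT");

    /* 3. Tunnel one copy to each remote chassis in the group. */
    put_encapsulation("geneve", mc_key);
    put_output(tun_ofport);

    /* 4. Hand off to table 33 for any local, non-patch members. */
    put_resubmit("OFTABLE_LOCAL_OUTPUT");
    return 0;
}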