From patchwork Sat Oct 17 21:07:41 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Pfaff <blp@nicira.com>
X-Patchwork-Id: 531876
From: Ben Pfaff <blp@nicira.com>
To: dev@openvswitch.org
Date: Sat, 17 Oct 2015 14:07:41 -0700
Message-Id: <1445116064-20782-2-git-send-email-blp@nicira.com>
X-Mailer: git-send-email 2.1.3
In-Reply-To: <1445115992-16951-1-git-send-email-blp@nicira.com>
References: <1445115992-16951-1-git-send-email-blp@nicira.com>
Cc: Ben Pfaff <blp@nicira.com>
Subject: [ovs-dev] [PATCH v2 1/4] physical: Fix implementation of logical patch ports.

Logical patch ports do not have a physical location and effectively
reside on every hypervisor.  This is fine for unicast output to logical
patch ports.  However, when a logical patch port is part of a logical
multicast group, lumping it together with the other "local" ports in the
multicast group yields packet duplication, because every hypervisor to
which the packet is tunneled re-outputs it to the logical patch port.

This commit fixes the problem by treating logical patch ports as remote
rather than local when they are part of a logical multicast group.  This
yields exactly-once semantics.

Found while testing implementation of ARP in OVN logical router.  The
following commit adds a test that fails without this fix.

Signed-off-by: Ben Pfaff <blp@nicira.com>
Acked-by: Justin Pettit <jpettit@nicira.com>
---
 ovn/controller/physical.c  | 43 +++++++++++++++++++++++++++++++------------
 ovn/ovn-architecture.7.xml | 16 +++++++++++++---
 2 files changed, 44 insertions(+), 15 deletions(-)

diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
index 1b2b7fc..5821c11 100644
--- a/ovn/controller/physical.c
+++ b/ovn/controller/physical.c
@@ -497,6 +497,8 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
 
     /* Handle output to multicast groups, in tables 32 and 33. */
     const struct sbrec_multicast_group *mc;
+    struct ofpbuf remote_ofpacts;
+    ofpbuf_init(&remote_ofpacts, 0);
     SBREC_MULTICAST_GROUP_FOR_EACH (mc, ctx->ovnsb_idl) {
         struct sset remote_chassis = SSET_INITIALIZER(&remote_chassis);
         struct match match;
@@ -507,11 +509,18 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
 
         /* Go through all of the ports in the multicast group:
          *
-         *    - For local ports, add actions to 'ofpacts' to set the output
-         *      port and resubmit.
+         *    - For remote ports, add the chassis to 'remote_chassis'.
          *
-         *    - For remote ports, add the chassis to 'remote_chassis'. */
+         *    - For local ports (other than logical patch ports), add actions
+         *      to 'ofpacts' to set the output port and resubmit.
+         *
+         *    - For logical patch ports, add actions to 'remote_ofpacts'
+         *      instead.  (If we put them in 'ofpacts', then the output
+         *      would happen on every hypervisor in the multicast group,
+         *      effectively duplicating the packet.)
+         */
         ofpbuf_clear(&ofpacts);
+        ofpbuf_clear(&remote_ofpacts);
         for (size_t i = 0; i < mc->n_ports; i++) {
             struct sbrec_port_binding *port = mc->ports[i];
 
@@ -528,7 +537,11 @@
                 put_load(zone_id, MFF_LOG_CT_ZONE, 0, 32, &ofpacts);
             }
 
-            if (simap_contains(&localvif_to_ofport,
+            if (!strcmp(port->type, "patch")) {
+                put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32,
+                         &remote_ofpacts);
+                put_resubmit(OFTABLE_DROP_LOOPBACK, &remote_ofpacts);
+            } else if (simap_contains(&localvif_to_ofport,
                                port->parent_port
                                ? port->parent_port : port->logical_port)) {
                 put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32, &ofpacts);
@@ -568,8 +581,13 @@
          *
          * Handle output to the remote chassis in the multicast group, if
          * any. */
-        if (!sset_is_empty(&remote_chassis)) {
-            ofpbuf_clear(&ofpacts);
+        if (!sset_is_empty(&remote_chassis) || remote_ofpacts.size > 0) {
+            if (remote_ofpacts.size > 0) {
+                /* Following delivery to logical patch ports, restore the
+                 * multicast group as the logical output port. */
+                put_load(mc->tunnel_key, MFF_LOG_OUTPORT, 0, 32,
+                         &remote_ofpacts);
+            }
 
             const char *chassis;
             const struct chassis_tunnel *prev = NULL;
@@ -581,23 +599,24 @@
                 }
 
                 if (!prev || tun->type != prev->type) {
-                    put_encapsulation(mff_ovn_geneve, tun,
-                                      mc->datapath, mc->tunnel_key, &ofpacts);
+                    put_encapsulation(mff_ovn_geneve, tun, mc->datapath,
+                                      mc->tunnel_key, &remote_ofpacts);
                     prev = tun;
                 }
-                ofpact_put_OUTPUT(&ofpacts)->port = tun->ofport;
+                ofpact_put_OUTPUT(&remote_ofpacts)->port = tun->ofport;
             }
 
-            if (ofpacts.size) {
+            if (remote_ofpacts.size) {
                 if (local_ports) {
-                    put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts);
+                    put_resubmit(OFTABLE_LOCAL_OUTPUT, &remote_ofpacts);
                 }
                 ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 100,
-                                &match, &ofpacts);
+                                &match, &remote_ofpacts);
             }
         }
         sset_destroy(&remote_chassis);
     }
+    ofpbuf_uninit(&remote_ofpacts);
 
     /* Table 0, priority 100.
      * ======================
diff --git a/ovn/ovn-architecture.7.xml b/ovn/ovn-architecture.7.xml
index 0bf9337..c15a4c8 100644
--- a/ovn/ovn-architecture.7.xml
+++ b/ovn/ovn-architecture.7.xml
@@ -778,6 +778,18 @@
     </p>
 
     <p>
+      Logical patch ports are a special case.  Logical patch ports do not
+      have a physical location and effectively reside on every hypervisor.
+      Thus, flow table 33, for output to ports on the local hypervisor,
+      naturally implements output to unicast logical patch ports too.
+      However, applying the same logic to a logical patch port that is part
+      of a logical multicast group yields packet duplication, because each
+      hypervisor that contains a logical port in the multicast group will
+      also output the packet to the logical patch port.  Thus, multicast
+      groups implement output to logical patch ports in table 32.
+    </p>
+
+    <p>
       Each flow in table 32 matches on a logical output port for unicast or
       multicast logical ports that include a logical port on a remote
       hypervisor.  Each flow's actions implement sending a packet to the port
@@ -796,9 +808,7 @@
 
     <p>
       Flows in table 33 resemble those in table 32 but for logical ports that
-      reside locally rather than remotely.  (This includes logical patch
-      ports, which do not have a physical location and effectively reside on
-      every hypervisor.)  For unicast logical output ports
+      reside locally rather than remotely.  For unicast logical output ports
       on the local hypervisor, the actions just resubmit to table 34.  For
       multicast output ports that include one or more logical ports on the
       local hypervisor, for each such logical port <var>P</var>, the actions
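To make the exactly-once argument in the commit message concrete, here is a
minimal, self-contained C sketch.  It is a hypothetical illustration rather
than OVN code (N_CHASSIS and count_patch_copies are invented names), modeling
the rule that table 32, for remote output, runs only on the originating
chassis, while table 33, for local output, runs on every chassis that has
local ports in the multicast group:

#include <stdbool.h>
#include <stdio.h>

#define N_CHASSIS 3     /* Hypervisors with local ports in the group. */

/* Returns the number of copies that a logical patch port receives when
 * chassis 0 originates one multicast packet and tunnels it to the other
 * chassis.  Table 32 runs only on the originator; a packet received over
 * a tunnel is injected directly into table 33, which runs on every
 * chassis that has local ports in the group. */
static int
count_patch_copies(bool patch_in_table_32)
{
    int copies = 0;

    for (int chassis = 0; chassis < N_CHASSIS; chassis++) {
        bool originator = chassis == 0;

        if (originator && patch_in_table_32) {
            /* After the fix: output to the patch port happens once, in
             * table 32, alongside the tunnel outputs. */
            copies++;
        }
        if (!patch_in_table_32) {
            /* Before the fix: every chassis that runs table 33 for the
             * multicast group re-outputs to the patch port. */
            copies++;
        }
    }
    return copies;
}

int
main(void)
{
    printf("patch port in table 33 (before fix): %d copies\n",
           count_patch_copies(false));
    printf("patch port in table 32 (after fix):  %d copy\n",
           count_patch_copies(true));
    return 0;
}

Compiled and run, this prints 3 copies for the old placement and 1 copy for
the new one, matching the duplication that the commit message describes.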