From patchwork Sat Jul  2 17:35:31 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Moats <rmoats@us.ibm.com>
X-Patchwork-Id: 643533
From: Ryan Moats <rmoats@us.ibm.com>
To: dev@openvswitch.org
Date: Sat, 2 Jul 2016 12:35:31 -0500
Message-Id: <1467480933-5637-6-git-send-email-rmoats@us.ibm.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1467480933-5637-1-git-send-email-rmoats@us.ibm.com>
References: <1467480933-5637-1-git-send-email-rmoats@us.ibm.com>
Subject: [ovs-dev] [PATCH v20 5/7] Refactor multicast group processing in physical.c

Extract the body of the SBREC_MULTICAST_GROUP_FOR_EACH loop in
physical_run() into a helper function, consider_mc_group(), so that it
can be reused when doing incremental processing.  The 'is_new'
parameter is added for use by a later patch in this series.

Signed-off-by: Ryan Moats <rmoats@us.ibm.com>
---
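Note for reviewers (not part of the commit message): the point of the
extraction is that consider_mc_group() can later be driven per-row
instead of from a full-table walk.  Below is a minimal sketch of such
an incremental caller.  It assumes the OVSDB IDL change-tracking
helpers that ovsdb-idlc generates (SBREC_MULTICAST_GROUP_FOR_EACH_TRACKED,
sbrec_multicast_group_is_new(), sbrec_multicast_group_is_deleted());
whether the later patches in this series wire it up exactly this way
is an assumption.

    /* Sketch only: process just the Multicast_Group rows that changed
     * since the last poll loop, reusing the helper added by this patch.
     * Deleted rows would also need their flows removed via the row
     * UUID, which is out of scope for this sketch. */
    const struct sbrec_multicast_group *mc;
    struct ofpbuf ofpacts;
    struct ofpbuf remote_ofpacts;
    ofpbuf_init(&ofpacts, 0);
    ofpbuf_init(&remote_ofpacts, 0);
    SBREC_MULTICAST_GROUP_FOR_EACH_TRACKED (mc, ctx->ovnsb_idl) {
        if (!sbrec_multicast_group_is_deleted(mc)) {
            consider_mc_group(mff_ovn_geneve, ct_zones, local_datapaths,
                              mc, &ofpacts, &remote_ofpacts,
                              sbrec_multicast_group_is_new(mc));
        }
    }
    ofpbuf_uninit(&ofpacts);
    ofpbuf_uninit(&remote_ofpacts);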
 ovn/controller/physical.c | 233 ++++++++++++++++++++++++----------------------
 1 file changed, 123 insertions(+), 110 deletions(-)

diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
index 80bd3eb..7b567d1 100644
--- a/ovn/controller/physical.c
+++ b/ovn/controller/physical.c
@@ -461,6 +461,126 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
     }
 }
 
+static void
+consider_mc_group(enum mf_field_id mff_ovn_geneve,
+                  const struct simap *ct_zones,
+                  struct hmap *local_datapaths,
+                  const struct sbrec_multicast_group *mc,
+                  struct ofpbuf *ofpacts_p,
+                  struct ofpbuf *remote_ofpacts_p, bool is_new)
+{
+    struct sset remote_chassis = SSET_INITIALIZER(&remote_chassis);
+    struct match match;
+
+    match_init_catchall(&match);
+    match_set_metadata(&match, htonll(mc->datapath->tunnel_key));
+    match_set_reg(&match, MFF_LOG_OUTPORT - MFF_REG0, mc->tunnel_key);
+
+    /* Go through all of the ports in the multicast group:
+     *
+     *    - For remote ports, add the chassis to 'remote_chassis'.
+     *
+     *    - For local ports (other than logical patch ports), add actions
+     *      to 'ofpacts_p' to set the output port and resubmit.
+     *
+     *    - For logical patch ports, add actions to 'remote_ofpacts_p'
+     *      instead.  (If we put them in 'ofpacts_p', then the output
+     *      would happen on every hypervisor in the multicast group,
+     *      effectively duplicating the packet.)
+     */
+    ofpbuf_clear(ofpacts_p);
+    ofpbuf_clear(remote_ofpacts_p);
+    for (size_t i = 0; i < mc->n_ports; i++) {
+        struct sbrec_port_binding *port = mc->ports[i];
+
+        if (port->datapath != mc->datapath) {
+            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
+            VLOG_WARN_RL(&rl, UUID_FMT": multicast group contains ports "
+                         "in wrong datapath",
+                         UUID_ARGS(&mc->header_.uuid));
+            continue;
+        }
+
+        int zone_id = simap_get(ct_zones, port->logical_port);
+        if (zone_id) {
+            put_load(zone_id, MFF_LOG_CT_ZONE, 0, 32, ofpacts_p);
+        }
+
+        if (!strcmp(port->type, "patch")) {
+            put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32,
+                     remote_ofpacts_p);
+            put_resubmit(OFTABLE_DROP_LOOPBACK, remote_ofpacts_p);
+        } else if (simap_contains(&localvif_to_ofport,
+                                  (port->parent_port && *port->parent_port)
+                                  ? port->parent_port : port->logical_port)) {
+            put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32, ofpacts_p);
+            put_resubmit(OFTABLE_DROP_LOOPBACK, ofpacts_p);
+        } else if (port->chassis && !get_localnet_port(local_datapaths,
+                                        mc->datapath->tunnel_key)) {
+            /* Add the remote chassis only when a localnet port does not
+             * exist; otherwise, multicast will reach the remote ports
+             * through the localnet port. */
+            sset_add(&remote_chassis, port->chassis->name);
+        }
+    }
+
+    /* Table 33, priority 100.
+     * =======================
+     *
+     * Handle output to the local logical ports in the multicast group, if
+     * any. */
+    bool local_ports = ofpacts_p->size > 0;
+    if (local_ports) {
+        /* Following delivery to local logical ports, restore the multicast
+         * group as the logical output port. */
+        put_load(mc->tunnel_key, MFF_LOG_OUTPORT, 0, 32, ofpacts_p);
+
+        ofctrl_add_flow(OFTABLE_LOCAL_OUTPUT, 100, &match, ofpacts_p,
+                        &mc->header_.uuid, is_new);
+    }
+
+    /* Table 32, priority 100.
+     * =======================
+     *
+     * Handle output to the remote chassis in the multicast group, if
+     * any. */
+    if (!sset_is_empty(&remote_chassis) || remote_ofpacts_p->size > 0) {
+        if (remote_ofpacts_p->size > 0) {
+            /* Following delivery to logical patch ports, restore the
+             * multicast group as the logical output port. */
+            put_load(mc->tunnel_key, MFF_LOG_OUTPORT, 0, 32,
+                     remote_ofpacts_p);
+        }
+
+        const char *chassis;
+        const struct chassis_tunnel *prev = NULL;
+        SSET_FOR_EACH (chassis, &remote_chassis) {
+            const struct chassis_tunnel *tun
+                = chassis_tunnel_find(chassis);
+            if (!tun) {
+                continue;
+            }
+
+            if (!prev || tun->type != prev->type) {
+                put_encapsulation(mff_ovn_geneve, tun, mc->datapath,
+                                  mc->tunnel_key, remote_ofpacts_p);
+                prev = tun;
+            }
+            ofpact_put_OUTPUT(remote_ofpacts_p)->port = tun->ofport;
+        }
+
+        if (remote_ofpacts_p->size) {
+            if (local_ports) {
+                put_resubmit(OFTABLE_LOCAL_OUTPUT, remote_ofpacts_p);
+            }
+            ofctrl_add_flow(OFTABLE_REMOTE_OUTPUT, 100,
+                            &match, remote_ofpacts_p,
+                            &mc->header_.uuid, is_new);
+        }
+    }
+    sset_destroy(&remote_chassis);
+}
+
 void
 physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
              const struct ovsrec_bridge *br_int, const char *this_chassis_id,
@@ -566,117 +686,10 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
     struct ofpbuf remote_ofpacts;
     ofpbuf_init(&remote_ofpacts, 0);
     SBREC_MULTICAST_GROUP_FOR_EACH (mc, ctx->ovnsb_idl) {
-        struct sset remote_chassis = SSET_INITIALIZER(&remote_chassis);
-        struct match match;
-
-        match_init_catchall(&match);
-        match_set_metadata(&match, htonll(mc->datapath->tunnel_key));
-        match_set_reg(&match, MFF_LOG_OUTPORT - MFF_REG0, mc->tunnel_key);
-
-        /* Go through all of the ports in the multicast group:
-         *
-         *    - For remote ports, add the chassis to 'remote_chassis'.
-         *
-         *    - For local ports (other than logical patch ports), add actions
-         *      to 'ofpacts' to set the output port and resubmit.
-         *
-         *    - For logical patch ports, add actions to 'remote_ofpacts'
-         *      instead.  (If we put them in 'ofpacts', then the output
-         *      would happen on every hypervisor in the multicast group,
-         *      effectively duplicating the packet.)
-         */
-        ofpbuf_clear(&ofpacts);
-        ofpbuf_clear(&remote_ofpacts);
-        for (size_t i = 0; i < mc->n_ports; i++) {
-            struct sbrec_port_binding *port = mc->ports[i];
-
-            if (port->datapath != mc->datapath) {
-                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
-                VLOG_WARN_RL(&rl, UUID_FMT": multicast group contains ports "
-                             "in wrong datapath",
-                             UUID_ARGS(&mc->header_.uuid));
-                continue;
-            }
-
-            int zone_id = simap_get(ct_zones, port->logical_port);
-            if (zone_id) {
-                put_load(zone_id, MFF_LOG_CT_ZONE, 0, 32, &ofpacts);
-            }
-
-            if (!strcmp(port->type, "patch")) {
-                put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32,
-                         &remote_ofpacts);
-                put_resubmit(OFTABLE_DROP_LOOPBACK, &remote_ofpacts);
-            } else if (simap_contains(&localvif_to_ofport,
-                                      (port->parent_port && *port->parent_port)
-                                      ? port->parent_port : port->logical_port)) {
-                put_load(port->tunnel_key, MFF_LOG_OUTPORT, 0, 32, &ofpacts);
-                put_resubmit(OFTABLE_DROP_LOOPBACK, &ofpacts);
-            } else if (port->chassis && !get_localnet_port(local_datapaths,
-                                            mc->datapath->tunnel_key)) {
-                /* Add remote chassis only when localnet port not exist,
-                 * otherwise multicast will reach remote ports through localnet
-                 * port. */
-                sset_add(&remote_chassis, port->chassis->name);
-            }
-        }
-
-        /* Table 33, priority 100.
-         * =======================
-         *
-         * Handle output to the local logical ports in the multicast group, if
-         * any. */
-        bool local_ports = ofpacts.size > 0;
-        if (local_ports) {
-            /* Following delivery to local logical ports, restore the multicast
-             * group as the logical output port. */
-            put_load(mc->tunnel_key, MFF_LOG_OUTPORT, 0, 32, &ofpacts);
-
-            ofctrl_add_flow(OFTABLE_LOCAL_OUTPUT, 100, &match, &ofpacts,
-                            &mc->header_.uuid, true);
-        }
-
-        /* Table 32, priority 100.
-         * =======================
-         *
-         * Handle output to the remote chassis in the multicast group, if
-         * any. */
-        if (!sset_is_empty(&remote_chassis) || remote_ofpacts.size > 0) {
-            if (remote_ofpacts.size > 0) {
-                /* Following delivery to logical patch ports, restore the
-                 * multicast group as the logical output port. */
-                put_load(mc->tunnel_key, MFF_LOG_OUTPORT, 0, 32,
-                         &remote_ofpacts);
-            }
-
-            const char *chassis;
-            const struct chassis_tunnel *prev = NULL;
-            SSET_FOR_EACH (chassis, &remote_chassis) {
-                const struct chassis_tunnel *tun
-                    = chassis_tunnel_find(chassis);
-                if (!tun) {
-                    continue;
-                }
-
-                if (!prev || tun->type != prev->type) {
-                    put_encapsulation(mff_ovn_geneve, tun, mc->datapath,
-                                      mc->tunnel_key, &remote_ofpacts);
-                    prev = tun;
-                }
-                ofpact_put_OUTPUT(&remote_ofpacts)->port = tun->ofport;
-            }
-
-            if (remote_ofpacts.size) {
-                if (local_ports) {
-                    put_resubmit(OFTABLE_LOCAL_OUTPUT, &remote_ofpacts);
-                }
-                ofctrl_add_flow(OFTABLE_REMOTE_OUTPUT, 100,
-                                &match, &remote_ofpacts,
-                                &mc->header_.uuid, true);
-            }
-        }
-        sset_destroy(&remote_chassis);
+        consider_mc_group(mff_ovn_geneve, ct_zones, local_datapaths,
+                          mc, &ofpacts, &remote_ofpacts, true);
     }
+    ofpbuf_uninit(&remote_ofpacts);
 
     /* Table 0, priority 100.