From patchwork Wed Apr 13 13:38:31 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Moats
X-Patchwork-Id: 610030
From: Ryan Moats
To: dev@openvswitch.org
Date: Wed, 13 Apr 2016 08:38:31 -0500
Message-Id: <1460554715-7824-3-git-send-email-rmoats@us.ibm.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1460554715-7824-1-git-send-email-rmoats@us.ibm.com>
References: <1460554715-7824-1-git-send-email-rmoats@us.ibm.com>
Subject: [ovs-dev] [PATCH v14 2/6] Change encaps_run to work incrementally

As a side effect, the tunnel context is now persisted across calls to
encaps_run().

Signed-off-by: Ryan Moats
---
 ovn/controller/encaps.c         | 162 ++++++++++++++++++++++++++++------------
 ovn/controller/ovn-controller.c |   5 ++
 2 files changed, 120 insertions(+), 47 deletions(-)

diff --git a/ovn/controller/encaps.c b/ovn/controller/encaps.c
index dfb11c0..4c2bb84 100644
--- a/ovn/controller/encaps.c
+++ b/ovn/controller/encaps.c
@@ -15,6 +15,7 @@
 
 #include <config.h>
 #include "encaps.h"
+#include "lflow.h"
 
 #include "lib/hash.h"
 #include "lib/sset.h"
@@ -49,6 +50,7 @@ struct tunnel_ctx {
      * generated we remove them.  After generating all the rows, any
      * remaining in 'tunnel_hmap' must be deleted from the database. */
     struct hmap tunnel_hmap;
+    struct hmap tunnel_hmap_by_uuid;
 
     /* Names of all ports in the bridge, to allow checking uniqueness when
      * adding a new tunnel. */
@@ -58,8 +60,18 @@ struct tunnel_ctx {
     const struct ovsrec_bridge *br_int;
 };
 
+struct tunnel_ctx tc = {
+    .tunnel_hmap = HMAP_INITIALIZER(&tc.tunnel_hmap),
+    .tunnel_hmap_by_uuid = HMAP_INITIALIZER(&tc.tunnel_hmap_by_uuid),
+    .port_names = SSET_INITIALIZER(&tc.port_names),
+};
+
+bool process_full_encaps = false;
+
 struct port_hash_node {
     struct hmap_node node;
+    struct hmap_node uuid_node;
+    const struct uuid *uuid;
     const struct ovsrec_port *port;
     const struct ovsrec_bridge *bridge;
 };
@@ -92,7 +104,7 @@ port_hash_rec(const struct ovsrec_port *port)
 }
 
 static char *
-tunnel_create_name(struct tunnel_ctx *tc, const char *chassis_id)
+tunnel_create_name(const char *chassis_id)
 {
     int i;
 
@@ -100,7 +112,7 @@ tunnel_create_name(struct tunnel_ctx *tc, const char *chassis_id)
         char *port_name;
         port_name = xasprintf("ovn-%.6s-%x", chassis_id, i);
 
-        if (!sset_contains(&tc->port_names, port_name)) {
+        if (!sset_contains(&tc.port_names, port_name)) {
             return port_name;
         }
 
@@ -110,19 +122,32 @@ tunnel_create_name(struct tunnel_ctx *tc, const char *chassis_id)
     return NULL;
 }
 
+static struct port_hash_node *
+port_lookup_by_uuid(const struct uuid *uuid)
+{
+    struct hmap_node *node = hmap_first_with_hash(&tc.tunnel_hmap_by_uuid,
+                                                  uuid_hash(uuid));
+    if (node) {
+        return CONTAINER_OF(node, struct port_hash_node, uuid_node);
+    }
+    return NULL;
+}
 static void
-tunnel_add(struct tunnel_ctx *tc, const char *new_chassis_id,
+tunnel_add(const struct sbrec_chassis *chassis_rec,
            const struct sbrec_encap *encap)
 {
     struct port_hash_node *hash_node;
+    const char *new_chassis_id = chassis_rec->name;
+
+    /* Check whether such a row already exists in OVS.  If so, update
+     * the uuid field and insert into the by uuid hashmap.  If not,
+     * create the tunnel */
 
-    /* Check whether such a row already exists in OVS.  If so, remove it
-     * from 'tc->tunnel_hmap' and we're done. */
     HMAP_FOR_EACH_WITH_HASH (hash_node, node,
                              port_hash(new_chassis_id, encap->type, encap->ip),
-                             &tc->tunnel_hmap) {
+                             &tc.tunnel_hmap) {
         const struct ovsrec_port *port = hash_node->port;
         const char *chassis_id = smap_get(&port->external_ids,
                                           "ovn-chassis-id");
@@ -142,8 +167,12 @@ tunnel_add(struct tunnel_ctx *tc, const char *new_chassis_id,
         if (!strcmp(new_chassis_id, chassis_id)
             && !strcmp(encap->type, iface->type)
             && !strcmp(encap->ip, ip)) {
-            hmap_remove(&tc->tunnel_hmap, &hash_node->node);
-            free(hash_node);
+
+            hash_node->uuid = &chassis_rec->header_.uuid;
+            if (!port_lookup_by_uuid(hash_node->uuid)) {
+                hmap_insert(&tc.tunnel_hmap_by_uuid, &hash_node->uuid_node,
+                            uuid_hash(hash_node->uuid));
+            }
             return;
         }
     }
@@ -155,14 +184,14 @@ tunnel_add(struct tunnel_ctx *tc, const char *new_chassis_id,
     char *port_name;
     size_t i;
 
-    port_name = tunnel_create_name(tc, new_chassis_id);
+    port_name = tunnel_create_name(new_chassis_id);
     if (!port_name) {
         VLOG_WARN("Unable to allocate unique name for '%s' tunnel",
                   new_chassis_id);
         return;
     }
 
-    iface = ovsrec_interface_insert(tc->ovs_txn);
+    iface = ovsrec_interface_insert(tc.ovs_txn);
     ovsrec_interface_set_name(iface, port_name);
     ovsrec_interface_set_type(iface, encap->type);
     smap_add(&options, "remote_ip", encap->ip);
@@ -170,23 +199,25 @@ tunnel_add(struct tunnel_ctx *tc, const char *new_chassis_id,
     ovsrec_interface_set_options(iface, &options);
     smap_destroy(&options);
 
-    port = ovsrec_port_insert(tc->ovs_txn);
+    port = ovsrec_port_insert(tc.ovs_txn);
     ovsrec_port_set_name(port, port_name);
     ovsrec_port_set_interfaces(port, &iface, 1);
     const struct smap id = SMAP_CONST1(&id, "ovn-chassis-id", new_chassis_id);
     ovsrec_port_set_external_ids(port, &id);
 
-    ports = xmalloc(sizeof *tc->br_int->ports * (tc->br_int->n_ports + 1));
-    for (i = 0; i < tc->br_int->n_ports; i++) {
-        ports[i] = tc->br_int->ports[i];
+    ports = xmalloc(sizeof *tc.br_int->ports * (tc.br_int->n_ports + 1));
+    for (i = 0; i < tc.br_int->n_ports; i++) {
+        ports[i] = tc.br_int->ports[i];
     }
-    ports[tc->br_int->n_ports] = port;
-    ovsrec_bridge_verify_ports(tc->br_int);
-    ovsrec_bridge_set_ports(tc->br_int, ports, tc->br_int->n_ports + 1);
+    ports[tc.br_int->n_ports] = port;
+    ovsrec_bridge_verify_ports(tc.br_int);
+    ovsrec_bridge_set_ports(tc.br_int, ports, tc.br_int->n_ports + 1);
 
-    sset_add(&tc->port_names, port_name);
+    sset_add(&tc.port_names, port_name);
     free(port_name);
     free(ports);
+    /* reset_flow_processing(); */
+    process_full_encaps = true;
 }
 
 static void
@@ -224,6 +255,21 @@ preferred_encap(const struct sbrec_chassis *chassis_rec)
     return best_encap;
 }
 
+static void
+check_and_add_tunnel(const struct sbrec_chassis *chassis_rec,
+                     const char *chassis_id)
+{
+    if (strcmp(chassis_rec->name, chassis_id)) {
+        /* Create tunnels to the other chassis. */
+        const struct sbrec_encap *encap = preferred_encap(chassis_rec);
+        if (!encap) {
+            VLOG_INFO("No supported encaps for '%s'", chassis_rec->name);
+            return;
+        }
+        tunnel_add(chassis_rec, encap);
+    }
+}
+
 void
 encaps_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
            const char *chassis_id)
@@ -235,12 +284,7 @@ encaps_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
     const struct sbrec_chassis *chassis_rec;
     const struct ovsrec_bridge *br;
 
-    struct tunnel_ctx tc = {
-        .tunnel_hmap = HMAP_INITIALIZER(&tc.tunnel_hmap),
-        .port_names = SSET_INITIALIZER(&tc.port_names),
-        .br_int = br_int
-    };
-
+    tc.br_int = br_int;
     tc.ovs_txn = ctx->ovs_idl_txn;
     ovsdb_idl_txn_add_comment(tc.ovs_txn,
                               "ovn-controller: modifying OVS tunnels '%s'",
@@ -257,37 +301,61 @@ encaps_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
 
             sset_add(&tc.port_names, port->name);
 
-            if (smap_get(&port->external_ids, "ovn-chassis-id")) {
-                struct port_hash_node *hash_node = xzalloc(sizeof *hash_node);
-                hash_node->bridge = br;
-                hash_node->port = port;
-                hmap_insert(&tc.tunnel_hmap, &hash_node->node,
-                            port_hash_rec(port));
+            const char *old_chassis_id = smap_get(&port->external_ids,
+                                                  "ovn-chassis-id");
+            if (old_chassis_id) {
+                if (!hmap_first_with_hash(&tc.tunnel_hmap,
+                                          port_hash_rec(port))) {
+                    struct port_hash_node *hash_node =
+                        xzalloc(sizeof *hash_node);
+                    hash_node->bridge = br;
+                    hash_node->port = port;
+                    hmap_insert(&tc.tunnel_hmap, &hash_node->node,
+                                port_hash_rec(port));
+                    process_full_encaps = true;
+                }
             }
         }
     }
 
-    SBREC_CHASSIS_FOR_EACH(chassis_rec, ctx->ovnsb_idl) {
-        if (strcmp(chassis_rec->name, chassis_id)) {
-            /* Create tunnels to the other chassis. */
-            const struct sbrec_encap *encap = preferred_encap(chassis_rec);
-            if (!encap) {
-                VLOG_INFO("No supported encaps for '%s'", chassis_rec->name);
+    if (process_full_encaps) {
+        SBREC_CHASSIS_FOR_EACH (chassis_rec, ctx->ovnsb_idl) {
+            check_and_add_tunnel(chassis_rec, chassis_id);
+        }
+        process_full_encaps = false;
+    } else {
+        SBREC_CHASSIS_FOR_EACH_TRACKED (chassis_rec, ctx->ovnsb_idl) {
+            bool is_deleted = sbrec_chassis_row_get_seqno(chassis_rec,
+                                                  OVSDB_IDL_CHANGE_DELETE) > 0;
+            bool is_new = sbrec_chassis_row_get_seqno(chassis_rec,
+                                                  OVSDB_IDL_CHANGE_MODIFY) == 0;
+
+            if (is_deleted) {
+                /* lookup the tunnel by row uuid and remove it */
+                struct port_hash_node *port_hash =
+                    port_lookup_by_uuid(&chassis_rec->header_.uuid);
+                if (port_hash) {
+                    bridge_delete_port(port_hash->bridge, port_hash->port);
+                    sset_find_and_delete(&tc.port_names,
+                                         port_hash->port->name);
+                    hmap_remove(&tc.tunnel_hmap, &port_hash->node);
+                    hmap_remove(&tc.tunnel_hmap_by_uuid,
+                                &port_hash->uuid_node);
+                    free(port_hash);
+                }
                 continue;
             }
-            tunnel_add(&tc, chassis_rec->name, encap);
+            if (!is_new) {
+                if (strcmp(chassis_rec->name, chassis_id)) {
+                    /* TODO: find the tunnel by looking it up based on its
+                     * uuid and then change it. */
+                    ;
+                }
+            } else {
+                check_and_add_tunnel(chassis_rec, chassis_id);
+            }
         }
     }
-
-    /* Delete any existing OVN tunnels that were not still around. */
-    struct port_hash_node *hash_node, *next_hash_node;
-    HMAP_FOR_EACH_SAFE (hash_node, next_hash_node, node, &tc.tunnel_hmap) {
-        hmap_remove(&tc.tunnel_hmap, &hash_node->node);
-        bridge_delete_port(hash_node->bridge, hash_node->port);
-        free(hash_node);
-    }
-    hmap_destroy(&tc.tunnel_hmap);
-    sset_destroy(&tc.port_names);
 }
 
 /* Returns true if the database is all cleaned up, false if more work is
diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
index 7c68c9d..16731a4 100644
--- a/ovn/controller/ovn-controller.c
+++ b/ovn/controller/ovn-controller.c
@@ -304,6 +304,10 @@ main(int argc, char *argv[])
     char *ovnsb_remote = get_ovnsb_remote(ovs_idl_loop.idl);
     struct ovsdb_idl_loop ovnsb_idl_loop = OVSDB_IDL_LOOP_INITIALIZER(
         ovsdb_idl_create(ovnsb_remote, &sbrec_idl_class, true, true));
+
+    /* track the southbound idl */
+    ovsdb_idl_track_add_all(ovnsb_idl_loop.idl);
+
     ovsdb_idl_get_initial_snapshot(ovnsb_idl_loop.idl);
 
     int probe_interval = 0;
@@ -398,6 +402,7 @@ main(int argc, char *argv[])
         }
         ovsdb_idl_loop_commit_and_wait(&ovnsb_idl_loop);
         ovsdb_idl_loop_commit_and_wait(&ovs_idl_loop);
+        ovsdb_idl_track_clear(ovnsb_idl_loop.idl);
         poll_block();
         if (should_service_stop()) {
            exiting = true;
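
Note for reviewers (not part of the patch): the new else branch in encaps_run()
relies on the OVSDB IDL change-tracking API that the ovn-controller.c hunk
enables with ovsdb_idl_track_add_all() and drains with ovsdb_idl_track_clear().
The fragment below is a minimal, illustrative sketch of that classification
pattern; the function name summarize_chassis_changes() and its logging are
hypothetical and do not appear in the patch.  Include paths follow the tree
layout already used in encaps.c.

/* Reviewer sketch only -- not part of the patch.  Assumes the southbound
 * IDL was put into tracking mode with ovsdb_idl_track_add_all() and that
 * ovsdb_idl_track_clear() runs once per main-loop iteration, as the
 * ovn-controller.c hunk above arranges. */
#include "lib/ovsdb-idl.h"
#include "openvswitch/vlog.h"
#include "ovn/lib/ovn-sb-idl.h"

VLOG_DEFINE_THIS_MODULE(chassis_track_sketch);

static void
summarize_chassis_changes(struct ovsdb_idl *idl)
{
    const struct sbrec_chassis *ch;
    int added = 0, deleted = 0, modified = 0;

    /* Only rows touched since the last ovsdb_idl_track_clear() are visited. */
    SBREC_CHASSIS_FOR_EACH_TRACKED (ch, idl) {
        if (sbrec_chassis_row_get_seqno(ch, OVSDB_IDL_CHANGE_DELETE) > 0) {
            deleted++;              /* Row was deleted; treat its data as gone. */
        } else if (sbrec_chassis_row_get_seqno(ch,
                                               OVSDB_IDL_CHANGE_MODIFY) == 0) {
            added++;                /* Never reported as modified: a new row. */
        } else {
            modified++;             /* Existing row whose columns changed. */
        }
    }
    VLOG_INFO("chassis changes: %d added, %d deleted, %d modified",
              added, deleted, modified);
}

Because ovsdb_idl_track_clear() runs at the end of every main-loop iteration,
each tracked row is visible to exactly one encaps_run() invocation, which is
what lets the incremental path skip the full SBREC_CHASSIS_FOR_EACH scan.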