From patchwork Wed May 11 02:51:19 2016
From: Gurucharan Shetty
To: dev@openvswitch.org
Cc: Gurucharan Shetty
Date: Tue, 10 May 2016 19:51:19 -0700
Message-Id: <1462935081-18291-3-git-send-email-guru@ovn.org>
In-Reply-To: <1462935081-18291-1-git-send-email-guru@ovn.org>
References: <1462935081-18291-1-git-send-email-guru@ovn.org>
Subject: [ovs-dev] [PATCH 3/5 v2] ovn-controller: Refactor conntrack zone allocation.
We currently allocate conntrack zones in binding.c.  It fits in nicely
there because we currently only allocate conntrack zones to logical
ports, and binding.c is where we figure out which ones are local.

An upcoming commit needs conntrack zone allocation for routers in a
gateway.  For that reason, this commit moves the conntrack zone
allocation code to ovn-controller.c, where it is easily accessible for
router zone allocation too.

Signed-off-by: Gurucharan Shetty
---
No v1. Newly added as part of v2.
---
 ovn/controller/binding.c        | 61 ++++-----------------------------------
 ovn/controller/binding.h        |  5 ++--
 ovn/controller/ovn-controller.c | 54 ++++++++++++++++++++++++++++++++--
 3 files changed, 61 insertions(+), 59 deletions(-)

diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
index 76c86bf..7f11409 100644
--- a/ovn/controller/binding.c
+++ b/ovn/controller/binding.c
@@ -77,51 +77,6 @@ get_local_iface_ids(const struct ovsrec_bridge *br_int, struct shash *lports)
 }
 
 static void
-update_ct_zones(struct sset *lports, struct simap *ct_zones,
-                unsigned long *ct_zone_bitmap)
-{
-    struct simap_node *ct_zone, *ct_zone_next;
-    const char *iface_id;
-    int scan_start = 1;
-
-    /* xxx This is wasteful to assign a zone to each port--even if no
-     * xxx security policy is applied. */
-
-    /* Delete any zones that are associated with removed ports. */
-    SIMAP_FOR_EACH_SAFE(ct_zone, ct_zone_next, ct_zones) {
-        if (!sset_contains(lports, ct_zone->name)) {
-            bitmap_set0(ct_zone_bitmap, ct_zone->data);
-            simap_delete(ct_zones, ct_zone);
-        }
-    }
-
-    /* Assign a unique zone id for each logical port. */
-    SSET_FOR_EACH(iface_id, lports) {
-        size_t zone;
-
-        if (simap_contains(ct_zones, iface_id)) {
-            continue;
-        }
-
-        /* We assume that there are 64K zones and that we own them all. */
-        zone = bitmap_scan(ct_zone_bitmap, 0, scan_start, MAX_CT_ZONES + 1);
-        if (zone == MAX_CT_ZONES + 1) {
-            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
-            VLOG_WARN_RL(&rl, "exhausted all ct zones");
-            return;
-        }
-        scan_start = zone + 1;
-
-        bitmap_set1(ct_zone_bitmap, zone);
-        simap_put(ct_zones, iface_id, zone);
-
-        /* xxx We should erase any old entries for this
-         * xxx zone, but we need a generic interface to the conntrack
-         * xxx table. */
-    }
-}
-
-static void
 add_local_datapath(struct hmap *local_datapaths,
                    const struct sbrec_port_binding *binding_rec)
 {
@@ -148,8 +103,8 @@ update_qos(const struct ovsrec_interface *iface_rec,
 
 void
 binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
-            const char *chassis_id, struct simap *ct_zones,
-            unsigned long *ct_zone_bitmap, struct hmap *local_datapaths)
+            const char *chassis_id, struct sset *all_lports,
+            struct hmap *local_datapaths)
 {
     const struct sbrec_chassis *chassis_rec;
     const struct sbrec_port_binding *binding_rec;
@@ -167,10 +122,9 @@ binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
          * We'll remove our chassis from all port binding records below. */
     }
 
-    struct sset all_lports = SSET_INITIALIZER(&all_lports);
     struct shash_node *node;
     SHASH_FOR_EACH (node, &lports) {
-        sset_add(&all_lports, node->name);
+        sset_add(all_lports, node->name);
    }
 
     /* Run through each binding record to see if it is resident on this
@@ -181,10 +135,10 @@ binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
             = shash_find_and_delete(&lports, binding_rec->logical_port);
         if (iface_rec
             || (binding_rec->parent_port && binding_rec->parent_port[0] &&
-                sset_contains(&all_lports, binding_rec->parent_port))) {
+                sset_contains(all_lports, binding_rec->parent_port))) {
             if (binding_rec->parent_port && binding_rec->parent_port[0]) {
                 /* Add child logical port to the set of all local ports. */
-                sset_add(&all_lports, binding_rec->logical_port);
+                sset_add(all_lports, binding_rec->logical_port);
             }
             add_local_datapath(local_datapaths, binding_rec);
             if (iface_rec && ctx->ovs_idl_txn) {
@@ -218,7 +172,7 @@ binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
              * to list them in all_lports because we want to allocate
              * a conntrack zone ID for each one, as we'll be creating
              * a patch port for each one. */
-            sset_add(&all_lports, binding_rec->logical_port);
+            sset_add(all_lports, binding_rec->logical_port);
         }
     }
 
@@ -226,10 +180,7 @@ binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
         VLOG_DBG("No port binding record for lport %s", node->name);
     }
 
-    update_ct_zones(&all_lports, ct_zones, ct_zone_bitmap);
-
     shash_destroy(&lports);
-    sset_destroy(&all_lports);
 }
 
 /* Returns true if the database is all cleaned up, false if more work is
diff --git a/ovn/controller/binding.h b/ovn/controller/binding.h
index 6e19c10..25f8989 100644
--- a/ovn/controller/binding.h
+++ b/ovn/controller/binding.h
@@ -24,11 +24,12 @@ struct hmap;
 struct ovsdb_idl;
 struct ovsrec_bridge;
 struct simap;
+struct sset;
 
 void binding_register_ovs_idl(struct ovsdb_idl *);
 void binding_run(struct controller_ctx *, const struct ovsrec_bridge *br_int,
-                 const char *chassis_id, struct simap *ct_zones,
-                 unsigned long *ct_zone_bitmap, struct hmap *local_datapaths);
+                 const char *chassis_id, struct sset *all_lports,
+                 struct hmap *local_datapaths);
 bool binding_cleanup(struct controller_ctx *, const char *chassis_id);
 
 #endif /* ovn/binding.h */
diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
index 9d134e1..adca938 100644
--- a/ovn/controller/ovn-controller.c
+++ b/ovn/controller/ovn-controller.c
@@ -47,6 +47,7 @@
 #include "lib/bitmap.h"
 #include "lib/hash.h"
 #include "smap.h"
+#include "sset.h"
 #include "stream-ssl.h"
 #include "stream.h"
 #include "unixctl.h"
@@ -252,6 +253,51 @@ get_ovnsb_remote_probe_interval(struct ovsdb_idl *ovs_idl, int *value)
     return false;
 }
 
+static void
+update_ct_zones(struct sset *lports, struct simap *ct_zones,
+                unsigned long *ct_zone_bitmap)
+{
+    struct simap_node *ct_zone, *ct_zone_next;
+    const char *iface_id;
+    int scan_start = 1;
+
+    /* xxx This is wasteful to assign a zone to each port--even if no
+     * xxx security policy is applied. */
+
+    /* Delete any zones that are associated with removed ports. */
+    SIMAP_FOR_EACH_SAFE(ct_zone, ct_zone_next, ct_zones) {
+        if (!sset_contains(lports, ct_zone->name)) {
+            bitmap_set0(ct_zone_bitmap, ct_zone->data);
+            simap_delete(ct_zones, ct_zone);
+        }
+    }
+
+    /* Assign a unique zone id for each logical port. */
+    SSET_FOR_EACH(iface_id, lports) {
+        size_t zone;
+
+        if (simap_contains(ct_zones, iface_id)) {
+            continue;
+        }
+
+        /* We assume that there are 64K zones and that we own them all. */
+        zone = bitmap_scan(ct_zone_bitmap, 0, scan_start, MAX_CT_ZONES + 1);
+        if (zone == MAX_CT_ZONES + 1) {
+            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
+            VLOG_WARN_RL(&rl, "exhausted all ct zones");
+            return;
+        }
+        scan_start = zone + 1;
+
+        bitmap_set1(ct_zone_bitmap, zone);
+        simap_put(ct_zones, iface_id, zone);
+
+        /* xxx We should erase any old entries for this
+         * xxx zone, but we need a generic interface to the conntrack
+         * xxx table. */
+    }
+}
+
 int
 main(int argc, char *argv[])
 {
@@ -352,6 +398,7 @@ main(int argc, char *argv[])
 
         struct hmap local_datapaths = HMAP_INITIALIZER(&local_datapaths);
         struct hmap patched_datapaths = HMAP_INITIALIZER(&patched_datapaths);
+        struct sset all_lports = SSET_INITIALIZER(&all_lports);
 
         const struct ovsrec_bridge *br_int = get_br_int(&ctx);
         const char *chassis_id = get_chassis_id(ctx.ovs_idl);
@@ -359,8 +406,8 @@ main(int argc, char *argv[])
         if (chassis_id) {
             chassis_run(&ctx, chassis_id);
             encaps_run(&ctx, br_int, chassis_id);
-            binding_run(&ctx, br_int, chassis_id, &ct_zones, ct_zone_bitmap,
-                        &local_datapaths);
+            binding_run(&ctx, br_int, chassis_id, &all_lports,
+                        &local_datapaths);
         }
 
         if (br_int) {
@@ -375,6 +422,7 @@ main(int argc, char *argv[])
 
             enum mf_field_id mff_ovn_geneve = ofctrl_run(br_int);
             pinctrl_run(&ctx, &lports, br_int);
+            update_ct_zones(&all_lports, &ct_zones, ct_zone_bitmap);
 
             struct hmap flow_table = HMAP_INITIALIZER(&flow_table);
             lflow_run(&ctx, &lports, &mcgroups, &local_datapaths,
@@ -390,6 +438,8 @@ main(int argc, char *argv[])
             lport_index_destroy(&lports);
         }
 
+        sset_destroy(&all_lports);
+
         struct local_datapath *cur_node, *next_node;
         HMAP_FOR_EACH_SAFE (cur_node, next_node, hmap_node, &local_datapaths) {
            hmap_remove(&local_datapaths, &cur_node->hmap_node);