From patchwork Thu May 19 20:02:32 2016
X-Patchwork-Submitter: Gurucharan Shetty
X-Patchwork-Id: 624313
From: Gurucharan Shetty
To: dev@openvswitch.org
Date: Thu, 19 May 2016 13:02:32 -0700
Message-Id: <1463688154-5102-3-git-send-email-guru@ovn.org>
In-Reply-To: <1463688154-5102-1-git-send-email-guru@ovn.org>
References: <1463688154-5102-1-git-send-email-guru@ovn.org>
Subject: [ovs-dev] [PATCH v3 3/5] ovn-controller: Refactor conntrack zone allocation.
We currently allocate conntrack zones in binding.c. That fits nicely there
because, so far, we only allocate conntrack zones to logical ports, and
binding.c is where we figure out which ports are local. An upcoming commit
needs conntrack zone allocation for routers in a gateway. For that reason,
this commit moves the conntrack zone allocation code to ovn-controller.c,
where it will be easily accessible for router zone allocation too.

Signed-off-by: Gurucharan Shetty
Acked-by: Ben Pfaff
---
 ovn/controller/binding.c        | 61 ++++------------------------------------
 ovn/controller/binding.h        |  5 ++--
 ovn/controller/ovn-controller.c | 54 ++++++++++++++++++++++++++++++++++--
 3 files changed, 61 insertions(+), 59 deletions(-)

diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
index e5e55b1..a07c327 100644
--- a/ovn/controller/binding.c
+++ b/ovn/controller/binding.c
@@ -77,51 +77,6 @@ get_local_iface_ids(const struct ovsrec_bridge *br_int, struct shash *lports)
 }
 
 static void
-update_ct_zones(struct sset *lports, struct simap *ct_zones,
-                unsigned long *ct_zone_bitmap)
-{
-    struct simap_node *ct_zone, *ct_zone_next;
-    const char *iface_id;
-    int scan_start = 1;
-
-    /* xxx This is wasteful to assign a zone to each port--even if no
-     * xxx security policy is applied. */
-
-    /* Delete any zones that are associated with removed ports. */
-    SIMAP_FOR_EACH_SAFE(ct_zone, ct_zone_next, ct_zones) {
-        if (!sset_contains(lports, ct_zone->name)) {
-            bitmap_set0(ct_zone_bitmap, ct_zone->data);
-            simap_delete(ct_zones, ct_zone);
-        }
-    }
-
-    /* Assign a unique zone id for each logical port. */
-    SSET_FOR_EACH(iface_id, lports) {
-        size_t zone;
-
-        if (simap_contains(ct_zones, iface_id)) {
-            continue;
-        }
-
-        /* We assume that there are 64K zones and that we own them all. */
-        zone = bitmap_scan(ct_zone_bitmap, 0, scan_start, MAX_CT_ZONES + 1);
-        if (zone == MAX_CT_ZONES + 1) {
-            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
-            VLOG_WARN_RL(&rl, "exhausted all ct zones");
-            return;
-        }
-        scan_start = zone + 1;
-
-        bitmap_set1(ct_zone_bitmap, zone);
-        simap_put(ct_zones, iface_id, zone);
-
-        /* xxx We should erase any old entries for this
-         * xxx zone, but we need a generic interface to the conntrack
-         * xxx table. */
-    }
-}
-
-static void
 add_local_datapath(struct hmap *local_datapaths,
                    const struct sbrec_port_binding *binding_rec)
 {
@@ -148,8 +103,8 @@ update_qos(const struct ovsrec_interface *iface_rec,
 
 void
 binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
-            const char *chassis_id, struct simap *ct_zones,
-            unsigned long *ct_zone_bitmap, struct hmap *local_datapaths)
+            const char *chassis_id, struct sset *all_lports,
+            struct hmap *local_datapaths)
 {
     const struct sbrec_chassis *chassis_rec;
     const struct sbrec_port_binding *binding_rec;
@@ -164,10 +119,9 @@ binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
          * We'll remove our chassis from all port binding records below. */
     }
 
-    struct sset all_lports = SSET_INITIALIZER(&all_lports);
     struct shash_node *node;
     SHASH_FOR_EACH (node, &lports) {
-        sset_add(&all_lports, node->name);
+        sset_add(all_lports, node->name);
     }
 
     /* Run through each binding record to see if it is resident on this
@@ -178,10 +132,10 @@ binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
             = shash_find_and_delete(&lports, binding_rec->logical_port);
         if (iface_rec
             || (binding_rec->parent_port && binding_rec->parent_port[0] &&
-                sset_contains(&all_lports, binding_rec->parent_port))) {
+                sset_contains(all_lports, binding_rec->parent_port))) {
             if (binding_rec->parent_port && binding_rec->parent_port[0]) {
                 /* Add child logical port to the set of all local ports. */
-                sset_add(&all_lports, binding_rec->logical_port);
+                sset_add(all_lports, binding_rec->logical_port);
             }
             add_local_datapath(local_datapaths, binding_rec);
             if (iface_rec && ctx->ovs_idl_txn) {
@@ -213,7 +167,7 @@ binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
              * to list them in all_lports because we want to allocate
              * a conntrack zone ID for each one, as we'll be creating
              * a patch port for each one. */
-            sset_add(&all_lports, binding_rec->logical_port);
+            sset_add(all_lports, binding_rec->logical_port);
         }
     }
 
@@ -221,10 +175,7 @@ binding_run(struct controller_ctx *ctx, const struct ovsrec_bridge *br_int,
         VLOG_DBG("No port binding record for lport %s", node->name);
     }
 
-    update_ct_zones(&all_lports, ct_zones, ct_zone_bitmap);
-
     shash_destroy(&lports);
-    sset_destroy(&all_lports);
 }
 
 /* Returns true if the database is all cleaned up, false if more work is
diff --git a/ovn/controller/binding.h b/ovn/controller/binding.h
index 6e19c10..25f8989 100644
--- a/ovn/controller/binding.h
+++ b/ovn/controller/binding.h
@@ -24,11 +24,12 @@ struct hmap;
 struct ovsdb_idl;
 struct ovsrec_bridge;
 struct simap;
+struct sset;
 
 void binding_register_ovs_idl(struct ovsdb_idl *);
 void binding_run(struct controller_ctx *, const struct ovsrec_bridge *br_int,
-                 const char *chassis_id, struct simap *ct_zones,
-                 unsigned long *ct_zone_bitmap, struct hmap *local_datapaths);
+                 const char *chassis_id, struct sset *all_lports,
+                 struct hmap *local_datapaths);
 bool binding_cleanup(struct controller_ctx *, const char *chassis_id);
 
 #endif /* ovn/binding.h */
diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
index bc4c24f..a87937f 100644
--- a/ovn/controller/ovn-controller.c
+++ b/ovn/controller/ovn-controller.c
@@ -47,6 +47,7 @@
 #include "lib/bitmap.h"
 #include "lib/hash.h"
 #include "smap.h"
+#include "sset.h"
 #include "stream-ssl.h"
 #include "stream.h"
 #include "unixctl.h"
@@ -252,6 +253,51 @@ get_ovnsb_remote_probe_interval(struct ovsdb_idl *ovs_idl, int *value)
     return false;
 }
 
+static void
+update_ct_zones(struct sset *lports, struct simap *ct_zones,
+                unsigned long *ct_zone_bitmap)
+{
+    struct simap_node *ct_zone, *ct_zone_next;
+    const char *iface_id;
+    int scan_start = 1;
+
+    /* xxx This is wasteful to assign a zone to each port--even if no
+     * xxx security policy is applied. */
+
+    /* Delete any zones that are associated with removed ports. */
+    SIMAP_FOR_EACH_SAFE(ct_zone, ct_zone_next, ct_zones) {
+        if (!sset_contains(lports, ct_zone->name)) {
+            bitmap_set0(ct_zone_bitmap, ct_zone->data);
+            simap_delete(ct_zones, ct_zone);
+        }
+    }
+
+    /* Assign a unique zone id for each logical port. */
+    SSET_FOR_EACH(iface_id, lports) {
+        size_t zone;
+
+        if (simap_contains(ct_zones, iface_id)) {
+            continue;
+        }
+
+        /* We assume that there are 64K zones and that we own them all. */
+        zone = bitmap_scan(ct_zone_bitmap, 0, scan_start, MAX_CT_ZONES + 1);
+        if (zone == MAX_CT_ZONES + 1) {
+            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
+            VLOG_WARN_RL(&rl, "exhausted all ct zones");
+            return;
+        }
+        scan_start = zone + 1;
+
+        bitmap_set1(ct_zone_bitmap, zone);
+        simap_put(ct_zones, iface_id, zone);
+
+        /* xxx We should erase any old entries for this
+         * xxx zone, but we need a generic interface to the conntrack
+         * xxx table. */
+    }
+}
+
 int
 main(int argc, char *argv[])
 {
@@ -353,6 +399,7 @@
     struct hmap local_datapaths = HMAP_INITIALIZER(&local_datapaths);
     struct hmap patched_datapaths = HMAP_INITIALIZER(&patched_datapaths);
+    struct sset all_lports = SSET_INITIALIZER(&all_lports);
 
     const struct ovsrec_bridge *br_int = get_br_int(&ctx);
     const char *chassis_id = get_chassis_id(ctx.ovs_idl);
 
@@ -360,8 +407,8 @@
         if (chassis_id) {
             chassis_run(&ctx, chassis_id);
             encaps_run(&ctx, br_int, chassis_id);
-            binding_run(&ctx, br_int, chassis_id, &ct_zones, ct_zone_bitmap,
-                        &local_datapaths);
+            binding_run(&ctx, br_int, chassis_id, &all_lports,
+                        &local_datapaths);
         }
 
         if (br_int && chassis_id) {
@@ -376,6 +423,7 @@
             enum mf_field_id mff_ovn_geneve = ofctrl_run(br_int);
             pinctrl_run(&ctx, &lports, br_int, chassis_id, &local_datapaths);
+            update_ct_zones(&all_lports, &ct_zones, ct_zone_bitmap);
 
             struct hmap flow_table = HMAP_INITIALIZER(&flow_table);
             lflow_run(&ctx, &lports, &mcgroups, &local_datapaths,
@@ -391,6 +439,8 @@ main(int argc, char *argv[])
         lport_index_destroy(&lports);
     }
 
+    sset_destroy(&all_lports);
+
     struct local_datapath *cur_node, *next_node;
     HMAP_FOR_EACH_SAFE (cur_node, next_node, hmap_node, &local_datapaths) {
         hmap_remove(&local_datapaths, &cur_node->hmap_node);