From patchwork Wed Aug 3 14:43:49 2022
X-Patchwork-Submitter: Ilya Maximets
X-Patchwork-Id: 1663405
From: Ilya Maximets <i.maximets@ovn.org>
To: ovs-dev@openvswitch.org
Cc: Ilya Maximets, Dumitru Ceara
Date: Wed, 3 Aug 2022 16:43:49 +0200
Message-Id: <20220803144349.3868591-1-i.maximets@ovn.org>
X-Mailer: git-send-email 2.34.3
Subject: [ovs-dev] [PATCH ovn] northd: Use datapath groups for southbound
 load balancers.

Some CMSes, like ovn-kubernetes, tend to create load balancers attached
to a large number of logical switches or routers.  For each of these
load balancers, northd creates a record in the Load_Balancer table of
the Southbound database with all the logical datapaths (switches,
routers) listed in the 'datapaths' column.  All of these logical
datapaths are references to datapath bindings.
With a large number of such load balancers applied to the same set of
logical datapaths, the size of the Southbound database can grow
significantly, and these references can account for up to 90% of all
the traffic between the Sb DB and ovn-controllers.  For example, while
creating 3 load balancers (one each for TCP, UDP and SCTP) in a
250-node cluster in the ovn-heater cluster-density test, the database
update sent to every ovn-controller is about 40 KB in size, out of
which 36 KB are just the UUIDs of 250 logical datapaths repeated 3
times.

Introduce a new column 'datapath_group' in the Load_Balancer table, so
that we can create a group once and just refer to it from each load
balancer.  This saves a lot of CPU time, memory and network bandwidth.
Re-use the already existing Logical_DP_Group table to store these
groups.

In a 250-node cluster-density test with ovn-heater that creates 30K
load balancers applied to all 250 logical switches, the following
improvements were observed:

 - Southbound database size decreased from 310 MB to 118 MB.

 - CPU time on the Southbound ovsdb-server process decreased by a
   factor of 7, from ~35 minutes per server to ~5.

 - Memory consumption on the Southbound ovsdb-server process decreased
   from 12 GB RSS (peaked at ~14) down to ~1 GB (peaked at ~2).

 - Memory consumption on ovn-controller processes decreased by 400 MB
   on each node, from 1300 MB to 900 MB.  A similar decrease was
   observed for ovn-northd as well.

This change adds some extra work to the ovn-northd process to find or
create datapath groups.  CPU time in the test above increased by ~10%,
but the overall round-trip time for changes in OVN to be propagated to
ovn-controllers is still noticeably lower, due to improvements in other
components such as the Sb DB.

Implementation details:

 - Groups are not shared between logical flows and load balancers, so
   there may be duplicated groups, but that should not be a big
   problem.  Sharing groups would require some code re-structuring with
   unclear benefits.
 - ovn-controller and ovn-sbctl parse both the 'datapaths' column and
   the new 'datapath_group' column, to stay backward compatible.

 - The Load_Balancer table is not conditionally monitored by
   ovn-controller right now, so that behavior is not changed here.  If
   conditional monitoring is introduced later, the same considerations
   as for logical flows should apply.

Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Acked-by: Ales Musil
---
 controller/lflow.c          |  44 ++++++++--
 controller/ovn-controller.c |  54 ++++++++----
 northd/northd.c             | 168 ++++++++++++++++++++++++------------
 ovn-sb.ovsschema            |   8 +-
 ovn-sb.xml                  |   5 ++
 tests/ovn-northd.at         |  68 ++++++++++++---
 utilities/ovn-sbctl.c       |  40 +++++----
 7 files changed, 278 insertions(+), 109 deletions(-)

diff --git a/controller/lflow.c b/controller/lflow.c
index 6055097b5..eef44389f 100644
--- a/controller/lflow.c
+++ b/controller/lflow.c
@@ -2063,6 +2063,20 @@ add_lb_vip_hairpin_flows(struct ovn_controller_lb *lb,
     ofpbuf_uninit(&ofpacts);
 }
 
+static void
+add_lb_ct_snat_hairpin_for_dp(const struct ovn_controller_lb *lb,
+                              const struct sbrec_datapath_binding *datapath,
+                              struct match *dp_match,
+                              struct ofpbuf *dp_acts,
+                              struct ovn_desired_flow_table *flow_table)
+{
+    match_set_metadata(dp_match, htonll(datapath->tunnel_key));
+    ofctrl_add_or_append_flow(flow_table, OFTABLE_CT_SNAT_HAIRPIN, 200,
+                              lb->slb->header_.uuid.parts[0],
+                              dp_match, dp_acts, &lb->slb->header_.uuid,
+                              NX_CTLR_NO_METER, NULL);
+}
+
 static void
 add_lb_ct_snat_hairpin_dp_flows(struct ovn_controller_lb *lb,
                                 uint32_t id,
@@ -2088,12 +2102,15 @@ add_lb_ct_snat_hairpin_dp_flows(struct ovn_controller_lb *lb,
     struct match dp_match = MATCH_CATCHALL_INITIALIZER;
 
     for (size_t i = 0; i < lb->slb->n_datapaths; i++) {
-        match_set_metadata(&dp_match,
-                           htonll(lb->slb->datapaths[i]->tunnel_key));
-        ofctrl_add_or_append_flow(flow_table, OFTABLE_CT_SNAT_HAIRPIN, 200,
-                                  lb->slb->header_.uuid.parts[0],
-                                  &dp_match, &dp_acts, &lb->slb->header_.uuid,
-                                  NX_CTLR_NO_METER, NULL);
+        add_lb_ct_snat_hairpin_for_dp(lb, lb->slb->datapaths[i],
+                                      &dp_match, &dp_acts, flow_table);
+    }
+
+    if (lb->slb->datapath_group) {
+        for (size_t i = 0; i < lb->slb->datapath_group->n_datapaths; i++) {
+            add_lb_ct_snat_hairpin_for_dp(
+                lb, lb->slb->datapath_group->datapaths[i],
+                &dp_match, &dp_acts, flow_table);
+        }
     }
 
     ofpbuf_uninit(&dp_acts);
@@ -2351,7 +2368,20 @@ consider_lb_hairpin_flows(const struct sbrec_load_balancer *sbrec_lb,
         }
     }
 
-    if (i == sbrec_lb->n_datapaths) {
+    if (sbrec_lb->n_datapaths && i == sbrec_lb->n_datapaths) {
+        return;
+    }
+
+    struct sbrec_logical_dp_group *dp_group = sbrec_lb->datapath_group;
+
+    for (i = 0; dp_group && i < dp_group->n_datapaths; i++) {
+        if (get_local_datapath(local_datapaths,
+                               dp_group->datapaths[i]->tunnel_key)) {
+            break;
+        }
+    }
+
+    if (dp_group && i == dp_group->n_datapaths) {
         return;
     }
 
diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c
index 5449743e8..46de9e710 100644
--- a/controller/ovn-controller.c
+++ b/controller/ovn-controller.c
@@ -2210,6 +2210,33 @@ load_balancers_by_dp_find(struct hmap *lbs,
     return NULL;
 }
 
+static void
+load_balancers_by_dp_add_one(const struct hmap *local_datapaths,
+                             const struct sbrec_datapath_binding *datapath,
+                             const struct sbrec_load_balancer *lb,
+                             struct hmap *lbs)
+{
+    struct local_datapath *ldp =
+        get_local_datapath(local_datapaths, datapath->tunnel_key);
+
+    if (!ldp) {
+        return;
+    }
+
+    struct load_balancers_by_dp *lbs_by_dp =
+        load_balancers_by_dp_find(lbs, ldp->datapath);
+    if (!lbs_by_dp) {
+        lbs_by_dp = load_balancers_by_dp_create(lbs, ldp->datapath);
+    }
+
+    if (lbs_by_dp->n_dp_lbs == lbs_by_dp->n_allocated_dp_lbs) {
+        lbs_by_dp->dp_lbs = x2nrealloc(lbs_by_dp->dp_lbs,
+                                       &lbs_by_dp->n_allocated_dp_lbs,
+                                       sizeof *lbs_by_dp->dp_lbs);
+    }
+    lbs_by_dp->dp_lbs[lbs_by_dp->n_dp_lbs++] = lb;
+}
+
 /* Builds and returns a hmap of 'load_balancers_by_dp', one record for each
  * local datapath. */
@@ -2223,25 +2250,14 @@ load_balancers_by_dp_init(const struct hmap *local_datapaths,
     const struct sbrec_load_balancer *lb;
     SBREC_LOAD_BALANCER_TABLE_FOR_EACH (lb, lb_table) {
         for (size_t i = 0; i < lb->n_datapaths; i++) {
-            struct local_datapath *ldp =
-                get_local_datapath(local_datapaths,
-                                   lb->datapaths[i]->tunnel_key);
-            if (!ldp) {
-                continue;
-            }
-
-            struct load_balancers_by_dp *lbs_by_dp =
-                load_balancers_by_dp_find(lbs, ldp->datapath);
-            if (!lbs_by_dp) {
-                lbs_by_dp = load_balancers_by_dp_create(lbs, ldp->datapath);
-            }
-
-            if (lbs_by_dp->n_dp_lbs == lbs_by_dp->n_allocated_dp_lbs) {
-                lbs_by_dp->dp_lbs = x2nrealloc(lbs_by_dp->dp_lbs,
-                                               &lbs_by_dp->n_allocated_dp_lbs,
-                                               sizeof *lbs_by_dp->dp_lbs);
-            }
-            lbs_by_dp->dp_lbs[lbs_by_dp->n_dp_lbs++] = lb;
+            load_balancers_by_dp_add_one(local_datapaths,
+                                         lb->datapaths[i], lb, lbs);
+        }
+        for (size_t i = 0; lb->datapath_group
+                           && i < lb->datapath_group->n_datapaths; i++) {
+            load_balancers_by_dp_add_one(local_datapaths,
+                                         lb->datapath_group->datapaths[i],
+                                         lb, lbs);
         }
     }
     return lbs;
diff --git a/northd/northd.c b/northd/northd.c
index 0fcd3a642..c404dbdec 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -4249,6 +4249,48 @@ build_lb_port_related_data(struct hmap *datapaths, struct hmap *ports,
     build_lswitch_lbs_from_lrouter(datapaths, lbs);
 }
 
+
+struct ovn_dp_group {
+    struct hmapx map;
+    struct sbrec_logical_dp_group *dp_group;
+    struct hmap_node node;
+};
+
+static struct ovn_dp_group *
+ovn_dp_group_find(const struct hmap *dp_groups,
+                  const struct hmapx *od, uint32_t hash)
+{
+    struct ovn_dp_group *dpg;
+
+    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
+        if (hmapx_equals(&dpg->map, od)) {
+            return dpg;
+        }
+    }
+    return NULL;
+}
+
+static struct sbrec_logical_dp_group *
+ovn_sb_insert_logical_dp_group(struct ovsdb_idl_txn *ovnsb_txn,
+                               const struct hmapx *od)
+{
+    struct sbrec_logical_dp_group *dp_group;
+    const struct sbrec_datapath_binding **sb;
+    const struct hmapx_node *node;
+    int n = 0;
+
+    sb = xmalloc(hmapx_count(od) * sizeof *sb);
+    HMAPX_FOR_EACH (node, od) {
+        sb[n++] = ((struct ovn_datapath *) node->data)->sb;
+    }
+    dp_group = sbrec_logical_dp_group_insert(ovnsb_txn);
+    sbrec_logical_dp_group_set_datapaths(
+        dp_group, (struct sbrec_datapath_binding **) sb, n);
+    free(sb);
+
+    return dp_group;
+}
+
 /* Syncs relevant load balancers (applied to logical switches) to the
  * Southbound database. */
@@ -4256,9 +4298,13 @@ static void
 sync_lbs(struct northd_input *input_data, struct ovsdb_idl_txn *ovnsb_txn,
          struct hmap *datapaths, struct hmap *lbs)
 {
+    struct hmap dp_groups = HMAP_INITIALIZER(&dp_groups);
     struct ovn_northd_lb *lb;
 
-    /* Delete any stale SB load balancer rows. */
+    /* Delete any stale SB load balancer rows and collect existing valid
+     * datapath groups. */
+    struct hmapx existing_sb_dp_groups =
+        HMAPX_INITIALIZER(&existing_sb_dp_groups);
     struct hmapx existing_lbs = HMAPX_INITIALIZER(&existing_lbs);
     const struct sbrec_load_balancer *sbrec_lb;
     SBREC_LOAD_BALANCER_TABLE_FOR_EACH_SAFE (sbrec_lb,
@@ -4281,11 +4327,46 @@ sync_lbs(struct northd_input *input_data, struct ovsdb_idl_txn *ovnsb_txn,
         lb = ovn_northd_lb_find(lbs, &lb_uuid);
         if (!lb || !lb->n_nb_ls || !hmapx_add(&existing_lbs, lb)) {
             sbrec_load_balancer_delete(sbrec_lb);
-        } else {
-            lb->slb = sbrec_lb;
+            continue;
+        }
+
+        lb->slb = sbrec_lb;
+
+        /* Collect the datapath group. */
+        struct sbrec_logical_dp_group *dp_group = sbrec_lb->datapath_group;
+
+        if (!dp_group || !hmapx_add(&existing_sb_dp_groups, dp_group)) {
+            continue;
+        }
+
+        struct ovn_dp_group *dpg = xzalloc(sizeof *dpg);
+        size_t i;
+
+        hmapx_init(&dpg->map);
+        for (i = 0; i < dp_group->n_datapaths; i++) {
+            struct ovn_datapath *datapath_od;
+
+            datapath_od = ovn_datapath_from_sbrec(datapaths,
+                                                  dp_group->datapaths[i]);
+            if (!datapath_od || ovn_datapath_is_stale(datapath_od)) {
+                break;
+            }
+            hmapx_add(&dpg->map, datapath_od);
+        }
+        if (i == dp_group->n_datapaths) {
+            uint32_t hash = hash_int(hmapx_count(&dpg->map), 0);
+
+            if (!ovn_dp_group_find(&dp_groups, &dpg->map, hash)) {
+                dpg->dp_group = dp_group;
+                hmap_insert(&dp_groups, &dpg->node, hash);
+                continue;
+            }
         }
+        hmapx_destroy(&dpg->map);
+        free(dpg);
     }
     hmapx_destroy(&existing_lbs);
+    hmapx_destroy(&existing_sb_dp_groups);
 
     /* Create SB Load balancer records if not present and sync
      * the SB load balancer columns. */
@@ -4302,13 +4383,6 @@ sync_lbs(struct northd_input *input_data, struct ovsdb_idl_txn *ovnsb_txn,
         smap_clone(&options, &lb->nlb->options);
         smap_replace(&options, "hairpin_orig_tuple", "true");
 
-        struct sbrec_datapath_binding **lb_dps =
-            xmalloc(lb->n_nb_ls * sizeof *lb_dps);
-        for (size_t i = 0; i < lb->n_nb_ls; i++) {
-            lb_dps[i] = CONST_CAST(struct sbrec_datapath_binding *,
-                                   lb->nb_ls[i]->sb);
-        }
-
         if (!lb->slb) {
             sbrec_lb = sbrec_load_balancer_insert(ovnsb_txn);
             lb->slb = sbrec_lb;
@@ -4319,15 +4393,44 @@ sync_lbs(struct northd_input *input_data, struct ovsdb_idl_txn *ovnsb_txn,
             sbrec_load_balancer_set_external_ids(sbrec_lb, &external_ids);
             free(lb_id);
         }
+
+        /* Find datapath group for this load balancer. */
+        struct hmapx lb_dps = HMAPX_INITIALIZER(&lb_dps);
+        struct ovn_dp_group *dpg;
+        uint32_t hash;
+
+        for (size_t i = 0; i < lb->n_nb_ls; i++) {
+            hmapx_add(&lb_dps, lb->nb_ls[i]);
+        }
+
+        hash = hash_int(hmapx_count(&lb_dps), 0);
+        dpg = ovn_dp_group_find(&dp_groups, &lb_dps, hash);
+        if (!dpg) {
+            dpg = xzalloc(sizeof *dpg);
+            dpg->dp_group = ovn_sb_insert_logical_dp_group(ovnsb_txn, &lb_dps);
+            hmapx_clone(&dpg->map, &lb_dps);
+            hmap_insert(&dp_groups, &dpg->node, hash);
+        }
+        hmapx_destroy(&lb_dps);
+
+        /* Update columns. */
         sbrec_load_balancer_set_name(lb->slb, lb->nlb->name);
         sbrec_load_balancer_set_vips(lb->slb, &lb->nlb->vips);
         sbrec_load_balancer_set_protocol(lb->slb, lb->nlb->protocol);
-        sbrec_load_balancer_set_datapaths(lb->slb, lb_dps, lb->n_nb_ls);
+        sbrec_load_balancer_set_datapath_group(lb->slb, dpg->dp_group);
         sbrec_load_balancer_set_options(lb->slb, &options);
+        /* Clearing 'datapaths' column, since 'dp_group' is in use. */
+        sbrec_load_balancer_set_datapaths(lb->slb, NULL, 0);
         smap_destroy(&options);
-        free(lb_dps);
     }
 
+    struct ovn_dp_group *dpg;
+    HMAP_FOR_EACH_POP (dpg, node, &dp_groups) {
+        hmapx_destroy(&dpg->map);
+        free(dpg);
+    }
+    hmap_destroy(&dp_groups);
+
     /* Datapath_Binding.load_balancers is not used anymore, it's still in the
      * schema for compatibility reasons.  Reset it to empty, just in case. */
@@ -14058,47 +14161,6 @@ build_lswitch_and_lrouter_flows(const struct hmap *datapaths,
     build_lswitch_flows(datapaths, lflows);
 }
 
-struct ovn_dp_group {
-    struct hmapx map;
-    struct sbrec_logical_dp_group *dp_group;
-    struct hmap_node node;
-};
-
-static struct ovn_dp_group *
-ovn_dp_group_find(const struct hmap *dp_groups,
-                  const struct hmapx *od, uint32_t hash)
-{
-    struct ovn_dp_group *dpg;
-
-    HMAP_FOR_EACH_WITH_HASH (dpg, node, hash, dp_groups) {
-        if (hmapx_equals(&dpg->map, od)) {
-            return dpg;
-        }
-    }
-    return NULL;
-}
-
-static struct sbrec_logical_dp_group *
-ovn_sb_insert_logical_dp_group(struct ovsdb_idl_txn *ovnsb_txn,
-                               const struct hmapx *od)
-{
-    struct sbrec_logical_dp_group *dp_group;
-    const struct sbrec_datapath_binding **sb;
-    const struct hmapx_node *node;
-    int n = 0;
-
-    sb = xmalloc(hmapx_count(od) * sizeof *sb);
-    HMAPX_FOR_EACH (node, od) {
-        sb[n++] = ((struct ovn_datapath *) node->data)->sb;
-    }
-    dp_group = sbrec_logical_dp_group_insert(ovnsb_txn);
-    sbrec_logical_dp_group_set_datapaths(
-        dp_group, (struct sbrec_datapath_binding **) sb, n);
-    free(sb);
-
-    return dp_group;
-}
-
 static void
 ovn_sb_set_lflow_logical_dp_group(
     struct ovsdb_idl_txn *ovnsb_txn,
diff --git a/ovn-sb.ovsschema b/ovn-sb.ovsschema
index 3b78ea6f6..8770fc01d 100644
--- a/ovn-sb.ovsschema
+++ b/ovn-sb.ovsschema
@@ -1,7 +1,7 @@
 {
     "name": "OVN_Southbound",
-    "version": "20.23.0",
-    "cksum": "4045988377 28575",
+    "version": "20.24.0",
+    "cksum": "3074645903 28786",
     "tables": {
         "SB_Global": {
             "columns": {
@@ -505,6 +505,10 @@
                     "type": {"key": {"type": "uuid",
                                      "refTable": "Datapath_Binding"},
                              "min": 0, "max": "unlimited"}},
+                "datapath_group":
+                    {"type": {"key": {"type": "uuid",
+                                      "refTable": "Logical_DP_Group"},
+                              "min": 0, "max": 1}},
                 "options": {
                     "type": {"key": "string",
                              "value": "string",
diff --git a/ovn-sb.xml b/ovn-sb.xml
index 59ad3aa2d..74377a47d 100644
--- a/ovn-sb.xml
+++ b/ovn-sb.xml
@@ -4589,6 +4589,11 @@ tcp.flags = RST;
         Datapaths to which this load balancer applies to.
       </column>
 
+      <column name="datapath_group">
+        The group of datapaths to which this load balancer applies.  This
+        means that the same load balancer applies to all datapaths in a group.
+      </column>
+
       <column name="options" key="hairpin_snat_ip">
         IP to be used as source IP for packets that have been hair-pinned
         after
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index ae0f66ad6..38d2baf07 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -2469,13 +2469,23 @@
 lbg0_uuid=$(fetch_column sb:load_balancer _uuid name=lbg0)
 
 echo
 echo "__file__:__line__: Check that SB lb0 has sw0 in datapaths column."
-check_column "$sw0_sb_uuid" sb:load_balancer datapaths name=lb0
+lb0_dp_group=$(fetch_column sb:load_balancer datapath_group name=lb0)
+AT_CHECK_UNQUOTED([ovn-sbctl --bare --columns _uuid,datapaths find Logical_DP_Group dnl
+                   | grep -A1 $lb0_dp_group | tail -1], [0], [dnl
+$sw0_sb_uuid
+])
+
 check_column "" sb:datapath_binding load_balancers external_ids:name=sw0
 
 echo
 echo "__file__:__line__: Check that SB lbg0 has sw0 in datapaths column."
-check_column "$sw0_sb_uuid" sb:load_balancer datapaths name=lbg0
+lbg0_dp_group=$(fetch_column sb:load_balancer datapath_group name=lbg0)
+AT_CHECK_UNQUOTED([ovn-sbctl --bare --columns _uuid,datapaths find Logical_DP_Group dnl
+                   | grep -A1 $lbg0_dp_group | tail -1], [0], [dnl
+$sw0_sb_uuid
+])
+
 check_column "" sb:datapath_binding load_balancers external_ids:name=sw0
 
 check ovn-nbctl --wait=sb set load_balancer lb0 vips:"10.0.0.20\:90"="20.0.0.4:8080,30.0.0.4:8080"
@@ -2499,23 +2509,43 @@
 check ovn-nbctl --wait=sb lr-lb-add lr0 lb0
 
 echo
 echo "__file__:__line__: Check that SB lb0 has only sw0 in datapaths column."
-check_column "$sw0_sb_uuid" sb:load_balancer datapaths name=lb0
+lb0_dp_group=$(fetch_column sb:load_balancer datapath_group name=lb0)
+AT_CHECK_UNQUOTED([ovn-sbctl --bare --columns _uuid,datapaths find Logical_DP_Group dnl
+                   | grep -A1 $lb0_dp_group | tail -1], [0], [dnl
+$sw0_sb_uuid
+])
 
 echo
 echo "__file__:__line__: Check that SB lbg0 has only sw0 in datapaths column."
-check_column "$sw0_sb_uuid" sb:load_balancer datapaths name=lbg0
+lbg0_dp_group=$(fetch_column sb:load_balancer datapath_group name=lbg0)
+AT_CHECK_UNQUOTED([ovn-sbctl --bare --columns _uuid,datapaths find Logical_DP_Group dnl
+                   | grep -A1 $lbg0_dp_group | tail -1], [0], [dnl
+$sw0_sb_uuid
+])
 
 check ovn-nbctl ls-add sw1 -- add logical_switch sw1 load_balancer_group $lbg
 check ovn-nbctl --wait=sb ls-lb-add sw1 lb0
 
 sw1_sb_uuid=$(fetch_column datapath_binding _uuid external_ids:name=sw1)
 
+echo "$sw0_sb_uuid" > sw_sb_uuids
+echo "$sw1_sb_uuid" >> sw_sb_uuids
+
 echo
 echo "__file__:__line__: Check that SB lb0 has sw0 and sw1 in datapaths column."
-check_column "$sw0_sb_uuid $sw1_sb_uuid" sb:load_balancer datapaths name=lb0
+lb0_dp_group=$(fetch_column sb:load_balancer datapath_group name=lb0)
+AT_CHECK_UNQUOTED([ovn-sbctl --bare --columns _uuid,datapaths find Logical_DP_Group dnl
+                   | grep -A1 $lb0_dp_group | tail -1 | tr ' ' '\n' | sort], [0], [dnl
+$(cat sw_sb_uuids | sort)
+])
 
 echo
 echo "__file__:__line__: Check that SB lbg0 has sw0 and sw1 in datapaths column."
-check_column "$sw0_sb_uuid $sw1_sb_uuid" sb:load_balancer datapaths name=lbg0
+lbg0_dp_group=$(fetch_column sb:load_balancer datapath_group name=lbg0)
+AT_CHECK_UNQUOTED([ovn-sbctl --bare --columns _uuid,datapaths find Logical_DP_Group dnl
+                   | grep -A1 $lbg0_dp_group | tail -1 | tr ' ' '\n' | sort], [0], [dnl
+$(cat sw_sb_uuids | sort)
+])
+
 check_column "" sb:datapath_binding load_balancers external_ids:name=sw1
 
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.30:80 20.0.0.50:8080 udp
@@ -2542,18 +2572,26 @@
 echo "__file__:__line__: Check that SB lbg1 has vips and protocol columns are set properly."
 check_column "20.0.0.30:80=20.0.0.50:8080 udp" sb:load_balancer vips,protocol name=lbg1
 
 lb1_uuid=$(fetch_column sb:load_balancer _uuid name=lb1)
+lb1_dp_group=$(fetch_column sb:load_balancer datapath_group name=lb1)
 
 echo
 echo "__file__:__line__: Check that SB lb1 has sw1 in datapaths column."
-check_column "$sw1_sb_uuid" sb:load_balancer datapaths name=lb1
+AT_CHECK_UNQUOTED([ovn-sbctl --bare --columns _uuid,datapaths find Logical_DP_Group dnl
+                   | grep -A1 $lb1_dp_group | tail -1], [0], [dnl
+$sw1_sb_uuid
+])
 
 lbg1_uuid=$(fetch_column sb:load_balancer _uuid name=lbg1)
+lbg1_dp_group=$(fetch_column sb:load_balancer datapath_group name=lbg1)
 
 echo
 echo "__file__:__line__: Check that SB lbg1 has sw0 and sw1 in datapaths column."
-check_column "$sw0_sb_uuid $sw1_sb_uuid" sb:load_balancer datapaths name=lbg1
+AT_CHECK_UNQUOTED([ovn-sbctl --bare --columns _uuid,datapaths find Logical_DP_Group dnl
+                   | grep -A1 $lbg1_dp_group | tail -1 | tr ' ' '\n' | sort], [0], [dnl
+$(cat sw_sb_uuids | sort)
+])
 
 echo
 echo "__file__:__line__: check that datapath sw1 has no entry in the load_balancers column."
@@ -5335,9 +5373,9 @@
 ovn_start
 check ovn-nbctl ls-add ls -- lb-add lb1 10.0.0.1:80 10.0.0.2:80 -- ls-lb-add ls lb1
 check ovn-nbctl --wait=sb sync
-dps=$(fetch_column Load_Balancer datapaths)
+dps=$(fetch_column Load_Balancer datapath_group)
 nlb=$(fetch_column nb:Load_Balancer _uuid)
-AT_CHECK([ovn-sbctl create Load_Balancer name=lb1 datapaths="$dps" external_ids="lb_id=$nlb"], [0], [ignore])
+AT_CHECK([ovn-sbctl create Load_Balancer name=lb1 datapath_group="$dps" external_ids="lb_id=$nlb"], [0], [ignore])
 
 check ovn-nbctl --wait=sb sync
 check_row_count Load_Balancer 1
@@ -7614,9 +7652,13 @@
 AT_CHECK([grep "ls_in_lb" S1flows | sort], [0], [dnl
   table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 172.16.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.2:80);)
 ])
 
-s0_uuid=$(ovn-sbctl get datapath S0 _uuid)
-s1_uuid=$(ovn-sbctl get datapath S1 _uuid)
-check_column "$s0_uuid $s1_uuid" sb:load_balancer datapaths name=lb0
+ovn-sbctl get datapath S0 _uuid > dp_uuids
+ovn-sbctl get datapath S1 _uuid >> dp_uuids
+lb_dp_group=$(ovn-sbctl --bare --columns datapath_group find Load_Balancer name=lb0)
+AT_CHECK_UNQUOTED([ovn-sbctl --bare --columns _uuid,datapaths find Logical_DP_Group dnl
+                   | grep -A1 $lb_dp_group | tail -1 | tr ' ' '\n' | sort], [0], [dnl
+$(cat dp_uuids | sort)
+])
 
 ovn-nbctl --wait=sb set NB_Global . options:install_ls_lb_from_router=false
diff --git a/utilities/ovn-sbctl.c b/utilities/ovn-sbctl.c
index b008b5d0b..e35556d34 100644
--- a/utilities/ovn-sbctl.c
+++ b/utilities/ovn-sbctl.c
@@ -335,6 +335,7 @@ pre_get_info(struct ctl_context *ctx)
     ovsdb_idl_add_column(ctx->idl, &sbrec_mac_binding_col_mac);
 
     ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_datapaths);
+    ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_datapath_group);
     ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_vips);
     ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_name);
     ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_protocol);
@@ -826,6 +827,21 @@ cmd_lflow_list_chassis(struct ctl_context *ctx, struct vconn *vconn,
     }
 }
 
+static bool
+datapath_group_contains_datapath(const struct sbrec_logical_dp_group *g,
+                                 const struct sbrec_datapath_binding *dp)
+{
+    if (!g || !dp) {
+        return false;
+    }
+    for (size_t i = 0; i < g->n_datapaths; i++) {
+        if (g->datapaths[i] == dp) {
+            return true;
+        }
+    }
+    return false;
+}
+
 static void
 cmd_lflow_list_load_balancers(struct ctl_context *ctx, struct vconn *vconn,
                               const struct sbrec_datapath_binding *datapath,
@@ -843,6 +859,10 @@ cmd_lflow_list_load_balancers(struct ctl_context *ctx, struct vconn *vconn,
                 break;
             }
         }
+        if (lb->datapath_group && !dp_found) {
+            dp_found = datapath_group_contains_datapath(lb->datapath_group,
+                                                        datapath);
+        }
         if (!dp_found) {
             continue;
         }
@@ -861,6 +881,11 @@ cmd_lflow_list_load_balancers(struct ctl_context *ctx, struct vconn *vconn,
             print_vflow_datapath_name(lb->datapaths[i], true,
                                       &ctx->output);
         }
+        for (size_t i = 0; lb->datapath_group
+                           && i < lb->datapath_group->n_datapaths; i++) {
+            print_vflow_datapath_name(lb->datapath_group->datapaths[i],
+                                      true, &ctx->output);
+        }
     }
 
     ds_put_cstr(&ctx->output, "\n        vips:\n");
@@ -879,21 +904,6 @@ cmd_lflow_list_load_balancers(struct ctl_context *ctx, struct vconn *vconn,
         }
     }
 }
 
-static bool
-datapath_group_contains_datapath(const struct sbrec_logical_dp_group *g,
-                                 const struct sbrec_datapath_binding *dp)
-{
-    if (!g || !dp) {
-        return false;
-    }
-    for (size_t i = 0; i < g->n_datapaths; i++) {
-        if (g->datapaths[i] == dp) {
-            return true;
-        }
-    }
-    return false;
-}
-
 static void
 sbctl_lflow_add(struct sbctl_lflow **lflows,
                 size_t *n_flows, size_t *n_capacity,