Patch Detail
get:
Show a patch.
patch:
Update a patch (partial update: only the supplied fields change).
put:
Update a patch (full update).
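All three methods target the same URL. A minimal Python sketch of working with this endpoint (the helper names are illustrative; the base URL and patch id come from the example request below, and the network calls are shown as comments because they need the third-party `requests` package and, for updates, a Patchwork API token):

```python
# Base URL of the Patchwork instance shown on this page.
PATCHWORK_API = "http://patchwork.ozlabs.org/api"


def patch_url(patch_id: int) -> str:
    """Build the Patch Detail endpoint URL, e.g. /api/patches/1549933/."""
    return f"{PATCHWORK_API}/patches/{patch_id}/"


def summarize(patch: dict) -> str:
    """Pull a few fields out of a patch-detail JSON response."""
    submitter = patch.get("submitter") or {}
    return "[{state}] {name} <{email}>".format(
        state=patch.get("state", "unknown"),
        name=patch.get("name", ""),
        email=submitter.get("email", ""),
    )


# Fetching over the network (requires `requests`):
#
#   resp = requests.get(patch_url(1549933))
#   resp.raise_for_status()
#   print(summarize(resp.json()))
#
# PATCH/PUT (updating a patch) additionally require token authentication:
#
#   requests.patch(patch_url(1549933),
#                  headers={"Authorization": "Token <your-token>"},
#                  json={"state": "accepted"})
```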
GET /api/patches/1549933/
{ "id": 1549933, "url": "
http://patchwork.ozlabs.org/api/patches/1549933/", "web_url": "http://patchwork.ozlabs.org/project/ovn/patch/20211102193042.615.91392.stgit@dceara.remote.csb/", "project": { "id": 68, "url": "http://patchwork.ozlabs.org/api/projects/68/", "name": "Open Virtual Network development", "link_name": "ovn", "list_id": "ovs-dev.openvswitch.org", "list_email": "ovs-dev@openvswitch.org", "web_url": "http://openvswitch.org/", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20211102193042.615.91392.stgit@dceara.remote.csb>", "list_archive_url": null, "date": "2021-11-02T19:30:44", "name": "[ovs-dev,branch-21.09,2/3] nb: Add support for Load_Balancer_Groups.", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": false, "hash": "d092b7d9afee7b4695dda2370ae79faf85a90e16", "submitter": { "id": 76591, "url": "http://patchwork.ozlabs.org/api/people/76591/", "name": "Dumitru Ceara", "email": "dceara@redhat.com" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/ovn/patch/20211102193042.615.91392.stgit@dceara.remote.csb/mbox/", "series": [ { "id": 270044, "url": "http://patchwork.ozlabs.org/api/series/270044/", "web_url": "http://patchwork.ozlabs.org/project/ovn/list/?series=270044", "date": "2021-11-02T19:30:10", "name": "Improve Load Balancer performance.", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/270044/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/1549933/comments/", "check": "success", "checks": "http://patchwork.ozlabs.org/api/patches/1549933/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<ovs-dev-bounces@openvswitch.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "ovs-dev@openvswitch.org" ], "Delivered-To": [ "patchwork-incoming@bilbo.ozlabs.org", "ovs-dev@lists.linuxfoundation.org" ], "Authentication-Results": [ "bilbo.ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n 
unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256\n header.s=mimecast20190719 header.b=Ydx8pwuC;\n\tdkim-atps=neutral", "ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org\n (client-ip=2605:bc80:3010::136; helo=smtp3.osuosl.org;\n envelope-from=ovs-dev-bounces@openvswitch.org; receiver=<UNKNOWN>)", "smtp1.osuosl.org (amavisd-new);\n dkim=pass (1024-bit key) header.d=redhat.com", "relay.mimecast.com;\n auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=dceara@redhat.com" ], "Received": [ "from smtp3.osuosl.org (smtp3.osuosl.org [IPv6:2605:bc80:3010::136])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby bilbo.ozlabs.org (Postfix) with ESMTPS id 4HkKlQ2Lstz9sRK\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 3 Nov 2021 06:31:30 +1100 (AEDT)", "from localhost (localhost [127.0.0.1])\n\tby smtp3.osuosl.org (Postfix) with ESMTP id CEB4760897;\n\tTue, 2 Nov 2021 19:31:27 +0000 (UTC)", "from smtp3.osuosl.org ([127.0.0.1])\n\tby localhost (smtp3.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id dR4I7eqD4z_V; Tue, 2 Nov 2021 19:31:25 +0000 (UTC)", "from lists.linuxfoundation.org (lf-lists.osuosl.org [140.211.9.56])\n\tby smtp3.osuosl.org (Postfix) with ESMTPS id 6F8B26068C;\n\tTue, 2 Nov 2021 19:31:24 +0000 (UTC)", "from lf-lists.osuosl.org (localhost [127.0.0.1])\n\tby lists.linuxfoundation.org (Postfix) with ESMTP id 442C9C0019;\n\tTue, 2 Nov 2021 19:31:24 +0000 (UTC)", "from smtp1.osuosl.org (smtp1.osuosl.org [IPv6:2605:bc80:3010::138])\n by lists.linuxfoundation.org (Postfix) with ESMTP id F0BE1C000E\n for <ovs-dev@openvswitch.org>; Tue, 2 Nov 2021 19:31:22 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n by smtp1.osuosl.org (Postfix) with ESMTP id 7288580E2F\n for <ovs-dev@openvswitch.org>; Tue, 2 Nov 2021 19:31:18 +0000 (UTC)", "from smtp1.osuosl.org 
([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n with ESMTP id cLt2oPj-wOv2 for <ovs-dev@openvswitch.org>;\n Tue, 2 Nov 2021 19:31:16 +0000 (UTC)", "from us-smtp-delivery-124.mimecast.com\n (us-smtp-delivery-124.mimecast.com [216.205.24.124])\n by smtp1.osuosl.org (Postfix) with ESMTPS id 7AB8C80E0C\n for <ovs-dev@openvswitch.org>; Tue, 2 Nov 2021 19:31:15 +0000 (UTC)", "from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com\n [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id\n us-mta-227-s5n_kLxeOwOZ7qRYomrA5w-1; Tue, 02 Nov 2021 15:30:56 -0400", "from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com\n [10.5.11.23])\n (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n (No client certificate requested)\n by mimecast-mx01.redhat.com (Postfix) with ESMTPS id AFD1C100C661;\n Tue, 2 Nov 2021 19:30:55 +0000 (UTC)", "from dceara.remote.csb (unknown [10.39.193.68])\n by smtp.corp.redhat.com (Postfix) with ESMTP id 0CAA419C79;\n Tue, 2 Nov 2021 19:30:48 +0000 (UTC)" ], "X-Virus-Scanned": [ "amavisd-new at osuosl.org", "amavisd-new at osuosl.org" ], "X-Greylist": "domain auto-whitelisted by SQLgrey-1.8.0", "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;\n s=mimecast20190719; t=1635881474;\n h=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n to:to:cc:cc:mime-version:mime-version:content-type:content-type:\n content-transfer-encoding:content-transfer-encoding:\n in-reply-to:in-reply-to:references:references;\n bh=03jJsQ2Hm/s0myIFRfy2BGWISi6J2JifPqUR3pOof64=;\n b=Ydx8pwuCPFn6uL1RsGmpO92WF2HV8IYpMixIKxDT7R7Pi1rIQJwgLkMTFxe7NP0bXeXFWM\n beBWtwUIXO2QtK6gh3kKnoP1wKJ8ujiDWXUkmtzyBjCs4VG03QwJ8MRK+WVJFxy4ie4uEZ\n hw/y/wKLLNKNFMsyoHwr9l978twjNuQ=", "X-MC-Unique": "s5n_kLxeOwOZ7qRYomrA5w-1", "From": "Dumitru Ceara <dceara@redhat.com>", "To": "ovs-dev@openvswitch.org", "Date": "Tue, 2 Nov 2021 20:30:44 +0100", "Message-Id": 
"<20211102193042.615.91392.stgit@dceara.remote.csb>", "In-Reply-To": "<20211102193004.615.42236.stgit@dceara.remote.csb>", "References": "<20211102193004.615.42236.stgit@dceara.remote.csb>", "User-Agent": "StGit/0.17.1-dirty", "MIME-Version": "1.0", "X-Scanned-By": "MIMEDefang 2.84 on 10.5.11.23", "X-Mimecast-Spam-Score": "0", "X-Mimecast-Originator": "redhat.com", "Cc": "trozet@redhat.com", "Subject": "[ovs-dev] [PATCH ovn branch-21.09 2/3] nb: Add support for\n\tLoad_Balancer_Groups.", "X-BeenThere": "ovs-dev@openvswitch.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "<ovs-dev.openvswitch.org>", "List-Unsubscribe": "<https://mail.openvswitch.org/mailman/options/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=unsubscribe>", "List-Archive": "<http://mail.openvswitch.org/pipermail/ovs-dev/>", "List-Post": "<mailto:ovs-dev@openvswitch.org>", "List-Help": "<mailto:ovs-dev-request@openvswitch.org?subject=help>", "List-Subscribe": "<https://mail.openvswitch.org/mailman/listinfo/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=subscribe>", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "ovs-dev-bounces@openvswitch.org", "Sender": "\"dev\" <ovs-dev-bounces@openvswitch.org>" }, "content": "For deployments when a large number of load balancers are associated\nto multiple logical switches/routers, introduce a syntactic sugar\nin the OVN_Northbound database (Load_Balancer_Groups) to simplify\nconfiguration.\n\nInstead of associating N Load_Balancer records to M Logical_Switches\n(M x N references in the NB database) we can instead create a single\nLoad_Balancer_Group record, associate all N Load_Balancer records to\nit, and associate it to all M Logical_Switches (in total M + N\nreferences in the NB database).\n\nThis makes it easier for the CMS to configure Load Balancers (e.g., in\nthe ovn-kubernetes use case cluster load balancers are applied to all\nnode logical switches 
and node logical gateway routers) but also\ndrastically improves performance on the ovsdb-server NB side. This\nhappens thanks to the fact that ovsdb-server now has to track M times\nless references.\n\nWith a micro benchmark which creates 120 logical switches and\nassociates 1000 load balancers to them (with ovn-nbctl daemon) we\nmeasure:\n\n CPU Time NB DB CPU Time ovn-nbctl\n -----------------------------------------------------\n Plain LB: 30s 35s\n LB Groups: 1s 2s\n\nReported-at: https://bugzilla.redhat.com/2001528\nSigned-off-by: Dumitru Ceara <dceara@redhat.com>\n(cherry picked from commit f6aba21c9de8952beccf7ee7e98cfa28618f1edf)\n---\n NEWS | 2 \n northd/northd.c | 242 +++++++++++++++++++++++++++++++-----------------\n ovn-nb.ovsschema | 24 ++++-\n ovn-nb.xml | 37 ++++++-\n tests/ovn-northd.at | 246 +++++++++++++++++++++++++++++++++++++++++--------\n utilities/ovn-nbctl.c | 3 +\n 6 files changed, 424 insertions(+), 130 deletions(-)", "diff": "diff --git a/NEWS b/NEWS\nindex 8fce36482..63d765aa4 100644\n--- a/NEWS\n+++ b/NEWS\n@@ -1,5 +1,7 @@\n OVN v21.09.1 - xx xxx xxxx\n --------------------------\n+ - Added Load_Balancer_Group support, which simplifies large scale\n+ configurations of load balancers.\n \n OVN v21.09.0 - 01 Oct 2021\n --------------------------\ndiff --git a/northd/northd.c b/northd/northd.c\nindex 502c263cc..c1d83548e 100644\n--- a/northd/northd.c\n+++ b/northd/northd.c\n@@ -827,17 +827,74 @@ static void destroy_router_enternal_ips(struct ovn_datapath *od)\n sset_destroy(&od->external_ips);\n }\n \n+static bool\n+lb_has_vip(const struct nbrec_load_balancer *lb)\n+{\n+ return !smap_is_empty(&lb->vips);\n+}\n+\n+static bool\n+lb_group_has_vip(const struct nbrec_load_balancer_group *lb_group)\n+{\n+ for (size_t i = 0; i < lb_group->n_load_balancer; i++) {\n+ if (lb_has_vip(lb_group->load_balancer[i])) {\n+ return true;\n+ }\n+ }\n+ return false;\n+}\n+\n+static bool\n+ls_has_lb_vip(struct ovn_datapath *od)\n+{\n+ for (size_t i = 0; i < 
od->nbs->n_load_balancer; i++) {\n+ if (lb_has_vip(od->nbs->load_balancer[i])) {\n+ return true;\n+ }\n+ }\n+\n+ for (size_t i = 0; i < od->nbs->n_load_balancer_group; i++) {\n+ if (lb_group_has_vip(od->nbs->load_balancer_group[i])) {\n+ return true;\n+ }\n+ }\n+ return false;\n+}\n+\n+static bool\n+lr_has_lb_vip(struct ovn_datapath *od)\n+{\n+ for (size_t i = 0; i < od->nbr->n_load_balancer; i++) {\n+ if (lb_has_vip(od->nbr->load_balancer[i])) {\n+ return true;\n+ }\n+ }\n+\n+ for (size_t i = 0; i < od->nbr->n_load_balancer_group; i++) {\n+ if (lb_group_has_vip(od->nbr->load_balancer_group[i])) {\n+ return true;\n+ }\n+ }\n+ return false;\n+}\n+\n static void\n-init_lb_ips(struct ovn_datapath *od)\n+init_lb_for_datapath(struct ovn_datapath *od)\n {\n sset_init(&od->lb_ips_v4);\n sset_init(&od->lb_ips_v4_routable);\n sset_init(&od->lb_ips_v6);\n sset_init(&od->lb_ips_v6_routable);\n+\n+ if (od->nbs) {\n+ od->has_lb_vip = ls_has_lb_vip(od);\n+ } else {\n+ od->has_lb_vip = lr_has_lb_vip(od);\n+ }\n }\n \n static void\n-destroy_lb_ips(struct ovn_datapath *od)\n+destroy_lb_for_datapath(struct ovn_datapath *od)\n {\n if (!od->nbs && !od->nbr) {\n return;\n@@ -895,7 +952,7 @@ ovn_datapath_destroy(struct hmap *datapaths, struct ovn_datapath *od)\n free(od->router_ports);\n destroy_nat_entries(od);\n destroy_router_enternal_ips(od);\n- destroy_lb_ips(od);\n+ destroy_lb_for_datapath(od);\n free(od->nat_entries);\n free(od->localnet_ports);\n free(od->l3dgw_ports);\n@@ -1224,7 +1281,7 @@ join_datapaths(struct northd_context *ctx, struct hmap *datapaths,\n \n init_ipam_info_for_datapath(od);\n init_mcast_info_for_datapath(od);\n- init_lb_ips(od);\n+ init_lb_for_datapath(od);\n }\n \n const struct nbrec_logical_router *nbr;\n@@ -1257,7 +1314,7 @@ join_datapaths(struct northd_context *ctx, struct hmap *datapaths,\n init_mcast_info_for_datapath(od);\n init_nat_entries(od);\n init_router_external_ips(od);\n- init_lb_ips(od);\n+ init_lb_for_datapath(od);\n if 
(smap_get(&od->nbr->options, \"chassis\")) {\n od->is_gw_router = true;\n }\n@@ -2590,7 +2647,7 @@ get_nat_addresses(const struct ovn_port *op, size_t *n, bool routable_only)\n size_t n_nats = 0;\n struct eth_addr mac;\n if (!op || !op->nbrp || !op->od || !op->od->nbr\n- || (!op->od->nbr->n_nat && !op->od->nbr->n_load_balancer)\n+ || (!op->od->nbr->n_nat && !op->od->has_lb_vip)\n || !eth_addr_from_string(op->nbrp->mac, &mac)\n || op->od->n_l3dgw_ports > 1) {\n *n = n_nats;\n@@ -3560,7 +3617,7 @@ build_ovn_lr_lbs(struct hmap *datapaths, struct hmap *lbs)\n }\n if (!smap_get(&od->nbr->options, \"chassis\")\n && od->n_l3dgw_ports != 1) {\n- if (od->n_l3dgw_ports > 1 && od->nbr->n_load_balancer) {\n+ if (od->n_l3dgw_ports > 1 && od->has_lb_vip) {\n static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);\n VLOG_WARN_RL(&rl, \"Load-balancers are configured on logical \"\n \"router %s, which has %\"PRIuSIZE\" distributed \"\n@@ -3578,6 +3635,17 @@ build_ovn_lr_lbs(struct hmap *datapaths, struct hmap *lbs)\n lb = ovn_northd_lb_find(lbs, lb_uuid);\n ovn_northd_lb_add_lr(lb, od);\n }\n+\n+ for (size_t i = 0; i < od->nbr->n_load_balancer_group; i++) {\n+ const struct nbrec_load_balancer_group *lbg =\n+ od->nbr->load_balancer_group[i];\n+ for (size_t j = 0; j < lbg->n_load_balancer; j++) {\n+ const struct uuid *lb_uuid =\n+ &lbg->load_balancer[j]->header_.uuid;\n+ lb = ovn_northd_lb_find(lbs, lb_uuid);\n+ ovn_northd_lb_add_lr(lb, od);\n+ }\n+ }\n }\n }\n \n@@ -3608,6 +3676,17 @@ build_ovn_lbs(struct northd_context *ctx, struct hmap *datapaths,\n lb = ovn_northd_lb_find(lbs, lb_uuid);\n ovn_northd_lb_add_ls(lb, od);\n }\n+\n+ for (size_t i = 0; i < od->nbs->n_load_balancer_group; i++) {\n+ const struct nbrec_load_balancer_group *lbg =\n+ od->nbs->load_balancer_group[i];\n+ for (size_t j = 0; j < lbg->n_load_balancer; j++) {\n+ const struct uuid *lb_uuid =\n+ &lbg->load_balancer[j]->header_.uuid;\n+ lb = ovn_northd_lb_find(lbs, lb_uuid);\n+ ovn_northd_lb_add_ls(lb, 
od);\n+ }\n+ }\n }\n \n /* Delete any stale SB load balancer rows. */\n@@ -3716,6 +3795,26 @@ build_ovn_lb_svcs(struct northd_context *ctx, struct hmap *ports,\n hmap_destroy(&monitor_map);\n }\n \n+static void\n+build_lrouter_lb_ips(struct ovn_datapath *od, const struct ovn_northd_lb *lb)\n+{\n+ bool is_routable = smap_get_bool(&lb->nlb->options, \"add_route\", false);\n+ const char *ip_address;\n+\n+ SSET_FOR_EACH (ip_address, &lb->ips_v4) {\n+ sset_add(&od->lb_ips_v4, ip_address);\n+ if (is_routable) {\n+ sset_add(&od->lb_ips_v4_routable, ip_address);\n+ }\n+ }\n+ SSET_FOR_EACH (ip_address, &lb->ips_v6) {\n+ sset_add(&od->lb_ips_v6, ip_address);\n+ if (is_routable) {\n+ sset_add(&od->lb_ips_v6_routable, ip_address);\n+ }\n+ }\n+}\n+\n static void\n build_lrouter_lbs(struct hmap *datapaths, struct hmap *lbs)\n {\n@@ -3730,20 +3829,17 @@ build_lrouter_lbs(struct hmap *datapaths, struct hmap *lbs)\n struct ovn_northd_lb *lb =\n ovn_northd_lb_find(lbs,\n &od->nbr->load_balancer[i]->header_.uuid);\n- const char *ip_address;\n- bool is_routable = smap_get_bool(&lb->nlb->options, \"add_route\",\n- false);\n- SSET_FOR_EACH (ip_address, &lb->ips_v4) {\n- sset_add(&od->lb_ips_v4, ip_address);\n- if (is_routable) {\n- sset_add(&od->lb_ips_v4_routable, ip_address);\n- }\n- }\n- SSET_FOR_EACH (ip_address, &lb->ips_v6) {\n- sset_add(&od->lb_ips_v6, ip_address);\n- if (is_routable) {\n- sset_add(&od->lb_ips_v6_routable, ip_address);\n- }\n+ build_lrouter_lb_ips(od, lb);\n+ }\n+\n+ for (size_t i = 0; i < od->nbr->n_load_balancer_group; i++) {\n+ const struct nbrec_load_balancer_group *lbg =\n+ od->nbr->load_balancer_group[i];\n+ for (size_t j = 0; j < lbg->n_load_balancer; j++) {\n+ struct ovn_northd_lb *lb =\n+ ovn_northd_lb_find(lbs,\n+ &lbg->load_balancer[j]->header_.uuid);\n+ build_lrouter_lb_ips(od, lb);\n }\n }\n }\n@@ -5551,22 +5647,8 @@ build_empty_lb_event_flow(struct ovn_lb_vip *lb_vip,\n return true;\n }\n \n-static bool\n-ls_has_lb_vip(struct ovn_datapath 
*od)\n-{\n- for (int i = 0; i < od->nbs->n_load_balancer; i++) {\n- struct nbrec_load_balancer *nb_lb = od->nbs->load_balancer[i];\n- if (!smap_is_empty(&nb_lb->vips)) {\n- return true;\n- }\n- }\n-\n- return false;\n-}\n-\n static void\n-build_pre_lb(struct ovn_datapath *od, struct hmap *lflows,\n- struct hmap *lbs)\n+build_pre_lb(struct ovn_datapath *od, struct hmap *lflows)\n {\n /* Do not send ND packets to conntrack */\n ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_LB, 110,\n@@ -5601,49 +5683,41 @@ build_pre_lb(struct ovn_datapath *od, struct hmap *lflows,\n 110, lflows);\n }\n \n- for (int i = 0; i < od->nbs->n_load_balancer; i++) {\n- struct nbrec_load_balancer *nb_lb = od->nbs->load_balancer[i];\n- struct ovn_northd_lb *lb =\n- ovn_northd_lb_find(lbs, &nb_lb->header_.uuid);\n- ovs_assert(lb);\n-\n- /* 'REGBIT_CONNTRACK_NAT' is set to let the pre-stateful table send\n- * packet to conntrack for defragmentation and possibly for unNATting.\n- *\n- * Send all the packets to conntrack in the ingress pipeline if the\n- * logical switch has a load balancer with VIP configured. Earlier\n- * we used to set the REGBIT_CONNTRACK_DEFRAG flag in the ingress\n- * pipeline if the IP destination matches the VIP. But this causes\n- * few issues when a logical switch has no ACLs configured with\n- * allow-related.\n- * To understand the issue, lets a take a TCP load balancer -\n- * 10.0.0.10:80=10.0.0.3:80.\n- * If a logical port - p1 with IP - 10.0.0.5 opens a TCP connection\n- * with the VIP - 10.0.0.10, then the packet in the ingress pipeline\n- * of 'p1' is sent to the p1's conntrack zone id and the packet is\n- * load balanced to the backend - 10.0.0.3. For the reply packet from\n- * the backend lport, it is not sent to the conntrack of backend\n- * lport's zone id. 
This is fine as long as the packet is valid.\n- * Suppose the backend lport sends an invalid TCP packet (like\n- * incorrect sequence number), the packet gets * delivered to the\n- * lport 'p1' without unDNATing the packet to the VIP - 10.0.0.10.\n- * And this causes the connection to be reset by the lport p1's VIF.\n- *\n- * We can't fix this issue by adding a logical flow to drop ct.inv\n- * packets in the egress pipeline since it will drop all other\n- * connections not destined to the load balancers.\n- *\n- * To fix this issue, we send all the packets to the conntrack in the\n- * ingress pipeline if a load balancer is configured. We can now\n- * add a lflow to drop ct.inv packets.\n- */\n- if (lb->n_vips) {\n- ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_LB,\n- 100, \"ip\", REGBIT_CONNTRACK_NAT\" = 1; next;\");\n- ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_LB,\n- 100, \"ip\", REGBIT_CONNTRACK_NAT\" = 1; next;\");\n- break;\n- }\n+ /* 'REGBIT_CONNTRACK_NAT' is set to let the pre-stateful table send\n+ * packet to conntrack for defragmentation and possibly for unNATting.\n+ *\n+ * Send all the packets to conntrack in the ingress pipeline if the\n+ * logical switch has a load balancer with VIP configured. Earlier\n+ * we used to set the REGBIT_CONNTRACK_DEFRAG flag in the ingress\n+ * pipeline if the IP destination matches the VIP. But this causes\n+ * few issues when a logical switch has no ACLs configured with\n+ * allow-related.\n+ * To understand the issue, lets a take a TCP load balancer -\n+ * 10.0.0.10:80=10.0.0.3:80.\n+ * If a logical port - p1 with IP - 10.0.0.5 opens a TCP connection\n+ * with the VIP - 10.0.0.10, then the packet in the ingress pipeline\n+ * of 'p1' is sent to the p1's conntrack zone id and the packet is\n+ * load balanced to the backend - 10.0.0.3. For the reply packet from\n+ * the backend lport, it is not sent to the conntrack of backend\n+ * lport's zone id. 
This is fine as long as the packet is valid.\n+ * Suppose the backend lport sends an invalid TCP packet (like\n+ * incorrect sequence number), the packet gets * delivered to the\n+ * lport 'p1' without unDNATing the packet to the VIP - 10.0.0.10.\n+ * And this causes the connection to be reset by the lport p1's VIF.\n+ *\n+ * We can't fix this issue by adding a logical flow to drop ct.inv\n+ * packets in the egress pipeline since it will drop all other\n+ * connections not destined to the load balancers.\n+ *\n+ * To fix this issue, we send all the packets to the conntrack in the\n+ * ingress pipeline if a load balancer is configured. We can now\n+ * add a lflow to drop ct.inv packets.\n+ */\n+ if (od->has_lb_vip) {\n+ ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_LB,\n+ 100, \"ip\", REGBIT_CONNTRACK_NAT\" = 1; next;\");\n+ ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_LB,\n+ 100, \"ip\", REGBIT_CONNTRACK_NAT\" = 1; next;\");\n }\n }\n \n@@ -7325,15 +7399,13 @@ static void\n build_lswitch_lflows_pre_acl_and_acl(struct ovn_datapath *od,\n struct hmap *port_groups,\n struct hmap *lflows,\n- struct shash *meter_groups,\n- struct hmap *lbs)\n+ struct shash *meter_groups)\n {\n if (od->nbs) {\n- od->has_lb_vip = ls_has_lb_vip(od);\n ls_get_acl_flags(od);\n \n build_pre_acls(od, port_groups, lflows);\n- build_pre_lb(od, lflows, lbs);\n+ build_pre_lb(od, lflows);\n build_pre_stateful(od, lflows);\n build_acl_hints(od, lflows);\n build_acls(od, lflows, port_groups, meter_groups);\n@@ -12583,7 +12655,7 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od, struct hmap *lflows,\n * flag set. 
Some NICs are unable to offload these flows.\n */\n if ((od->is_gw_router || od->n_l3dgw_ports) &&\n- (od->nbr->n_nat || od->nbr->n_load_balancer)) {\n+ (od->nbr->n_nat || od->has_lb_vip)) {\n ovn_lflow_add(lflows, od, S_ROUTER_OUT_UNDNAT, 50,\n \"ip\", \"flags.loopback = 1; ct_dnat;\");\n ovn_lflow_add(lflows, od, S_ROUTER_OUT_POST_UNDNAT, 50,\n@@ -12801,7 +12873,7 @@ build_lswitch_and_lrouter_iterate_by_od(struct ovn_datapath *od,\n {\n /* Build Logical Switch Flows. */\n build_lswitch_lflows_pre_acl_and_acl(od, lsi->port_groups, lsi->lflows,\n- lsi->meter_groups, lsi->lbs);\n+ lsi->meter_groups);\n \n build_fwd_group_lflows(od, lsi->lflows);\n build_lswitch_lflows_admission_control(od, lsi->lflows);\ndiff --git a/ovn-nb.ovsschema b/ovn-nb.ovsschema\nindex 2ac8ef3ea..5dee04fe9 100644\n--- a/ovn-nb.ovsschema\n+++ b/ovn-nb.ovsschema\n@@ -1,7 +1,7 @@\n {\n \"name\": \"OVN_Northbound\",\n- \"version\": \"5.32.1\",\n- \"cksum\": \"2805328215 29734\",\n+ \"version\": \"5.33.1\",\n+ \"cksum\": \"1931852754 30731\",\n \"tables\": {\n \"NB_Global\": {\n \"columns\": {\n@@ -61,6 +61,11 @@\n \"refType\": \"weak\"},\n \"min\": 0,\n \"max\": \"unlimited\"}},\n+ \"load_balancer_group\": {\n+ \"type\": {\"key\": {\"type\": \"uuid\",\n+ \"refTable\": \"Load_Balancer_Group\"},\n+ \"min\": 0,\n+ \"max\": \"unlimited\"}},\n \"dns_records\": {\"type\": {\"key\": {\"type\": \"uuid\",\n \"refTable\": \"DNS\",\n \"refType\": \"weak\"},\n@@ -208,6 +213,16 @@\n \"type\": {\"key\": \"string\", \"value\": \"string\",\n \"min\": 0, \"max\": \"unlimited\"}}},\n \"isRoot\": true},\n+ \"Load_Balancer_Group\": {\n+ \"columns\": {\n+ \"name\": {\"type\": \"string\"},\n+ \"load_balancer\": {\"type\": {\"key\": {\"type\": \"uuid\",\n+ \"refTable\": \"Load_Balancer\",\n+ \"refType\": \"weak\"},\n+ \"min\": 0,\n+ \"max\": \"unlimited\"}}},\n+ \"indexes\": [[\"name\"]],\n+ \"isRoot\": true},\n \"Load_Balancer_Health_Check\": {\n \"columns\": {\n \"vip\": {\"type\": \"string\"},\n@@ -336,6 +351,11 
@@\n \"refType\": \"weak\"},\n \"min\": 0,\n \"max\": \"unlimited\"}},\n+ \"load_balancer_group\": {\n+ \"type\": {\"key\": {\"type\": \"uuid\",\n+ \"refTable\": \"Load_Balancer_Group\"},\n+ \"min\": 0,\n+ \"max\": \"unlimited\"}},\n \"copp\": {\"type\": {\"key\": {\"type\": \"uuid\", \"refTable\": \"Copp\",\n \"refType\": \"weak\"},\n \"min\": 0, \"max\": 1}},\ndiff --git a/ovn-nb.xml b/ovn-nb.xml\nindex 390cc5a44..93e358f13 100644\n--- a/ovn-nb.xml\n+++ b/ovn-nb.xml\n@@ -450,8 +450,11 @@\n </column>\n \n <column name=\"load_balancer\">\n- Load balance a virtual ip address to a set of logical port endpoint\n- ip addresses.\n+ Set of load balancers associated to this logical switch.\n+ </column>\n+\n+ <column name=\"load_balancer_group\">\n+ Set of load balancers groups associated to this logical switch.\n </column>\n \n <column name=\"acls\">\n@@ -1812,6 +1815,26 @@\n </group>\n </table>\n \n+ <table name=\"Load_Balancer_Group\" title=\"load balancer group\">\n+ <p>\n+ Each row represents a logical grouping of load balancers. It is up to\n+ the CMS to decide the criteria on which load balancers are grouped\n+ together. To simplify configuration and to optimize its processing\n+ load balancers that must be associated to the same set of logical\n+ switches and/or logical routers should be grouped together.\n+ </p>\n+\n+ <column name=\"name\">\n+ A name for the load balancer group. This name has no special meaning or\n+ purpose other than to provide convenience for human interaction with\n+ the ovn-nb database.\n+ </column>\n+\n+ <column name=\"load_balancer\">\n+ A set of load balancers.\n+ </column>\n+ </table>\n+\n <table name=\"Load_Balancer_Health_Check\" title=\"load balancer\">\n <p>\n Each row represents one load balancer health check. Health checks\n@@ -2057,9 +2080,13 @@\n </column>\n \n <column name=\"load_balancer\">\n- Load balance a virtual ip address to a set of logical port ip\n- addresses. 
Load balancer rules only work on the Gateway routers or\n- routers with one and only one distributed gateway port.\n+ Set of load balancers associated to this logical router. Load balancer\n+ Load balancer rules only work on the Gateway routers or routers with one\n+ and only one distributed gateway port.\n+ </column>\n+\n+ <column name=\"load_balancer_group\">\n+ Set of load balancers groups associated to this logical router.\n </column>\n \n <group title=\"Naming\">\ndiff --git a/tests/ovn-northd.at b/tests/ovn-northd.at\nindex 66cd03bf2..166723502 100644\n--- a/tests/ovn-northd.at\n+++ b/tests/ovn-northd.at\n@@ -1438,7 +1438,40 @@ check ovn-nbctl --wait=sb lr-nat-add lr0 dnat 192.168.2.5 10.0.0.5\n ovn-sbctl dump-flows lr0 > sbflows\n AT_CAPTURE_FILE([sbflows])\n \n-# There shoule be no flows for LB VIPs in lr_in_unsnat if the VIP is not a\n+# There should be no flows for LB VIPs in lr_in_unsnat if the VIP is not a\n+# dnat_and_snat or snat entry.\n+AT_CHECK([grep \"lr_in_unsnat\" sbflows | sort], [0], [dnl\n+ table=4 (lr_in_unsnat ), priority=0 , match=(1), action=(next;)\n+ table=4 (lr_in_unsnat ), priority=120 , match=(ip4 && ip4.dst == 192.168.2.1 && tcp && tcp.dst == 8080), action=(next;)\n+ table=4 (lr_in_unsnat ), priority=120 , match=(ip4 && ip4.dst == 192.168.2.4 && udp && udp.dst == 8080), action=(next;)\n+ table=4 (lr_in_unsnat ), priority=120 , match=(ip4 && ip4.dst == 192.168.2.5 && tcp && tcp.dst == 8080), action=(next;)\n+ table=4 (lr_in_unsnat ), priority=90 , match=(ip && ip4.dst == 192.168.2.1), action=(ct_snat;)\n+ table=4 (lr_in_unsnat ), priority=90 , match=(ip && ip4.dst == 192.168.2.4), action=(ct_snat;)\n+])\n+\n+AS_BOX([Check behavior with LB Groups])\n+check ovn-nbctl lr-lb-del lr0 lb1\n+check ovn-nbctl lr-lb-del lr0 lb2\n+check ovn-nbctl lr-lb-del lr0 lb3\n+check ovn-nbctl lr-lb-del lr0 lb4\n+\n+lb1=$(fetch_column nb:load_balancer _uuid name=lb1)\n+lb2=$(fetch_column nb:load_balancer _uuid name=lb2)\n+lb3=$(fetch_column 
nb:load_balancer _uuid name=lb3)\n+lb4=$(fetch_column nb:load_balancer _uuid name=lb4)\n+\n+lbg=$(ovn-nbctl create load_balancer_group name=lbg -- \\\n+ add load_balancer_group lbg load_balancer $lb1 -- \\\n+ add load_balancer_group lbg load_balancer $lb2 -- \\\n+ add load_balancer_group lbg load_balancer $lb3 -- \\\n+ add load_balancer_group lbg load_balancer $lb4)\n+\n+check ovn-nbctl --wait=sb add logical_router lr0 load_balancer_group $lbg\n+\n+ovn-sbctl dump-flows lr0 > sbflows\n+AT_CAPTURE_FILE([sbflows])\n+\n+# There should be no flows for LB VIPs in lr_in_unsnat if the VIP is not a\n # dnat_and_snat or snat entry.\n AT_CHECK([grep \"lr_in_unsnat\" sbflows | sort], [0], [dnl\n table=4 (lr_in_unsnat ), priority=0 , match=(1), action=(next;)\n@@ -1857,53 +1890,79 @@ ovn-nbctl lsp-add sw0 sw0-p1\n ovn-nbctl lb-add lb1 \"10.0.0.10\" \"10.0.0.3\"\n ovn-nbctl lb-add lb2 \"10.0.0.11\" \"10.0.0.4\"\n \n+ovn-nbctl lb-add lb3 \"10.0.0.12\" \"10.0.0.5\"\n+ovn-nbctl lb-add lb4 \"10.0.0.13\" \"10.0.0.6\"\n+\n+lb1=$(fetch_column nb:load_balancer _uuid name=lb1)\n+lb2=$(fetch_column nb:load_balancer _uuid name=lb2)\n+lb3=$(fetch_column nb:load_balancer _uuid name=lb3)\n+lb4=$(fetch_column nb:load_balancer _uuid name=lb4)\n+\n+lbg=$(ovn-nbctl create load_balancer_group name=lbg)\n+check ovn-nbctl add logical_switch sw0 load_balancer_group $lbg\n+\n ovn-nbctl --wait=sb sync\n AT_CHECK([ovn-sbctl lflow-list | grep \"ls_out_pre_lb.*priority=100\" | grep reg0 | sort], [0], [dnl\n ])\n \n-ovn-nbctl ls-lb-add sw0 lb1\n-ovn-nbctl --wait=sb sync\n+check ovn-nbctl ls-lb-add sw0 lb1\n+check ovn-nbctl add load_balancer_group $lbg load_balancer $lb3\n+check ovn-nbctl --wait=sb sync\n AT_CHECK([ovn-sbctl lflow-list | grep \"ls_out_pre_lb.*priority=100\" | grep reg0 | sort], [0], [dnl\n table=0 (ls_out_pre_lb ), priority=100 , match=(ip), action=(reg0[[2]] = 1; next;)\n ])\n \n-ovn-nbctl ls-lb-add sw0 lb2\n-ovn-nbctl --wait=sb sync\n+check ovn-nbctl ls-lb-add sw0 lb2\n+check ovn-nbctl 
add load_balancer_group $lbg load_balancer $lb4
+check ovn-nbctl --wait=sb sync
 AT_CHECK([ovn-sbctl lflow-list | grep "ls_out_pre_lb.*priority=100" | grep reg0 | sort], [0], [dnl
 table=0 (ls_out_pre_lb ), priority=100 , match=(ip), action=(reg0[[2]] = 1; next;)
 ])
 
-lb1_uuid=$(ovn-nbctl --bare --columns _uuid find load_balancer name=lb1)
-lb2_uuid=$(ovn-nbctl --bare --columns _uuid find load_balancer name=lb2)
+check ovn-nbctl clear load_balancer $lb1 vips
+check ovn-nbctl clear load_balancer $lb3 vips
+check ovn-nbctl --wait=sb sync
+AT_CHECK([ovn-sbctl lflow-list | grep "ls_out_pre_lb.*priority=100" | grep reg0 | sort], [0], [dnl
+ table=0 (ls_out_pre_lb ), priority=100 , match=(ip), action=(reg0[[2]] = 1; next;)
+])
 
-ovn-nbctl clear load_balancer $lb1_uuid vips
-ovn-nbctl --wait=sb sync
+check ovn-nbctl clear load_balancer $lb2 vips
+check ovn-nbctl --wait=sb sync
 AT_CHECK([ovn-sbctl lflow-list | grep "ls_out_pre_lb.*priority=100" | grep reg0 | sort], [0], [dnl
 table=0 (ls_out_pre_lb ), priority=100 , match=(ip), action=(reg0[[2]] = 1; next;)
 ])
 
-ovn-nbctl clear load_balancer $lb2_uuid vips
-ovn-nbctl --wait=sb sync
+check ovn-nbctl clear load_balancer $lb4 vips
+check ovn-nbctl --wait=sb sync
 AT_CHECK([ovn-sbctl lflow-list | grep "ls_out_pre_lb.*priority=100" | grep reg0 | sort], [0], [dnl
 ])
 
-ovn-nbctl set load_balancer $lb1_uuid vips:"10.0.0.10"="10.0.0.3"
-ovn-nbctl set load_balancer $lb2_uuid vips:"10.0.0.11"="10.0.0.4"
+check ovn-nbctl set load_balancer $lb1 vips:"10.0.0.10"="10.0.0.3"
+check ovn-nbctl set load_balancer $lb2 vips:"10.0.0.11"="10.0.0.4"
+check ovn-nbctl set load_balancer $lb3 vips:"10.0.0.12"="10.0.0.5"
+check ovn-nbctl set load_balancer $lb4 vips:"10.0.0.13"="10.0.0.6"
 
-ovn-nbctl --wait=sb sync
+check ovn-nbctl --wait=sb sync
 AT_CHECK([ovn-sbctl lflow-list | grep "ls_out_pre_lb.*priority=100" | grep reg0 | sort], [0], [dnl
 table=0 (ls_out_pre_lb ), priority=100 , match=(ip), action=(reg0[[2]] = 1; next;)
 ])
 
 # Now reverse the order of clearing the vip.
-ovn-nbctl clear load_balancer $lb2_uuid vips
-ovn-nbctl --wait=sb sync
+check ovn-nbctl clear load_balancer $lb2 vips
+check ovn-nbctl clear load_balancer $lb4 vips
+check ovn-nbctl --wait=sb sync
 AT_CHECK([ovn-sbctl lflow-list | grep "ls_out_pre_lb.*priority=100" | grep reg0 | sort], [0], [dnl
 table=0 (ls_out_pre_lb ), priority=100 , match=(ip), action=(reg0[[2]] = 1; next;)
 ])
 
-ovn-nbctl clear load_balancer $lb1_uuid vips
-ovn-nbctl --wait=sb sync
+check ovn-nbctl clear load_balancer $lb1 vips
+check ovn-nbctl --wait=sb sync
+AT_CHECK([ovn-sbctl lflow-list | grep "ls_out_pre_lb.*priority=100" | grep reg0 | sort], [0], [dnl
+ table=0 (ls_out_pre_lb ), priority=100 , match=(ip), action=(reg0[[2]] = 1; next;)
+])
+
+check ovn-nbctl clear load_balancer $lb3 vips
+check ovn-nbctl --wait=sb sync
 AT_CHECK([ovn-sbctl lflow-list | grep "ls_out_pre_lb.*priority=100" | grep reg0 | sort], [0], [dnl
 ])
 
@@ -2345,23 +2404,29 @@ OVN_FOR_EACH_NORTHD([
 AT_SETUP([NB to SB load balancer sync])
 ovn_start
 
-check ovn-nbctl --wait=sb lb-add lb0 10.0.0.10:80 10.0.0.4:8080
-check_row_count nb:load_balancer 1
+check ovn-nbctl lb-add lb0 10.0.0.10:80 10.0.0.4:8080
+check ovn-nbctl --wait=sb lb-add lbg0 20.0.0.10:80 20.0.0.4:8080
+check_row_count nb:load_balancer 2
 
 echo
 echo "__file__:__line__: Check that there are no SB load balancer rows."
 check_row_count sb:load_balancer 0
 
-check ovn-nbctl ls-add sw0
+lbg0=$(fetch_column nb:load_balancer _uuid name=lbg0)
+lbg=$(ovn-nbctl create load_balancer_group name=lbg)
+check ovn-nbctl add load_balancer_group $lbg load_balancer $lbg0
+check ovn-nbctl ls-add sw0 -- add logical_switch sw0 load_balancer_group $lbg
 check ovn-nbctl --wait=sb ls-lb-add sw0 lb0
 sw0_sb_uuid=$(fetch_column datapath_binding _uuid external_ids:name=sw0)
 
 echo
-echo "__file__:__line__: Check that there is one SB load balancer row for lb0."
-check_row_count sb:load_balancer 1
+echo "__file__:__line__: Check that there is one SB load balancer row for lb0 and one for lbg0"
+check_row_count sb:load_balancer 2
 check_column "10.0.0.10:80=10.0.0.4:8080 tcp" sb:load_balancer vips,protocol name=lb0
+check_column "20.0.0.10:80=20.0.0.4:8080 tcp" sb:load_balancer vips,protocol name=lbg0
 
 lb0_uuid=$(fetch_column sb:load_balancer _uuid name=lb0)
+lbg0_uuid=$(fetch_column sb:load_balancer _uuid name=lbg0)
 
 echo
 echo "__file__:__line__: Check that SB lb0 has sw0 in datapaths column."
@@ -2369,7 +2434,13 @@ echo "__file__:__line__: Check that SB lb0 has sw0 in datapaths column."
 check_column "$sw0_sb_uuid" sb:load_balancer datapaths name=lb0
 check_column "" sb:datapath_binding load_balancers external_ids:name=sw0
 
-check ovn-nbctl --wait=sb set load_balancer . vips:"10.0.0.20\:90"="20.0.0.4:8080,30.0.0.4:8080"
+echo
+echo "__file__:__line__: Check that SB lbg0 has sw0 in datapaths column."
+
+check_column "$sw0_sb_uuid" sb:load_balancer datapaths name=lbg0
+check_column "" sb:datapath_binding load_balancers external_ids:name=sw0
+
+check ovn-nbctl --wait=sb set load_balancer lb0 vips:"10.0.0.20\:90"="20.0.0.4:8080,30.0.0.4:8080"
 
 echo
 echo "__file__:__line__: Check that SB lb0 has vips and protocol columns are set properly."
@@ -2377,38 +2448,61 @@ echo "__file__:__line__: Check that SB lb0 has vips and protocol columns are set
 check_column "10.0.0.10:80=10.0.0.4:8080 10.0.0.20:90=20.0.0.4:8080,30.0.0.4:8080 tcp" \
 sb:load_balancer vips,protocol name=lb0
 
-check ovn-nbctl lr-add lr0
+check ovn-nbctl --wait=sb set load_balancer lbg0 vips:"20.0.0.20\:90"="20.0.0.4:8080,30.0.0.4:8080"
+
+echo
+echo "__file__:__line__: Check that SB lbg0 has vips and protocol columns are set properly."
+
+check_column "20.0.0.10:80=20.0.0.4:8080 20.0.0.20:90=20.0.0.4:8080,30.0.0.4:8080 tcp" \
+sb:load_balancer vips,protocol name=lbg0
+
+check ovn-nbctl lr-add lr0 -- add logical_router lr0 load_balancer_group $lbg
 check ovn-nbctl --wait=sb lr-lb-add lr0 lb0
 
 echo
 echo "__file__:__line__: Check that SB lb0 has only sw0 in datapaths column."
 check_column "$sw0_sb_uuid" sb:load_balancer datapaths name=lb0
 
-check ovn-nbctl ls-add sw1
+echo
+echo "__file__:__line__: Check that SB lbg0 has only sw0 in datapaths column."
+check_column "$sw0_sb_uuid" sb:load_balancer datapaths name=lbg0
+
+check ovn-nbctl ls-add sw1 -- add logical_switch sw1 load_balancer_group $lbg
 check ovn-nbctl --wait=sb ls-lb-add sw1 lb0
 sw1_sb_uuid=$(fetch_column datapath_binding _uuid external_ids:name=sw1)
 
 echo
 echo "__file__:__line__: Check that SB lb0 has sw0 and sw1 in datapaths column."
 check_column "$sw0_sb_uuid $sw1_sb_uuid" sb:load_balancer datapaths name=lb0
+
+echo
+echo "__file__:__line__: Check that SB lbg0 has sw0 and sw1 in datapaths column."
+check_column "$sw0_sb_uuid $sw1_sb_uuid" sb:load_balancer datapaths name=lbg0
 check_column "" sb:datapath_binding load_balancers external_ids:name=sw1
 
 check ovn-nbctl --wait=sb lb-add lb1 10.0.0.30:80 20.0.0.50:8080 udp
-check_row_count sb:load_balancer 1
+check ovn-nbctl --wait=sb lb-add lbg1 20.0.0.30:80 20.0.0.50:8080 udp
+check_row_count sb:load_balancer 2
 
+lbg1=$(fetch_column nb:load_balancer _uuid name=lbg1)
+check ovn-nbctl add load_balancer_group $lbg load_balancer $lbg1
 check ovn-nbctl --wait=sb lr-lb-add lr0 lb1
-check_row_count sb:load_balancer 1
+check_row_count sb:load_balancer 3
 
 echo
 echo "__file__:__line__: Associate lb1 to sw1 and check that lb1 is created in SB DB."
 
 check ovn-nbctl --wait=sb ls-lb-add sw1 lb1
-check_row_count sb:load_balancer 2
+check_row_count sb:load_balancer 4
 
 echo
 echo "__file__:__line__: Check that SB lb1 has vips and protocol columns are set properly."
 check_column "10.0.0.30:80=20.0.0.50:8080 udp" sb:load_balancer vips,protocol name=lb1
 
+echo
+echo "__file__:__line__: Check that SB lbg1 has vips and protocol columns are set properly."
+check_column "20.0.0.30:80=20.0.0.50:8080 udp" sb:load_balancer vips,protocol name=lbg1
+
 lb1_uuid=$(fetch_column sb:load_balancer _uuid name=lb1)
 
 echo
@@ -2416,20 +2510,26 @@ echo "__file__:__line__: Check that SB lb1 has sw1 in datapaths column."
 
 check_column "$sw1_sb_uuid" sb:load_balancer datapaths name=lb1
 
+lbg1_uuid=$(fetch_column sb:load_balancer _uuid name=lbg1)
+
+echo
+echo "__file__:__line__: Check that SB lbg1 has sw0 and sw1 in datapaths column."
+
+check_column "$sw0_sb_uuid $sw1_sb_uuid" sb:load_balancer datapaths name=lbg1
+
 echo
 echo "__file__:__line__: check that datapath sw1 has no entry in the load_balancers column."
 check_column "" sb:datapath_binding load_balancers external_ids:name=sw1
 
-
 echo
 echo "__file__:__line__: Set hairpin_snat_ip on lb1 and check that SB DB is updated."
 check ovn-nbctl --wait=sb set Load_Balancer lb1 options:hairpin_snat_ip="42.42.42.42 4242::4242"
 check_column "$lb1_uuid" sb:load_balancer _uuid name=lb1 options='{hairpin_orig_tuple="true", hairpin_snat_ip="42.42.42.42 4242::4242"}'
 
 echo
-echo "__file__:__line__: Delete load balancer lb1 and check that datapath sw1's load_balancers is still empty."
+echo "__file__:__line__: Delete load balancers lb1 and lbg1 and check that datapath sw1's load_balancers is still empty."
 
-ovn-nbctl --wait=sb lb-del lb1
+ovn-nbctl --wait=sb lb-del lb1 -- lb-del lbg1
 check_column "" sb:datapath_binding load_balancers external_ids:name=sw1
 AT_CLEANUP
 ])
@@ -2438,11 +2538,52 @@ OVN_FOR_EACH_NORTHD([
 AT_SETUP([LS load balancer hairpin logical flows])
 ovn_start
 
+lbg=$(ovn-nbctl create load_balancer_group name=lbg)
+
 check ovn-nbctl \
- -- ls-add sw0 \
- -- lb-add lb0 10.0.0.10:80 10.0.0.4:8080 \
+ -- lb-add lb0 10.0.0.10:80 10.0.0.4:8080
+
+lb0=$(fetch_column nb:load_balancer _uuid name=lb0)
+
+check ovn-nbctl \
+ -- ls-add sw0 -- \
+ -- add logical_switch sw0 load_balancer_group $lbg \
 -- ls-lb-add sw0 lb0
+check ovn-nbctl --wait=sb sync
+
+AT_CHECK([ovn-sbctl lflow-list sw0 | grep ls_in_pre_hairpin | sort], [0], [dnl
+ table=13(ls_in_pre_hairpin ), priority=0 , match=(1), action=(next;)
+ table=13(ls_in_pre_hairpin ), priority=100 , match=(ip && ct.trk), action=(reg0[[6]] = chk_lb_hairpin(); reg0[[12]] = chk_lb_hairpin_reply(); next;)
+])
 
+AT_CHECK([ovn-sbctl lflow-list sw0 | grep ls_in_nat_hairpin | sort], [0], [dnl
+ table=14(ls_in_nat_hairpin ), priority=0 , match=(1), action=(next;)
+ table=14(ls_in_nat_hairpin ), priority=100 , match=(ip && ct.est && ct.trk && reg0[[6]] == 1), action=(ct_snat;)
+ table=14(ls_in_nat_hairpin ), priority=100 , match=(ip && ct.new && ct.trk && reg0[[6]] == 1), action=(ct_snat_to_vip; next;)
+ table=14(ls_in_nat_hairpin ), priority=90 , match=(ip && reg0[[12]] == 1), action=(ct_snat;)
+])
+
+AT_CHECK([ovn-sbctl lflow-list sw0 | grep ls_in_hairpin | sort], [0], [dnl
+ table=15(ls_in_hairpin ), priority=0 , match=(1), action=(next;)
+ table=15(ls_in_hairpin ), priority=1 , match=((reg0[[6]] == 1 || reg0[[12]] == 1)), action=(eth.dst <-> eth.src; outport = inport; flags.loopback = 1; output;)
+])
+
+check ovn-nbctl -- ls-lb-del sw0 lb0
+check ovn-nbctl --wait=sb sync
+
+AT_CHECK([ovn-sbctl lflow-list sw0 | grep ls_in_pre_hairpin | sort], [0], [dnl
+ table=13(ls_in_pre_hairpin ), priority=0 , match=(1), action=(next;)
+])
+
+AT_CHECK([ovn-sbctl lflow-list sw0 | grep ls_in_nat_hairpin | sort], [0], [dnl
+ table=14(ls_in_nat_hairpin ), priority=0 , match=(1), action=(next;)
+])
+
+AT_CHECK([ovn-sbctl lflow-list sw0 | grep ls_in_hairpin | sort], [0], [dnl
+ table=15(ls_in_hairpin ), priority=0 , match=(1), action=(next;)
+])
+
+check ovn-nbctl -- add load_balancer_group $lbg load_balancer $lb0
 check ovn-nbctl --wait=sb sync
 
 AT_CHECK([ovn-sbctl lflow-list sw0 | grep ls_in_pre_hairpin | sort], [0], [dnl
@@ -3254,8 +3395,14 @@ check ovn-nbctl lsp-set-type public-lr0 router
 check ovn-nbctl lsp-set-addresses public-lr0 router
 check ovn-nbctl lsp-set-options public-lr0 router-port=lr0-public
 
+lbg=$(ovn-nbctl create load_balancer_group name=lbg)
+
 check ovn-nbctl lb-add lb1 10.0.0.10:80 10.0.0.4:8080
+check ovn-nbctl lb-add lbg1 10.0.0.100:80 10.0.0.40:8080
+lbg1=$(fetch_column nb:load_balancer _uuid name=lbg1)
+check ovn-nbctl add load_balancer_group $lbg load_balancer $lbg1
 check ovn-nbctl lr-lb-add lr0 lb1
+check ovn-nbctl add logical_router lr0 load_balancer_group $lbg
 check ovn-nbctl set logical_router lr0 options:chassis=ch1
 
 check ovn-nbctl --wait=sb sync
@@ -3270,12 +3417,15 @@ AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
 AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
 table=5 (lr_in_defrag ), priority=0 , match=(1), action=(next;)
 table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; reg9[[16..31]] = tcp.dst; ct_dnat;)
+ table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.100 && tcp), action=(reg0 = 10.0.0.100; reg9[[16..31]] = tcp.dst; ct_dnat;)
 ])
 
 AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
 table=6 (lr_in_dnat ), priority=0 , match=(1), action=(next;)
 table=6 (lr_in_dnat ), priority=120 , match=(ct.est && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_label.natted == 1), action=(next;)
+ table=6 (lr_in_dnat ), priority=120 , match=(ct.est && ip4 && reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_label.natted == 1), action=(next;)
 table=6 (lr_in_dnat ), priority=120 , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), action=(ct_lb(backends=10.0.0.4:8080);)
+ table=6 (lr_in_dnat ), priority=120 , match=(ct.new && ip4 && reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), action=(ct_lb(backends=10.0.0.40:8080);)
 ])
 
 AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
@@ -3303,12 +3453,15 @@ AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
 AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
 table=5 (lr_in_defrag ), priority=0 , match=(1), action=(next;)
 table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; reg9[[16..31]] = tcp.dst; ct_dnat;)
+ table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.100 && tcp), action=(reg0 = 10.0.0.100; reg9[[16..31]] = tcp.dst; ct_dnat;)
 ])
 
 AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
 table=6 (lr_in_dnat ), priority=0 , match=(1), action=(next;)
 table=6 (lr_in_dnat ), priority=120 , match=(ct.est && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_label.natted == 1), action=(flags.force_snat_for_lb = 1; next;)
+ table=6 (lr_in_dnat ), priority=120 , match=(ct.est && ip4 && reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_label.natted == 1), action=(flags.force_snat_for_lb = 1; next;)
 table=6 (lr_in_dnat ), priority=120 , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
+ table=6 (lr_in_dnat ), priority=120 , match=(ct.new && ip4 && reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.40:8080);)
 ])
 
 AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
@@ -3346,12 +3499,15 @@ AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
 AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
 table=5 (lr_in_defrag ), priority=0 , match=(1), action=(next;)
 table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; reg9[[16..31]] = tcp.dst; ct_dnat;)
+ table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.100 && tcp), action=(reg0 = 10.0.0.100; reg9[[16..31]] = tcp.dst; ct_dnat;)
 ])
 
 AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
 table=6 (lr_in_dnat ), priority=0 , match=(1), action=(next;)
 table=6 (lr_in_dnat ), priority=120 , match=(ct.est && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_label.natted == 1), action=(flags.force_snat_for_lb = 1; next;)
+ table=6 (lr_in_dnat ), priority=120 , match=(ct.est && ip4 && reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_label.natted == 1), action=(flags.force_snat_for_lb = 1; next;)
 table=6 (lr_in_dnat ), priority=120 , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
+ table=6 (lr_in_dnat ), priority=120 , match=(ct.new && ip4 && reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.40:8080);)
 ])
 
 AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
@@ -3403,12 +3559,15 @@ AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
 AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
 table=5 (lr_in_defrag ), priority=0 , match=(1), action=(next;)
 table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; reg9[[16..31]] = tcp.dst; ct_dnat;)
+ table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.100 && tcp), action=(reg0 = 10.0.0.100; reg9[[16..31]] = tcp.dst; ct_dnat;)
 ])
 
 AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
 table=6 (lr_in_dnat ), priority=0 , match=(1), action=(next;)
 table=6 (lr_in_dnat ), priority=120 , match=(ct.est && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_label.natted == 1), action=(flags.force_snat_for_lb = 1; next;)
+ table=6 (lr_in_dnat ), priority=120 , match=(ct.est && ip4 && reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_label.natted == 1), action=(flags.force_snat_for_lb = 1; next;)
 table=6 (lr_in_dnat ), priority=120 , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
+ table=6 (lr_in_dnat ), priority=120 , match=(ct.new && ip4 && reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.40:8080);)
 ])
 
 AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
@@ -3446,6 +3605,7 @@ AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
 
 AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
 table=5 (lr_in_defrag ), priority=0 , match=(1), action=(next;)
+ table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.100 && tcp), action=(reg0 = 10.0.0.100; reg9[[16..31]] = tcp.dst; ct_dnat;)
 table=5 (lr_in_defrag ), priority=110 , match=(ip && ip4.dst == 10.0.0.20 && tcp), action=(reg0 = 10.0.0.20; reg9[[16..31]] = tcp.dst; ct_dnat;)
 ])
 
@@ -3577,10 +3737,17 @@ OVN_FOR_EACH_NORTHD([
 AT_SETUP([LS load balancer logical flows])
 ovn_start
 
+lbg=$(ovn-nbctl create load_balancer_group name=lbg)
 check ovn-nbctl \
- -- ls-add sw0 \
 -- lb-add lb0 10.0.0.10:80 10.0.0.4:8080 \
- -- ls-lb-add sw0 lb0
+ -- lb-add lbg0 10.0.0.20:80 10.0.0.40:8080
+lbg0=$(fetch_column nb:load_balancer _uuid name=lbg0)
+
+check ovn-nbctl \
+ -- ls-add sw0 \
+ -- add logical_switch sw0 load_balancer_group $lbg \
+ -- ls-lb-add sw0 lb0 \
+ -- add load_balancer_group $lbg load_balancer $lbg0
 
 check ovn-nbctl lr-add lr0
 check ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24
@@ -3621,6 +3788,7 @@ check_stateful_flows() {
 table=12(ls_in_stateful ), priority=100 , match=(reg0[[1]] == 1 && reg0[[13]] == 0), action=(ct_commit { ct_label.blocked = 0; }; next;)
 table=12(ls_in_stateful ), priority=100 , match=(reg0[[1]] == 1 && reg0[[13]] == 1), action=(ct_commit { ct_label.blocked = 0; ct_label.label = reg3; }; next;)
 table=12(ls_in_stateful ), priority=120 , match=(ct.new && ip4.dst == 10.0.0.10 && tcp.dst == 80), action=(reg1 = 10.0.0.10; reg2[[0..15]] = 80; ct_lb(backends=10.0.0.4:8080);)
+ table=12(ls_in_stateful ), priority=120 , match=(ct.new && ip4.dst == 10.0.0.20 && tcp.dst == 80), action=(reg1 = 10.0.0.20; reg2[[0..15]] = 80; ct_lb(backends=10.0.0.40:8080);)
 ])
 
 AT_CHECK([grep "ls_out_pre_lb" sw0flows | sort], [0], [dnl
@@ -3654,8 +3822,10 @@ check ovn-nbctl --wait=sb acl-add sw0 to-lport 1002 "ip4 && tcp && tcp.src == 80
 
 check_stateful_flows
 
-# Remove load balancers from sw0
-check ovn-nbctl --wait=sb ls-lb-del sw0 lb0
+# Remove load balancers from sw0
+check ovn-nbctl ls-lb-del sw0 lb0
+check ovn-nbctl clear logical_switch sw0 load_balancer_group
+check ovn-nbctl --wait=sb sync
 
 ovn-sbctl dump-flows sw0 > sw0flows
 AT_CAPTURE_FILE([sw0flows])
diff --git a/utilities/ovn-nbctl.c b/utilities/ovn-nbctl.c
index e34bb65f7..b6f93e0a5 100644
--- a/utilities/ovn-nbctl.c
+++ b/utilities/ovn-nbctl.c
@@ -6803,6 +6803,9 @@ static const struct ctl_table_class tables[NBREC_N_TABLES] = {
 [NBREC_TABLE_LOAD_BALANCER].row_ids[0]
 = {&nbrec_load_balancer_col_name, NULL, NULL},
 
+ [NBREC_TABLE_LOAD_BALANCER_GROUP].row_ids[0]
+ = {&nbrec_load_balancer_group_col_name, NULL, NULL},
+
 [NBREC_TABLE_LOAD_BALANCER_HEALTH_CHECK].row_ids[0]
 = {&nbrec_load_balancer_health_check_col_vip, NULL, NULL},
 
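For orientation, the Load_Balancer_Group workflow that the test changes above exercise can be sketched as follows. This is a minimal sketch, not part of the patch: the names (lbg, lb0, sw0, lr0) and addresses are illustrative, and it assumes a running OVN northbound database reachable by ovn-nbctl. Every command form used here appears in the test diff.

```shell
# Create an (initially empty) load balancer group in the NB database.
lbg=$(ovn-nbctl create load_balancer_group name=lbg)

# Create a load balancer and add it to the group by UUID.
ovn-nbctl lb-add lb0 10.0.0.10:80 10.0.0.4:8080
lb0=$(ovn-nbctl --bare --columns _uuid find load_balancer name=lb0)
ovn-nbctl add load_balancer_group $lbg load_balancer $lb0

# Reference the group from a logical switch and a logical router: each
# datapath that references the group picks up all load balancers in it,
# instead of listing every load balancer on every datapath individually.
ovn-nbctl ls-add sw0 -- add logical_switch sw0 load_balancer_group $lbg
ovn-nbctl lr-add lr0 -- add logical_router lr0 load_balancer_group $lbg

# Detach the group from the switch again.
ovn-nbctl clear logical_switch sw0 load_balancer_group
```

The small ovn-nbctl.c hunk at the end of the patch registers the new table's name column as a row id, which is what allows generic database commands such as `ovn-nbctl set load_balancer_group lbg ...` to refer to a group row by its name rather than by UUID.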