| Message ID | 8d11c7883b11529bdb6456dae7c5f4879b7a7bd0.1654555333.git.lorenzo.bianconi@redhat.com |
|---|---|
| State | Accepted |
| Series | [ovs-dev,v3] northd: add the capability to inherit logical routers lbs on logical switches |
| Context | Check | Description |
|---|---|---|
| ovsrobot/apply-robot | success | apply and check: success |
| ovsrobot/github-robot-_Build_and_Test | success | github build: passed |
| ovsrobot/github-robot-_ovn-kubernetes | success | github build: passed |
On Mon, Jun 6, 2022 at 6:44 PM Lorenzo Bianconi
<lorenzo.bianconi@redhat.com> wrote:
>
> Add the capability to automatically deploy a load-balancer on each
> logical-switch connected to a logical router where the load-balancer
> has been installed by the CMS. This patch allows us to overcome the
> distributed gw router scenario limitation where a load-balancer must be
> installed on each datapath to properly reach the load-balancer.
>
> Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2043543
> Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>

Thanks for the rebase.

Acked-by: Numan Siddique <numans@ovn.org>

Numan

> ---
> Changes since v2:
> - rebase on top of ovn master
>
> Changes since v1:
> - rebase on top of ovn master
> - add NEWS entry
> - improve selftests
> ---
>  NEWS                    |  5 +++
>  northd/northd.c         | 56 +++++++++++++++++++++++++++++
>  northd/ovn-northd.8.xml |  8 +++++
>  tests/ovn-northd.at     | 80 +++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 149 insertions(+)
>
> [...]
>
> --
> 2.35.3

_______________________________________________
dev mailing list
dev@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
On Tue, Jun 7, 2022 at 9:07 AM Numan Siddique <numans@ovn.org> wrote:
>
> On Mon, Jun 6, 2022 at 6:44 PM Lorenzo Bianconi
> <lorenzo.bianconi@redhat.com> wrote:
> >
> > Add the capability to automatically deploy a load-balancer on each
> > logical-switch connected to a logical router where the load-balancer
> > has been installed by the CMS. This patch allows us to overcome the
> > distributed gw router scenario limitation where a load-balancer must be
> > installed on each datapath to properly reach the load-balancer.
> >
> > Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2043543
> > Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
>
> Thanks for the rebase.
>
> Acked-by: Numan Siddique <numans@ovn.org>

I applied this patch to the main branch.

Numan

> [...]
diff --git a/NEWS b/NEWS
index e015ae8e7..8b4f91553 100644
--- a/NEWS
+++ b/NEWS
@@ -4,6 +4,11 @@ Post v22.06.0
     "ovn-encap-df_default" to enable or disable tunnel DF flag.
   - Add option "localnet_learn_fdb" to LSP that will allow localnet
     ports to learn MAC addresses and store them in FDB table.
+  - northd: introduce the capability to automatically deploy a load-balancer
+    on each logical-switch connected to a logical router where the
+    load-balancer has been installed by the CMS. In order to enable the
+    feature the CMS has to set install_ls_lb_from_router to true in option
+    column of NB_Global table.

OVN v22.06.0 - XX XXX XXXX
--------------------------
diff --git a/northd/northd.c b/northd/northd.c
index 0207f6ce1..a95a5148e 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -63,6 +63,8 @@ static bool lflow_hash_lock_initialized = false;

 static bool check_lsp_is_up;

+static bool install_ls_lb_from_router;
+
 /* MAC allocated for service monitor usage. Just one mac is allocated
  * for this purpose and ovn-controller's on each chassis will make use
  * of this mac when sending out the packets to monitor the services
@@ -4140,6 +4142,55 @@ build_lrouter_lbs_reachable_ips(struct hmap *datapaths, struct hmap *lbs)
     }
 }

+static void
+build_lswitch_lbs_from_lrouter(struct hmap *datapaths, struct hmap *lbs)
+{
+    if (!install_ls_lb_from_router) {
+        return;
+    }
+
+    struct ovn_datapath *od;
+    HMAP_FOR_EACH (od, key_node, datapaths) {
+        if (!od->nbs) {
+            continue;
+        }
+
+        struct ovn_port *op;
+        LIST_FOR_EACH (op, dp_node, &od->port_list) {
+            if (!lsp_is_router(op->nbsp)) {
+                continue;
+            }
+            if (!op->peer) {
+                continue;
+            }
+
+            struct ovn_datapath *peer_od = op->peer->od;
+            for (size_t i = 0; i < peer_od->nbr->n_load_balancer; i++) {
+                bool installed = false;
+                const struct uuid *lb_uuid =
+                    &peer_od->nbr->load_balancer[i]->header_.uuid;
+                struct ovn_northd_lb *lb = ovn_northd_lb_find(lbs, lb_uuid);
+                if (!lb) {
+                    continue;
+                }
+
+                for (size_t j = 0; j < lb->n_nb_ls; j++) {
+                    if (lb->nb_ls[j] == od) {
+                        installed = true;
+                        break;
+                    }
+                }
+                if (!installed) {
+                    ovn_northd_lb_add_ls(lb, od);
+                }
+                if (lb->nlb) {
+                    od->has_lb_vip |= lb_has_vip(lb->nlb);
+                }
+            }
+        }
+    }
+}
+
 /* This must be called after all ports have been processed, i.e., after
  * build_ports() because the reachability check requires the router ports
  * networks to have been parsed.
@@ -4152,6 +4203,7 @@ build_lb_port_related_data(struct hmap *datapaths, struct hmap *ports,
     build_lrouter_lbs_check(datapaths);
     build_lrouter_lbs_reachable_ips(datapaths, lbs);
     build_lb_svcs(input_data, ovnsb_txn, ports, lbs);
+    build_lswitch_lbs_from_lrouter(datapaths, lbs);
 }

 /* Syncs relevant load balancers (applied to logical switches) to the
@@ -15378,6 +15430,10 @@ ovnnb_db_run(struct northd_input *input_data,
                                      "ignore_lsp_down", true);
     default_acl_drop = smap_get_bool(&nb->options, "default_acl_drop", false);

+    install_ls_lb_from_router = smap_get_bool(&nb->options,
+                                              "install_ls_lb_from_router",
+                                              false);
+
     build_chassis_features(input_data, &data->features);
     build_datapaths(input_data, ovnsb_txn, &data->datapaths, &data->lr_list);
     build_lbs(input_data, &data->datapaths, &data->lbs);
diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
index 1f7022490..2a2b33051 100644
--- a/northd/ovn-northd.8.xml
+++ b/northd/ovn-northd.8.xml
@@ -882,6 +882,10 @@
         <code>reg2</code>. For IPv6 traffic the flow also loads the original
         destination IP and transport port in registers <code>xxreg1</code> and
         <code>reg2</code>.
+        The above flow is created even if the load balancer is attached to a
+        logical router connected to the current logical switch and
+        the <code>install_ls_lb_from_router</code> variable in
+        <ref table="NB_Global" column="options"/> is set to true.
       </li>
       <li>
         For all the configured load balancing rules for a switch in
@@ -898,6 +902,10 @@
         <code>reg2</code>. For IPv6 traffic the flow also loads the original
         destination IP and transport port in registers <code>xxreg1</code> and
         <code>reg2</code>.
+        The above flow is created even if the load balancer is attached to a
+        logical router connected to the current logical switch and
+        the <code>install_ls_lb_from_router</code> variable in
+        <ref table="NB_Global" column="options"/> is set to true.
       </li>

       <li>
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index a94a7d441..c1ec9e04a 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -7613,3 +7613,83 @@ AT_CHECK([ovn-sbctl dump-flows ls0 | grep -e 'ls_in_\(put\|lookup\)_fdb' | sort

 AT_CLEANUP
 ])
+
+AT_SETUP([check install_ls_lb_from_router option])
+AT_KEYWORDS([lb-ls-install-from-lrouter])
+ovn_start
+
+ovn-nbctl lr-add R1
+ovn-nbctl set logical_router R1 options:chassis=hv1
+ovn-nbctl lrp-add R1 R1-S0 02:ac:10:01:00:01 10.0.0.1/24
+ovn-nbctl lrp-add R1 R1-S1 02:ac:10:01:01:01 20.0.0.1/24
+ovn-nbctl lrp-add R1 R1-PUB 02:ac:20:01:01:01 172.16.0.1/24
+
+ovn-nbctl ls-add S0
+ovn-nbctl lsp-add S0 S0-R1
+ovn-nbctl lsp-set-type S0-R1 router
+ovn-nbctl lsp-set-addresses S0-R1 02:ac:10:01:00:01
+ovn-nbctl lsp-set-options S0-R1 router-port=R1-S0
+
+ovn-nbctl ls-add S1
+ovn-nbctl lsp-add S1 S1-R1
+ovn-nbctl lsp-set-type S1-R1 router
+ovn-nbctl lsp-set-addresses S1-R1 02:ac:10:01:01:01
+ovn-nbctl lsp-set-options S1-R1 router-port=R1-S1
+
+# Add load balancers on the logical router R1
+ovn-nbctl lb-add lb0 172.16.0.10:80 10.0.0.2:80
+ovn-nbctl lr-lb-add R1 lb0
+
+ovn-sbctl dump-flows S0 > S0flows
+ovn-sbctl dump-flows S1 > S1flows
+
+AT_CAPTURE_FILE([S0flows])
+AT_CAPTURE_FILE([S1flows])
+
+AT_CHECK([grep "ls_in_lb" S0flows | sort], [0], [dnl
+  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
+])
+AT_CHECK([grep "ls_in_lb" S1flows | sort], [0], [dnl
+  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
+])
+
+ovn-nbctl --wait=sb set NB_Global . options:install_ls_lb_from_router=true
+
+ovn-sbctl dump-flows S0 > S0flows
+ovn-sbctl dump-flows S1 > S1flows
+
+AT_CAPTURE_FILE([S0flows])
+AT_CAPTURE_FILE([S1flows])
+
+AT_CHECK([grep "ls_in_lb" S0flows | sort], [0], [dnl
+  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
+  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 172.16.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.2:80);)
+])
+AT_CHECK([grep "ls_in_lb" S1flows | sort], [0], [dnl
+  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
+  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; reg1 = 172.16.0.10; reg2[[0..15]] = 80; ct_lb_mark(backends=10.0.0.2:80);)
+])
+
+s0_uuid=$(ovn-sbctl get datapath S0 _uuid)
+s1_uuid=$(ovn-sbctl get datapath S1 _uuid)
+check_column "$s0_uuid $s1_uuid" sb:load_balancer datapaths name=lb0
+
+ovn-nbctl --wait=sb set NB_Global . options:install_ls_lb_from_router=false
+
+ovn-sbctl dump-flows S0 > S0flows
+ovn-sbctl dump-flows S1 > S1flows
+
+AT_CAPTURE_FILE([S0flows])
+AT_CAPTURE_FILE([S1flows])
+
+AT_CHECK([grep "ls_in_lb" S0flows | sort], [0], [dnl
+  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
+])
+AT_CHECK([grep "ls_in_lb" S1flows | sort], [0], [dnl
+  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
+])
+
+check_column "" sb:load_balancer datapaths name=lb0
+
+AT_CLEANUP
+])
Add the capability to automatically deploy a load-balancer on each
logical-switch connected to a logical router where the load-balancer
has been installed by the CMS. This patch allows us to overcome the
distributed gw router scenario limitation where a load-balancer must be
installed on each datapath to properly reach the load-balancer.

Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=2043543
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
---
Changes since v2:
- rebase on top of ovn master

Changes since v1:
- rebase on top of ovn master
- add NEWS entry
- improve selftests
---
 NEWS                    |  5 +++
 northd/northd.c         | 56 +++++++++++++++++++++++++++++
 northd/ovn-northd.8.xml |  8 +++++
 tests/ovn-northd.at     | 80 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 149 insertions(+)
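From the CMS side, using the feature amounts to attaching the load balancer to the router as usual and flipping one NB_Global option. A minimal sketch, reusing the names from the patch's own test (R1, lb0, S0 are the test's fixtures, and this assumes a running OVN deployment):

```shell
# Attach a load balancer to the logical router, as the CMS normally would.
ovn-nbctl lb-add lb0 172.16.0.10:80 10.0.0.2:80
ovn-nbctl lr-lb-add R1 lb0

# Opt in to inheriting router load balancers on connected logical switches.
ovn-nbctl --wait=sb set NB_Global . options:install_ls_lb_from_router=true

# The VIP flow should now appear in the ls_in_lb stage of every
# logical switch attached to R1, e.g.:
ovn-sbctl dump-flows S0 | grep ls_in_lb
```

Setting the option back to false removes the inherited flows again, as the last block of the test verifies.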