[ovs-dev,v3] nb: Remove possibility of disabling logical datapath groups.

Message ID 20220706135847.947627-1-dceara@redhat.com
State Accepted
Series [ovs-dev,v3] nb: Remove possibility of disabling logical datapath groups.

Checks

Context Check Description
ovsrobot/apply-robot success apply and check: success
ovsrobot/github-robot-_Build_and_Test success github build: passed
ovsrobot/github-robot-_ovn-kubernetes success github build: passed

Commit Message

Dumitru Ceara July 6, 2022, 1:58 p.m. UTC
In large-scale scenarios this option hugely reduces the size of the
Southbound database, positively affecting end-to-end performance.  In
such scenarios there's no real reason to ever disable datapath groups.

In lower-scale scenarios any potential overhead due to logical datapath
groups is very likely negligible.

Aside from potential scalability concerns, the
NB.NB_Global.options:use_logical_dp_groups knob was kept until now to
ensure that, in case of a bug in the logical datapath groups code, a CMS
could turn it off and fall back to the mode in which logical flows are
not grouped together.  As far as I know, this has never been necessary.

Moreover, datapath groups have been enabled by default since v21.09.0
(4 stable releases ago), via 90daa7ce18dc ("northd: Enable logical dp
groups by default.").

From a testing perspective, removing this knob will halve the CI matrix.
This is desirable, especially in the context of more tests being added,
e.g.:
https://patchwork.ozlabs.org/project/ovn/patch/20220623110239.2973854-1-mheib@redhat.com/
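
As a sketch (hypothetical test name and body, reusing existing test
helpers), each wrapped test now expands only over
NORTHD_USE_PARALLELIZATION and OVN_MONITOR_ALL (NORTHD_TYPE has a single
value), i.e. 4 runs per test instead of 8:

  OVN_FOR_EACH_NORTHD([
  AT_SETUP([example test])
  ovn_start
  # The body runs once per remaining combination:
  # parallelization=yes/no x ovn_monitor_all=yes/no.
  check ovn-nbctl --wait=sb ls-add sw0
  check_row_count Datapath_Binding 1
  AT_CLEANUP
  ])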

This commit also adds OVN_FOR_EACH_NORTHD to the "ovn-northd -- lr
multiple gw ports NAT" test case.  That wrapping was previously skipped
because a duplicate logical flow in the expected output would have been
merged when running with datapath groups enabled.  Instead, the expected
output is changed to not include the duplicate.

Acked-by: Mark Michelson <mmichels@redhat.com>
Signed-off-by: Dumitru Ceara <dceara@redhat.com>
---
V3:
- Rebased.
V2:
- Removed unused lflow_segs as suggested by Mark.
- Added Mark's ack.
---
 NEWS                |  1 +
 TODO.rst            |  6 +++
 northd/northd.c     | 78 ++++++++++---------------------------
 ovn-nb.xml          | 19 +++------
 tests/ovn-macros.at | 21 ++--------
 tests/ovn-northd.at | 95 +++++----------------------------------------
 tests/ovs-macros.at |  4 +-
 7 files changed, 47 insertions(+), 177 deletions(-)

Comments

Mark Michelson July 6, 2022, 6:53 p.m. UTC | #1
Thanks, Dumitru.

I pushed this to main.

Patch

diff --git a/NEWS b/NEWS
index 3737907ca..20cea579e 100644
--- a/NEWS
+++ b/NEWS
@@ -13,6 +13,7 @@  Post v22.06.0
     and ipsec_forceencaps=true (strongswan) to unconditionally enforce
     NAT-T UDP encapsulation. Requires OVS support for IPsec custom tunnel
     options (which will be available in OVS 2.18).
+  - Removed possibility of disabling logical datapath groups.
 
 OVN v22.06.0 - 03 Jun 2022
 --------------------------
diff --git a/TODO.rst b/TODO.rst
index 618ea4844..acbcacc4e 100644
--- a/TODO.rst
+++ b/TODO.rst
@@ -170,3 +170,9 @@  OVN To-do List
   * physical.c has a global simap -localvif_to_ofport which stores the
     local OVS interfaces and the ofport numbers. Move this to the engine data
     of the engine data node - ed_type_pflow_output.
+
+* ovn-northd parallel logical flow processing
+
+  * Multi-threaded logical flow computation was optimized for the case
+    when datapath groups are disabled.  Datapath groups are always enabled
+    now, so northd parallel processing should be revisited.
diff --git a/northd/northd.c b/northd/northd.c
index 964af992f..509186392 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -4896,10 +4896,6 @@  ovn_lflow_equal(const struct ovn_lflow *a, const struct ovn_datapath *od,
             && nullable_string_is_equal(a->ctrl_meter, ctrl_meter));
 }
 
-/* If this option is 'true' northd will combine logical flows that differ by
- * logical datapath only by creating a datapath group. */
-static bool use_logical_dp_groups = false;
-
 enum {
     STATE_NULL,               /* parallelization is off */
     STATE_INIT_HASH_SIZES,    /* parallelization is on; hashes sizing needed */
@@ -4924,8 +4920,7 @@  ovn_lflow_init(struct ovn_lflow *lflow, struct ovn_datapath *od,
     lflow->ctrl_meter = ctrl_meter;
     lflow->dpg = NULL;
     lflow->where = where;
-    if ((parallelization_state != STATE_NULL)
-        && use_logical_dp_groups) {
+    if (parallelization_state != STATE_NULL) {
         ovs_mutex_init(&lflow->odg_lock);
     }
 }
@@ -4935,7 +4930,7 @@  ovn_dp_group_add_with_reference(struct ovn_lflow *lflow_ref,
                                 struct ovn_datapath *od)
                                 OVS_NO_THREAD_SAFETY_ANALYSIS
 {
-    if (!use_logical_dp_groups || !lflow_ref) {
+    if (!lflow_ref) {
         return false;
     }
 
@@ -5014,13 +5009,11 @@  do_ovn_lflow_add(struct hmap *lflow_map, struct ovn_datapath *od,
     struct ovn_lflow *old_lflow;
     struct ovn_lflow *lflow;
 
-    if (use_logical_dp_groups) {
-        old_lflow = ovn_lflow_find(lflow_map, NULL, stage, priority, match,
-                                   actions, ctrl_meter, hash);
-        if (old_lflow) {
-            ovn_dp_group_add_with_reference(old_lflow, od);
-            return old_lflow;
-        }
+    old_lflow = ovn_lflow_find(lflow_map, NULL, stage, priority, match,
+                               actions, ctrl_meter, hash);
+    if (old_lflow) {
+        ovn_dp_group_add_with_reference(old_lflow, od);
+        return old_lflow;
     }
 
     lflow = xmalloc(sizeof *lflow);
@@ -5076,8 +5069,7 @@  ovn_lflow_add_at_with_hash(struct hmap *lflow_map, struct ovn_datapath *od,
     struct ovn_lflow *lflow;
 
     ovs_assert(ovn_stage_to_datapath_type(stage) == ovn_datapath_get_type(od));
-    if (use_logical_dp_groups
-        && (parallelization_state == STATE_USE_PARALLELIZATION)) {
+    if (parallelization_state == STATE_USE_PARALLELIZATION) {
         lflow = do_ovn_lflow_add_pd(lflow_map, od, hash, stage, priority,
                                     match, actions, io_port, stage_hint, where,
                                     ctrl_meter);
@@ -5163,8 +5155,7 @@  static void
 ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow)
 {
     if (lflow) {
-        if ((parallelization_state != STATE_NULL)
-            && use_logical_dp_groups) {
+        if (parallelization_state != STATE_NULL) {
             ovs_mutex_destroy(&lflow->odg_lock);
         }
         if (lflows) {
@@ -13949,29 +13940,18 @@  build_lswitch_and_lrouter_flows(const struct hmap *datapaths,
     char *svc_check_match = xasprintf("eth.dst == %s", svc_monitor_mac);
 
     if (parallelization_state == STATE_USE_PARALLELIZATION) {
-        struct hmap *lflow_segs;
         struct lswitch_flow_build_info *lsiv;
         int index;
 
         lsiv = xcalloc(sizeof(*lsiv), build_lflows_pool->size);
-        if (use_logical_dp_groups) {
-            lflow_segs = NULL;
-        } else {
-            lflow_segs = xcalloc(sizeof(*lflow_segs), build_lflows_pool->size);
-        }
 
         /* Set up "work chunks" for each thread to work on. */
 
         for (index = 0; index < build_lflows_pool->size; index++) {
-            if (use_logical_dp_groups) {
-                /* if dp_groups are in use we lock a shared lflows hash
-                 * on a per-bucket level instead of merging hash frags */
-                lsiv[index].lflows = lflows;
-            } else {
-                fast_hmap_init(&lflow_segs[index], lflows->mask);
-                lsiv[index].lflows = &lflow_segs[index];
-            }
-
+            /* dp_groups are in use so we lock a shared lflows hash
+             * on a per-bucket level.
+             */
+            lsiv[index].lflows = lflows;
             lsiv[index].datapaths = datapaths;
             lsiv[index].ports = ports;
             lsiv[index].port_groups = port_groups;
@@ -13990,19 +13970,13 @@  build_lswitch_and_lrouter_flows(const struct hmap *datapaths,
         }
 
         /* Run thread pool. */
-        if (use_logical_dp_groups) {
-            run_pool_callback(build_lflows_pool, NULL, NULL,
-                              noop_callback);
-            fix_flow_map_size(lflows, lsiv, build_lflows_pool->size);
-        } else {
-            run_pool_hash(build_lflows_pool, lflows, lflow_segs);
-        }
+        run_pool_callback(build_lflows_pool, NULL, NULL, noop_callback);
+        fix_flow_map_size(lflows, lsiv, build_lflows_pool->size);
 
         for (index = 0; index < build_lflows_pool->size; index++) {
             ds_destroy(&lsiv[index].match);
             ds_destroy(&lsiv[index].actions);
         }
-        free(lflow_segs);
         free(lsiv);
     } else {
         struct ovn_datapath *od;
@@ -14148,21 +14122,15 @@  void run_update_worker_pool(int n_threads)
             lflow_hash_lock_destroy();
             parallelization_state = STATE_NULL;
         } else if (parallelization_state != STATE_USE_PARALLELIZATION) {
-            if (use_logical_dp_groups) {
-                lflow_hash_lock_init();
-                parallelization_state = STATE_INIT_HASH_SIZES;
-            } else {
-                parallelization_state = STATE_USE_PARALLELIZATION;
-            }
+            lflow_hash_lock_init();
+            parallelization_state = STATE_INIT_HASH_SIZES;
         }
     }
 }
 
-static void worker_pool_init_for_ldp(void)
+static void init_worker_pool(void)
 {
-    /* If parallelization is enabled, make sure locks are initialized
-     * when ldp are used.
-     */
+    /* If parallelization is enabled, make sure locks are initialized. */
     if (parallelization_state != STATE_NULL) {
         lflow_hash_lock_init();
         parallelization_state = STATE_INIT_HASH_SIZES;
@@ -15432,13 +15400,6 @@  ovnnb_db_run(struct northd_input *input_data,
         nbrec_nb_global_set_options(nb, &options);
     }
 
-    bool old_use_ldp = use_logical_dp_groups;
-    use_logical_dp_groups = smap_get_bool(&nb->options,
-                                          "use_logical_dp_groups", true);
-    if (use_logical_dp_groups && !old_use_ldp) {
-        worker_pool_init_for_ldp();
-    }
-
     use_ct_inv_match = smap_get_bool(&nb->options,
                                      "use_ct_inv_match", true);
 
@@ -15741,6 +15702,7 @@  void northd_run(struct northd_input *input_data,
                 struct ovsdb_idl_txn *ovnsb_txn)
 {
     stopwatch_start(OVNNB_DB_RUN_STOPWATCH_NAME, time_msec());
+    init_worker_pool();
     ovnnb_db_run(input_data, data, ovnnb_txn, ovnsb_txn,
                  input_data->sbrec_chassis_by_name,
                  input_data->sbrec_chassis_by_hostname);
diff --git a/ovn-nb.xml b/ovn-nb.xml
index 9c09de8d8..22b7e8779 100644
--- a/ovn-nb.xml
+++ b/ovn-nb.xml
@@ -196,21 +196,14 @@ 
 
       <column name="options" key="use_logical_dp_groups">
         <p>
-          If set to <code>true</code>, <code>ovn-northd</code> will combine
-          logical flows that differs only by logical datapath into a single
-          logical flow with logical datapath group attached.
+          Note: This option is deprecated; the only behavior is to always
+          combine logical flows by datapath groups.  Changing the value or
+          removing this option altogether will have no effect.
         </p>
         <p>
-          While this should significantly reduce number of logical flows stored
-          in Southbound database this could also increase processing complexity
-          on the <code>ovn-controller</code> side, e.g.,
-          <code>ovn-controller</code> will re-consider logical flow for all
-          logical datapaths in a group.  If the option set to
-          <code>false</code>, there will be separate logical flow per logical
-          datapath and only this flow will be re-considered.
-        </p>
-        <p>
-          The default value is <code>false</code>.
+          <code>ovn-northd</code> combines logical flows that differ
+          only by logical datapath into a single logical flow with a
+          logical datapath group attached.
         </p>
       </column>
       <column name="options" key="use_parallel_build">
diff --git a/tests/ovn-macros.at b/tests/ovn-macros.at
index b71d02d44..427b7b669 100644
--- a/tests/ovn-macros.at
+++ b/tests/ovn-macros.at
@@ -246,12 +246,6 @@  ovn_start () {
                --ic-nb-db=unix:"$ovs_base"/ovn-ic-nb/ovn-ic-nb.sock \
                --ic-sb-db=unix:"$ovs_base"/ovn-ic-sb/ovn-ic-sb.sock
     fi
-
-    if test X$NORTHD_USE_DP_GROUPS = Xyes; then
-        ovn-nbctl set NB_Global . options:use_logical_dp_groups=true
-    else
-        ovn-nbctl set NB_Global . options:use_logical_dp_groups=false
-    fi
 }
 
 # Interconnection networks.
@@ -758,15 +752,6 @@  m4_define([OVN_POPULATE_ARP], [AT_CHECK(ovn_populate_arp__, [0], [ignore])])
 # datapath groups.
 m4_define([OVN_FOR_EACH_NORTHD],
   [m4_foreach([NORTHD_TYPE], [ovn-northd],
-     [m4_foreach([NORTHD_USE_DP_GROUPS], [yes, no],
-       [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no],
-         [m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1
-])])])])])
-
-# Some tests aren't prepared for dp groups to be enabled.
-m4_define([OVN_FOR_EACH_NORTHD_WITHOUT_DP_GROUPS],
-  [m4_foreach([NORTHD_TYPE], [ovn-northd],
-     [m4_foreach([NORTHD_USE_DP_GROUPS], [no],
-       [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no],
-         [m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1
-])])])])])
+     [m4_foreach([NORTHD_USE_PARALLELIZATION], [yes, no],
+       [m4_foreach([OVN_MONITOR_ALL], [yes, no], [$1
+])])])])
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index fe97bedad..c456fc89c 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -755,7 +755,7 @@  wait_row_count Datapath_Binding 2
 AT_CLEANUP
 ])
 
-OVN_FOR_EACH_NORTHD_WITHOUT_DP_GROUPS([
+OVN_FOR_EACH_NORTHD([
 AT_SETUP([northbound database reconnection])
 
 ovn_start --backup-northd=none
@@ -765,7 +765,7 @@  ovn_start --backup-northd=none
 check_row_count Datapath_Binding 0
 check ovn-nbctl --wait=sb ls-add sw0
 check_row_count Datapath_Binding 1
-lf=$(count_rows Logical_Flow)
+dp1=$(fetch_column Datapath_Binding _uuid external_ids:name=sw0)
 
 # Make nbdb ovsdb-server drop connection from ovn-northd.
 conn=$(as ovn-nb ovs-appctl -t ovsdb-server ovsdb-server/list-remotes|grep ^punix)
@@ -777,17 +777,17 @@  check as ovn-nb ovs-appctl -t ovsdb-server ovsdb-server/add-remote "$conn2"
 check ovn-nbctl --db="${conn2#p}" ls-add sw1
 sleep 5
 check_row_count Datapath_Binding 1
-check_row_count Logical_Flow $lf
 
 # Now re-enable the nbdb connection and observe ovn-northd catch up.
 check as ovn-nb ovs-appctl -t ovsdb-server ovsdb-server/add-remote "$conn"
 wait_row_count Datapath_Binding 2
-wait_row_count Logical_Flow $(expr 2 \* $lf)
+dp2=$(fetch_column Datapath_Binding _uuid external_ids:name=sw1)
+wait_column "$dp1 $dp2" Logical_DP_Group datapaths
 
 AT_CLEANUP
 ])
 
-OVN_FOR_EACH_NORTHD_WITHOUT_DP_GROUPS([
+OVN_FOR_EACH_NORTHD([
 AT_SETUP([southbound database reconnection])
 
 ovn_start --backup-northd=none
@@ -797,7 +797,7 @@  ovn_start --backup-northd=none
 check_row_count Datapath_Binding 0
 check ovn-nbctl --wait=sb ls-add sw0
 check_row_count Datapath_Binding 1
-lf=$(count_rows Logical_Flow)
+dp1=$(fetch_column Datapath_Binding _uuid external_ids:name=sw0)
 
 # Make sbdb ovsdb-server drop connection from ovn-northd.
 conn=$(as ovn-sb ovs-appctl -t ovsdb-server ovsdb-server/list-remotes|grep ^punix)
@@ -809,14 +809,14 @@  check as ovn-sb ovs-appctl -t ovsdb-server ovsdb-server/add-remote "$conn2"
 check ovn-nbctl ls-add sw1
 sleep 5
 OVN_SB_DB=${conn2#p} check_row_count Datapath_Binding 1
-OVN_SB_DB=${conn2#p} check_row_count Logical_Flow $lf
 
 # Now re-enable the sbdb connection and observe ovn-northd catch up.
 #
 # It's important to check both Datapath_Binding and Logical_Flow.
 check as ovn-sb ovs-appctl -t ovsdb-server ovsdb-server/add-remote "$conn"
 wait_row_count Datapath_Binding 2
-wait_row_count Logical_Flow $(expr 2 \* $lf)
+dp2=$(fetch_column Datapath_Binding _uuid external_ids:name=sw1)
+wait_column "$dp1 $dp2" Logical_DP_Group datapaths
 
 AT_CLEANUP
 ])
@@ -2640,82 +2640,6 @@  AT_CHECK([ovn-sbctl lflow-list sw0 | grep ls_in_hairpin | sort | sed 's/table=..
 AT_CLEANUP
 ])
 
-OVN_FOR_EACH_NORTHD([
-AT_SETUP([logical gatapath groups])
-AT_KEYWORDS([use_logical_dp_groups])
-ovn_start
-
-dnl Disabling datapath groups.
-ovn-nbctl --wait=sb set NB_Global . options:use_logical_dp_groups=false
-
-ovn-nbctl ls-add sw1
-ovn-nbctl ls-add sw2
-ovn-nbctl lsp-add sw1 swp1
-ovn-nbctl lsp-add sw2 swp2
-ovn-nbctl --wait=sb sync
-
-sw1_sb_uuid=$(fetch_column datapath_binding _uuid external_ids:name=sw1)
-sw2_sb_uuid=$(fetch_column datapath_binding _uuid external_ids:name=sw2)
-
-dnl Check that we have no datapath groups.
-check_row_count Logical_DP_Group 0
-
-dnl Number of logical flows that depends on logical switch or multicast group.
-dnl These will not be combined.
-n_flows_specific=$(ovn-sbctl -f table find Logical_Flow | grep -cE 'swp')
-echo "Number of specific flows: "${n_flows_specific}
-
-dnl Both logical switches configured identically, so there should be same
-dnl number of logical flows per logical switch/logical datapath.
-n_flows=$(count_rows Logical_Flow)
-echo "Total number of flows with datapath groups disabled: "${n_flows}
-n_flows_half=$((${n_flows} / 2))
-check_row_count Logical_Flow ${n_flows_half} logical_datapath=${sw1_sb_uuid}
-check_row_count Logical_Flow ${n_flows_half} logical_datapath=${sw2_sb_uuid}
-
-dnl Enabling datapath groups.
-ovn-nbctl --wait=sb set NB_Global . options:use_logical_dp_groups=true
-
-dnl Check that one datapath group created.
-check_row_count Logical_DP_Group 1
-dp_group_uuid=$(fetch_column logical_dp_group _uuid)
-
-dnl Check that datapath group contains both datapaths.
-check_column "${sw1_sb_uuid} ${sw2_sb_uuid}" Logical_DP_Group datapaths
-
-dnl Calculating number of flows that should be combined for a datapath group.
-n_flows=$(count_rows Logical_Flow)
-echo "Total number of flows with datapath groups enabled: "${n_flows}
-n_flows_common=$((${n_flows} - ${n_flows_specific}))
-
-check_row_count Logical_Flow ${n_flows_common} logical_dp_group=${dp_group_uuid}
-check_row_count Logical_Flow ${n_flows_common} logical_datapath=[[]]
-check_row_count Logical_Flow ${n_flows_common} \
-    logical_dp_group=${dp_group_uuid} logical_datapath=[[]]
-
-dnl Adding 8 more logical switches and ports.
-for i in $(seq 3 10); do
-    ovn-nbctl ls-add sw${i}
-    ovn-nbctl lsp-add sw${i} swp${i}
-done
-ovn-nbctl --wait=sb sync
-
-dnl Number of logical flows should be increased only due to specific flows.
-expected_n_flows=$((${n_flows_common} + 5 * ${n_flows_specific}))
-echo "Total number of flows with 10 logical switches should be: " \
-     ${expected_n_flows}
-check_row_count Logical_Flow ${expected_n_flows}
-
-dnl Should be still only one datapath group.
-check_row_count Logical_DP_Group 1
-dp_group_uuid=$(fetch_column logical_dp_group _uuid)
-
-dnl Number of common flows should be the same.
-check_row_count Logical_Flow ${n_flows_common} logical_dp_group=${dp_group_uuid}
-
-AT_CLEANUP
-])
-
 OVN_FOR_EACH_NORTHD([
 AT_SETUP([Router policies - ECMP reroute])
 AT_KEYWORDS([router policies ecmp reroute])
@@ -6510,6 +6434,7 @@  AT_CHECK([grep -e "ls_in_stateful" lsflows | sed 's/table=../table=??/' | sort],
 AT_CLEANUP
 ])
 
+OVN_FOR_EACH_NORTHD([
 AT_SETUP([ovn-northd -- lr multiple gw ports NAT])
 AT_KEYWORDS([multiple-l3dgw-ports])
 ovn_start
@@ -6622,7 +6547,6 @@  AT_CAPTURE_FILE([lrflows])
 AT_CHECK([grep lr_in_ip_input lrflows | grep arp | grep -e 172.16.1.10 -e 10.0.0.10 | sed 's/table=../table=??/' | sort], [0], [dnl
   table=??(lr_in_ip_input     ), priority=90   , match=(arp.op == 1 && arp.tpa == 10.0.0.10), action=(eth.dst = eth.src; eth.src = xreg0[[0..47]]; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[[0..47]]; arp.tpa <-> arp.spa; outport = inport; flags.loopback = 1; output;)
   table=??(lr_in_ip_input     ), priority=90   , match=(arp.op == 1 && arp.tpa == 172.16.1.10), action=(eth.dst = eth.src; eth.src = xreg0[[0..47]]; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[[0..47]]; arp.tpa <-> arp.spa; outport = inport; flags.loopback = 1; output;)
-  table=??(lr_in_ip_input     ), priority=90   , match=(arp.op == 1 && arp.tpa == 172.16.1.10), action=(eth.dst = eth.src; eth.src = xreg0[[0..47]]; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[[0..47]]; arp.tpa <-> arp.spa; outport = inport; flags.loopback = 1; output;)
   table=??(lr_in_ip_input     ), priority=91   , match=(inport == "DR-S1" && arp.op == 1 && arp.tpa == 172.16.1.10), action=(drop;)
   table=??(lr_in_ip_input     ), priority=91   , match=(inport == "DR-S2" && arp.op == 1 && arp.tpa == 10.0.0.10), action=(drop;)
   table=??(lr_in_ip_input     ), priority=91   , match=(inport == "DR-S3" && arp.op == 1 && arp.tpa == 172.16.1.10), action=(drop;)
@@ -6706,6 +6630,7 @@  AT_CHECK([ovn-sbctl dump-flows DR | grep -e lr_in_unsnat -e lr_out_snat -e lr_in
 ])
 
 AT_CLEANUP
+])
 
 OVN_FOR_EACH_NORTHD([
 AT_SETUP([LR NB Static_MAC_Binding table])
diff --git a/tests/ovs-macros.at b/tests/ovs-macros.at
index df4266d1f..1bb31f315 100644
--- a/tests/ovs-macros.at
+++ b/tests/ovs-macros.at
@@ -9,11 +9,9 @@  dnl - If NORTHD_TYPE is defined, then append it to the test name and
 dnl   set it as a shell variable as well.
 m4_rename([AT_SETUP], [OVS_AT_SETUP])
 m4_define([AT_SETUP],
-  [OVS_AT_SETUP($@[]m4_ifdef([NORTHD_TYPE], [ -- NORTHD_TYPE])[]m4_ifdef([NORTHD_USE_DP_GROUPS], [ -- dp-groups=NORTHD_USE_DP_GROUPS])[]m4_ifdef([NORTHD_USE_PARALLELIZATION], [ -- parallelization=NORTHD_USE_PARALLELIZATION])[]m4_ifdef([OVN_MONITOR_ALL], [ -- ovn_monitor_all=OVN_MONITOR_ALL]))
+  [OVS_AT_SETUP($@[]m4_ifdef([NORTHD_TYPE], [ -- NORTHD_TYPE])[]m4_ifdef([NORTHD_USE_PARALLELIZATION], [ -- parallelization=NORTHD_USE_PARALLELIZATION])[]m4_ifdef([OVN_MONITOR_ALL], [ -- ovn_monitor_all=OVN_MONITOR_ALL]))
 m4_ifdef([NORTHD_TYPE], [[NORTHD_TYPE]=NORTHD_TYPE
 ])dnl
-m4_ifdef([NORTHD_USE_DP_GROUPS], [[NORTHD_USE_DP_GROUPS]=NORTHD_USE_DP_GROUPS
-])dnl
 m4_ifdef([NORTHD_USE_PARALLELIZATION], [[NORTHD_USE_PARALLELIZATION]=NORTHD_USE_PARALLELIZATION
 ])dnl
 m4_ifdef([NORTHD_DUMMY_NUMA], [[NORTHD_DUMMY_NUMA]=NORTHD_DUMMY_NUMA